Next-generation bionic eyes are practically here today. Imagine the real-world conundrum of a blind person trying to shop for one: they could schedule surgery for Nano Retina's implant today and see their daughter's wedding in 576-pixel clarity, but it would cost them their life's savings. The Nano Retina 5000-pixel device could be ready tomorrow, or in another six months, and would be much more affordable. When the procedure involves assimilating an electrode pincushion into the ganglionic tentacles of your retina, hardware upgrades are not as simple as popping in more RAM. What kind of decision matrix could be offered under such critical circumstances?
Cochlear implants, used to restore hearing, work phenomenally well when properly tuned and fitted. Most are refinements of a basic piece of hardware you might have sitting on your bookshelf: the graphic equalizer. The implant processes a single audio stream into bins of various sizes according to frequency, and then applies current at the corresponding frequency location in the cochlea, typically with a 16-spot linear electrode array. The main function of these devices is to capture speech formants, the peaks in the frequency spectrum of the voice. The toughest challenge for the cochlear implant is to provide sound localization and source separation in noisy environments like a cocktail party.
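In signal-processing terms, that equalizer stage can be sketched in a few lines. The toy Python sketch below splits a sound into 16 log-spaced frequency channels and reports the energy in each, roughly as an implant's filterbank maps sound onto electrode sites. All names, band edges, and parameters here are illustrative, not taken from any real device.

```python
import numpy as np

def channel_energies(signal, fs, n_channels=16, f_lo=200.0, f_hi=8000.0):
    """Split a sound into log-spaced frequency bins, one per electrode.

    Toy sketch of a cochlear implant's filterbank stage (illustrative
    parameters). Returns one energy value per channel; a real processor
    would map these to stimulation currents on the electrode array.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # The cochlea's frequency map is roughly logarithmic, so use
    # log-spaced band edges between f_lo and f_hi.
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    return np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])

# A pure 1 kHz tone: nearly all of its energy lands in a single channel,
# the channel whose electrode a real processor would then drive.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
energies = channel_energies(tone, fs)
```

A formant, being a peak in the spectrum, shows up the same way: as a hump of energy across a few adjacent channels.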
Vision implants are much more complex. As any practiced photographer knows, the eye is more than a camera. The optic nerve does not feed the brain pixels. If you imagine your camera responding to auto-selected targets several times a second, gathering the full spectrum of light through its entire range of settings at each pause, and compressing the data onto a bandwidth- and energy-limited channel ideally matched to its receiver, you have some idea of what the retina accomplishes routinely.
The reason cochlear implants work so well is that the brain is just that good at making sense out of virtually any kind of signal it is given. If presented only with noise, or with nothing at all, the brain will eventually begin to manufacture hallucinations. If the implant signal contains even some distorted fragment of the original signal, it can be made to work convincingly. This is also the reason why retina implants can work without incorporating any knowledge of what the retina actually does in the healthy state.
These days researchers are trying to do a little better than the grainy images provided by our current implants. Signal processing techniques were developed in the Cold War era to track and target incoming missiles by extracting signals from noisy radar data. These same techniques are now used in brain-machine interfaces (BMIs) to convert the activity of groups of neurons in the motor cortex into a set of commands for moving a cursor, a prosthetic device, or a denervated limb. These methods, and derivatives of them, can also be applied to incoming sensory data to approximate what the retina actually does, without doing it in the same way.
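One workhorse from that radar lineage is the Kalman filter, which has become a standard decoder in BMI work. The one-dimensional sketch below is purely illustrative: it tracks a hidden value from noisy samples, with `q` and `r` as assumed noise parameters rather than values from any published decoder.

```python
import numpy as np

def kalman_track(observations, q=0.01, r=1.0):
    """Minimal one-dimensional Kalman filter, the same family of
    estimator used in Cold War radar tracking and in BMI cursor
    decoding (toy sketch; parameters are illustrative assumptions).

    q: assumed process noise variance (how fast the true state drifts)
    r: assumed observation noise variance (how noisy each sample is)
    """
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in observations:
        p += q                   # predict: uncertainty grows between steps
        k = p / (p + r)          # Kalman gain: trust data vs. trust model
        x += k * (z - x)         # update estimate toward the observation
        p *= (1.0 - k)           # uncertainty shrinks after each update
        estimates.append(x)
    return np.array(estimates)

# Noisy readings of a constant target at 5.0: the filtered estimate
# hugs the target far more tightly than the raw samples do.
rng = np.random.default_rng(0)
noisy = 5.0 + rng.normal(0.0, 1.0, 200)
smoothed = kalman_track(noisy)
```

Swap the scalar for a small state vector (position and velocity of a cursor) and the observations for binned spike counts, and this is, in outline, the decoder running in many BMI experiments.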
Unfortunately, videos and TED talks are not the places where this kind of knowledge is typically transmitted in much depth. For that, one needs to look back to the work of the founding father of cybernetics, Norbert Wiener, and his eminently practical inspiration, Vito Volterra. After suggesting, to great success, that helium be used instead of hydrogen in airships, Volterra shifted gears and developed methods to characterize complex systems. Wiener simplified Volterra's equations, and they are widely used today in statistical techniques like linear regression and the analysis of spike trains from neurons.
A single neuron in the brain of a blow fly can read input from its photoreceptors and command a wing muscle to change the fly's flight path within about 30 milliseconds. That is just enough time for a few spikes on a one-neuron chain, so the temporal structure of those spikes carries real information about both the stimulus and the motor imperative; it is not just a coarse pulse-frequency code. These Wiener equations, or more precisely kernels, have been used to accurately represent the information in such spike trains and to replace the neuron in simulated systems. Doing so, even for a few spikes, requires intensive computation using iterative numerical methods.
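In practice, the first-order Wiener kernel of a neuron driven by Gaussian white noise can be estimated with the spike-triggered average: collect the stretch of stimulus preceding each spike and average them. The sketch below is a toy illustration, with a made-up "true" filter and a simple threshold standing in for the neuron's spiking nonlinearity; real analyses go on to higher-order kernels.

```python
import numpy as np

def first_order_kernel(stimulus, spikes, length):
    """Estimate a neuron's first-order Wiener kernel from a white-noise
    stimulus via the spike-triggered average: average the stimulus
    segment that preceded each spike (toy sketch)."""
    segments = [stimulus[t - length + 1:t + 1]
                for t in np.flatnonzero(spikes) if t >= length - 1]
    return np.mean(segments, axis=0)

# Simulate a model neuron whose true filter is a brief biphasic kernel
# (values are made up), then check that the spike-triggered average
# recovers that filter's shape up to a scale factor.
rng = np.random.default_rng(1)
true_kernel = np.array([0.1, 0.4, 1.0, -0.5, -0.2])
stim = rng.normal(size=20000)
# Filter the stimulus so the last kernel tap weights the newest sample.
drive = np.convolve(stim, true_kernel[::-1], mode="full")[:len(stim)]
spikes = (drive > 1.5).astype(int)      # threshold nonlinearity -> spikes
sta = first_order_kernel(stim, spikes, len(true_kernel))
```

The recovered `sta` is proportional to `true_kernel`: for Gaussian white noise, the spike-triggered average recovers the linear filter regardless of the nonlinearity that follows it, which is what makes the method so useful for real neurons.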
To attempt such a process for the million or more axons (the long connections between neurons) that constitute the output of the retina would be prohibitive. To get around this, researchers have further simplified the equations and can now do a decent job of reconstructing a stimulus, as long as the number of pixels or other chosen inputs is kept limited. Rather than directly representing pixels, the processed responses of the ganglion cells in the retina are better understood in terms of standard image-processing concepts like edge detection and center-surround inhibition. These filters are built into the physical structure of the neuron's dendritic tree. A project to create a connectome for the retina, known as Eyewire, is now looking to create a rough map of these details through a crowdsourced, online gaming effort.
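Center-surround inhibition is commonly modeled as a difference of Gaussians: a narrow excitatory center minus a broad inhibitory surround. The one-dimensional sketch below, with illustrative parameters rather than measurements from any real cell, shows why such a filter ignores uniform illumination and responds only at edges.

```python
import numpy as np

def dog_kernel(size=21, sigma_center=1.0, sigma_surround=3.0):
    """1-D difference-of-Gaussians: a narrow excitatory center minus a
    broad inhibitory surround, a standard model of the center-surround
    receptive field of a retinal ganglion cell (toy parameters)."""
    x = np.arange(size) - size // 2
    center = np.exp(-x**2 / (2 * sigma_center**2))
    surround = np.exp(-x**2 / (2 * sigma_surround**2))
    center /= center.sum()          # each lobe normalized to unit area,
    surround /= surround.sum()      # so the kernel sums to zero overall
    return center - surround

# A luminance step: uniform regions produce (near-)zero output because
# the kernel integrates to zero, while the response concentrates at the
# step itself -- which is exactly edge detection.
signal = np.concatenate([np.zeros(50), np.ones(50)])
response = np.convolve(signal, dog_kernel(), mode="same")
```

The same construction in two dimensions gives the familiar "Mexican hat" operator of classical image processing, which is one reason retinal output maps so naturally onto those concepts.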
Ultimately, this kind of analysis is a top-down approach and has its limitations, but for the present it is the best we have. Neuromorphic chips and artificial neural networks could replace these methods in the interim, until actual biological equivalents can be grown for replacement retinas. Research on stem cell replacements for the "hair" cells in the cochlea, which perform the actual mechanotransduction of sound into electrical nerve activity, is making astounding progress that will hopefully soon transfer to the visual and motor systems.
MIT Technology Review has reported on a couple of projects still in the early stages of development. At the Society for Neuroscience meeting this November, Massoud Khraiche proposed using silicon nanowires to replace damaged photoreceptors. These nanowires could allow for both light detection and neuron stimulation. Another group, at Carnegie Mellon, is making inside-the-eye devices even smaller. Their device would provide detail comparable to that of the fovea, the part of the eye with the highest density of photoreceptors.
Under some conditions, a photoreceptor, such as a dark-adapted rod cell, can detect a single photon. More impressively, that single rod cell can inform its owner of the event with some statistical reliability; in other words, the person can guess whether they saw the photon with significance better than chance. Considering that the same cell can also function on the reflective sands of a sunny beach, that gives some appreciation for the dynamic range through which the retina operates. Capturing this full complement of skill with a prosthetic bionic eye will certainly take time.
As far as choosing a go-to implant manufacturer, it is hard to know what technologies and algorithms various developers may eventually employ. If your implant allows new vision apps to be installed over the air, that might be a good sign. Operating system and firmware upgrades should be provided for as well. Ideally, your implant would permit actual hardware upgradeability by including a spare FPGA. People with disabilities are learning today to temper their expectations when news reports announce medical breakthroughs. The day will come soon enough when lack of technology won't be the biggest problem; rather, these devices might simply be too expensive for the mass market. Hopefully, equitable systems for the disbursement of these new products will be found, and they can be enjoyed in the spirit of the best for the most.