What if a brain still worked, but the limbs refused to listen? Could there be a way to artificially translate the intentions of people with paralysis into movements? Over a four-decade career, neuroscientist John Donoghue, founding director of the Wyss Center for Bio and Neuroengineering in Geneva, convinced himself that he could do it.

In 2002, Donoghue showed that monkeys could move a cursor with the help of a decoder that interpreted their brain activity. In the decade that followed, he and colleagues showed that the system worked in people too: Individuals with quadriplegia could use their brain activity to move a cursor. That line of research recently culminated in the demonstration that people with paralysis could control a tablet computer this way.

Donoghue himself went on to further develop the system to allow people to open and close a robotic hand, and to reach, grasp and drink from a bottle by using a multijointed robotic arm. Last year, he was a coauthor on a study demonstrating how a similar system could help people do all those things with their own arms.

By now, more than a dozen patients have used the technology in experimental settings, but Donoghue’s ultimate goal is a system that they — and many others like them — can take home and use day to day to restore the abilities they have lost.

This conversation has been edited for length and clarity.

How do you find out which movements someone with paralysis would like to make?

We implant a small 4-by-4-millimeter microelectrode array into the brain’s motor cortex, in a region that we know directs the movements of the arm. This array consists of 100 hair-thin silicon needles, each of which picks up the electrical activity of one or two neurons. Those signals are then transmitted through a wire to a computer that we can use to convert the brain activity into instructions to control a machine, or even the person’s own arm. We are assuming that the relevant variable here — the language we should try to interpret — is the rate at which neurons discharge, or “fire.”

Let me explain this using the example of moving a cursor on the screen.

We first generate a movie of a cursor moving: say, left and right. We show this to the person and ask them to imagine they are moving a mouse that controls that cursor, and we record the activity of the neurons in their motor cortex while they do so. For example, it might be that every time you think “left,” a certain neuron will fire five times — pop pop pop pop pop — and that if you think “right,” it will fire ten times. We can use such information to map activity to intention, telling the computer to move the cursor left when the neuron fires five times, and right when it fires ten times.

To record brain activity, scientists implant a 4x4-millimeter microelectrode array (not drawn to scale here) into the motor cortex. This array consists of 100 hair-thin silicon needles, each of which picks up the electrical activity of one or two neurons. Those signals are then transmitted by wire, through a “pedestal” that crosses the skull and skin, to a computer that decodes them to control computers, prosthetic limbs or real limbs.

Of course, there are other decisions to be made: What if a neuron fires just three times? So you need a computer model to decide which numbers are close enough to five. And since neuronal activity is naturally noisy, the more neurons we can measure, the better our prediction will be — with the array we implant, we usually get measurements from 50 to 200 neurons.
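
To make that concrete, here is a minimal sketch in Python of this kind of decision rule. Everything in it — the neuron counts, the firing rates, the nearest-template rule — is invented for illustration; it is not the lab’s actual decoder. It averages simulated calibration trials into one firing-rate “template” per direction, then labels a new trial by whichever template is closest, so a count of three still reads as “close enough to five”:

```python
import numpy as np

rng = np.random.default_rng(0)
N_NEURONS = 100   # roughly one or two neurons per electrode in the array
CAL_TRIALS = 40   # hypothetical number of calibration trials per direction

# Invented tuning: each neuron has its own average spike count for an
# imagined "left" versus "right" movement (e.g., ~5 spikes vs. ~10).
true_rates = {
    "left": rng.uniform(2, 8, N_NEURONS),
    "right": rng.uniform(6, 14, N_NEURONS),
}

def record_trial(direction):
    """Simulate one noisy trial of spike counts from the whole array."""
    return rng.poisson(true_rates[direction])

# Calibration: average many trials into one template per direction.
templates = {
    d: np.mean([record_trial(d) for _ in range(CAL_TRIALS)], axis=0)
    for d in true_rates
}

def decode(trial):
    """Label a trial by its nearest template: a neuron that fires 3 times
    still counts as evidence for "left," since 3 is closer to 5 than to 10."""
    return min(templates, key=lambda d: np.linalg.norm(trial - templates[d]))

print(decode(record_trial("left")))    # prints "left" nearly every time
```

With a single neuron, this nearest-template rule would often misfire; averaged over the whole array, the templates sit far apart relative to the noise, which is the sense in which more neurons mean better predictions.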

For the arm prosthesis, we similarly ask people to imagine making the same movement with their own arm. There were people who thought you would have to build separate models for “flex and extend your elbow,” “move your wrist up and down,” and so on. But it turns out this isn’t necessary. The brain doesn’t think in terms of muscles or joint angles — the translation of intentions into movement happens later.
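
As a rough illustration of that point, the sketch below fits one linear model that maps the whole population’s activity straight to an intended velocity, with no separate models for elbow, wrist and so on. It assumes, purely for simplicity, that each simulated neuron’s firing rate varies linearly with the intended two-dimensional velocity; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
N_NEURONS, N_TRIALS = 100, 500

# Hypothetical tuning: each neuron's firing rate varies linearly with
# the intended 2-D velocity of the hand (a deliberate simplification).
tuning = rng.normal(size=(N_NEURONS, 2))
baseline = rng.uniform(5, 15, N_NEURONS)

# Calibration data: imagined reaches and the noisy rates they evoke.
intended_vel = rng.normal(size=(N_TRIALS, 2))
rates = baseline + intended_vel @ tuning.T
rates += rng.normal(scale=2.0, size=rates.shape)

# One linear model (least squares, with a bias column) maps the whole
# population straight to intended velocity -- no per-joint models.
X = np.column_stack([rates, np.ones(N_TRIALS)])
W, *_ = np.linalg.lstsq(X, intended_vel, rcond=None)

# Decode a new imagined movement: intended velocity (1, 0).
new_rates = baseline + np.array([1.0, 0.0]) @ tuning.T
print(np.append(new_rates, 1.0) @ W)   # close to [1, 0]
```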

How do you find the exact spot in the motor cortex at which to implant the array?

In fact, I don’t think the exact location matters that much. There is also no need for us to know exactly what each individual neuron is trying to do, as long as we can dependably predict the intended action from their combined activity. That goes against the standard old theory that there is a separate location for controlling each finger, for example. If that were the case, it would mean that if you put the array in a particular place you’d get great thumb control, but nothing else. I’ve spent my entire scientific career saying it is not true that doing something only engages a small and specific part of the brain. All our neurons form parts of large, interconnected networks.

Do people get better at using the device as they gain experience?

Not really. The neurons often change their activity, which can corrupt the map, so we have to recalibrate the model at the beginning of every session. This means people have to work with a different model every day, so they don’t get better at it.

And if, as sometimes happens, something goes wrong and we give them control that isn’t very good, they don’t get over it on that day, which can be very frustrating for them. It appears the brain isn’t plastic enough to change the activity of specific neurons quickly enough to overcome such problems the same day.
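
To see why a stale map is a problem, here is a toy continuation of the linear-decoder sketch above, again with invented numbers: once the simulated neurons’ tuning drifts overnight, yesterday’s weights misread today’s activity, and refitting on a fresh calibration session is what restores control.

```python
import numpy as np

rng = np.random.default_rng(2)
N_NEURONS, N_TRIALS = 100, 500

def fit_decoder(tuning, baseline):
    """One calibration session: imagined velocities, noisy rates, new weights."""
    vel = rng.normal(size=(N_TRIALS, 2))
    rates = baseline + vel @ tuning.T
    rates += rng.normal(scale=2.0, size=rates.shape)
    X = np.column_stack([rates, np.ones(N_TRIALS)])
    W, *_ = np.linalg.lstsq(X, vel, rcond=None)
    return W

def decode(rates, W):
    return np.append(rates, 1.0) @ W

tuning = rng.normal(size=(N_NEURONS, 2))    # what each neuron encodes today
baseline = rng.uniform(5, 15, N_NEURONS)
W_yesterday = fit_decoder(tuning, baseline)

# Overnight, neurons change their activity, so the old map goes stale.
tuning = tuning + rng.normal(scale=0.5, size=tuning.shape)

rates_now = baseline + np.array([1.0, 0.0]) @ tuning.T   # intending (1, 0)
print(decode(rates_now, W_yesterday))                    # noticeably off target
print(decode(rates_now, fit_decoder(tuning, baseline)))  # recalibrated: close to [1, 0]
```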

This video describes how brain-machine interfaces can allow people with paralysis to control cursors on computer screens and, even more challenging, move prosthetic devices or their own limbs. You’ll see Cathy Hutchinson use a robotic arm to bring a bottle up to her face and sip coffee from a straw — independently, for the first time in almost 15 years.

CREDIT: NATURE VIDEO VIA YOUTUBE

Some scientists are developing ways to allow people to feel what the prosthesis is doing, giving them some tactile feedback to keep track of how things are going. Maybe this could help.

The system has also been adapted to allow people to move their own arms. How can a computer give movement directions to a real arm?

In the case of the patient we’ve published about, we used electrical stimulation of the muscles themselves, which seemed the most practical approach. The energy cost is very high, however. It would be more energy efficient to stimulate the nerves that control the muscles, since nerves are excellent natural amplifiers: a small stimulus to a nerve produces a large muscle response. Yet stimulating the right nerves in the right way is pretty complicated — you can’t simply shock them into action.

Having a person move their own arm is an important achievement, although it is slow and definitely not as dexterous as we’d like it to be. To a large extent, I think this is because of our lack of understanding of the signals going from our brain into the limbs.

Bill Kochevar uses his own arm to feed himself a spoonful of mashed potato eight years after he lost use of both arms and legs in a bicycle accident. The setup of electrodes implanted in his brain’s motor cortex, a brain-computer interface and a system to stimulate his arm and hand muscles also allowed him to do simple but important things such as scratch an itch on his nose. Brain instructions also help to control the black support device. Researchers say the temporary system could soon be adapted for long-term use.

CREDIT: CASE WESTERN RESERVE UNIVERSITY

The number of limbs we can use is tightly constrained by our evolutionary history. Can you imagine that our brain could ever be adapted to using an extra limb?

In a way, we already do that today, by using extensions of our body such as tools, computers or cars. Some of those are quite complex and very different from our body, yet we learn to handle them reasonably well and largely without thinking. And just like the monkeys in our experiments — which were moving a cursor or a robotic arm with their brain activity, even though they still had functioning arms — people have a tendency to use their own bodies less if they can use a more efficient device instead.

Do you think that all of us might one day consider it practical to put an array into our brain so we can communicate with a computer or other devices more directly?

I don’t. Evolution has given us such fabulous natural interfaces that I think the barrier of brain surgery will remain too high. There’s always a risk of something going wrong, so I don’t think we should use implants for pure augmentation like that. Some people will do dangerous things, of course, but fortunately, you can’t easily stick an electrode in the right place in your own brain.

Have you heard of neurologist Phil Kennedy? He was the first person to permanently implant an electrode in a human brain, and he later had an electrode implanted in his own brain in Belize, as no one in the United States would perform the surgery. I find that disturbing — he’s a perfectly healthy, very bright man.

This diagram describes the basic steps for controlling prosthetic devices via brain activity. A microelectrode array (1) implanted in the brain records the activity of several neurons (2). These patterns are sent to a computer that uses a decoding algorithm (3) to translate them into instructions for an output device, such as a robotic arm (4). Sensory feedback (5), which is usually just visual but can be tactile as well, allows the user to adjust brain activity.

I think the aim of the field should be to create the opportunity for people with paralysis to restore or achieve typical abilities. For people who want to be superenabled, I think we need some serious regulations, as that could be extraordinarily disruptive. It also raises other issues — if I am rich and you are not, and only my child gets a brain booster implant, this creates a very unfair situation.

How do you apply such ethical considerations to your own work?

I think we should always strive to make the technologies we create available to as many people as possible. That doesn’t mean we should stop developing or producing them because they currently cost too much and we can’t give them to everybody who needs them. But eventually, that should be the goal.

What is the biggest obstacle to getting this technology out there to people who need it?

One issue is that the arrays tend to degrade over time in the rather harsh environment of the brain. But as some have lasted for over five years, I don’t think this is the main obstacle, as you’ll probably want to get a new one anyway after that much time has passed.

If you ask me, the biggest problem is that people have a plug on their head with wires everywhere connecting them to a computer. For this to become a product people can use at home, it will have to be largely technician-free and located entirely inside the skull.

At the Wyss Center, we are trying to do exactly that: develop an implantable system that can radio out the signals. That is very hard, because we need to make the entire device small, and it will need a very good battery. If you can use this only 45 minutes a day to save power, it’s not worth it. So that’s what we are working on right now.