Brain-Machine Interface Could Give Voice to the Voiceless


Mind reading usually conjures images of scam artists with crystal balls, but a group of San Francisco neuroscientists and engineers is developing a device that can do it sans crystal ball. Their research aims to figure out what people with paralysis or brain injury are trying to say by studying how they attempt to move their mouths. By decoding patterns in the part of the brain that orchestrates the movement of the lips, tongue, jaw and larynx, the mechanical mind reader — a speech prosthetic — will give these people a voice through a computer-driven speech synthesizer.

In the short term, the device would help patients whose brains can’t drive the vocal machinery in their mouths. That includes the thousands of people with brain trauma, spinal cord injury, stroke or ALS who are fully conscious but unable to express their thoughts. (Most now rely on devices that require physical input.) The team published a paper in Nature mapping the relevant brain activity with unparalleled precision and has developed a general design for the device. After fixing bugs and securing funding, the researchers expect to start human trials in two or three years.

In the long term, the technology underlying this prosthetic could advance the broader field of brain-machine interfaces. The key to this device is not so much in its physical mechanisms but in the algorithms behind it, says neurosurgeon Edward Chang of the University of California, San Francisco. They’re what gives the device its ability to decode the complex “language” of the brain, expressed through the electrical signals of large groups of neurons.

Learning to Speak Brain

Chang — co-director of the Center for Neural Engineering and Prostheses, a UC Berkeley-UC San Francisco collaboration — is both a brain surgeon and a neuroscientist familiar with the field’s deep computational frontiers. He says he works in “the most privileged research environment in the world: the inside of the human brain.”

This environment is complicated, but a speech prosthetic isn’t actually as tricky as you might think. “The signals generated in the part of the motor cortex that controls and coordinates the lips, tongue, jaw and larynx as a person speaks are already intended to control an external device: the mouth,” says Keith Johnson, a UC Berkeley professor of linguistics and a co-author with Chang on last year’s Nature paper, which describes, for the first time, the neuronal mechanisms controlling speech. “Having those same signals control a different physical device, the speech prosthetic, is a much more tractable problem than trying to figure out what thought a person would like to express and trying to give voice to that,” Johnson says.

Reading brain commands for mouth movements may be simpler than reading cognitive content, but it is hardly easy. “As far as motor activities go, human speech is as complex as it gets,” says Chang. Even a simple phrase is the equivalent of an Olympic gymnastics routine for the speaker’s tongue, lips, jaw and larynx. Just as a gymnast’s twists, flips, jumps and landings all require precise muscle control and perfect timing, so does speech: a fraction of a second too long before curling the tongue or engaging the larynx can mean the critical difference between saying “snappy” and “crappy.”

Beyond mapping the precise locations of the brain areas controlling these movements for the first time, Chang and colleagues also recorded and analyzed patterns of neuron activity in those areas. By cataloging these dynamic “higher-order” patterns showing when, and how intensely, each set of neurons turns on, Chang’s lab learned how to read intended speech directly from the brain.
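To give a rough flavor of what pattern-based decoding involves (and only a flavor; the paper’s actual methods are far more sophisticated and are not reproduced here), below is a minimal, purely illustrative Python sketch. It assumes a hypothetical array of electrodes over the speech motor cortex, a toy set of intended phonemes, and synthetic activity data; it matches each new window of activity against a learned template per phoneme. Every number, name and the phoneme set are invented for the example.

```python
import numpy as np

# Purely illustrative sketch: guess an intended phoneme from one window of
# motor-cortex activity. All values and the phoneme set are invented; this
# is NOT the decoding method described in the Nature paper.

rng = np.random.default_rng(0)

N_ELECTRODES = 64                 # hypothetical electrode count
PHONEMES = ["b", "d", "g", "a"]   # toy set of intended sounds

# Pretend each phoneme has a characteristic activity pattern, observed
# with noise on every trial.
true_patterns = {p: rng.normal(size=N_ELECTRODES) for p in PHONEMES}

def simulate_trial(phoneme, noise=0.5):
    """One window of (synthetic) activity while attempting to say `phoneme`."""
    return true_patterns[phoneme] + noise * rng.normal(size=N_ELECTRODES)

# "Learn" a template per phoneme by averaging many training trials.
templates = {
    p: np.mean([simulate_trial(p) for _ in range(50)], axis=0)
    for p in PHONEMES
}

def decode(window):
    """Pick the phoneme whose learned template is closest to this window."""
    return min(templates, key=lambda p: np.linalg.norm(window - templates[p]))

# Decode a fresh, unseen trial.
test_window = simulate_trial("d")
print("decoded phoneme:", decode(test_window))  # should usually print 'd'
```

A nearest-template match is used here only because it is the simplest way to show the idea of reading intended speech from characteristic activity patterns; a real decoder would also have to handle timing, coarticulation and continuous speech.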


Talking to Yourself

Helping voiceless patients speak their minds is already a laudable goal, but this device could do more. The technology may one day let healthy people control electronic gadgets with their thoughts.

UC San Diego neuroscientist Bradley Voytek suggests such speech-reading brain-machine interfaces (BMIs) could make an excellent control interface for all kinds of devices beyond voice synthesizers because speech is so precise. We have much better control over what we say (even what we say to ourselves in our own minds) than over what we think.

The possibilities are tantalizing. You could silently turn off your phone’s ringer in the theater just by thinking the words, “Phone controls: turn off ringer.” Or compose and send an email from the pool without interrupting your stroke: “Email on: Toni, I’m still swimming and will be 15 minutes late for dinner. Send. Email off.” Voytek dreams even bigger: “Pair this [technology] with a Google self-driving car, and you can think your car to come pick you up. Telepathy and telekinesis in the cloud, if you will.”

Here’s the rub. Even if Chang’s speech prosthetic 1.0 is ready to roll in two or three years, implanting the device would require serious brain surgery, something that would make even committed early adopters balk. For commercial speech-reading BMIs to become mainstream, one of two things would need to happen: Brain implant surgery would have to become much safer, cheaper and more routine, or noninvasive sensing devices would have to become much more powerful.

But triggering a wave of new, convenient gadgets for the masses would just be a bonus. This technology already promises to make a difference for patients who’ve lost their voices. It doesn’t take a mind reader to see that.

Source: Discover Magazine
