Researchers have developed a mechanism to translate brain activity into simple sentences, a breakthrough that could lead to new tools for people who are unable to communicate through speech.
In a paper published in Nature this week, a team from the University of California, San Francisco explained how they created a “neural decoder” that can synthesize speech. The team included neurosurgeon Edward Chang, computer scientist Gopala Anumanchipalli, and Ph.D. student Josh Chartier.
“Our central goal was to make an ‘artificial vocal tract,’” Chartier told the Washington Post via email on Friday. “Not a real physical one, but a computer one that could generate full sentences, not just words.”
“Our goal is to help those who cannot speak to say what they wish to say,” he said.
The team worked with five volunteers at the UCSF Epilepsy Center who had had electrodes implanted on the surface of their brains in preparation for neurosurgery to control their seizures. The volunteers read hundreds of sentences out loud while researchers monitored activity in their brains’ speech centers, the regions that control the ability to talk.
The next step was to decode the brain signals. The researchers used machine learning to generate a simulation of the movements of a person’s vocal tract from the signals recorded in the volunteers’ brains, then translated those simulated movements into synthesized speech, Chartier explained.
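The two-stage pipeline Chartier describes can be sketched in code. The sketch below is purely illustrative: the UCSF team used recurrent neural networks trained on real recordings, whereas here each stage is a simple linear map, and all dimensions and names (`N_ELECTRODES`, `N_ARTICULATORS`, `N_ACOUSTIC`) are assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch of the two-stage decoding pipeline: brain signals
# are first mapped to simulated vocal-tract movements, and those
# movements are then mapped to acoustic features for a synthesizer.
# The linear maps below are stand-ins for the trained neural networks
# the researchers actually used; the shapes are hypothetical.

rng = np.random.default_rng(0)

N_ELECTRODES = 256    # electrode channels recorded from the speech centers
N_ARTICULATORS = 33   # simulated vocal-tract movement features
N_ACOUSTIC = 32       # acoustic features driving speech synthesis
T = 100               # time steps in one decoded utterance

# Stage 1: brain signals -> simulated vocal-tract movements.
W_kinematics = rng.normal(size=(N_ELECTRODES, N_ARTICULATORS))

# Stage 2: simulated movements -> acoustic features for synthesis.
W_acoustics = rng.normal(size=(N_ARTICULATORS, N_ACOUSTIC))

def decode(ecog: np.ndarray) -> np.ndarray:
    """Map a (T, N_ELECTRODES) neural recording to acoustic features."""
    movements = ecog @ W_kinematics       # stage 1: articulatory decode
    acoustics = movements @ W_acoustics   # stage 2: acoustic decode
    return acoustics

ecog = rng.normal(size=(T, N_ELECTRODES))
features = decode(ecog)
print(features.shape)  # -> (100, 32)
```

The key design point, per the paper, is the intermediate articulatory representation: decoding movements first, rather than going straight from brain activity to sound, is what let the system generalize to full sentences.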
The neural decoder could be an important step toward improving the tools available to help those who cannot speak be heard.