Facebook-backed study restores communication abilities to individual with severe speech loss

Facebook also announced it would make its brain-computer interface (BCI) tools open source.
By Laura Lovett

Photo: MR.Cole_/Getty Images

Results from a new study published in the New England Journal of Medicine point to a potential breakthrough for individuals who have lost the ability to speak due to paralysis: a technology that lets them type simply by attempting to speak.

In the Facebook-backed UCSF study, researchers surgically implanted a "subdural, high-density, multielectrode array" over the part of the brain that controls speech in an individual who had lost his ability to speak and had paralysis.

During the study, the participant was presented with a question on a screen. As he attempted to speak a reply, his brain activity was recorded, and a computer algorithm translated that activity into words and sentences in real time.

“My research team at UCSF has been working on this [speech neuroprosthesis] goal for over a decade. We’ve learned so much about how speech is processed in the brain during this time, but it’s only in the last five years that advances in machine learning have allowed us to get to this key milestone,” Edward Chang, chair of neurosurgery at UCSF, said in a Facebook blog. “That combined with Facebook’s machine learning advice and funding really accelerated our progress.”

Facebook's involvement in the study was limited to machine learning advice and funding. The Menlo Park tech giant said it has no interest in developing tech that requires implantable electrodes.

Facebook has been working on a brain-computer interface (BCI) project since 2017. Following this study, the company said it would make its software and head-mounted hardware prototype open source for others to use in research.

The company also said it would pivot from head-mounted BCI technology to a wrist-based electromyography (EMG) tool. When a user attempts to move their hands or fingers in a certain way, the device picks up the motor nerve signals traveling down the arm and translates them into digital commands.

“We’re developing more natural, intuitive ways to interact with always-available AR glasses so that we don’t have to choose between interacting with our device and the world around us,” Facebook Reality Labs research director Sean Keller said. “We’re still in the early stages of unlocking the potential of wrist-based electromyography (EMG), but we believe it will be the core input for AR glasses, and applying what we’ve learned about BCI will help us get there faster.”

WHY IT MATTERS

The study demonstrates a potential communication avenue for individuals who have lost their ability to speak. The participant worked from a 50-word vocabulary set. The researchers found that the system decoded his brain activity into text at a median rate of 15.2 words per minute, with a median word error rate of 25.6%. However, it is important to note that the study involved a single subject.
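Word error rate, the accuracy metric reported above, is a standard measure in speech decoding: the word-level edit distance (substitutions, insertions, deletions) between the decoded sentence and the reference sentence, divided by the reference length. A minimal sketch of how it is computed (the example sentences are illustrative, not from the study):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution in a four-word reference -> WER of 0.25
print(word_error_rate("i am very good", "i am very well"))  # 0.25
```

A median word error rate of 25.6% therefore means that, for the median trial, roughly one in four decoded words needed correction against the intended sentence.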

Facebook wrote in its blog post: "Today we’re excited to celebrate the next chapter of this work and a new milestone that the UCSF team has achieved and published in The New England Journal of Medicine: the first time someone with severe speech loss has been able to type out what they wanted to say almost instantly, simply by attempting speech. In other words, UCSF has restored a person’s ability to communicate by decoding brain signals sent from the motor cortex to the muscles that control the vocal tract — a milestone in neuroscience."

THE LARGER TREND

There have been hints that Facebook is interested in the wearables space. In June, reports surfaced that Facebook was working on a new wearable, a smartwatch with two cameras that users could use to upload photos to social media.

In March, Facebook Reality Labs shared that it was conducting research on "wrist-based input combined with usable but limited contextualized AI, which dynamically adapts to you and your environment."

Tags: 
Facebook, UCSF