One of the great tragedies of anarthria, the loss of speech, is that many people who suffer the condition—following, say, a stroke—can still think clearly. They just can’t express themselves the way most of us do, with words. Especially if they’re also paralyzed and can’t type out their thoughts on a tablet.
For years, scientists have been trying to help anarthric people—in particular paralyzed ones—speak through technology. The latest approach is to implant devices, in or near the brains of anarthric people, that can read the electrical impulses underlying their thoughts—and beam text to a device that either displays it or sounds it out.
These brain-computer interfaces, or BCIs, have been getting more and more sophisticated. But they still have a big accuracy problem. In a major experiment three years ago, one leading BCI prototype mistranslated around a quarter of the phrases its users were thinking.
In other words, every fourth phrase the users were thinking… ended up wrong on the screen. That’s almost as if every fourth line of text you wrote in an email conversation just ended up being automatically rewritten as gibberish.
The same team that oversaw that 2019 trial, the Chang Lab at the University of California, San Francisco, is now trying a different approach. The lab, led by top neuroscientist Edward Chang, has developed a new BCI that translates individual letters instead of whole words or phrases. Users spell out their thoughts, one letter at a time.
The initial results are encouraging. The BCI correctly translated and displayed about 94 percent of the letters the participant spelled out in their head. The Chang Lab’s new spelling BCI could help advance brain-implant technology, bringing it closer to everyday use by large numbers of people. Giving a voice to the voiceless.
The Chang Lab made headlines three years ago when it demoed its BrainNet BCI. In the experiment, two volunteers wore electroencephalogram electrodes on their heads—the kind neurologists use to detect epilepsy. Unlike older, cruder BCIs, BrainNet did not require invasive surgery to implant sensors directly into the brain.
The volunteers silently concentrated on certain simple thoughts. The EEG headsets detected their brain waves through their skulls, and an algorithm matched these waves to a “dictionary” the lab had built by asking volunteers to utter phrases while recording the resulting neurological activity.
That BrainNet worked at all was impressive. But its 76-percent peak accuracy left a lot of room for improvement. “A major challenge for these approaches is achieving high single-trial accuracy rates,” Chang and his team conceded.
Spelling out thoughts one letter at a time would certainly be slower than feeding whole thoughts into a BCI, but could it be more accurate? To find out, the Chang Lab recruited a volunteer who, back in 2019, had an electrocorticography array—a postcard-size patch of 16 electrodes—implanted under their skull. The volunteer suffers from “severe limb and vocal-tract paralysis,” according to the lab.
Chang and his teammates, including UCSF neuroscientists Sean Metzger and David Moses, taught the subject the NATO phonetic alphabet. “Alpha” for A. “Bravo” for B. “Charlie” for C. Et cetera. They instructed the volunteer to spell out their thoughts by thinking of each letter’s NATO code word.
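The hard part of this scheme is the classification step—deciding, from neural activity alone, which code word the user imagined. Once a code word has been identified, though, recovering the spelled text is a simple lookup. A minimal sketch in Python of just that final lookup step (an illustration of the idea, not the lab’s actual decoding code; the `decode` function and its inputs are hypothetical):

```python
# Map each NATO code word to the letter it stands for.
NATO = {
    "alpha": "a", "bravo": "b", "charlie": "c", "delta": "d", "echo": "e",
    "foxtrot": "f", "golf": "g", "hotel": "h", "india": "i", "juliett": "j",
    "kilo": "k", "lima": "l", "mike": "m", "november": "n", "oscar": "o",
    "papa": "p", "quebec": "q", "romeo": "r", "sierra": "s", "tango": "t",
    "uniform": "u", "victor": "v", "whiskey": "w", "xray": "x",
    "yankee": "y", "zulu": "z",
}

def decode(code_words):
    """Turn a sequence of recognized code words into spelled-out text.

    In the real system, `code_words` would come from a classifier running
    on the brain signals; here it is just a plain Python list.
    """
    return "".join(NATO[word] for word in code_words)
```

For example, `decode(["hotel", "india"])` spells out “hi”. In the actual study, a language model also helped pick the most plausible sentence from the lab’s 1,152-word vocabulary.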
The BCI read the brain waves. An algorithm did its best to match the waves to a 1,152-word dictionary. Thoughts—at least, the algorithm’s best translation of one’s thoughts—scrolled across a computer screen at a rate of 29 letters per minute.
The system was pretty accurate. Both times the subject spelled out “Thank you,” the translated text came out onscreen as, well, “thank you.”
But it wasn’t perfect. “Good morning” came out as “good morning” on the first try and “good for legs” on the second. And “you are not going to believe this” totally befuddled the BCI and its algorithm, coming out garbled as “you plan to go in on a bit love this” on the first attempt and as “ypuaranpdggingloavlinesoeb” on the second.
Overall, the system demonstrated a “median character error rate” of six percent. Scaling up the data for a hypothetical 9,000-word vocabulary, Chang’s team concluded that the error rate would be only slightly greater: just 8 percent or so.
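A character error rate of this kind is conventionally computed as the edit distance—the minimum number of insertions, deletions, and substitutions—between the intended sentence and the decoded one, divided by the intended sentence’s length. A rough sketch in Python of the standard metric, using the article’s own examples (an illustration of how such a rate is measured, not the study’s evaluation code):

```python
def edit_distance(ref, hyp):
    """Classic dynamic-programming Levenshtein distance between two strings."""
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of ref's first i characters
    for j in range(n + 1):
        dp[0][j] = j  # insert all of hyp's first j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # match or substitution
    return dp[m][n]

def character_error_rate(ref, hyp):
    """Edit distance normalized by the length of the intended text."""
    return edit_distance(ref, hyp) / len(ref)
```

On the perfect “thank you” translations, the rate is 0.0; on “good morning” versus “good for legs,” it climbs past 40 percent—which is why the paper reports the *median* rate across many sentences.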
“These results illustrate the clinical viability of a silently controlled speech neuroprosthesis to generate sentences from a large vocabulary through a spelling-based approach,” Chang, Metzger, Moses and their co-authors wrote in a peer-reviewed study that was published in Nature Communications on Tuesday.
Samuel Andrew Hires, a University of Southern California neurobiologist who was not involved with the study, told The Daily Beast he was impressed. “A typical human is around 30 to 35 words per minute with modern text prediction, perhaps faster if you are a teenager,” he said. “Here, the subjects were only about six times slower, which is quite impressive considering they couldn’t move or speak. I’m not sure what my word error rate is on my phone, but it feels like about one in every 10 words, on par with the performance from brain decoding.”
But don’t expect the spelling approach to change the world overnight. We’re still a long way from a tough, fast, accurate and affordable version of a thought-to-text system that a wide variety of speech-impaired people can use in public.
Durability is an issue. Implanting a device under the skull is traumatic and risky. Ideally, a device will work for many, many years before needing to be repaired or replaced. To that end, it’s good news that the volunteer’s electrocorticography array still worked pretty well after 2.5 years, Moses told The Daily Beast.
But a lot more experimentation is necessary in order to prove the system is widely effective. “We think that the main thing to confirm is that our BCI can work with a variety of users with a variety of disabilities,” Moses said.
Only after a lot more testing can any lab—Chang’s or another—think about licensing the technology for use by the general public. At that point, the challenge will be to shrink it down, toughen it and make it portable—and affordable. Moses said he envisions “a fully implantable neural interface” that can “wirelessly communicate with a phone, tablet or laptop computer to allow portable use.”
As for the price… who knows? “It is too early to accurately estimate the cost of such a system,” Moses said. But even at a premium, a thought-to-text device—even one that spells out just six or seven words a minute—could help a lot of people in settings where total speech impairment is currently a major impediment.
Offices. Classrooms. Even bars and restaurants. “Brain-computer interfaces have the potential to restore communication,” the Chang team wrote. All those pent-up thoughts, rattling around in the brains of anarthric people who can think clearly but say nothing, could come tumbling out. One letter at a time.