
Amazing new headset translates thoughts into speech for vocally impaired wearers


“In a nutshell,” said Scott Wellington, “we’re hoping to create a technology that can take your imagined speech — that is, you think of a word or a sentence, without moving or speaking at all — and translate your brain signals into synthesized speech of that same word or sentence.”

That’s quite a mission, but Wellington, a Ph.D. researcher at the University of Bath’s Center for Accountable, Transparent and Responsible Artificial Intelligence, may just be up to the job.

For the past several years, via his previous work at the University of Edinburgh and a startup called SpeakUnique, Wellington has been working on an ambitious, but potentially game-changing, project: Creating personalized synthetic voices for those who have impaired speech or entirely lost the ability to speak as a result of neurodegenerative conditions like Motor Neurone Disease (MND).


Synthetic voices for people with potentially debilitating conditions like MND have been around for years. Famously, the late theoretical physicist Stephen Hawking communicated using a synthesized computer voice, created for him by a Massachusetts Institute of Technology engineer named Dennis Klatt, as far back as 1984. The voice, a default male voice named “Perfect Paul,” was operated with a handheld clicker that let Hawking choose words on a computer. Later, when he lost the use of his hands, he switched to a system that detected his facial movement.


Wellington’s work would be a step forward from this. For one thing, where recordings exist or suitable voice samples can be gathered, he can piece together a personalized synthetic voice that sounds like the person it is built for. Furthermore, this voice could be controlled entirely through the user’s thoughts — all using a humble, commercially available gamer’s headset.

Promising developments

“There have already been some promising developments in the field from researchers around the world, but these have all used a process called electrocorticography, which requires a craniotomy,” Wellington said.

A craniotomy, as he points out, is invasive brain surgery. The goal of his work at the University of Bath is to achieve the effect of “imagined speech recognition,” but without the need for someone to cut open your head and plant sensors onto the surface of your brain.

“For people who have lost their natural speech, one of the biggest causes of frustration is the inability to communicate their thoughts to friends and family with the same speed and naturalness as they had previously,” he said. “For instance, for people in advanced stages of MND, eye-tracking technologies can allow people with severely impaired motor control to use text-to-speech systems to communicate at around 10 words a minute, and that’s if they’re fluent users of the technology. You and I can speak 10 words in a few seconds. You can see why this is one of the biggest causes of frustration for people with motor impairment who have lost their speech.”

In the University of Bath setup, the gaming headset is equipped with an EEG (electroencephalography) system that detects the wearer’s brain waves. These are then processed by a computer that uses neural networks and deep learning to identify the user’s intended speech.
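The article doesn't spell out the team's model, but the general recipe it describes can be sketched in a few lines: cut the EEG stream into short trials, turn each trial into band-power features, and train a small neural-network classifier to tell the imagined sounds apart. The toy example below assumes a 14-channel headset sampling at 256 Hz, a handful of phoneme labels, and random stand-in data; all of those numbers and names are illustrative assumptions, not details of the Bath pipeline.

```python
# Minimal sketch (not the Bath team's actual pipeline): classify short EEG
# windows into imagined-speech classes using band-power features and a small
# neural network. Channel count, sampling rate, classes, and data are all
# illustrative stand-ins.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

FS = 256          # assumed sampling rate (Hz)
N_CHANNELS = 14   # assumed headset channel count
WINDOW_S = 2.0    # seconds of EEG per imagined-speech trial
CLASSES = ["aah", "buh", "kuh", "duh"]  # toy set of imagined phonemes

def band_power_features(window, fs=FS):
    """Log power in standard EEG bands for each channel of one trial."""
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta, alpha, beta, low gamma
    freqs, psd = welch(window, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(psd[:, mask].mean(axis=-1) + 1e-12))
    return np.concatenate(feats)  # shape: (n_channels * n_bands,)

# Stand-in data: random noise in place of real labelled EEG trials.
rng = np.random.default_rng(0)
n_trials = 200
eeg_trials = rng.standard_normal((n_trials, N_CHANNELS, int(FS * WINDOW_S)))
labels = rng.integers(0, len(CLASSES), size=n_trials)

X = np.array([band_power_features(trial) for trial in eeg_trials])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # ~chance on noise
```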


“The goal is to create a new technique that allows more fluent communication by either supporting or, even better, altogether replacing the need to type out what you want to communicate, by using the brain signal to do the ‘typing’ instead,” Wellington said. “With the latest developments in engineering, machine learning, and artificial intelligence, I believe we’re at the stage to begin to make this a reality.”

To train the system, volunteers wore the EEG device while a recording of their own speech was played back to them. At the same time, they had to both imagine saying the sound and actually vocalize it. While it would be accurate to describe the system as reading thoughts, it would still require the user to silently verbalize the words they wanted to say. (The plus side of this is that there’s no risk of it accidentally reading a wearer’s most private thoughts.)
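As a rough illustration of that training step (with invented cue times, sampling rate, and labels rather than the study's actual protocol), the sketch below slices a continuous EEG recording into fixed-length, labelled trials aligned to the moments each speech prompt was played.

```python
# Hedged sketch of the data-collection step described above: the continuous
# EEG stream is cut into fixed-length trials, one per playback prompt, and
# each trial is labelled with the prompted sound. All values are invented.
import numpy as np

FS = 256           # assumed sampling rate (Hz)
TRIAL_S = 2.0      # assumed trial length after each prompt
N_CHANNELS = 14

# Stand-in continuous recording and prompt log (onset time in seconds, sound).
eeg_stream = np.random.default_rng(1).standard_normal((N_CHANNELS, FS * 120))
prompt_log = [(5.0, "aah"), (9.5, "buh"), (14.0, "kuh"), (18.5, "aah")]

def epoch_trials(stream, prompts, fs=FS, trial_s=TRIAL_S):
    """Return (trials, labels): one EEG window per prompt, labelled by the prompt."""
    n_samples = int(fs * trial_s)
    trials, labels = [], []
    for onset_s, sound in prompts:
        start = int(onset_s * fs)
        window = stream[:, start:start + n_samples]
        if window.shape[1] == n_samples:   # drop prompts cut off at the end
            trials.append(window)
            labels.append(sound)
    return np.stack(trials), np.array(labels)

trials, labels = epoch_trials(eeg_stream, prompt_log)
print(trials.shape, labels)   # (4, 14, 512) ['aah' 'buh' 'kuh' 'aah']
```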

The future’s bright, but manage expectations

Wellington was clear that he wants to “manage expectations.” Picking the all-important speech information out of the noisy signal of brain waves is tough. He likened it to trying to have a phone conversation with a person who is outside in heavy wind — or even a hurricane. “If they’re shouting the same word over and over, yes, probably you’ll get it,” he said. “But a natural, full sentence? Probably not.”


This will hopefully change as the project advances and the researchers get better at extracting information from the brain signal. New machine learning techniques should push gaming headsets toward better recognition of imagined, natural speech. One self-imposed constraint, which should prove worthwhile in the end, is that the researchers want to make sure that whatever hardware they use is affordable, practical, and mobile.

“[So far] we’ve managed to achieve some success in decoding imagined speech sounds from the brain signal,” Wellington said. “That is, imagine you were sounding out the English language phonically, as children do in school: ‘Aah,’ ‘buh,’ ‘kuh,’ ‘duh,’ ‘ehh,’ ‘guh,’ and so forth. We’ve been able to translate these imagined sounds with a promising degree of accuracy. Of course, this is far from natural speech, but does already allow for a brain-computer interface that can translate a small ‘closed’ vocabulary of distinct words quite reliably. For example, if you wanted the device to speak, from your thoughts, the words for ‘up,’ ‘down,’ ‘left,’ ‘right,’ ‘start,’ ‘stop,’ ‘back,’ ‘forwards,’ [that would be possible].”
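The closed-vocabulary idea is simple to sketch: the decoder only ever has to choose among a handful of command words, and combining several consecutive window-level predictions before committing makes that choice more dependable when it is finally handed to a text-to-speech engine. The vocabulary below comes from Wellington's example; the majority-vote rule and the per-window predictions are illustrative assumptions.

```python
# Sketch of a closed-vocabulary decoder: majority-vote a run of per-window
# class predictions into a single command word, or stay silent if the
# predictions don't agree strongly enough. The vote threshold and the example
# predictions are assumptions for illustration.
from collections import Counter

VOCAB = ["up", "down", "left", "right", "start", "stop", "back", "forwards"]

def decode_command(window_predictions, min_agreement=0.6):
    """Return the agreed-upon command word, or None if agreement is too weak."""
    word, votes = Counter(window_predictions).most_common(1)[0]
    if word in VOCAB and votes / len(window_predictions) >= min_agreement:
        return word   # confident enough: send this word to the synthetic voice
    return None       # otherwise say nothing rather than guess

# e.g. five consecutive classifier outputs for one imagined command
print(decode_command(["stop", "stop", "up", "stop", "stop"]))   # -> "stop"
print(decode_command(["left", "right", "up", "down", "stop"]))  # -> None
```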

Wellington noted that he is excited about developments like Elon Musk’s Neuralink hardware, a “brain chip” that could be implanted beneath the skull, which could prove extremely transformative for work such as this. “As you can imagine, I was left wanting to know what we could achieve if such a device were implanted over the speech- and language-processing regions of the brain,” he said. “There’s certainly an exciting future ahead for this research!”

The work was presented at the Interspeech virtual conference in late October 2020.
