
Groundbreaking A.I. brain implant translates thoughts into spoken words

Researchers from the University of California, San Francisco, have developed a brain implant which uses deep-learning artificial intelligence to transform thoughts into complete sentences. The technology could one day be used to help restore speech in patients who are unable to speak due to paralysis.


“The algorithm is a special kind of artificial neural network, inspired by work in machine translation,” Joseph Makin, one of the researchers involved in the project, told Digital Trends. “Their problem, like ours, is to transform a sequence of arbitrary length into a sequence of arbitrary length.”

The neural net, Makin explained, consists of two stages. In the first, the neural data gathered from brain signals, captured using electrodes, is transformed into a list of numbers. This abstract representation of the data is then decoded, word by word, into an English-language sentence. The two stages are trained together, not separately, to achieve this task. The words are finally output as text, although they could just as easily be output as speech using a text-to-speech converter.
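For readers who want a concrete picture, here is a minimal sketch of that two-stage, sequence-to-sequence idea written in PyTorch. The layer types, sizes, and names are illustrative assumptions made for the example, not the study's actual architecture.

```python
# Minimal sketch of the two-stage encoder-decoder idea described above.
# All sizes, names, and the choice of GRUs are illustrative assumptions,
# not the study's actual architecture.
import torch
import torch.nn as nn

class Seq2SeqDecoder(nn.Module):
    def __init__(self, n_channels=64, hidden=256, vocab_size=250):
        super().__init__()
        # Stage 1: encode a variable-length sequence of neural features
        # into an abstract representation ("a list of numbers").
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        # Stage 2: decode that representation, word by word.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, neural, words):
        _, state = self.encoder(neural)   # state: (1, batch, hidden)
        emb = self.embed(words)           # teacher-forced word inputs
        dec, _ = self.decoder(emb, state)
        return self.out(dec)              # logits over the vocabulary

# Both stages are trained together on (brain signals, sentence) pairs.
model = Seq2SeqDecoder()
signals = torch.randn(8, 120, 64)          # batch, time steps, electrodes
words = torch.randint(0, 250, (8, 10))     # batch, words per sentence
logits = model(signals, words)             # shape: (8, 10, 250)
```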

For the study, four women with epilepsy, who had previously had electrodes implanted in their brains to monitor for seizures, tested out the mind-reading tech. Each participant was asked to repeat sentences aloud, allowing the A.I. to learn and then demonstrate its ability to decode her brain activity into sentences. The best performance had an average word error rate of only 3%.
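Word error rate is the standard yardstick for this kind of decoding: the number of word-level edits (substitutions, insertions, deletions) needed to turn the decoded sentence into the reference sentence, divided by the reference length. A minimal Python sketch of the computation follows; it is illustrative only, and the paper's exact evaluation protocol may differ.

```python
# Word error rate: word-level Levenshtein distance divided by the
# length of the reference sentence. Illustrative sketch only.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Edit distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + cost)    # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the ladder was used to rescue the cat",
                      "a ladder was used to rescue the cat"))  # 0.125
```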

Currently the A.I. has a vocabulary of around 250 words. By comparison, the average American adult native English speaker has a vocabulary of somewhere between 20,000 and 35,000 words. So if the researchers are going to make this tool as valuable as it could be, they will need to vastly scale up the number of words it can identify and verbalize.

“The algorithms for natural-language processing, including machine translation, have advanced quite a bit since I conceived the idea for this decoder in 2016,” Makin continued. “We’re investigating some of these now. [In order to] achieve high-quality decoding over a broader swath of English, we need to collect more data from a single subject — or somehow get even bigger boosts from our transfer learning.”
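To illustrate what transfer learning can mean in this setting, here is a hedged sketch: take a network pretrained on one participant's recordings, freeze the language-side components, and fine-tune only the encoder on a small amount of data from a new participant. This builds on the hypothetical Seq2SeqDecoder sketch above and is not the study's actual procedure.

```python
# Illustrative transfer-learning sketch, reusing the Seq2SeqDecoder
# defined in the earlier example. Purely hypothetical; the study's
# actual transfer procedure may differ.
import torch

model = Seq2SeqDecoder()  # in practice, load weights pretrained on subject A

# Freeze the word-level decoder, which plausibly transfers across people,
# and retrain only the encoder on the new subject's neural signals.
for module in (model.embed, model.decoder, model.out):
    for p in module.parameters():
        p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```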

A paper describing the work was recently published in the journal Nature Neuroscience.
