Speech recognition technology translates brain waves into sentences

By DRS. NORBERT HERZOG AND DAVID NIESEL for Galveston County The Daily News

I remember many years ago when speech recognition software was introduced. It was astounding. Spoken words appeared on your computer screen without the assistance of a keyboard. This was an amazing innovation at the time.

Recent studies have taken this a step further by translating brain waves directly into complete sentences. Scientists report error rates as low as 3 percent, far better than my speech-to-text software of years ago.
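The column doesn't say how that 3 percent figure is measured, but decoding studies typically report a word error rate: the number of word insertions, deletions, and substitutions needed to turn the decoded sentence into the one actually read, divided by the sentence's length. A minimal sketch of that calculation (the example sentences are invented):

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance (insertions, deletions, substitutions)
    divided by the number of words in the reference sentence."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] = edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / len(ref)

# A perfect decode scores 0.0; one wrong word out of four scores 0.25.
print(word_error_rate("the quick brown fox", "the quick brown fox"))  # 0.0
print(word_error_rate("the quick brown fox", "the quick brown cat"))  # 0.25
```

By this metric, a 3 percent error rate means roughly one word in 33 comes out wrong.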

Although we still don’t know much about how the brain works, we’re making significant progress in understanding some of its complex functions. For example, scientists have been able to map our memories to precise regions of the brain.

In animal models, scientists identified the brain cells where specific memories were stored and then altered them by manipulating those cells. This is amazing, and it may sound to some like the beginnings of making “The Manchurian Candidate” a reality.

In other work, we’re beginning to be able to harness brain waves or signals for practical use to help people who’ve become incapacitated. Recall the media stories on paralyzed patients who can use their thoughts to control a sophisticated mechanical arm to feed themselves or move objects. This ability to use the brain to interact with the outside world holds great promise for humans in restoring lost functions.

Some recent work has achieved yet another leap forward. Scientists have developed a way of decoding sentences by examining brainwaves, also called neural signals. This is a challenge that scientists have been working on for many years.

Previous studies tried translating brain waves into words via the component sounds, or phonemes, that make up words, but that approach suffered from high error rates. For this work, the scientists instead borrowed an approach that has been used successfully for years to translate between languages: neural networks, which have proven remarkably accurate. It’s the same technology behind the language translation apps on your smartphone.

For the study, the scientists used electrodes to record the subjects’ brain waves. The subjects read sentences aloud while the electrodes fed their brain activity into a computer. The scientists set up the neural network to treat the brain waves as the first language and the sentences the subjects read as the second language. Brilliant.

After this, the computer could translate brain waves just like another language. The accuracy of the translation was as good as you could expect from professional language translators. In the study, the subjects read 50 sentences containing about 250 unique words.
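In translation terms, the training data is simply a set of pairs: each recorded brain-wave sequence alongside the sentence the subject read while it was recorded, with the "target language" vocabulary being the unique words across all the sentences. The signal values and sentences below are invented placeholders standing in for the study's 50 recordings and roughly 250 words:

```python
# Hypothetical training pairs: each placeholder signal (a short list of
# samples here) is paired with the sentence read while it was recorded.
training_pairs = [
    ([0.12, 0.98, 0.45], "the bird flew over the fence"),
    ([0.33, 0.07, 0.81], "my dog chased the bird"),
    # ... in the study, 50 such sentence recordings per subject
]

# The "second language" vocabulary is the set of unique words
# appearing across all of the training sentences.
vocabulary = sorted({word for _, sentence in training_pairs
                     for word in sentence.split()})
print(len(vocabulary), "unique words")
```

Keeping the vocabulary small is what made translator-level accuracy achievable; expanding it is the open challenge the next paragraph describes.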

The technique will have to be expanded to include more words and phrases. One promising finding was that the system was trainable: accuracy improved when the software was pretrained. The learning also transferred from person to person, so a system could pre-learn from one person’s brain waves in a way that helped decode another’s.

As you can imagine, this would be a huge advance to help disabled people who have lost the ability to speak. Look for many more advances in this area soon.
