Summary: For the first time, scientists have used brain-computer interface technology to decipher the words of silent, inner speech. Decoding was performed on cue, with up to 74 percent accuracy. By recording the neural activity of severely paralyzed participants, the team discovered that inner speech and attempted speech produce overlapping patterns of brain activity, although inner speech signals are weaker.
AI models trained on these recordings were able to interpret imagined words from a large vocabulary, and a password-based mechanism ensured that decoding occurred only on request. This development could pave the way for faster and more natural communication for people who cannot speak, with the potential for even greater accuracy as the technology advances.
Key data
- Innovative decoding: On command, inner speech was decoded from brain activity with up to 74% accuracy.
- Shared brain patterns: Attempted speech and inner speech activate overlapping areas of the motor cortex.
- Privacy safeguard: A thought-based password can lock and unlock the decoding of inner speech.
Source: Cell Press
Scientists have identified brain activity associated with inner speech, the silent monologue in people’s minds, and successfully decoded it on cue with up to 74 percent accuracy.
The findings, published August 14 in the Cell Press journal Cell, could help people who cannot speak aloud communicate more easily, using brain-computer interface (BCI) technology that begins translating inner speech only after a participant mentally says a password.
“For the first time, we’ve been able to understand what brain activity is like when we just think about speaking,” says lead author Erin Kunz of Stanford University. “For individuals with severe motor and speech impairments, brain-computer interfaces (BCIs) that decode internal speech offer a promising solution, enabling more natural and effortless communication.”
BCIs have recently become a tool for people with disabilities. Using sensors implanted in the brain regions responsible for movement, BCI systems can decode neural signals related to movement and translate them into actions, such as moving a prosthetic hand.
Studies have shown that BCIs can even decode speech from paralyzed people. When users attempt to speak aloud by activating the muscles involved in producing sound, BCIs can interpret the resulting brain activity and transcribe what they want to say, even if the speech itself is unintelligible.
Although BCI-assisted communication is much faster than previous technologies, including systems that track users’ eye movements to select words, attempting to speak can still be tiring and slow for people with limited muscle control.

The team wondered whether BCIs could decode inner speech instead. “Simply thinking about speaking, rather than actually trying to speak, could make communication easier and faster for people,” said paper co-author Benjamin Meschede-Krasa of Stanford University.
The team recorded neural activity using microelectrodes implanted in the motor cortex (the brain region that controls speech movements) of four participants with severe paralysis due to amyotrophic lateral sclerosis (ALS) or brainstem stroke. The researchers asked the participants either to attempt to speak or to imagine saying a series of words.
They found that attempted speech and inner speech activate overlapping areas of the brain and show similar patterns of neural activity, but that inner speech generally produces weaker activation. Using the inner speech data, the team trained AI models to interpret imagined words. In a proof-of-concept demonstration, the BCI was able to decode imagined sentences drawn from a vocabulary of up to 125,000 words with up to 74% accuracy.
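To make the general idea concrete, here is a minimal, purely illustrative Python sketch of mapping neural features to phoneme probabilities and then to vocabulary words. The feature dimensions, phoneme inventory, toy vocabulary, and random weights are assumptions for illustration; the study’s actual models and features are not reproduced here.

```python
# Illustrative sketch only: per-time-bin neural features -> phoneme probabilities
# -> vocabulary lookup. All names, shapes, and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 256   # e.g., per-electrode activity features (assumed)
N_PHONEMES = 40    # rough English phoneme inventory plus silence (assumed)

# Stand-in for a trained decoder: one linear layer plus softmax over phonemes.
W = rng.normal(scale=0.1, size=(N_PHONEMES, N_FEATURES))
b = np.zeros(N_PHONEMES)

def phoneme_probs(neural_features: np.ndarray) -> np.ndarray:
    """Map one time bin of neural features to a distribution over phonemes."""
    logits = W @ neural_features + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy "vocabulary": phoneme-index sequences mapped to words (purely illustrative).
VOCAB = {
    (3, 17, 22): "hello",
    (5, 9): "yes",
    (11, 2, 30): "water",
}

def decode_word(feature_bins: np.ndarray) -> str:
    """Greedy decoding: most likely phoneme per bin, collapse repeats, look up the word."""
    best = [int(np.argmax(phoneme_probs(f))) for f in feature_bins]
    collapsed = tuple(p for i, p in enumerate(best) if i == 0 or p != best[i - 1])
    return VOCAB.get(collapsed, "<unknown>")

# Example: decode a short imagined word from simulated neural activity.
fake_trial = rng.normal(size=(3, N_FEATURES))
print(decode_word(fake_trial))
```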
The BCI also picked up things that some participants were never instructed to say in their inner speech, such as numbers when they were asked to count pink circles on a screen.
The team also found that although attempted speech and inner speech produce similar patterns of neural activity in the motor cortex, they remain distinct enough to be reliably told apart. According to study author Frank Willett of Stanford University, researchers could use this distinction to train BCIs to ignore inner speech entirely.
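A minimal sketch, using simulated data, of how such a separation could work in principle: project each trial’s neural features onto a single “motor-intent” direction and apply a threshold. The direction estimate and all numbers below are illustrative assumptions, not the study’s method.

```python
# Illustrative sketch only: separate attempted speech from inner speech by
# projecting onto a class-mean-difference ("motor-intent") axis. Data is simulated.
import numpy as np

rng = np.random.default_rng(1)
n_features = 256

# Simulated training trials for the two conditions (assumed labels).
attempted = rng.normal(loc=1.0, size=(200, n_features))   # stronger activation
inner     = rng.normal(loc=0.3, size=(200, n_features))   # weaker activation

# "Motor-intent" axis: normalized difference of the class means.
axis = attempted.mean(axis=0) - inner.mean(axis=0)
axis /= np.linalg.norm(axis)

# Threshold halfway between the two projected class means.
threshold = 0.5 * (attempted @ axis).mean() + 0.5 * (inner @ axis).mean()

def is_attempted_speech(features: np.ndarray) -> bool:
    """Return True if a trial looks like attempted speech rather than inner speech."""
    return float(features @ axis) > threshold

# A BCI configured to ignore inner speech would only decode when this returns True.
print(is_attempted_speech(rng.normal(loc=1.0, size=n_features)))   # likely True
print(is_attempted_speech(rng.normal(loc=0.3, size=n_features)))   # likely False
```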
For users who want to use inner speech for faster or easier communication, the team also demonstrated a password-protected mechanism that prevents the BCI from decoding inner speech unless it is temporarily unlocked with a specific keyword. In the experiment, users could think the phrase “chitty chitty bang bang” to unlock inner-speech decoding. The system recognized the password with more than 98% accuracy. Although current BCI systems cannot decode free-form inner speech without substantial errors, the researchers expect that more advanced devices with more sensors and better algorithms will be able to do so in the future.
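The gating idea can be illustrated with a short, hypothetical sketch: decoded words are discarded until the unlock phrase is recognized, after which they are passed through. The simple word-sequence match below stands in for the study’s actual keyword detector, which is not reproduced here.

```python
# Illustrative sketch only: password-gated decoding of inner speech.
from collections import deque
from typing import Iterable, Iterator

UNLOCK_PHRASE = ("chitty", "chitty", "bang", "bang")  # phrase reported in the study

def gated_decoder(decoded_words: Iterable[str]) -> Iterator[str]:
    """Yield decoded words only after the unlock phrase has been thought."""
    unlocked = False
    recent: deque = deque(maxlen=len(UNLOCK_PHRASE))
    for word in decoded_words:
        if unlocked:
            yield word
            continue
        recent.append(word.lower())
        if tuple(recent) == UNLOCK_PHRASE:
            unlocked = True  # start forwarding subsequent inner speech

# Example: nothing is emitted until the password appears in the decoded stream.
stream = ["lunch", "soon", "chitty", "chitty", "bang", "bang", "call", "the", "nurse"]
print(list(gated_decoder(stream)))  # -> ['call', 'the', 'nurse']
```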
“The future of brain-computer interfaces (BCIs) is promising,” says Willett. “This research offers hope that speech BCIs could one day enable conversation that feels fluid, natural, and enjoyable.”
Funding:
This work was supported by the Under Secretary of Defense for Health and Human Services, the National Institutes of Health, the Simons Collaboration on the Global Brain, the A.P. Giannini Foundation, the Department of Veterans Affairs, the Wu Tsai Neurosciences Institute, the Howard Hughes Medical Institute, Larry and Pamela Garlick, the National Institute of Neurological Disorders and Stroke, the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the Blavatnik Family Foundation, and the National Science Foundation.
Abstract
Inner speech in motor cortex and implications for speech neuroprostheses
Speech brain-computer interfaces (BCIs) promise to restore communication for people with paralysis, but whether they might also decode a person’s private inner dialogue has been a concern.
At the same time, inner speech could offer a way to bypass the current approach, in which speech BCI users must physically attempt to speak, which is fatiguing and can slow communication. Using multi-unit recordings from four participants, we found that inner speech is robustly represented in the motor cortex and that imagined sentences can be decoded in real time.
Inner speech representations were strongly correlated with those of attempted speech, although we also identified a “motor-intent” neural dimension that distinguishes the two.
We investigated the decodability of private inner speech and found that some aspects of free-form inner speech could be decoded during sequence recall and counting tasks. Finally, we demonstrate highly accurate strategies that prevent speech BCIs from inadvertently decoding private inner speech.

