Brain-AI System Translates Thoughts Into Movement

Summary: Researchers have developed a non-invasive, artificial intelligence-powered brain-computer interface that enables more precise and faster control of a robotic arm or cursor. The system translates brain signals from EEG recordings into movement commands, while an AI-controlled camera interprets the user’s intentions in real time.

In tests, participants — including one person with a stroke — completed tasks much faster with the help of AI and even performed actions that would otherwise have been impossible without it. Researchers say the breakthrough could pave the way for safer and more accessible assistive technologies for people with strokes or motor impairments.

Key data:

  • Non-invasive breakthrough: Combines EEG-based brain signal decoding with AI vision for shared autonomy.
  • Faster task completion: All participants finished tasks significantly faster with AI assistance, and a paralyzed participant completed a robotic-arm task that was impossible without it.
  • Affordable alternative: Offers a safer, lower-cost option than invasive surgical implants.

Source: UCLA

UCLA engineers have developed a wearable, non-invasive brain-computer interface system that uses artificial intelligence as a co-pilot to infer the user’s intentions and perform tasks by moving a robotic arm or computer cursor.

Described in a study published in Nature Machine Intelligence, the interface demonstrates a new level of performance in non-invasive brain-computer interface (BCI) systems.

The advance could lead to technologies that allow people with limited physical abilities, such as those affected by stroke or neurological disorders, to manipulate and move objects more easily and accurately.

The team developed custom algorithms to decode electroencephalography (EEG), a method of recording the brain’s electrical activity, and extract signals that reflect movement intentions.
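
The paper's exact decoding architecture is not described here; as a rough sketch of the general idea, the hypothetical snippet below maps per-channel EEG band-power features to a two-dimensional velocity command with a ridge-regularized linear decoder. All function and variable names are illustrative assumptions, not the study's implementation.

```python
# Illustrative sketch (not the study's actual decoder): a ridge-regression
# mapping from EEG band-power features to a 2D movement-velocity command.
import numpy as np

def bandpower_features(eeg_window: np.ndarray) -> np.ndarray:
    """Crude per-channel log-power features from one EEG window (channels x samples)."""
    return np.log(np.mean(eeg_window ** 2, axis=1) + 1e-12)

def fit_linear_decoder(features: np.ndarray, velocities: np.ndarray, ridge: float = 1.0) -> np.ndarray:
    """Fit weights W so that features @ W approximates the intended (vx, vy)."""
    n_feat = features.shape[1]
    return np.linalg.solve(features.T @ features + ridge * np.eye(n_feat),
                           features.T @ velocities)

def decode_velocity(W: np.ndarray, eeg_window: np.ndarray) -> np.ndarray:
    """Decode a 2D velocity command from a single EEG window."""
    return bandpower_features(eeg_window) @ W
```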

They combined the decoded signals with a camera-based AI platform that interprets the user’s direction and intent in real time. With this AI co-pilot, users completed tasks much faster than with decoded brain signals alone.
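
A common way to implement this kind of shared autonomy, offered here purely as an illustrative assumption about how such a co-pilot could work, is to blend the user's decoded velocity with a corrective vector aimed at the target the vision system infers. The blending weight `alpha` and the helper names below are hypothetical, not taken from the paper.

```python
# Illustrative shared-autonomy sketch: blend the EEG-decoded velocity with an
# assistive vector pointing at the target the vision copilot believes the user wants.
import numpy as np

def copilot_blend(decoded_vel: np.ndarray,
                  cursor_pos: np.ndarray,
                  inferred_target: np.ndarray,
                  alpha: float = 0.5) -> np.ndarray:
    """Return (1 - alpha) * user command + alpha * goal-directed assistance."""
    to_target = inferred_target - cursor_pos
    norm = np.linalg.norm(to_target)
    if norm > 1e-9:
        # Assist with the same speed the user commanded, redirected toward the goal.
        assist = (to_target / norm) * np.linalg.norm(decoded_vel)
    else:
        assist = np.zeros_like(decoded_vel)
    return (1.0 - alpha) * decoded_vel + alpha * assist
```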

“We’re exploring safer, less invasive alternatives to enhance brain-computer interface systems powered by artificial intelligence,” said Jonathan Kao, associate professor of electrical and computer engineering at UCLA’s Samueli School of Engineering.

“Our long-term goal is to create AI-driven brain-computer interfaces that offer shared autonomy, helping individuals with movement disorders like stroke or ALS regain some independence in daily activities,” he added.

The highest-performing BCI devices to date are surgically implanted and convert brain signals directly into commands. However, the benefits they currently offer do not outweigh the risks and costs of the neurosurgery required for implantation.

With AI support, the paralyzed participant completed the robotic arm task in 6.5 minutes—otherwise impossible. Credit: StackZone Neuro

More than twenty years after their first demonstration, implanted devices remain limited to small-scale clinical pilot studies. Meanwhile, wearables and other external brain-computer interfaces (BCIs) have been less effective at reliably detecting brain signals.

To overcome these limitations, the researchers tested their novel AI-based, non-invasive brain-computer interface (BCI) with four participants: three without motor impairments and a fourth who was paralyzed from the waist down.

Participants wore a cap to record their electroencephalographic (EEG) activity, and the researchers used custom decoding algorithms to translate these brain signals into computer cursor and robotic arm movements. At the same time, an AI system with an integrated camera observed the decoded movements and helped participants complete two tasks.

In the first task, participants moved a cursor on a computer screen to eight targets, holding the cursor on each target for at least half a second. In the second task, participants used a robotic arm to move four blocks on a table from their original positions to designated locations.
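
As a minimal sketch of the cursor task's success criterion described above (eight targets, half-second hold), the snippet below checks whether a cursor trajectory dwells inside a target for long enough. The target layout, radius, and update rate are assumptions for illustration, not parameters from the study.

```python
# Illustrative check for the center-out cursor task: eight targets, with a
# required dwell of 0.5 s inside each target. Geometry and timing are assumed.
import numpy as np

N_TARGETS = 8
TARGET_RADIUS = 0.05        # normalized screen units (assumed)
HOLD_SECONDS = 0.5
DT = 0.02                   # 50 Hz cursor update rate (assumed)

# Eight targets arranged on a circle around the screen center.
targets = [np.array([np.cos(a), np.sin(a)]) * 0.4
           for a in np.linspace(0, 2 * np.pi, N_TARGETS, endpoint=False)]

def target_acquired(cursor_trace: list[np.ndarray], target: np.ndarray) -> bool:
    """True if the cursor stayed within the target for HOLD_SECONDS consecutively."""
    needed = int(HOLD_SECONDS / DT)
    run = 0
    for pos in cursor_trace:
        run = run + 1 if np.linalg.norm(pos - target) <= TARGET_RADIUS else 0
        if run >= needed:
            return True
    return False
```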

With the support of artificial intelligence, all participants completed both tasks significantly faster, with smoother control and quicker execution than when relying on decoded brain signals alone.

Most notably, the paralyzed participant successfully completed the task in approximately six and a half minutes using a robotic arm guided by AI. Without AI assistance, he was unable to finish the task at all—underscoring the transformative impact of intelligent systems in restoring functional independence for individuals with severe movement limitations.

The brain-computer interface (BCI) interpreted electrical brain signals that encoded the actions that participants intended to perform. Using a computer vision system, the specially designed AI inferred the users’ intentions—not their eye movements—to guide the cursor and position the blocks.
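
The paper's intent-inference method is not detailed here; one simple, hypothetical way a vision-based co-pilot could pick the intended object is to score each camera-detected object by how well its direction aligns with the user's decoded movement, as sketched below. All names and the scoring rule are assumptions for illustration.

```python
# Illustrative sketch (not the paper's method): infer the intended target by
# cosine similarity between the decoded motion and each object's direction.
import numpy as np

def infer_intended_target(cursor_pos: np.ndarray,
                          decoded_vel: np.ndarray,
                          detected_objects: list[np.ndarray]) -> int:
    """Return the index of the object whose direction best matches the decoded velocity."""
    v = decoded_vel / (np.linalg.norm(decoded_vel) + 1e-9)
    scores = []
    for obj in detected_objects:
        d = obj - cursor_pos
        d = d / (np.linalg.norm(d) + 1e-9)
        scores.append(float(v @ d))   # higher score = motion points more directly at this object
    return int(np.argmax(scores))
```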

About this neurotech and AI research news

Author: Christine Wei-li Lee
Source: UCLA
Contact: Christine Wei-li Lee – UCLA
Image: The image is credited to StackZone Neuro

Original Research: Closed access.
“Brain–computer interface control with artificial intelligence copilots” by Jonathan Kao et al. Nature Machine Intelligence

“The next steps for AI-powered brain-computer interface systems could involve designing more advanced copilots that enhance both speed and precision in robotic arm movements,” explained Johannes Lee, co-lead author and doctoral candidate in electrical and computer engineering at UCLA. These improvements aim to make the systems more responsive and intuitive, allowing users to perform tasks with greater ease and accuracy.

Lee also emphasized the importance of adaptability, noting that future AI copilots should be able to adjust their control based on the specific object a user wants to grasp. This expert-level responsiveness would mark a significant leap forward in assistive technology, offering people with movement disorders a more natural and effective way to interact with their environment.

“Furthermore, integrating large-scale training data can help AI collaborate on more complex tasks as well as improve its own EEG decoding.”

The research team behind the paper comprises members of Kao’s Neural Engineering and Computation Laboratory at UCLA. Contributors include Sungjun Lee, Abhishek Mishra, Su Yan, Brandon McMahan, Brent Gasford, Charles Kobashigawa, Mike Qiu, and Chang Zhi—all of whom played key roles in advancing the study’s findings and technological innovations.

Dr. Kao, who leads the lab, also holds a professorship in the Department of Computer Science and is part of the Interdepartmental Neuroscience Doctoral Program. He is also affiliated with the UCLA Brain Research Institute, where his interdisciplinary work bridges engineering, neuroscience, and artificial intelligence to develop brain-computer interface systems.

Funding: The study received financial support from the National Institutes of Health and the Center for Humanity and Artificial Intelligence Science—a collaborative initiative between UCLA and Amazon. This backing underscores the growing interest in advancing AI-driven brain-computer interface systems that can improve quality of life for individuals with movement disorders.

In recognition of the innovation behind the research, the UCLA Technology Development Group has filed a patent application for the AI-based brain-computer interface technology. This move signals the potential for future commercialization and broader application of the system, paving the way for more accessible and effective assistive technologies.
