Brain-AI System Translates Thoughts Into Movement

Summary: Researchers have developed a non-invasive, AI-enhanced brain-computer interface that allows users to control a robotic arm or cursor with greater precision and speed. The system translates brain signals from EEG recordings into movement commands, while an AI-powered camera interprets the user’s intentions in real time.

In tests, participants, including one who was paralyzed, completed tasks significantly faster with AI support and performed actions that would have been impossible without it. Researchers say the breakthrough could pave the way for safer and more accessible assistive technologies for people with stroke or motor impairments.

Important facts:

  • Non-invasive advance: combines EEG-based brain-signal decoding with AI computer vision for shared autonomy.
  • Faster task completion: with AI assistance, a paralyzed participant completed a robotic-arm task that was impossible without it.
  • Accessible alternative: offers a safer, lower-risk option than surgically implanted BCIs.

Source: UCLA

UCLA engineers have developed a wearable, non-invasive brain-computer interface system that uses artificial intelligence to predict a user’s intent and act as a copilot to perform tasks by moving a robotic arm or computer cursor.

Research published in Nature Machine Intelligence shows that the interface demonstrates a new level of performance in non-invasive brain-computer interface (BCI) systems.

This could lead to a range of technologies that allow people with limited physical abilities, such as those affected by stroke or other neurological conditions, to manipulate objects easily and accurately.

The team developed custom algorithms to decode electroencephalography (EEG), a method of recording the brain’s electrical activity, and to extract signals that reflect movement intentions.
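
The paper’s abstract (reproduced below) notes that the decoder uses convolutional neural networks. The sketch here is only a plausible shape for such a model: the channel count, window length, and layer sizes are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class EEGVelocityDecoder(nn.Module):
    """Illustrative CNN mapping a window of multi-channel EEG to a
    2-D velocity command. The 64-channel, 250-sample window and all
    layer sizes are assumptions, not the architecture from the paper."""

    def __init__(self, n_channels=64, n_samples=250):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution applied per electrode
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            # Spatial convolution across all electrodes
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 5)),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * (n_samples // 5), 2)  # outputs (vx, vy)

    def forward(self, eeg):
        # eeg: (batch, 1, n_channels, n_samples)
        return self.head(self.features(eeg))
```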

They combined the decoded signals with a camera-based artificial intelligence (AI) platform that interprets the user’s direction and intent in real time, allowing participants to complete tasks much faster than they could without AI assistance.
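
The researchers describe this pairing as shared autonomy: the copilot’s suggestion is combined with the user’s decoded command. A minimal sketch of one common formulation, a confidence-weighted linear blend, follows; the blending rule, the `alpha_max` cap, and the confidence signal are illustrative assumptions, not the paper’s exact arbitration scheme.

```python
import numpy as np

def shared_autonomy_velocity(v_user, v_copilot, confidence, alpha_max=0.7):
    """Blend the user's EEG-decoded velocity with the copilot's
    suggested velocity toward its inferred goal.

    confidence : copilot's certainty about the inferred goal, in [0, 1].
    alpha_max  : cap on how much control the copilot may take
                 (the value here is an assumption for illustration).
    """
    alpha = alpha_max * float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(v_user) + alpha * np.asarray(v_copilot)

# Example: a confident copilot pulls the command toward its goal.
v = shared_autonomy_velocity([1.0, 0.0], [0.0, 1.0], confidence=0.9)
```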

“By using artificial intelligence to complement brain-computer interface systems, we are looking at much less risky and invasive approaches,” said study leader Jonathan Kao, an associate professor of electrical and computer engineering at the UCLA Samueli School of Engineering.

“Ultimately, we want to develop AI-BCI systems that provide shared autonomy, allowing people with movement disorders, such as stroke or ALS, to gain a degree of independence for everyday tasks.”

The latest BCI devices, which are surgically implanted, can convert brain signals into commands. However, the benefits they currently offer do not outweigh the risks and costs associated with neurosurgery.

More than twenty years after their first demonstration, these devices are still limited to small-scale clinical pilot studies. Wearable and other external BCIs, meanwhile, have so far struggled to detect brain signals reliably.

To overcome these limitations, the researchers tested their novel, non-invasive AI-assisted BCI with four participants: three without motor impairments and a fourth who was paralyzed from the waist down.

Participants wore a cap to record their EEG, and the researchers used custom decoding algorithms to translate these brain signals into computer cursor and robotic arm movements. At the same time, an AI system with an integrated camera observed the decoded movements and helped participants complete two tasks.

Notably, with AI assistance, the paralyzed participant completed the robotic arm task in about six and a half minutes, something he couldn’t do without it. Credit: StackZone Neuro

In the first task, participants had to move a cursor on a computer screen to reach eight targets, holding it on each for at least half a second. In the second task, participants had to command a robotic arm to move four blocks on a table from their original positions to designated positions.
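
The half-second hold requirement in the cursor task amounts to a simple dwell check over successive control updates. Below is a minimal sketch; the timeout, polling rate, and `cursor_in_target` predicate are illustrative assumptions, not the study’s exact acceptance logic.

```python
import time

def target_acquired(cursor_in_target, hold_s=0.5, timeout_s=10.0):
    """Return True once the cursor has stayed inside the target for
    `hold_s` seconds; False if `timeout_s` elapses first.
    `cursor_in_target` is a caller-supplied predicate (hypothetical)."""
    start = time.monotonic()
    hold_start = None
    while time.monotonic() - start < timeout_s:
        if cursor_in_target():
            if hold_start is None:
                hold_start = time.monotonic()        # entered the target
            elif time.monotonic() - hold_start >= hold_s:
                return True                          # dwell satisfied
        else:
            hold_start = None                        # left the target; reset
        time.sleep(0.01)                             # poll at ~100 Hz
    return False
```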

All participants completed both tasks significantly faster with AI support. Notably, the paralyzed participant completed the robotic arm task with AI assistance in about six and a half minutes, a task he could not complete at all without it.

The BCI decoded the electrical brain signals that encoded the participants’ intended actions. Using a computer vision system, the custom AI then inferred the users’ intentions, rather than tracking their eye movements, to guide the cursor and place the blocks.

“Future advancements for AI-BCI (artificial intelligence–brain-computer interface) systems may involve creating more advanced copilots capable of moving robotic arms with greater speed, precision, and adaptive expertise based on the object the user intends to grasp,” said co-lead author Johannes Lee, a UCLA doctoral candidate in electrical and computer engineering advised by Kao. These next-generation copilots would not only improve mechanical control but also enhance the overall responsiveness and intuitiveness of assistive robotic systems, especially for users with limited mobility or neuromuscular impairments.

By combining real-time brain signal interpretation with machine learning algorithms and contextual awareness, the envisioned AI copilots could offer a more seamless and intelligent extension of the user’s intent. This means the robotic arm would not just follow basic movement commands but would intelligently adjust grip strength, trajectory, and speed depending on the size, shape, and fragility of the target object, whether it’s a delicate glass or a heavy tool. Such adaptive interaction would significantly improve both usability and safety in real-world applications.

Lee emphasized that these advancements would mark a critical step toward human-AI symbiosis in assistive technology. “We’re moving toward a future where AI systems can anticipate and refine human intention in real time, not just follow it,” he noted. As research continues, the goal is to make AI-BCI interfaces more intuitive, accessible, and capable of supporting users in complex, dynamic environments, from daily household tasks to precision-oriented professional settings.

“Incorporating large-scale training data could also help AI collaborate on more complex tasks and improve EEG decoding,” Lee added.

The paper’s authors are members of Kao’s Neural Engineering and Computational Laboratory, including Sungjun Lee, Abhishek Mishra, Su Yan, Brandon McMahan, Brent Gasford, Charles Kobashigawa, Mike Qiu, and Chang Zhi. Kao, a member of the UCLA Brain Research Institute, also holds faculty positions in the Department of Computer Science and the Interdepartmental PhD Program in Neuroscience.

Funding: The research was funded by the National Institutes of Health and the Science Hub for Humanity and Artificial Intelligence, a partnership between UCLA and Amazon. The UCLA Technology Development Group has filed a patent application for the AI-BCI technology.

About this neurotech and AI research news

Author: Christine Wei-li Lee
Source: UCLA
Contact: Christine Wei-li Lee – UCLA
Image: The image is credited to StackZone Neuro

Original Research: Closed access.
“Brain–computer interface control with artificial intelligence copilots” by Jonathan Kao et al. Nature Machine Intelligence

Abstract

Brain-computer interface control with artificial intelligence copilots

Motor brain-computer interfaces (BCIs) decode neural signals to help people with paralysis move and communicate.

Even with the tremendous progress of the past two decades, BCIs face a major hurdle in clinical feasibility: the performance of BCIs must far outweigh their costs and risks. To significantly improve BCI performance, we use shared autonomy, where artificial intelligence (AI) co-pilots collaborate with BCI users to achieve task objectives.

We demonstrate this AI-BCI in a non-invasive BCI system that decodes electroencephalography signals. First, we contribute a hybrid adaptive decoding approach using convolutional neural networks and a ReFIT-type Kalman filter, which allows healthy users and a paralyzed participant to control computer cursors and robotic arms via decoded EEG signals. Next, we design two AI copilots to assist BCI users in a cursor control task and a pick-and-place task with a robotic arm.

We demonstrate AI-BCIs that allow a paralyzed participant to achieve a 3.9-fold higher hit rate during cursor control and enable a robotic arm to move random blocks to random locations sequentially. These tasks would not be possible without an AI copilot. As AI copilots improve, BCIs designed with shared autonomy may offer better performance.
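
The abstract’s “ReFIT-type Kalman filter” refers to a velocity Kalman filter that is refit on intention-corrected training data. Below is a minimal sketch of the filter’s predict/update recursion; the matrices would be fit from training data, and neither the values nor the interface shown here come from the paper.

```python
import numpy as np

class VelocityKalmanDecoder:
    """Minimal Kalman-filter velocity decoder. ReFIT-style training adds
    a recalibration pass in which training velocities are reoriented
    toward the target before refitting; that step is omitted here.
    A (state dynamics), W (process noise), C (observation model), and
    Q (observation noise) would be fit from data; shapes are illustrative."""

    def __init__(self, A, W, C, Q):
        self.A, self.W, self.C, self.Q = A, W, C, Q
        n = A.shape[0]
        self.x = np.zeros(n)    # state estimate, e.g. (vx, vy)
        self.P = np.eye(n)      # state covariance

    def step(self, y):
        # Predict the state forward one time bin
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.W
        # Update with the latest neural feature vector y
        S = self.C @ P_pred @ self.C.T + self.Q
        K = P_pred @ self.C.T @ np.linalg.inv(S)
        self.x = x_pred + K @ (y - self.C @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.C) @ P_pred
        return self.x           # decoded velocity command
```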
