Before a car crash in 2008 left her paralysed from the neck down, Nancy Smith enjoyed playing the piano. Years later, Smith started making music again, thanks to an implant that recorded and analysed her brain activity. When she imagined playing an on-screen keyboard, her brain–computer interface (BCI) translated her thoughts into keystrokes — and simple melodies, such as ‘Twinkle, Twinkle, Little Star’, rang out.
But there was a twist. For Smith, it seemed as if the piano played itself. “It felt like the keys just automatically hit themselves without me thinking about it,” she said at the time. “It just seemed like it knew the tune, and it just did it on its own.”
Smith’s BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so, says trial leader Richard Andersen, a neuroscientist at the California Institute of Technology in Pasadena.
Smith is one of roughly 90 people who, over the past two decades, have had BCIs implanted to control assistive technologies, such as computers, robotic arms or synthetic voice generators. These volunteers — paralysed by spinal-cord injuries, strokes or neuromuscular disorders, such as motor neuron disease (amyotrophic lateral sclerosis) — have demonstrated how command signals for the body’s muscles, recorded from the brain’s motor cortex as people imagine moving, can be decoded into instructions for connected devices.
But Smith, who died of cancer in 2023, was among the first volunteers to have an extra interface implanted in her posterior parietal cortex, a brain region associated with reasoning, attention and planning. Andersen and his team think that by also capturing users’ intentions and pre-motor planning, such ‘dual-implant’ BCIs will improve the performance of prosthetic devices.
Andersen’s research also illustrates the potential of BCIs that access areas outside the motor cortex. “The surprise was that when we go into the posterior parietal, we can get signals that are mixed together from a large number of areas,” says Andersen. “There’s a wide variety of things that we can decode.”
The ability of these devices to access aspects of a person’s innermost life, including preconscious thought, raises the stakes on concerns about how to keep neural data private. It also poses ethical questions about how neurotechnologies might shape people’s thoughts and actions — especially when paired with artificial intelligence.
Meanwhile, AI is enhancing the capabilities of wearable consumer products that record signals from outside the brain. Ethicists worry that, left unregulated, these devices could give technology companies access to new and more precise data about people’s internal reactions to online and other content.
Ethicists and BCI developers are now asking how previously inaccessible information should be handled and used. “Whole-brain interfacing is going to be the future,” says Tom Oxley, chief executive of Synchron, a BCI company in New York City. He predicts that the desire to treat psychiatric conditions and other brain disorders will lead to more brain regions being explored. Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users. “It leads you to the final question: how do we make that safe?”
Consumer concerns
Consumer neurotech products capture less sophisticated data than implanted BCIs do. Whereas implanted BCIs record the firing of specific collections of neurons, most consumer products rely on electroencephalography (EEG), which measures ripples of electrical activity that arise from the averaged firing of huge neuronal populations and are detectable on the scalp. And rather than being engineered to capture the best recording possible, consumer devices are designed to be stylish (sleek headbands, for instance) or unobtrusive (with electrodes hidden inside headphones or headsets for augmented or virtual reality).
Still, EEG can reveal overall brain states, such as alertness, focus, tiredness and anxiety levels. Companies already offer headsets and software that give customers real-time scores relating to these states, with the intention of helping them to improve their sports performance, meditate more effectively or become more productive, for example.
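The arithmetic behind such a score is, at its core, simple spectral analysis. Here is a minimal sketch of how a real-time ‘focus’ index could be computed from one channel of EEG; the band definitions and the beta-to-alpha ratio below are generic signal-processing conventions, not any vendor’s actual pipeline.

```python
# Minimal sketch: deriving a crude 'focus' score from one EEG channel.
# The bands and the beta/alpha ratio are illustrative conventions;
# commercial headsets use proprietary (and more elaborate) pipelines.
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (typical for consumer EEG)

def band_power(freqs, psd, low, high):
    """Integrate the power spectral density over a frequency band."""
    mask = (freqs >= low) & (freqs < high)
    return np.trapz(psd[mask], freqs[mask])

def focus_score(eeg_window):
    """Return a rough focus index for a 1-D EEG window (one channel)."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS * 2)
    alpha = band_power(freqs, psd, 8, 12)   # relaxed, idling rhythm
    beta = band_power(freqs, psd, 13, 30)   # active concentration
    return beta / (alpha + beta)            # 0..1: higher = more 'focused'

# Example with synthetic data standing in for a 4-second recording
rng = np.random.default_rng(0)
window = rng.standard_normal(FS * 4)
print(f"focus score: {focus_score(window):.2f}")
```

A real product would add artefact rejection, multiple channels and calibration to the individual wearer, but the underlying logic is of this kind.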
AI has helped to turn noisy signals from suboptimal recording systems into reliable data, explains Ramses Alcaide, chief executive of Neurable, a neurotech company in Boston, Massachusetts, that specializes in EEG signal processing and sells a headphone-based headset for this purpose. “We’ve made it so that EEG doesn’t suck as much as it used to,” Alcaide says. “Now, it can be used in real-life environments, essentially.”
And there is widespread anticipation that AI will allow further aspects of users’ mental processes to be decoded. For example, Marcello Ienca, a neuroethicist at the Technical University of Munich in Germany, says that EEG can detect small voltage changes in the brain that occur within hundreds of milliseconds of a person perceiving a stimulus. Such signals could reveal how their attention and decision-making relate to that specific stimulus.
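The signals Ienca describes are event-related potentials: stimulus-locked voltage deflections that are tiny relative to background EEG. The standard way to expose them is to slice the recording around each stimulus onset and average the slices, so that unrelated activity cancels out. A sketch of that analysis, with all timings and array names illustrative:

```python
import numpy as np

FS = 256  # Hz, assumed sampling rate

def erp_average(eeg, stim_samples, pre=0.2, post=0.8):
    """Average stimulus-locked epochs to expose an event-related potential.

    eeg          : 1-D array, one EEG channel
    stim_samples : sample indices at which stimuli were presented
    pre, post    : window around each stimulus, in seconds
    """
    n_pre, n_post = int(pre * FS), int(post * FS)
    epochs = []
    for s in stim_samples:
        if s - n_pre < 0 or s + n_post > len(eeg):
            continue  # skip stimuli too close to the recording edges
        epoch = eeg[s - n_pre : s + n_post]
        epoch = epoch - epoch[:n_pre].mean()  # baseline-correct each epoch
        epochs.append(epoch)
    return np.mean(epochs, axis=0)  # noise averages out; the ERP remains

# Synthetic demo: a P300-like bump buried in noise at 64 stimulus times
rng = np.random.default_rng(1)
eeg = rng.standard_normal(FS * 120)          # 2 minutes of 'noise'
stims = rng.integers(FS, FS * 118, size=64)  # 64 stimulus onsets
for s in stims:                              # inject the evoked response
    eeg[s + int(0.25 * FS) : s + int(0.35 * FS)] += 1.0
erp = erp_average(eeg, stims)
print("peak at", (np.argmax(erp) - int(0.2 * FS)) / FS, "s after stimulus")
```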
Although accurate user numbers are hard to gather, many thousands of enthusiasts are already using neurotech headsets. And ethicists say that a big tech company could suddenly catapult the devices to widespread use. Apple, for example, patented a design in 2023 for EEG sensors for future use in its AirPods wireless earphones.
Yet whereas BCIs aimed at the clinic are governed by medical regulations and privacy protections, the consumer BCI space has little legal oversight, says David Lyreskog, an ethicist at the University of Oxford, UK. “There’s a wild west when it comes to the regulatory standards,” he says.
In 2018, Ienca and his colleagues found that most consumer BCIs don’t use secure data-sharing channels or implement state-of-the-art privacy technologies. “I believe that has not changed,” Ienca says. What’s more, a 2024 analysis of the data policies of 30 consumer neurotech companies by the Neurorights Foundation, a non-profit organization in New York City, showed that nearly all had complete control over the data users provided. That means most firms can use the information as they please, including selling it.
Responding to such concerns, the government of Chile and the legislators of four US states have passed laws that give direct recordings of any form of nerve activity protected status. But Ienca and Nita Farahany, an ethicist at Duke University in Durham, North Carolina, fear that such laws are insufficient because they focus on the raw data and not on the inferences that companies can make by combining neural information with parallel streams of digital data. Inferences about a person’s mental health, say, or their political allegiances could still be sold to third parties and used to discriminate against or manipulate a person.
“The data economy, in my view, is already quite privacy-violating and cognitive-liberty-violating,” Ienca says. Adding neural data, he says, “is like giving steroids to the existing data economy.”
Several key international bodies, including the United Nations cultural organization UNESCO and the Organisation for Economic Co-operation and Development, have issued guidelines on these issues. Furthermore, in September, three US senators introduced an act that would require the Federal Trade Commission to review how data from neurotechnology should be protected.
Heading to the clinic
Although implanted BCIs are being developed apace, none has yet been approved for general clinical use. Synchron’s device is closest to the clinic. This relatively simple BCI allows users to select on-screen options by imagining moving their foot. Because it is inserted into a blood vessel on the surface of the motor cortex, it doesn’t require open-brain surgery. It has proved safe, robust and effective in initial trials, and Oxley says Synchron is discussing a pivotal trial with the US Food and Drug Administration that could lead to clinical approval.
Elon Musk’s neurotech firm Neuralink in Fremont, California, has surgically implanted its more complex device in the motor cortices of at least 13 volunteers who are using it to play computer games, for example, and control robotic hands. Company representatives say that more than 10,000 people have joined waiting lists for its clinical trials.
At least five more BCI companies have tested their devices in humans for the first time over the past two years, making short-term recordings (on timescales ranging from minutes to weeks) in people undergoing neurosurgical procedures. Researchers in the field say the first approvals are likely to be for devices in the motor cortex that restore independence to people who have severe paralysis — including BCIs that enable speech through synthetic voice technology.
As for what’s next, Farahany says that moving beyond the motor cortex is a widespread goal among BCI developers. “All of them hope to go back further in time in the brain,” she says, “and to get to that subconscious precursor to thought.”
Last year, Andersen’s group published a proof-of-concept study in which internal dialogue was decoded from the parietal cortex of two participants, albeit with an extremely limited vocabulary. The team has also recorded from the parietal cortex while a BCI user played the card game blackjack (pontoon). Certain neurons responded to the face values of cards, whereas others tracked the cumulative total of a player’s hand. Some even became active when the player decided whether to stick with their current hand or take another card.
Both Oxley and Matt Angle, chief executive of BCI company Paradromics, based in Austin, Texas, agree that BCIs in brain regions other than the motor cortex might one day help to diagnose and treat psychiatric conditions. Maryam Shanechi, an engineer and computer scientist at the University of Southern California in Los Angeles, is working towards this goal — in part by aiming to identify and monitor neural signatures of psychiatric diseases and their symptoms.
BCIs could potentially track such symptoms in a person, deliver stimulation that adjusts neural activity and quantify how the brain responds to that stimulation or other interventions. “That feedback is important, because you want to precisely tailor the therapy to that individual’s own needs,” Shanechi says.
Shanechi does not yet know whether the neural correlates of psychiatric symptoms will be trackable across many brain regions or whether they will require recording from specific brain areas. Either way, a central aspect of her work is building foundation models of brain activity. Such models, constructed by training AI algorithms on thousands of hours of neural data from numerous people, would in theory be generalizable across individuals’ brains.
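As a rough illustration of the concept, such a foundation model might pair a shared encoder, pretrained with a self-supervised objective on recordings from many people, with small per-subject adapters that absorb differences in electrode count and placement. Everything in the sketch below (the masked-prediction objective, the layer sizes, the names) is an assumption for illustration, not Shanechi’s or anyone else’s actual design.

```python
# Hedged sketch of the 'foundation model' idea: a shared encoder trained by
# masked prediction across subjects, with per-subject input adapters.
import torch
import torch.nn as nn

D_MODEL = 128

class SubjectAdapter(nn.Module):
    """Maps one subject's channel layout into a shared latent space."""
    def __init__(self, n_channels):
        super().__init__()
        self.proj = nn.Linear(n_channels, D_MODEL)
    def forward(self, x):          # x: (batch, time, channels)
        return self.proj(x)

shared_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(D_MODEL, D_MODEL)  # reconstructs the masked latent frames

def masked_pretrain_step(adapter, x, optimizer, mask_frac=0.15):
    """One self-supervised step: hide random time steps, predict them back."""
    z = adapter(x)                               # into the shared space
    target = z.detach()
    mask = torch.rand(z.shape[:2]) < mask_frac   # (batch, time)
    z = z.masked_fill(mask.unsqueeze(-1), 0.0)   # zero out masked frames
    pred = head(shared_encoder(z))
    loss = ((pred - target)[mask] ** 2).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Two hypothetical subjects with different electrode counts share the encoder
adapters = {"s1": SubjectAdapter(64), "s2": SubjectAdapter(96)}
params = list(shared_encoder.parameters()) + list(head.parameters())
for a in adapters.values():
    params += list(a.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
loss = masked_pretrain_step(adapters["s1"], torch.randn(8, 100, 64), optimizer)
```

The point of the shared encoder is that, after pretraining, a new user would in theory need only a lightweight adapter and a short calibration session, rather than thousands of hours of their own data.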
Synchron is also using the learning potential of AI to build foundation models, in collaboration with the AI and chip company NVIDIA in Santa Clara, California. Oxley says these models are revealing unexpected signals in what was thought to be noise in the motor cortex. “The more we apply deeper learning techniques,” he says, “the more we can separate out signal from noise. But it’s not actually signal from noise, it’s signal from signal.”
Oxley predicts that BCI data integrated with multimodal streams of digital data will increasingly be able to make inferences about people’s inner lives. After evaluating that data, a BCI could respond to thoughts and wants — potentially subconscious ones — in ways that might nudge thinking and behaviour.
Shanechi is sceptical. “It’s not magic,” she says, emphasizing that what BCIs can detect and decode is limited by the training data, which is challenging to obtain.
The I in AI
In unpublished work, researchers at Synchron have found that, like Andersen’s team, they can decode a type of preconscious thought with the help of AI. In this case, it’s an error signal that happens just before a user selects an unintended on-screen option. That is, the BCI recognizes that the person has made a mistake slightly before the person is aware of their mistake. Oxley says the company must now decide how to use this insight.
“If the system knows you’ve just made a mistake, then it can behave in a way that is anticipating what your next move is,” he says. Automatically correcting mistakes would speed up performance, he says, but would do so by taking action on the user’s behalf.
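The policy Oxley describes reduces to a simple rule: if a decoded error signal arrives within a short window after a selection, undo that selection before the user notices. The sketch below is purely illustrative; the classifier, thresholds and timings of Synchron’s unpublished system are not public.

```python
# Illustrative policy: silently undo a selection that the decoder flags as
# unintended. All names, thresholds and timings here are assumptions.
from dataclasses import dataclass
from typing import Callable

ERROR_WINDOW_MS = 300   # error signal, if any, arrives soon after selection
THRESHOLD = 0.8         # confidence required before acting for the user

@dataclass
class Selection:
    option: str
    t_ms: float  # when the selection was made

def maybe_autocorrect(selection: Selection, error_prob: float,
                      now_ms: float, undo: Callable[[Selection], None]) -> bool:
    """Undo a just-made selection if the decoder flags it as unintended.

    error_prob is assumed to come from a trained classifier scoring the
    neural activity recorded immediately after the selection.
    """
    in_window = (now_ms - selection.t_ms) <= ERROR_WINDOW_MS
    if in_window and error_prob > THRESHOLD:
        undo(selection)   # the system acts before the user notices the slip
        return True
    return False

# Usage: a decoded error signal 120 ms after picking the wrong menu item
sel = Selection(option="CALL NURSE", t_ms=1000.0)
corrected = maybe_autocorrect(sel, error_prob=0.93, now_ms=1120.0,
                              undo=lambda s: print(f"undid {s.option!r}"))
```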
Although this might prove uncontroversial for BCIs that record from the motor cortex, what about BCIs that are inferring other aspects of a person’s thinking? Oxley asks: “Is there ever going to be a moment at which the user enables a feature to act on their behalf without their consent?”
Angle says that the addition of AI has introduced an “interesting dial” that allows BCI users to trade off agency and speed. When users hand over some control, such as when brain data are limited or ambiguous, “will people feel that the action is disembodied, or will they just begin to feel that that was what they wanted in the first place?” Angle asks.
Farahany points to Neuralink’s use of the AI chatbot Grok with its BCI as an early example of the potentially blurry boundaries between person and machine. One research volunteer who is non-verbal can generate synthetic speech at a typical conversational speed with the help of his BCI and Grok. The chatbot suggests and drafts replies that help to speed up communication.
Although many people now use AI to draft e-mail and other responses, Farahany suspects that a BCI-embedded AI chatbot that mediates a person’s every communication is likely to have an outsized influence over what a user ends up saying. This effect would be amplified if an AI were to act on intentions or preconscious ideas. The chatbot, with its built-in design features and biases, she argues, would mould how a person thinks. “What you express, you incorporate into your identity, and it unconsciously shapes who you are,” she says.
Farahany and her colleagues argued in a July preprint for a new form of BCI regulation that would give developers in both experimental and consumer spaces a legal fiduciary duty to users of their products. As happens with a lawyer and their client, or a physician and their patient, the BCI developers would be duty-bound to act in the user’s best interests.
Previous thinking about neurotech, she says, was centred mainly on keeping users’ brain data private, to prevent third parties from accessing sensitive personal information. Going forward, the questions will be more about how AI-empowered BCI systems work in full alignment with users’ best interests.
“If you care about mental privacy, you should care a lot about what happens to the data when it comes off of the device,” she says. “I think I worry a lot more about what happens on the device now.”
This article is reproduced with permission and was first published on November 19, 2025.
