In brief
- Boston startup AlterEgo unveiled a wearable that lets users communicate silently with machines by decoding neuromuscular signals in the jaw and throat.
- The technology builds on a 2018 MIT Media Lab prototype that showed subvocal speech could be captured and translated with high accuracy.
- AlterEgo positions its non-invasive approach as a practical alternative to brain implants such as Neuralink's or EMG wristbands from Meta.
A Boston startup called AlterEgo on Monday unveiled a wearable device that allows users to communicate silently with computers, marking the first serious attempt to commercialize a revolutionary technology pioneered at the MIT Media Lab.
The device, described by the company as a “near-telepathic” interface, does not read brain activity. Instead, it detects faint neuromuscular signals in the face and throat when a person internally verbalizes words. Those signals are decoded by machine learning software and transmitted as commands or text. Responses are delivered privately through bone-conduction audio.
The news was first reported by Axios and shared by the company's founder, Arnav Kapur, on X.
Introducing Alterego: the world’s first near-telepathic wearable that enables silent communication at the speed of thought.
Alterego makes AI an extension of the human mind.
We’ve made several breakthroughs since our work started at MIT.
We’re announcing those today. pic.twitter.com/KX5mxUIBAk
— alterego (@alterego_io) September 8, 2025
The approach builds on research first presented at MIT in 2018, when Kapur, then a graduate student, introduced a prototype headset under the same name. That version demonstrated that subvocal speech—words uttered in silence—could be captured with sufficient accuracy to control simple systems. The lab positioned it as a potential aid for people with speech impairments, while also suggesting broader applications in human-computer interaction.
AlterEgo has not disclosed details about funding, launch timing, or commercialization strategy, but the company will present the technology publicly at the Axios AI+ Summit in Washington, D.C., on Sept. 17.
The system draws on several existing strands of research. Electromyography, or EMG, has long been used in prosthetics to capture muscle impulses for controlling artificial limbs; AlterEgo applies the same principle to the muscles involved in speech. The U.S. military supported similar “subvocal speech” experiments in the 2000s, though early prototypes were bulky and inaccurate. Bone-conduction audio, which transmits sound through vibrations in the skull, is a well-established technology in consumer headsets and hearing aids.
What sets AlterEgo apart is the integration of these elements into a discreet, wearable package with improved machine learning that can parse silent speech in real time. Unlike invasive brain–computer interfaces such as Neuralink, or non-invasive EEG caps that attempt to interpret brain waves, AlterEgo does not attempt to decode thought directly. It registers only intentional motor signals, a distinction the company emphasizes as a safeguard for user privacy.
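The decoding step described above can be sketched in miniature: features are extracted from windows of muscle-signal data and matched against learned word templates. The vocabulary, electrode count, feature choice, and nearest-centroid classifier below are illustrative assumptions for a toy demonstration, not AlterEgo's actual design.

```python
# Illustrative sketch (not AlterEgo's method): mapping windows of
# simulated neuromuscular (EMG-like) data to a tiny silent-speech
# vocabulary with a nearest-centroid classifier.
import numpy as np

VOCAB = ["yes", "no", "next", "stop"]   # hypothetical command vocabulary
N_ELECTRODES = 7                        # hypothetical electrode count

# Pretend these per-word feature "signatures" were learned from training data.
rng = np.random.default_rng(0)
centroids = rng.uniform(0.1, 1.0, size=(len(VOCAB), N_ELECTRODES))

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce a (samples, electrodes) signal window to one RMS-energy
    feature per electrode."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def decode(features: np.ndarray) -> str:
    """Return the vocabulary word whose learned signature is closest
    (Euclidean distance) to the observed feature vector."""
    dists = np.linalg.norm(centroids - features, axis=1)
    return VOCAB[int(np.argmin(dists))]

# Simulate a 100-sample window whose signal matches the "next" signature.
window = np.tile(centroids[2], (100, 1))
print(decode(extract_features(window)))  # prints "next"
```

A production system would replace the hand-rolled classifier with a trained sequence model operating on streaming data, but the shape of the pipeline (window, featurize, classify) is the same.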
If successful, the device could reshape how people interact with artificial intelligence systems and connected devices by creating a channel for communication without keyboards, touchscreens or spoken voice. For consumers, it could mean whispering a command to an AI assistant in a crowded room without being overheard. For individuals with speech impairments, it may offer a new way to interact with the world.
The company enters a field that is attracting attention from major players. Elon Musk’s Neuralink is pursuing invasive brain implants with a focus on medical applications. Meta has explored wristbands that detect EMG signals in the forearm to control augmented-reality systems, while Apple and Google continue to invest in wearable interfaces tied to voice and gesture.
AlterEgo’s bet is that a lightweight, non-invasive system will prove more practical—and more acceptable to consumers—than implants or bulky hardware.