Silicon Valley startup Sabi is developing a noninvasive brain-computer interface (BCI) designed to decode a person’s imagined speech into text on a computer screen. The company’s first product is a brain-reading beanie planned for availability by the end of the year, with a baseball cap version also in development. The system aims to enable users to type by thinking—an alternative to dictation that, if successful, could change how people interact with computers.
From speech-to-text to thought-to-text
Speech-to-text technology is now built into modern computers. Sabi’s approach differs by translating internal speech—speech a person imagines rather than speaks aloud—into words displayed on a screen.
The system uses a technology category known as a brain-computer interface, or BCI. A BCI provides a direct communication pathway between the brain and an external device. Sabi CEO Rahul Chhabra positions the company’s wearable as an alternative to BCIs that require surgery, potentially enabling broader consumer use.
Sabi’s approach contrasts with Neuralink, which is developing surgically implanted BCIs for people with severe motor disabilities. By avoiding surgery, a wearable device could open BCI technology to a far broader audience than implanted brain chips can reach.
Why noninvasive EEG is challenging—and Sabi’s scaling approach
Sabi’s brain-reading hat relies on EEG, or electroencephalography, which records the brain’s electrical activity using metal disks placed on the scalp. Decoding imagined speech from EEG signals is already possible, but existing approaches are limited to small sets of words or commands rather than continuous, natural speech.
A key technical challenge is inherent to the noninvasive approach. Wearable systems must capture brain signals through a layer of skin and bone, which dampens neural signals. By comparison, surgically implanted devices pick up much stronger signals because they sit closer to neurons.
Sabi’s response is to improve signal quality by increasing sensor density. Chhabra argues that the way to boost accuracy with a wearable is to massively scale up the number of sensors. Most EEG devices have a dozen to a few hundred sensors, while Sabi’s cap is expected to have anywhere from 70,000 to 100,000 miniature sensors.
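To put those sensor counts in perspective, a rough back-of-the-envelope calculation shows how much finer the spatial sampling becomes. The scalp surface area used below (roughly 600 cm²) is an illustrative assumption, not a figure from Sabi:

```python
import math

# Illustrative assumption (not from Sabi): adult scalp surface area
# is commonly estimated at roughly 600 cm^2.
SCALP_AREA_CM2 = 600.0

def sensor_spacing_cm(n_sensors: int, area_cm2: float = SCALP_AREA_CM2) -> float:
    """Approximate center-to-center spacing if sensors tile the area evenly."""
    return math.sqrt(area_cm2 / n_sensors)

# Compare a conventional EEG cap to the sensor counts Sabi describes.
for n in (64, 256, 70_000, 100_000):
    print(f"{n:>7} sensors -> ~{sensor_spacing_cm(n) * 10:.1f} mm spacing")
```

Under these assumptions, a 64-channel cap samples the scalp at roughly 3 cm intervals, while a 100,000-sensor array would sample at sub-millimeter spacing, which is the kind of spatial density Chhabra's claim depends on.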
Chhabra describes the expected effect of that high-density setup: “Given that high-density sensing, it pinpoints exactly what and where neural activity is happening. We use that information to get much more reliable data to decode what a person is thinking.” The engineering premise is that a wearable can compensate for signal damping by using a denser sensing array to capture richer spatial information.
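One statistical intuition behind "more sensors, more reliable data" is noise averaging: if many sensors observe the same underlying signal with independent noise, averaging their readings shrinks the noise by roughly the square root of the sensor count. The toy simulation below illustrates only that general principle; it is not Sabi's method, and real EEG noise is partly correlated across sensors, so the actual gain would be smaller:

```python
import random
import statistics

random.seed(0)

def noise_after_averaging(n_sensors: int, trials: int = 500) -> float:
    """Std-dev of the averaged reading around the true signal (here, 0),
    assuming each sensor adds independent unit-variance Gaussian noise."""
    readings = [
        statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n_sensors))
        for _ in range(trials)
    ]
    return statistics.pstdev(readings)

# Residual noise falls roughly as 1/sqrt(N) under these assumptions.
for n in (1, 64, 4096):
    print(f"N={n:>5}: residual noise ~ {noise_after_averaging(n):.3f}")
```

Under this idealized independence assumption, going from 64 sensors to tens of thousands would cut residual noise by more than an order of magnitude, which is one way a dense array could partially offset the attenuation from skin and bone.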
Analysis: This sensor-scaling strategy suggests Sabi is treating the core challenge as an instrumentation problem as much as a decoding problem. If the system can produce reliable features from EEG collected under skin and bone attenuation, high-density sensing could become a general approach for other noninvasive BCI use cases. However, published performance metrics and benchmarks will be important to assess as the product approaches its stated timeline.
BCI adoption hinges on being noninvasive
Venture capitalist Vinod Khosla, an early investor in OpenAI and founder of Khosla Ventures, argues that noninvasive, wearable BCIs are necessary for widespread adoption. Khosla states: “The biggest and baddest application of BCI is if you can talk to your computer by thinking about it.” He adds: “If you’re going to have a billion people use BCI for access to their computers every day, it can’t be invasive.”
This framing links product design to market size: even if implanted BCIs are technically capable, a wearable approach may be necessary to reach mass-market usage. For technology planning, this shifts focus from clinical-grade signal acquisition to consumer-grade usability—comfort, fitting, and daily wearability—while maintaining sufficient decoding accuracy.
Analysis: Khosla’s argument is a market adoption claim about what users will accept, rather than a technical claim about EEG decoding. If Sabi’s beanie can deliver usable typing performance without surgery, it could accelerate broader adoption of BCIs as an input method rather than a specialized medical device. How quickly users can learn the system, and whether decoding performance holds up across different users, remains to be demonstrated.
Typing speed targets: a starting point for thought-based input
Sabi’s initial goal is a typing speed of 30 words per minute, slower than typical keyboard typing. Chhabra indicates that speed should improve as users spend more time with the cap, suggesting an iterative learning process between the user and the decoding system.
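For a sense of what 30 words per minute means in practice, it can be converted to character throughput. The conversion below uses the conventional 5-characters-per-word definition from typing measurements, and 40 wpm as a commonly cited average typing speed; neither figure comes from Sabi:

```python
# Conventional definition used in typing-speed measurements: one "word"
# is 5 characters (an assumption, not a figure from Sabi).
CHARS_PER_WORD = 5

def wpm_to_cps(wpm: float) -> float:
    """Convert words per minute to characters per second."""
    return wpm * CHARS_PER_WORD / 60.0

print(f"Sabi target, 30 wpm -> {wpm_to_cps(30):.2f} chars/sec")
print(f"Typical,   ~40 wpm -> {wpm_to_cps(40):.2f} chars/sec")
```

At 2.5 characters per second, the target is well below fast keyboard typing but comfortably above many assistive input methods, which is consistent with framing it as a starting point rather than a ceiling.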
Analysis: For BCI systems, the gap between “it works” and “it’s practical” often depends on throughput, error rates, and user training. The stated 30-words-per-minute target functions as a benchmark for what Sabi believes is sufficient for an early consumer experience—particularly if the value proposition is immediate access to computer input without dictation.
With a beanie planned for release by the end of the year and a cap version in development, Sabi’s next steps will likely focus on engineering the high-density sensor array, validating EEG-based imagined speech decoding beyond limited vocabularies, and demonstrating that the system can sustain improved performance over time. The core question is whether noninvasive EEG can be made capable enough—through sensor scaling and decoding improvements—to support everyday interaction with computers.
Source: WIRED