AI wearables have tended to oscillate between two extremes: devices that attempt to be “always on,” and gadgets whose deliberately narrow scope still fails to deliver consistently. A new hardware approach from former Apple Vision Pro developers aims to take a different path—an AI chatbot embedded in a small “puck” that only activates when the user presses a physical button.
The Device
According to WIRED, Chris Nolet and Ryan Burgoyne—former Apple employees who worked on the Apple Vision Pro—are behind a Y Combinator–backed startup called Button. The device is available for preorder at $179 and is scheduled to ship in December. It is designed to address two issues that have affected many AI gadgets: privacy expectations and the practical moment-to-moment experience of when the device listens.
Button’s core concept is straightforward: an AI hardware puck housing a generative AI chatbot in a case that looks like an iPod Shuffle. The user interface is intentionally physical. The device is a button; pressing it enables the chatbot to listen, after which it can answer questions and take commands.
According to WIRED, the chatbot can respond out loud, or it can connect to earbuds or smart glasses via Bluetooth. This combination suggests the system is designed to fit into existing audio workflows rather than requiring a dedicated speaker experience at all times. The technology is centered on voice interaction, but the output path is flexible.
Button’s design includes a “brushed aluminum tin” look, according to WIRED. That styling choice aligns with the product’s stated goal: making the activation moment unambiguous through a tactile control.
Privacy by Design: No Passive Listening
The most explicit differentiator described in WIRED’s report is privacy. Button is described as a device that only works when you push the button, and therefore does not listen passively to everything around it.
This “push-to-talk” behavior is framed as a response to a recurring concern with wearable AI: whether microphones and sensors are continuously capturing conversations without clear user control. According to WIRED, Nolet ties the design to a real-world experience. He says he met and talked with someone who later turned out to have been recording their entire conversation with a wearable device.
As reported by WIRED, Nolet says: “It really freaked me out.” He continues: “It’s one thing if I make a conscious decision to share something, but that’s totally a different thing. If people are just wearing around these pendants, or they’re recording all of our conversations, I think it feels a little icky to me.”
For tech readers, the key point is that Button’s privacy posture is operational behavior rather than a software setting or a legal promise. If the device truly only enables listening when the button is pressed, the system’s data capture window becomes tied to a user action rather than an ambient, continuous state.
Comparison to Other AI Wearables
WIRED places Button in the context of earlier attempts at wearable AI. The most direct comparison is the Humane Ai Pin, a wearable device released in 2024 that was billed as a smartphone replacement but, according to the report, failed to deliver on its promises and was shut down a year later.
Button’s creators, as WIRED reports, are not replicating that level of ambition. Instead, the report describes Button’s differentiators as privacy and immediacy. Immediacy refers to the activation flow: the user presses the button, then the chatbot listens and responds. This is a narrower interaction model than a device that attempts to interpret continuous context.
From a technology standpoint, this reframes the product challenge. Instead of solving every aspect of “assistant” behavior across a device’s always-on life, Button concentrates on a single loop: activation → listening → response. If the loop is fast and consistent, the device could feel more predictable even if it does not attempt to replace a full smartphone interface.
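That single loop can be sketched as a small state machine in which the microphone’s capture window opens only on an explicit press and closes after one utterance. This is a hypothetical illustration of the push-to-talk pattern, not Button’s actual firmware; the class and method names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class PushToTalkAssistant:
    """Sketch of a push-to-talk loop: audio is only captured
    between a button press and the end of one utterance."""
    transcript_log: list = field(default_factory=list)
    listening: bool = False

    def press(self) -> None:
        # The user action is what opens the capture window.
        self.listening = True

    def hear(self, audio: str):
        # Audio arriving outside the capture window is never recorded.
        if not self.listening:
            return None
        self.transcript_log.append(audio)
        self.listening = False  # window closes after one utterance
        return self.respond(audio)

    def respond(self, utterance: str) -> str:
        # Placeholder for the generative model call.
        return f"Heard: {utterance}"

assistant = PushToTalkAssistant()
assistant.hear("ambient chatter")   # ignored: no press, nothing logged
assistant.press()
assistant.hear("what's the weather?")  # captured and answered
```

The design choice this illustrates is that privacy becomes a structural property of the loop: there is simply no code path that records audio without a preceding press.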
What to Watch as Button Ships
Button is positioned as a preorder item at $179, with a December shipping timeline, per WIRED. For the technology community, the most relevant questions will likely center on the behavior that differentiates it: does listening truly occur only after the button press, and how does Bluetooth audio integration perform when the chatbot responds out loud or routes output to earbuds and smart glasses?
WIRED’s framing highlights a broader theme in AI hardware: the way a device handles when it listens can be as important as what it can say. If Button’s push-to-talk model makes privacy more legible to users and bystanders, it could influence how future wearable AI products balance ambient sensing with user consent. WIRED does not provide additional technical specifications beyond the described interaction model, so real-world testing will likely determine how well the design translates into day-to-day use.
For anyone tracking the evolution of AI wearables, Button’s approach demonstrates how a hardware interface can reshape an AI system’s operating assumptions. Instead of asking users to interpret a device’s intent from its presence, the technology asks them to initiate the interaction through a physical control.
Source: WIRED