Onix is launching what WIRED describes as a “Substack of bots,” a platform that lets people pay for AI versions of human experts to dispense advice 24/7. The model centers on converting an expert’s knowledge into a revenue-generating asset that earns independently of the expert’s time, while using product design elements such as disclaimers and privacy protections to distinguish guidance from medical treatment. The concept is not entirely new, but Onix’s focus on health and wellness, along with its early roster of vetted experts, shows how the “chatbot-as-a-persona” model is moving from novelty to a structured business platform.
A platform built around AI representations of experts
WIRED reports that Onix is not introducing a fundamentally unfamiliar idea: chatbots standing in for human experts are already common, and the business case for monetizing them is established. What Onix is formalizing is a distribution and monetization model that resembles a subscription publishing platform—described as a “Substack of bots”—where users interact with AI representations of specific people rather than generic assistants.
The economic framing in WIRED’s report draws directly from an Onix white paper, which states: “The expert’s knowledge base becomes a capital asset that generates revenue independent of their time.” In practice, this points to a technical and operational shift: instead of treating expert advice as a service delivered on demand, the platform treats the expert’s knowledge as something that can be ingested, represented, and then accessed repeatedly by many users.
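WIRED’s excerpt does not describe Onix’s architecture, but the ingest-once, serve-many pattern the white paper implies can be sketched. The Python below is a hypothetical illustration: the class, the sample passages, and the keyword-overlap retrieval are all assumptions standing in for whatever ingestion and retrieval stack Onix actually uses.

```python
# Hypothetical sketch -- WIRED's report does not describe Onix's architecture.
# It illustrates the pattern the white paper's framing implies: an expert's
# material is ingested once, then queried repeatedly by many users.
import re
from collections import Counter

def _tokens(text: str) -> Counter:
    """Crude tokenizer; a real system would use embeddings, not keywords."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

class ExpertKnowledgeBase:
    def __init__(self, expert_name: str):
        self.expert_name = expert_name
        self.passages: list[str] = []

    def ingest(self, passages: list[str]) -> None:
        # One-time capture of the expert's material (books, talks, notes).
        self.passages.extend(passages)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Runs repeatedly for many users, with no further expert time.
        q = _tokens(query)
        scored = [
            (sum((q & _tokens(p)).values()), p) for p in self.passages
        ]
        return [p for score, p in sorted(scored, reverse=True) if score > 0][:k]

# Ingest once...
kb = ExpertKnowledgeBase("Dr. Example")  # illustrative name, not a real expert
kb.ingest([
    "Excessive screen time can displace sleep in adolescents.",
    "Families benefit from shared media plans with clear limits.",
])

# ...then answer many users' questions without the expert present.
print(kb.retrieve("How does screen time affect sleep?"))
```

The relevant property is the asymmetry: ingest runs once per expert, while retrieval can serve an unbounded number of paying users, which is what lets the knowledge base behave like the “capital asset” the white paper describes.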
WIRED notes that the company’s longer-term plan is to scale to “many thousands of experts” on the platform. For now, Onix is starting with a “highly vetted group of 17,” with a “concentration on health and wellness.” This early constraint matters technologically because it suggests Onix is iterating on quality control, training scope, and interface behaviors in a narrower domain before expanding to other categories.
Early roster includes marketers and influencers
WIRED reports that most of Onix’s initial experts have professional credentials, but are also “notable as marketers and influencers.” The article notes that some have books or podcasts to promote, and some sell “supplements or medical devices.” This matters for the technology because a persona-based AI system is not only answering questions; it is also operating as a branded interface. When expert identity and product promotion are intertwined, the system’s responses, disclaimers, and boundaries become part of the product’s technical design.
One example WIRED provides is Michael Rich, who counsels children and parents on media overuse and its effects. WIRED reports that in the platform’s chats, Rich’s views on screen time “dominate.” This detail suggests a specific kind of knowledge representation: the AI persona likely reflects the expert’s emphasis areas in a way that users can quickly perceive, which raises questions about how Onix structures the expert’s “knowledge base” and how it prioritizes topics during conversation.
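Nothing in WIRED’s report specifies how Onix weights topics, but the observation that Rich’s signature themes “dominate” is consistent with some form of emphasis weighting during response construction. A minimal, purely hypothetical sketch, with illustrative weights that are not Onix’s:

```python
# Hypothetical sketch of persona emphasis: one way an expert's known
# focus areas could be made to "dominate" what a persona surfaces.
EMPHASIS_WEIGHTS = {   # illustrative values, not drawn from WIRED or Onix
    "screen time": 3.0,
    "media use": 2.0,
    "sleep": 1.5,
}

def emphasis_score(passage: str, base_score: float) -> float:
    """Boost passages touching the expert's signature topics."""
    text = passage.lower()
    boost = sum(w for topic, w in EMPHASIS_WEIGHTS.items() if topic in text)
    return base_score * (1.0 + boost)

candidates = [
    ("Screen time near bedtime delays sleep onset.", 1.0),
    ("General nutrition advice for teens.", 1.0),
]
ranked = sorted(candidates, key=lambda c: emphasis_score(*c), reverse=True)
print(ranked[0][0])  # the screen-time passage outranks the generic one
```

Whether Onix achieves this effect through curated source material, retrieval weighting, or prompt instructions is not something the article answers.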
Guidance versus treatment: how disclaimers shape user expectations
WIRED’s report emphasizes that Onix is positioning its AI systems as guidance rather than medical care. When WIRED spoke to Rich, he said he agreed to transfer his knowledge to Onix partly because of its “privacy protections” and partly because of the company’s “clear communication that it doesn’t provide actual medical treatments.” Rich is quoted as saying: “It’s about helping folks understand exactly what may be going on for them and how they might pursue seeking therapy if they need it.”
Bennahum is also quoted in WIRED’s report clarifying how the platform should be interpreted: “It’s meant to augment [a user’s] ability to be thoughtful around whatever pediatric journey they’re on.” The same passage notes that engaging with a bot representing a pediatrician is “in no way akin to a doctor’s visit.”
Technically, this framing is not just language; it affects how the system should behave. WIRED notes that “a disclaimer appears when you access the system noting you are receiving guidance, not medical treatment.” For AI systems operating in health-adjacent contexts, these disclaimers function as part of the interaction design, setting constraints on user interpretation and potentially shaping what the model is allowed or expected to do within the conversation.
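WIRED confirms only that a disclaimer appears on access; how any boundary is enforced in conversation is not described. As a hedged sketch, a health-adjacent persona system might pair the disclaimer with a simple scope check along these lines (the marker list and function names are illustrative assumptions, not Onix’s):

```python
# Hypothetical sketch: WIRED confirms only the access-time disclaimer.
# The scope check is an assumption about how such a boundary could be coded.
DISCLAIMER = "You are receiving guidance, not medical treatment."

TREATMENT_MARKERS = ("prescribe", "dosage", "diagnose")  # illustrative list

def start_session() -> str:
    # Shown when the user accesses the system, per WIRED's description.
    return DISCLAIMER

def answer_from_knowledge_base(user_message: str) -> str:
    # Stand-in for the persona's normal retrieval-backed reply.
    return f"Guidance based on the expert's materials about: {user_message}"

def handle_turn(user_message: str) -> str:
    text = user_message.lower()
    if any(marker in text for marker in TREATMENT_MARKERS):
        # Out of scope: redirect rather than act as a clinician.
        return ("I can't provide medical treatment. "
                "Consider discussing this with a licensed clinician.")
    return answer_from_knowledge_base(user_message)

print(start_session())
print(handle_turn("Can you prescribe something for my child's sleep?"))
```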
WIRED’s report includes explicit examples of intended boundaries: Rich’s and Bennahum’s statements reinforce that the platform is “about” understanding and next steps, not diagnosing or treating. While WIRED’s excerpt does not detail every technical mechanism behind those boundaries, the disclaimers and communications it describes suggest Onix is treating “medical versus non-medical” scope as a product requirement.
Industry context: turning expert time into scalable AI revenue
WIRED situates Onix within a broader pattern by pointing to an existing business model: Manhattan psychologist Becky Kennedy has a parenting advice business featuring a chatbot named Gigi trained on her “acumen and knowledge.” WIRED reports that Kennedy’s company “pulled in $34 million last year.” The article uses this as an example of how expert-persona chatbots can become financially material, not just experimental.
That comparison matters because it frames Onix as a continuation of a trend rather than a one-off product. If an expert-to-bot model can reach large revenues—WIRED cites $34 million for Kennedy’s company—then platforms like Onix can be seen as attempting to standardize the process: vet experts, package their knowledge into a conversational interface, and deliver it through a subscription-like experience.
At the same time, Onix’s emphasis on privacy protections (as Rich describes them) highlights a persistent technical and compliance consideration for AI systems that represent real people. WIRED does not provide detailed information in the excerpt about what those protections entail, but the fact that the expert cites privacy protections as a reason to participate suggests that Onix’s data handling and access controls are part of the trust proposition.
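Since WIRED’s excerpt does not say what those protections entail, any concrete detail here would be speculation. Purely as a generic pattern for systems that represent real people, privacy protection often starts with siloing user transcripts, as in this assumed sketch (the store and its policy are illustrative, not Onix’s):

```python
# Purely illustrative: WIRED does not describe Onix's data handling.
# A common baseline is keeping each user's transcripts private to that user,
# including from the expert whose persona answered.
from dataclasses import dataclass, field

@dataclass
class ChatStore:
    _logs: dict[str, list[str]] = field(default_factory=dict)

    def append(self, user_id: str, message: str) -> None:
        self._logs.setdefault(user_id, []).append(message)

    def read(self, requester_id: str, user_id: str) -> list[str]:
        # Users read only their own transcripts; other accounts see none.
        if requester_id != user_id:
            raise PermissionError("transcripts are private to the user")
        return list(self._logs.get(user_id, []))

store = ChatStore()
store.append("user-1", "My kid games five hours a day.")
print(store.read("user-1", "user-1"))   # allowed
# store.read("expert-1", "user-1")      # would raise PermissionError
```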
Looking ahead, observers may watch how Onix scales from 17 experts to “many thousands,” particularly in domains beyond health and wellness. WIRED’s focus on health and wellness in the initial lineup suggests that the company may be refining a repeatable approach for knowledge ingestion, persona behavior, and user boundary-setting. If that approach generalizes, the “capital asset” framing in the Onix white paper could become a template for other knowledge-heavy industries—though the extent to which that happens would likely depend on how clearly platforms can maintain distinctions between informational guidance and professional services.
Source: WIRED