A group of amateur sleuths on Discord gained unauthorized access to Anthropic’s Mythos Preview, a restricted AI model the company has carefully limited because of its reported capability for finding security vulnerabilities in software and networks. Bloomberg first reported the breach, which was covered in a WIRED security roundup published on April 25, 2026.
The Discord users pieced together access through relatively straightforward methods. They examined data from a recent breach of Mercor, an AI training startup that works with developers, and guessed Mythos Preview’s online location based on the URL formatting conventions Anthropic uses for its other models. At least one person in the group also reportedly leveraged existing permissions held through work with an Anthropic contracting firm, ultimately gaining access not only to Mythos but to other unreleased Anthropic models as well.
According to Bloomberg, the group has so far used the access only to build simple websites — a deliberate choice, the report suggests, to avoid detection by Anthropic rather than to exploit the model’s capabilities.
The incident highlights gaps in access controls around powerful AI tools, even when their creators take deliberate steps to restrict distribution. Anthropic had positioned Mythos Preview as a highly capable system, which is why it limited the model’s release in the first place. That unauthorized users reached it without sophisticated hacking techniques raises questions about how AI companies secure early-stage or restricted models.
Separately, Mozilla disclosed this week that it used early access to Anthropic’s Mythos Preview to identify and fix 271 vulnerabilities in its Firefox 150 browser release, underscoring the model’s intended use case in legitimate security research.
Source: WIRED