OpenAI is preparing to launch a specialized cybersecurity model called GPT-5.5-Cyber, with access restricted to a select group of vetted “cyber defenders” rather than the general public. CEO Sam Altman announced the model on X in April 2026, saying the rollout would begin “in the next few days.”
Altman said OpenAI would “work with the entire ecosystem and the government to figure out trusted access for Cyber.” The intended purpose is to help institutions strengthen their cyberdefenses, though OpenAI has not released technical details about the model’s capabilities. Based on its name, GPT-5.5-Cyber appears to be a specialized version of GPT-5.5, which OpenAI recently described as its “smartest and most intuitive to use model yet.”
It is not clear which individuals or organizations will receive access first. Previous “trusted access” schemes from OpenAI have involved vetted professionals and institutions.
The restricted launch follows a pattern OpenAI has used before. The company has staggered the release of earlier cybersecurity-focused models, and applied a similar approach to GPT-Rosalind, a purpose-built life sciences model aimed at biology research and drug discovery.
The move also reflects a broader trend in the AI industry of companies limiting access to certain models, citing the potential for misuse. Anthropic took a comparable approach with its Claude Mythos model this month, though that rollout drew significant attention and ran into problems with its secure release. The White House has since opposed plans to expand access to Mythos further, with unnamed officials citing cybersecurity concerns and worries that increased demand could limit the government’s own ability to use the system, according to The Wall Street Journal.
The restricted rollout of GPT-5.5-Cyber suggests that access to advanced AI tools in sensitive domains may increasingly be mediated through government and institutional gatekeeping, at least in the near term.
Source: The Verge