Google has identified and disrupted what it says is the first known zero-day exploit developed with the assistance of artificial intelligence. The discovery, reported in May 2026 by Google’s Threat Intelligence Group (GTIG), revealed that “prominent cyber crime threat actors” were preparing to use the vulnerability in a “mass exploitation event.”
The exploit targeted an unnamed open-source, web-based system administration tool and was designed to bypass two-factor authentication by taking advantage of “a high-level semantic logic flaw where the developer hardcoded a trust assumption” in the platform’s 2FA system.
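Google did not name the tool or publish details of the flaw, but a "hardcoded trust assumption" in a 2FA flow typically means the code skips the second factor whenever some attacker-influenceable signal is present. The sketch below is purely illustrative: the function names, the `X-Internal-Client` header, and the OTP check are all hypothetical stand-ins, not details from the actual exploit.

```python
# Hypothetical sketch of a hardcoded trust assumption in a 2FA check.
# None of these names come from the actual (unnamed) vulnerable tool.

def check_otp(user: str, otp: str) -> bool:
    """Stand-in for a real time-based one-time-password check."""
    return otp == "123456"

def verify_login(user: str, password_ok: bool, otp, request_headers: dict) -> bool:
    """Return True if the login should be accepted."""
    if not password_ok:
        return False
    # The flaw: the developer assumes "internal" clients never need 2FA,
    # trusting a client-supplied header instead of verifying the OTP.
    if request_headers.get("X-Internal-Client") == "true":
        return True  # second factor silently bypassed
    return otp is not None and check_otp(user, otp)
```

Because the trusted signal is semantic (a design assumption) rather than a memory-safety bug, this is the kind of logic flaw that pattern-matching over large codebases is well suited to surface.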
GTIG researchers identified signs of AI involvement in the Python script used for the exploit, including a “hallucinated CVSS score” and “structured, textbook” formatting consistent with LLM training data. Google noted, however, that its researchers “do not believe Gemini was used” in developing the attack.
The finding comes amid broader concerns about cybersecurity-focused AI models, including Anthropic’s Mythos, and follows a recently disclosed Linux vulnerability that was likewise discovered with AI assistance.
Beyond this specific incident, Google’s report warns that hackers are increasingly using AI to find and exploit security vulnerabilities. Tactics include “persona-driven jailbreaking” — prompting AI to adopt the role of a security expert — as well as feeding AI models entire repositories of vulnerability data. Attackers are also reportedly using a tool called OpenClaw in ways that suggest an interest in refining AI-generated payloads before deployment.
GTIG also flagged AI systems themselves as an emerging target, noting that adversaries are increasingly going after “integrated components that grant AI systems their utility, such as autonomous skills and third-party data connectors.” The report suggests that as AI becomes more capable, it may become both a more powerful tool for attackers and a more attractive target.
Source: The Verge