May 12, 2026

aiincider.ai

AI News. No Noise. Just Signal.

Google Catches Hackers Using AI to Build Zero-Day Exploit

Google's threat intelligence team says hackers used AI to build a zero-day exploit that bypassed two-factor authentication.

Google says it has caught hackers in the act of using artificial intelligence to weaponize a previously unknown software flaw, marking what the company calls the first confirmed case of an AI-built zero-day exploit found in the wild. The discovery, made public on May 11, escalates a long-feared scenario: criminal groups now have working AI tools that help them break in faster than defenders can patch.

What Google Found

The company’s Threat Intelligence Group disclosed that a financially motivated hacker collective tried to launch a “mass vulnerability exploitation operation” against users of a popular open-source web administration tool. Buried in the attackers’ Python script was exploit code for a zero-day flaw that bypassed two-factor authentication, the security checkpoint many businesses rely on to keep accounts safe.

According to Google Cloud’s threat intelligence team, analysts examined the structure of the exploit and concluded with high confidence that an AI model helped the attackers identify and weaponize the vulnerability. Google said the model was not its own Gemini system, and the affected vendor was notified and patched the bug before broad exploitation occurred.

A Long-Predicted Moment Arrives

Cybersecurity experts have warned for years that frontier AI models would eventually shift from theoretical risk to practical attack tool. John Hultquist, chief analyst at Google’s threat intelligence arm, summed up the moment in two words: “It’s here.” Google also flagged related malware families, including one it calls PROMPTFLUX, that use large language models at runtime to generate fresh decoy code and evade detection.

The disclosure lands less than a month after Anthropic announced Mythos, a model so capable at offensive security work that it is being released only to a tight group of vetted partners. OpenAI followed last week with GPT-5.5-Cyber, a variant tuned for authorized defensive use and now being offered to European cybersecurity teams under a limited preview. The arms race the labs have been quietly bracing for is no longer hypothetical.

Why It Matters

The economics of cyberattacks just changed. Finding a zero-day used to take a skilled human researcher weeks or months, which kept the supply of fresh exploits relatively low. An AI assistant that can search source code, propose vulnerable paths, and draft working proof-of-concept code collapses that timeline and makes large, automated campaigns far cheaper to run. Smaller crews can now do work that once required well-funded nation-state teams.

For defenders, the takeaway is blunt. Patch cycles, identity hardening, and detection tooling all need to assume that adversaries are running their own AI copilots, twenty-four hours a day. Expect more pressure on software vendors to ship fixes faster, more interest in AI-powered defensive products, and renewed debate in Washington and Brussels about who gets early access to cyber-capable models.

Watch for two things next: how quickly other major cloud providers confirm similar findings in their own telemetry, and whether regulators move to require labs to share offensive-capability red-team results with governments before release.
