April 30, 2026

aiincider.ai

AI News. No Noise. Just Signal.

Google’s Pentagon AI Deal: Inside the Classified Contract

Google signed a classified deal giving the Pentagon access to Gemini AI for military use. Learn what's covered, who opposed it, and why it matters for AI ethics.

Google has signed a classified agreement with the U.S. Department of Defense, granting the Pentagon direct access to its Gemini AI models for sensitive military operations. The deal, finalized on April 28, extends an existing $200 million government contract into classified networks used for mission planning and weapons targeting.

How We Got Here

The relationship between Silicon Valley and the U.S. military has long been complicated. Google faced intense backlash in 2018 over Project Maven, a drone analysis program that prompted employee protests and ultimately led the company to let that contract expire. Since then, Google had maintained a cautious public stance on defense applications for its AI systems.

The new deal follows a notable refusal by Anthropic, one of Google’s chief AI rivals, to grant the DoD similar access. Anthropic declined to allow its models to be used for domestic mass surveillance or autonomous weapons without human oversight. That refusal created an opening, and Google moved to fill it.

What the Contract Covers

The classified agreement, reported by Bloomberg and TechCrunch, grants the DoD authority to use Google’s Gemini AI for “any lawful government purpose.” The full terms are classified, but confirmed applications include mission planning and weapons targeting on secure military networks. Google states that safety filters are built in and that autonomous weapons without human oversight are excluded.

The deal is structured as an amendment to a $200 million contract Google had already signed with the Pentagon. OpenAI and Elon Musk’s xAI had signed similar agreements before Google, making this the latest in a series of moves by major AI labs into classified defense work.

Not everyone inside Google supports the decision. More than 950 employees signed an open letter urging the company to follow Anthropic’s example and decline the classified expansion. The Pentagon’s AI chief confirmed the new deal on April 28 and noted that relying on any single AI provider is “never a good thing,” signaling the DoD’s intent to continue signing agreements across the industry.

Why This Moment Matters

This deal marks the crossing of a line that was once closely guarded. For years, AI companies built public commitments around responsible, peaceful use. Those commitments are now being tested at scale, with the most capable models moving into classified environments where public oversight is limited by design. The gap between a company’s stated AI principles and its actual contracts is becoming harder to ignore.

The competitive pressure here is real too. Anthropic’s refusal positioned it as a principled holdout, but it also ceded ground to rivals willing to sign. As defense budgets for AI continue to climb, the pressure on remaining holdouts will increase. What gets set as standard practice in these early contracts could shape how military AI is governed for years to come.

