Pentagon Signs AI Deals With 7 Tech Giants, Locking Out Anthropic
The U.S. Department of Defense has signed agreements with seven of America’s top AI companies to deploy frontier artificial intelligence on its most classified military networks. The deals bring OpenAI, Google, Microsoft, and others inside the Pentagon’s highest-security infrastructure, but one major player is conspicuously absent: Anthropic.
What Was Agreed
The seven companies entering these agreements are SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, and Amazon Web Services. Their AI systems will be integrated into the Pentagon’s Impact Level 6 and Level 7 networks, the top tiers of the military’s classified digital architecture. These networks handle the most sensitive intelligence and operational data the DoD manages.
The stated goals are practical: faster data synthesis, sharper situational awareness, and better decision support for military personnel operating in high-stakes environments. The Pentagon also framed the deal as a way to prevent “vendor lock” by spreading AI capabilities across multiple providers rather than depending on a single company.
Why Anthropic Was Left Out
Anthropic, the company behind the Claude AI platform, was not part of the announcement. According to CNN, negotiations broke down after Anthropic asked for written guarantees that its technology would not be used for fully autonomous weapons systems or domestic mass surveillance programs. The Pentagon declined to provide those assurances, and talks ended.
The exclusion is notable. Anthropic is one of the most prominent AI safety-focused labs in the world, and its willingness to walk away from a major government contract over safety concerns is unusual in an industry that has largely moved toward military engagement.
Why This Matters
Today’s agreements reflect a broader shift in how the U.S. military views AI: not as a peripheral tool, but as core infrastructure for intelligence and warfighting operations. The inclusion of companies like OpenAI signals that frontier AI labs are increasingly willing to work inside classified government systems.
Anthropic’s position raises a harder question for the industry. Can AI companies hold firm on safety constraints when a major national security customer offers a contract? The company’s choice to walk away suggests at least some labs are willing to test that limit. How others respond to similar pressure will shape the ethics of AI in national security for years to come.
What to Watch Next
The Pentagon has not commented on whether it plans to revisit talks with Anthropic. Meanwhile, the seven participating companies will begin integration into classified DoD networks. Observers of AI governance and military technology will be watching closely to see whether these deployments produce the efficiency gains the Pentagon expects, and whether any safety incidents follow.
