Court Blocks Pentagon’s Anthropic Ban, Calling It ‘First Amendment Retaliation’
A federal judge dealt a major blow to the Trump administration’s aggressive AI policy last week, blocking the Pentagon’s ban on Anthropic’s Claude and calling it what it appears to be: government retaliation against a company for speaking its mind.
How It Started
The dispute traces back to a fundamental disagreement between Anthropic and the Department of Defense. The DOD demanded broad, unrestricted access to Anthropic’s Claude models for all “lawful purposes.” Anthropic pushed back, refusing to allow its AI to be used for fully autonomous weapons systems or to conduct domestic mass surveillance of American citizens. CEO Dario Amodei went public with those concerns — and that’s when things escalated fast.
President Trump responded in February, publicly accusing Anthropic of making a “disastrous mistake” by trying to impose its corporate ethics on the U.S. military. Within weeks, the Pentagon formally designated Anthropic a “supply chain risk” — a national security label typically reserved for foreign adversaries — and Trump ordered every federal agency to stop using Claude entirely.
The Judge’s Ruling
On March 26, U.S. District Judge Rita F. Lin of the Northern District of California granted Anthropic a preliminary injunction, temporarily blocking both the supply chain designation and the government-wide ban on Claude. Her language was striking.
Nothing in the relevant statute supports “the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for exposing a disagreement with the government,” Lin wrote. She called the Pentagon’s actions “likely both contrary to law and arbitrary and capricious” and concluded: “This appears to be classic First Amendment retaliation.”
The injunction comes with a seven-day window — meaning the government has until approximately April 2 to seek an emergency stay from the Ninth Circuit Court of Appeals, which it has signaled it intends to pursue. A final ruling in the case remains months away.
Why This Matters
This case is bigger than Anthropic. It sets a potential precedent for whether AI companies can publicly disagree with the government over how their technology is used — without fear of being blacklisted from federal contracts or branded a national security threat. The ruling suggests that using regulatory machinery to punish speech may cross a constitutional line, even in the high-stakes arena of defense contracting.
For the broader AI industry, the implications are significant. Companies like OpenAI, Google DeepMind, and Meta all have AI safety policies that could, at some point, clash with government demands. If Anthropic prevails, it would reaffirm that AI developers have some latitude to set ethical boundaries on their own technology — even when working with the military.
Watch the Ninth Circuit closely. An emergency stay application from the government could come as early as this week, and how the appeals court responds will signal whether this fight is just beginning.
