Anthropic Hits $30B Revenue with Massive Google TPU Deal
Anthropic’s Revenue Tripled — And It Just Locked In 3.5 Gigawatts of AI Compute
Anthropic just disclosed that its annual revenue run rate has surpassed $30 billion — a more than threefold jump from the $9 billion it reported at the end of 2025. The news landed alongside a blockbuster infrastructure deal: Anthropic has signed an agreement with Google and Broadcom to secure approximately 3.5 gigawatts of TPU computing capacity set to come online in 2027, making it one of the largest AI compute agreements ever announced.
Background: The Race for Compute
Tensor Processing Units, or TPUs, are Google’s custom-built chips designed specifically to accelerate machine learning workloads. Unlike general-purpose GPUs, TPUs are optimized to run the kind of matrix math that underlies large language models like Claude. For an AI company at Anthropic’s scale, securing a pipeline of dedicated, high-throughput compute is as strategic as securing any other critical resource.
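To make that concrete: the workload TPU systolic arrays are built for is dense matrix multiplication, such as projecting a batch of token activations through a learned weight matrix. The sketch below is purely illustrative — the shapes are toy values, not anything from Claude or Google's stack — and uses NumPy only to show the shape of the computation:

```python
import numpy as np

# Illustrative only: the core operation TPUs accelerate is a dense
# matrix multiply, e.g. pushing token activations through a weight
# matrix inside a transformer layer. Dimensions here are hypothetical.
rng = np.random.default_rng(0)

batch, d_model, d_ff = 8, 64, 256            # toy sizes, not Claude's
x = rng.standard_normal((batch, d_model))    # token activations
W = rng.standard_normal((d_model, d_ff))     # learned weights

y = x @ W   # one matmul of the kind TPU hardware pipelines at scale
print(y.shape)
```

A production model runs billions of such multiplies per request, which is why dedicated accelerator capacity — measured here in gigawatts rather than chips — is the resource being contracted for.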
Anthropic has been one of the fastest-growing companies in the AI space since the launch of its Claude model family. In November 2025, the company committed $50 billion toward American AI computing infrastructure — a pledge that today’s deal appears to be fulfilling at scale.
What Happened
According to reporting by The Next Web and The Register, Broadcom CEO Hock Tan confirmed that Anthropic is already consuming 1 gigawatt of Google TPU compute in 2026 — and that demand is expected to surge past 3 gigawatts in 2027. The deal positions Anthropic as one of Google’s most significant cloud customers, and one of Broadcom’s largest partners for custom AI silicon design.
On the revenue side, Anthropic revealed that the number of enterprise customers spending more than $1 million per year has more than doubled in under two months — climbing from over 500 in February 2026 to more than 1,000 today. The $30 billion run rate figure, if sustained, would place Anthropic among the fastest-growing enterprise software businesses in history.
Anthropic published its own announcement on the compute expansion at anthropic.com, framing the deal as part of a long-term strategy to build reliable, domestic AI infrastructure.
Why It Matters
The numbers here are staggering. Going from $9 billion to $30 billion in annual run rate in less than four months is not incremental growth — it’s an acceleration that suggests enterprise AI adoption has shifted from experimentation to serious operational deployment. Companies that were piloting AI tools are now integrating them at the core of their workflows, and Anthropic appears to be capturing a substantial share of that wave.
The compute deal is equally significant. Locking in 3.5 gigawatts of dedicated TPU capacity gives Anthropic a degree of infrastructure certainty that most AI companies don’t have. As AI models grow more capable and more computationally expensive to run, owning guaranteed access to next-generation chips could become a decisive competitive advantage — and a serious barrier to entry for rivals without similar arrangements.
Watch for Anthropic’s next model release and any pricing moves as the company continues to scale. With this level of infrastructure investment, larger and faster Claude models are almost certainly on the roadmap.
Continue Reading: Anthropic signs biggest compute deal yet with Google and Broadcom as run rate hits $30bn — The Next Web
