The Verge · Business · 2026-04-06

Anthropic Signs Multi-Gigawatt TPU Deal With Google and Broadcom — Revenue Hits $30B Run Rate

Anthropic has secured a major infrastructure deal with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, with the compute expected to come online in 2027. The announcement came alongside a disclosure that Anthropic's annualized revenue has now crossed the $30 billion mark.


Anthropic has signed what sources describe as a landmark compute deal with Google and Broadcom, securing multiple gigawatts of next-generation TPU capacity that will power the next generation of Claude models starting in 2027. The announcement marks a significant escalation in Anthropic's infrastructure ambitions — and a deepening of its already substantial relationship with Google, which has invested heavily in the company.

The deal structure is notable: rather than purchasing or leasing generic cloud compute, Anthropic is locking in dedicated TPU capacity through a long-term arrangement that spans multiple product generations of Google's custom AI chips. Broadcom's involvement signals that the custom ASIC layer beneath the TPUs is also part of the agreement — a level of vertical integration in AI compute previously associated only with the hyperscalers themselves.

Alongside the infrastructure announcement, Anthropic disclosed that its annualized revenue run rate has surpassed $30 billion — a milestone that puts it in rarefied company among AI companies and validates the commercial traction of Claude across enterprise, developer, and consumer markets. The company had reported a $3 billion run rate in early 2025, making the 10x growth over roughly a year one of the fastest revenue ramps in enterprise software history.

The compute deal and revenue disclosure together paint a picture of a company moving decisively to secure its position as the training and inference costs for frontier AI scale dramatically. Industry observers note that the timing — right after OpenAI's gpt-oss open-weight release — suggests a strategic message: Anthropic is building for the long game, investing in proprietary infrastructure rather than competing on model releases alone.

For developers building on Claude, the practical implication is that Anthropic's capacity constraints — which have led to rate limiting and service disruptions during peak demand — should ease substantially as the new TPU capacity comes online. It also raises questions about how the compute advantage will translate into model capabilities, with Anthropic's next frontier models expected to arrive later in 2026.

Panel Takes

The Builder

Developer Perspective

The practical upshot for me is that Anthropic's rate limits should get less punishing as this capacity comes online. But "multi-gigawatt TPU deal starting in 2027" means we're at least a year from feeling it — the near-term capacity crunch isn't resolved by this announcement.

The Skeptic

Reality Check

$30B run rate is impressive but run rate math can flatter — one big enterprise contract renewal inflates it. The deeper concern is dependency: Anthropic is more deeply entangled with Google's infrastructure than ever, which has competitive implications as the two companies increasingly offer overlapping AI products.

The Futurist

Big Picture

This is the inflection point where AI stops being a software business and starts being an infrastructure business. Securing gigawatts of next-gen TPU capacity years in advance is the compute equivalent of buying land — and the $30B revenue validates that the bet on frontier AI has commercial substance behind it.