Anthropic just dropped a bombshell of an announcement: they’re expanding their partnership with Amazon in a way that makes their previous deals look like pocket change. We’re talking up to 5 gigawatts of new compute capacity, over $100 billion committed to AWS over the next decade, and a fresh $5 billion investment from Amazon today, the first slice of a new tranche that could reach $20 billion.
Let me unpack what this actually means, because the raw numbers are staggering even by AI industry standards.
The infrastructure play
5 gigawatts is a lot of power. For context, that’s about five large nuclear reactors’ worth of output (a typical reactor produces roughly 1 GW), all feeding data centers. Anthropic is locking in capacity across Amazon’s custom silicon lineup—Graviton, Trainium2, Trainium3, and Trainium4—with an option to buy future generations as they roll out. The first chunk of Trainium2 capacity hits Q2 2026, and by the end of that year they expect nearly 1 GW of combined Trainium2 and Trainium3 capacity online.
This isn’t just talk. Project Rainier, which they launched together, is already one of the largest compute clusters on the planet. Anthropic is currently running over a million Trainium2 chips to train and serve Claude. That’s a serious hardware footprint.
Demand is exploding
The real story here is the demand curve. Anthropic’s run-rate revenue has hit $30 billion, up from roughly $9 billion at the end of 2025. That’s more than a 3x jump in about six months. Consumer usage across free, Pro, and Max tiers has spiked hard, and they’re honest about the strain: “our unprecedented consumer growth, in particular, has impacted reliability and performance for free, Pro, Max, and Team users, especially during peak hours.”
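A quick back-of-the-envelope check on that multiple, using only the figures cited above:

```python
# Sanity check on the revenue growth cited in the post:
# $30B run-rate now vs. roughly $9B at the end of 2025.
prior_run_rate_b = 9      # annualized run-rate, $ billions
current_run_rate_b = 30   # annualized run-rate, $ billions

multiple = current_run_rate_b / prior_run_rate_b
print(f"growth multiple: {multiple:.1f}x")  # → growth multiple: 3.3x
```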
I appreciate the candor. Most companies would bury that under marketing fluff. They’re essentially saying “we grew too fast and our infrastructure couldn’t keep up.” The new capacity is supposed to fix that, with meaningful compute arriving within three months.
Claude Platform on AWS
One interesting detail is that the full Claude Platform will be available directly within AWS—same account, same controls, same billing. No extra credentials or contracts. This is smart for enterprise customers who want to use Claude without breaking their existing governance and compliance setups.
Claude remains the only frontier model available on all three major clouds: Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure AI Foundry. That’s a flex, but it also means Anthropic isn’t locking themselves into one ecosystem.
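To make the “same account, same credentials” point concrete: on Bedrock, calling Claude goes through the ordinary AWS SDK with your existing IAM setup, no separate Anthropic API key. A minimal sketch of what the request looks like (the model ID is illustrative; check Bedrock’s model catalog for current IDs):

```python
def build_converse_request(prompt: str, model_id: str) -> dict:
    """Build the kwargs for a bedrock-runtime `converse` call.

    Authentication rides on normal AWS credentials (env vars, profile,
    or IAM role) -- no extra Anthropic contract, per the announcement.
    """
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 256},
    }

# Illustrative model ID; real IDs come from the Bedrock model catalog.
req = build_converse_request(
    "Summarize this doc.",
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
)

# With AWS credentials configured, the actual call would be:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**req)
print(req["modelId"])
```

The design win for enterprises is exactly what the post describes: Claude inherits the account’s existing IAM policies, billing, and audit trail instead of introducing a second credential system.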
The investment numbers
Amazon is putting $5 billion into Anthropic today, the first installment of a new tranche that could reach $20 billion. That’s on top of the $8 billion they’ve already invested, bringing Amazon’s total potential commitment to $28 billion. Plus the $100 billion-plus in AWS spending over ten years.
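Reading the $20 billion as the ceiling on the new tranche (with today’s $5 billion counting toward it), the totals reconcile:

```python
# Totting up Amazon's potential equity commitment as described above.
# Assumption: the $20B cap on the new tranche includes today's $5B.
prior_investment_b = 8    # already invested, $ billions
new_tranche_cap_b = 20    # ceiling on the new tranche, $ billions

total_potential_b = prior_investment_b + new_tranche_cap_b
print(f"total potential commitment: ${total_potential_b}B")  # → $28B
```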
Dario Amodei, Anthropic’s CEO, framed it as necessary to keep pace with demand. Andy Jassy from Amazon talked up their custom silicon’s performance and cost advantages. Both are right, but the scale of this deal suggests something bigger: a bet that AI compute demand won’t slow down anytime soon.
My take
This is the kind of deal that makes you realize how capital-intensive frontier AI really is. $100 billion in cloud spending, $28 billion in equity investment, 5 GW of compute—and this is just one partnership. OpenAI has Microsoft, Google has DeepMind and its own infrastructure, Meta is building its own clusters.
The race for compute is real, and it’s only going to get more expensive. Anthropic is essentially buying insurance against capacity shortages while betting that Amazon’s custom chips will give them a cost advantage over NVIDIA-based clusters.
Whether that bet pays off depends on execution. Trainium has had a mixed reputation compared to NVIDIA’s H100/B200 lineup, but Amazon keeps improving it. If Trainium3 and Trainium4 deliver on performance, this could be a smart long-term play. If not, they’ve locked themselves into a very expensive relationship.
Either way, the next few years are going to be interesting.