Meta Orders Millions of Amazon AI Chips
Amazon's in-house chips have scored another major win. Last Friday, Amazon announced that Meta has signed an agreement to use millions of AWS Graviton chips to meet its growing AI demands.
It's worth noting that AWS Graviton is an Arm-based CPU designed for general-purpose computing; it is not a GPU (graphics processing unit).
Although GPUs remain the preferred chips for training large models, once those models are trained, the AI agents built on top of them are shifting the mix of chips required. Agents generate compute-intensive workloads such as real-time inference, coding, search, and the orchestration involved in coordinating agents through multi-step tasks. AWS says the latest generation of Graviton is designed specifically to handle these AI-related computational demands.

The deal keeps more of Meta's spending with AWS rather than letting it flow to competitors like Google Cloud. Last August, Meta signed a six-year, $10 billion agreement with Google Cloud, even though it had historically been primarily an AWS customer while also using Microsoft Azure.
We can't help but notice that AWS chose to announce the news just after Google Cloud Next had wrapped up, as if flashing a smug smile at its cloud rival. Google, of course, also builds its own custom AI chips and unveiled a new version at the conference.
Indeed, Amazon also builds its own AI accelerator: Trainium. Despite the name, the chip handles both training and inference, inference being the stage where a trained model processes prompts.
But Anthropic had already locked up many of those chips for years to come under an agreement reached earlier this month. The Claude maker agreed to spend $100 billion over ten years running its workloads on AWS, with a particular focus on Trainium. In exchange, Amazon agreed to invest an additional $5 billion in Anthropic, bringing its total investment to $13 billion.
In the end, the Meta deal gives Amazon a large AI customer it can point to as validation for its in-house CPUs. These chips compete with Nvidia's new Vera CPU, which is also Arm-based and designed for AI agent workloads. The difference, of course, is that Nvidia sells its chips and AI systems to enterprises and cloud providers (including AWS), while AWS offers access to its chips only through its cloud services.
Earlier this month, Amazon CEO Andy Jassy took aim at Nvidia and Intel in his annual shareholder letter, saying that companies want better price-performance in AI and that he intends to win deals on that basis. It also means the pressure on Amazon's in-house chip teams has never been higher.