Amazon’s Self-developed AI Chip Generates Billions of Dollars in Revenue
Can any company, big or small, really shake Nvidia's dominance in AI chips? Perhaps not. But Amazon CEO Andy Jassy argued this week that the market is so large that even companies capturing only a share of it face a revenue opportunity worth hundreds of billions of dollars.
As expected, at its AWS re:Invent conference, the company unveiled its next-generation AI chip, Trainium3, which it says is four times faster than the current Trainium2, consumes less power, and is positioned squarely against Nvidia. Jassy shared some details about the current Trainium chip in a post on X, which help explain why the company is so optimistic about it.
He stated that the Trainium2 business "has made substantial progress, with annual revenue in the billions of dollars, over 1 million chips in production, and over 100,000 companies currently using it, making up the majority of Bedrock's usage". Bedrock is Amazon's AI application development service, which lets companies choose from and combine numerous AI models.

Cost-effectiveness advantage and key customers
Jassy said that Amazon's AI chip is winning over its vast list of cloud customers because it has a "striking price-performance advantage compared to other GPU options". In other words, he believes it delivers better performance at a lower cost than the "other GPUs" on the market. This is classic Amazon strategy: offering its self-developed technology at a lower price.
In addition, AWS CEO Matt Garman provided more detail in an interview with CRN, naming one client that has contributed significantly to those billions of dollars in revenue: unsurprisingly, Anthropic. Garman said, "We see that Trainium2 has gained tremendous market appeal, especially from our partner Anthropic. We have announced Project Rainier, in which over 500,000 Trainium2 chips are helping them build the next generation of Claude models."
Project Rainier, launched in October, is Amazon's most ambitious AI server cluster, distributed across multiple data centers in the United States and built to meet Anthropic's surging demand. Amazon, of course, is a major investor in Anthropic. In exchange, Anthropic uses AWS as its primary model-training partner, although Anthropic now also serves its models on Microsoft's cloud using Nvidia chips.
OpenAI, too, has begun using AWS in addition to Microsoft's cloud. But the cloud giant said that its partnership with OpenAI does not contribute meaningfully to Trainium revenue, since AWS runs those workloads on Nvidia chips and systems.
Competitive landscape and technological challenges
Moreover, the AI models and software built for services running on Nvidia chips also rely on Nvidia's proprietary Compute Unified Device Architecture (CUDA) software. CUDA allows applications to use GPUs for parallel processing and other compute tasks. Much like the battles between Intel and SPARC chips in the past, rewriting AI applications for non-CUDA chips is not an easy task.
However, Amazon may have a plan for this. As we previously reported, its next-generation AI chip, Trainium4, will be designed to interoperate with Nvidia's GPUs in the same system. Whether this will help Amazon win business away from Nvidia, or merely cement Nvidia's dominant position within the AWS cloud, remains to be seen.
In fact, only a few American companies like Google, Microsoft, Amazon, and Meta have all the engineering ingredients – chip design expertise, self-developed high-speed interconnects, and networking technology – to truly compete with Nvidia. (Note that Nvidia locked up a key high-performance networking technology in 2019, when CEO Jensen Huang outbid Intel and Microsoft to acquire InfiniBand hardware maker Mellanox.)
Amazon’s winning logic
None of this may matter much to Amazon. If Trainium2 has already put it on track for billions of dollars in revenue, and the next generation of chips will be even better, that may be enough to make it a winner.