Amazon’s cloud service shows new AI servers, says Apple will use its chips

By Stephen Nellis and Greg Bensinger

LAS VEGAS (Reuters) – Amazon.com (NASDAQ:AMZN)’s cloud unit on Tuesday showed new data center servers packed with its own AI chips that will challenge Nvidia (NASDAQ:NVDA), with Apple (NASDAQ:AAPL) coming aboard as a customer to use them.

The new servers, based on 64 of Amazon Web Services’ Trainium2 chips, will be strung together into a massive supercomputer with hundreds of thousands of chips, with the help of AI startup Anthropic, which will be the first to use it. Apple executive Benoit Dupin also said that Apple is using Trainium2 chips.

With more than 70% market share, Nvidia dominates the sale of AI chips, and traditional chip industry rivals such as Advanced Micro Devices (NASDAQ:AMD) are rushing to catch up.

But some of Nvidia’s most formidable competitors are also its customers: Meta Platforms (NASDAQ:META), Microsoft (NASDAQ:MSFT) and Alphabet (NASDAQ:GOOGL)’s Google all have their own custom AI chips. While Meta’s chip powers internal operations, Amazon and Google use their chips internally but also market them to paying customers.

AWS Chief Executive Matt Garman also said that Trainium3, the company’s next-generation AI chip, will debut next year.

The new offerings “are purpose-built for the demanding workloads of cutting edge generative AI training and inference,” Garman said at the event in Las Vegas on Tuesday.

The new servers, which AWS calls Trn2 UltraServers, will compete against Nvidia’s flagship server packing 72 of its latest “Blackwell” chips. Both companies also offer proprietary technology for connecting the chips, though Gadi Hutt, who leads business development for the AI chips at AWS, said that AWS will be able to connect a greater number of chips together than Nvidia.

“We think with Trainium2, (customers) get more compute than what’s available from Nvidia today, and they will be able to save cost,” Hutt told Reuters in an interview, adding that some AI models can be trained at 40% lower cost than on Nvidia chips.

AWS executives said the new servers and huge supercomputer would come online next year but did not give a specific date. Both AWS and Nvidia are rushing to get their flagship offerings to market amid booming demand, though Nvidia shipments have been limited by supply chain constraints.

Both Nvidia and AWS use Taiwan Semiconductor Manufacturing to manufacture their chips.

“From a supply standpoint, we are in a pretty good shape across all of the supply chain,” Hutt told Reuters. “When we do the systems, the only item that we cannot dual source is the Trainium chips.”
