Amazon unveils next-generation Graviton4 and Trainium2 chips to power the future of business AI

AWS has confirmed its intention to become one of the world’s largest hardware suppliers with today’s launch of its most powerful and efficient chips yet.

Unveiled at the AWS re:Invent 2023 event by CEO Adam Selipsky, the new Graviton4 and Trainium2 chips are designed to power the next generation of AI and machine learning models, delivering more performance and efficiency than ever before.

“Silicon supports every customer workload, making it a critical area of innovation for AWS,” said David Brown, vice president of Compute and Networking at AWS. “By focusing our chip designs on real-world workloads that matter to customers, we can deliver the most advanced cloud infrastructure to them.”

AWS Graviton4 and Trainium2

AWS promises a big step forward for Graviton4, claiming it will offer up to 30% better compute performance, 50% more cores, and 75% more memory bandwidth than the current generation of Graviton3 processors.

The company says that as customers experiment with and deploy more AI-powered workloads, their compute, memory, storage, and networking requirements will increase, demanding higher performance and larger instance sizes at an affordable cost, along with the energy efficiency needed to reduce any impact on the environment.

AWS is making Graviton4 available in Amazon EC2 R8g instances, which it says offer larger instance sizes with up to three times more vCPUs and three times more memory than the current generation, allowing customers to process larger amounts of data, scale their workloads, improve time-to-results, and reduce total cost of ownership.
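For a sense of what adopting the new instances looks like in practice, here is a minimal sketch of launching an R8g instance with boto3, the AWS SDK for Python. The r8g.xlarge size and the AMI ID are illustrative assumptions rather than details from the announcement; Graviton-based instances require an arm64 AMI.

```python
# Minimal sketch: launching a Graviton4-based R8g instance with boto3.
# The instance size (r8g.xlarge) and the AMI ID are illustrative assumptions;
# check the EC2 console for the sizes and arm64 AMIs actually available.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: substitute a real arm64 AMI
    InstanceType="r8g.xlarge",        # assumed size, following EC2 naming conventions
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```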

Following the original Trainium’s launch in 2020, the second-generation Trainium2 aims to deliver faster and more efficient training for current and future AI models using larger datasets than ever, with today’s most advanced foundation models (FMs) and large language models (LLMs) spanning hundreds of billions to trillions of parameters.

AWS says Trainium2 will deliver up to four times faster training and three times more memory capacity than its first-generation hardware, while improving energy efficiency by up to two times.

Trainium2 can be deployed in EC2 UltraClusters of up to 100,000 chips, making it possible to train foundation models and large language models in a fraction of the time previously required. The company gives the example of training a 300-billion-parameter LLM in weeks rather than months.
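The announcement doesn’t cover the programming model, but current Trainium (Trn1) instances are driven through the AWS Neuron SDK, which exposes the chip to PyTorch as an XLA device, and it seems reasonable to assume Trainium2 will follow the same pattern. Below is a minimal sketch of that torch-xla training loop; the toy linear model and synthetic data are placeholders standing in for a real FM or LLM.

```python
# Minimal sketch of the torch-xla training pattern used on Trainium (Trn1)
# via the AWS Neuron SDK; assumed here, not confirmed, to carry over to Trainium2.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # resolves to a NeuronCore on Trainium hosts

model = nn.Linear(512, 512).to(device)  # toy stand-in for a real FM/LLM
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(10):
    x = torch.randn(32, 512).to(device)  # synthetic batch
    y = torch.randn(32, 512).to(device)

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Steps the optimizer and (with barrier=True) materializes the lazy XLA graph.
    xm.optimizer_step(optimizer, barrier=True)
```

Scaling this single-device loop up to an UltraCluster would layer distributed data loading and gradient reduction on top, which torch-xla also provides, but that goes beyond what the announcement describes.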
