Nvidia powers a mega Tesla supercomputer built on 10,000 H100 GPUs

Tesla has revealed its investment in a massive computing cluster of 10,000 Nvidia H100 GPUs specifically designed to power AI workloads.

The system, which went online this week, is designed to handle the mountains of data collected by the company's vehicle fleet with the aim of developing fully self-driving vehicles, according to its AI infrastructure leader, Tim Zaman.

Tesla has been striving for years to reach the point where its vehicles can be considered fully autonomous, and has invested more than a billion dollars in building out the infrastructure to make this happen.

Tesla supercomputer

In July 2023, CEO Elon Musk revealed that the company would invest $1 billion over the next year to build out its Dojo supercomputer. Dojo, which is based on Tesla's proprietary technology, started with the D1 chip, equipped with 354 custom CPU cores. Each training tile module contains 25 D1 chips, with the base Dojo V1 configuration containing a total of 53,100 D1 cores.
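As a rough arithmetic check on those Dojo figures (a sketch based only on the numbers quoted above, not on any Tesla documentation):

```python
# Rough arithmetic check of the Dojo figures quoted above.
cores_per_d1_chip = 354
d1_chips_per_tile = 25

cores_per_tile = cores_per_d1_chip * d1_chips_per_tile  # 8,850 cores per training tile
total_cores = 53_100                                     # base Dojo V1 configuration
tiles_in_base_config = total_cores // cores_per_tile     # works out to 6 training tiles

print(cores_per_tile, tiles_in_base_config)  # 8850 6
```

In other words, the quoted 53,100-core base configuration corresponds to six training tiles' worth of D1 chips.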

The company also built a compute cluster with 5,760 Nvidia A100 GPUs in June 2021. But its latest investment in 10,000 of Nvidia's H100 GPUs dwarfs the power of that supercomputer.

This AI cluster, valued at more than $300 million, will provide peak performance of 340 FP64 PFLOPS for technical computing and 39.58 INT8 ExaFLOPS for AI applications, according to Tom's Hardware.
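Those headline numbers are consistent with simply multiplying Nvidia's published per-GPU peak figures by the GPU count. A back-of-the-envelope sketch, assuming roughly 34 TFLOPS of FP64 and 3,958 TOPS of INT8 (with sparsity) per H100 SXM, as Nvidia's spec sheets list:

```python
# Back-of-the-envelope aggregate peak performance for a 10,000-GPU cluster.
# Per-GPU figures are assumptions based on Nvidia's published H100 SXM specs.
num_gpus = 10_000
fp64_tflops_per_gpu = 34     # FP64 (vector) peak, in TFLOPS
int8_tops_per_gpu = 3_958    # INT8 Tensor Core peak with sparsity, in TOPS

cluster_fp64_pflops = num_gpus * fp64_tflops_per_gpu / 1_000      # ~340 PFLOPS
cluster_int8_exaflops = num_gpus * int8_tops_per_gpu / 1_000_000  # ~39.58 ExaOPS

print(f"{cluster_fp64_pflops:.0f} FP64 PFLOPS, {cluster_int8_exaflops:.2f} INT8 ExaOPS")
```

These are theoretical peaks for the aggregate hardware; sustained performance on real training workloads will be lower.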

In fact, the compute available to Tesla exceeds that of the Leonardo supercomputer, the publication said, making it one of the most powerful systems in the world.

Nvidia’s chips are the components that power many of the world’s leading generative AI platforms. These GPUs, which are built into servers, have a variety of other uses, from medical imaging to weather model generation.

Tesla hopes to use the power of these GPUs to process the massive amounts of data it has collected more efficiently and effectively, in order to build a model that can successfully rival a human driver.

While many companies normally rely on infrastructure hosted by the likes of Google or Microsoft, Tesla's supercomputing infrastructure is entirely on-premises, meaning the company will also have to maintain it all itself.
