Meta sheds more light on how it’s evolving Llama 3 training – it’s relying on almost 50,000 Nvidia H100 GPUs for now, but how long before Meta switches to its own AI chip?

Meta has shared details about its AI training infrastructure, disclosing that it currently relies on nearly 50,000 Nvidia H100 GPUs to train its open-source Llama 3 LLM.

The company says it will have more than 350,000 Nvidia H100 GPUs in use by the end of 2024, and that its total compute will be equivalent to nearly 600,000 H100s once hardware from other sources is included.