The most formidable supercomputer ever gears up for ChatGPT 5 – thousands of ‘old’ AMD GPU accelerators trained a one-trillion-parameter model

The world’s most powerful supercomputer used just over 8% of its GPUs to train a large language model (LLM) with one trillion parameters – a scale comparable to OpenAI’s GPT-4.

Frontier, based at Oak Ridge National Laboratory, used 3,072 of its AMD Instinct MI250X GPUs to train the trillion-parameter model, and 1,024 of them (roughly 2.7%) to train a 175-billion-parameter model – the same size as GPT-3, the class of model behind the original ChatGPT.
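
To see why even a fraction of Frontier's GPUs is needed, a back-of-envelope memory calculation helps. The sketch below is illustrative only: it assumes standard mixed-precision Adam training (roughly 16 bytes per parameter for weights, gradients, master weights, and optimizer moments) and ignores activation memory; these figures are common rules of thumb, not numbers from the Frontier team's report.

    # Rough sketch: why a 1-trillion-parameter model cannot fit on
    # one accelerator and must be sharded across thousands of GPUs.
    # Assumes ~16 bytes/parameter for mixed-precision Adam training
    # (fp16 weights + fp16 gradients + fp32 master weights + fp32
    # Adam momentum and variance); activations are not counted.

    PARAMS = 1e12            # one trillion parameters
    BYTES_PER_PARAM = 16     # assumed mixed-precision Adam footprint
    GPU_MEMORY_GB = 128      # HBM on one AMD Instinct MI250X
    GPUS_USED = 3072         # GPUs Frontier devoted to the 1T run

    total_tb = PARAMS * BYTES_PER_PARAM / 1e12
    per_gpu_gb = PARAMS * BYTES_PER_PARAM / GPUS_USED / 1e9

    print(f"Model + optimizer state: ~{total_tb:.0f} TB")                 # ~16 TB
    print(f"Single GPU capacity:      {GPU_MEMORY_GB} GB")                # far too small
    print(f"Across {GPUS_USED} GPUs:        ~{per_gpu_gb:.1f} GB per GPU")  # ~5.2 GB

Under these assumptions, the weights and optimizer state alone run to roughly 16 TB – two orders of magnitude more than a single GPU's memory – which is why runs at this scale shard the model across thousands of accelerators, leaving headroom on each one for activations and communication buffers.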