Nvidia’s fastest AI chip ever could cost a pretty reasonable $40,000, but the chances of you actually being able to buy one are very, very low, and for good reason
In a recent interview with CNBC’s Jim Cramer, Nvidia CEO Jensen Huang shared details about the company’s upcoming Blackwell chip, which cost $10 billion in research and development to create.
The new GPU is built on a custom TSMC 4NP process and contains a total of 208 billion transistors (104 billion per die), with 192 GB of HBM3e memory and 8 TB/s of memory bandwidth. Getting there entailed the creation of new technology, Huang said, because what the company was trying to achieve “went beyond the limits of physics.”
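Those headline numbers hang together, for what it’s worth. Here’s a minimal Python sketch checking the arithmetic from the figures above; note that the eight-stack HBM3e layout is an assumption for illustration, not something stated in the interview.

```python
# Back-of-the-envelope check of the Blackwell figures cited above.
# The eight-stack HBM3e layout is an assumption for illustration.

transistors_per_die = 104e9   # 104 billion transistors per die
dies_per_gpu = 2              # Blackwell joins two dies into one GPU
total_transistors = dies_per_gpu * transistors_per_die
print(f"Total transistors: {total_transistors / 1e9:.0f} billion")  # 208 billion

total_memory_gb = 192         # HBM3e capacity quoted above
total_bandwidth_tbs = 8       # memory bandwidth quoted above
hbm_stacks = 8                # assumed stack count, for illustration only
print(f"Per stack: {total_memory_gb / hbm_stacks:.0f} GB, "
      f"{total_bandwidth_tbs / hbm_stacks:.0f} TB/s")  # 24 GB, 1 TB/s
```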
During the chat, Huang also revealed that the fist-sized Blackwell chip will be sold in bulk for “between $30,000 and $40,000.” That’s comparable in price to the H100, which analysts say cost between $25,000 and $40,000 per chip when demand was at its peak.
A major upgrade
According to estimates from investment services firm Raymond James (via @eersteadopter), each Nvidia B200 will cost more than $6,000 to make, compared with the H100’s estimated $3,320 production cost.
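Taken at face value, those figures imply a very healthy gross margin on every chip sold. A quick back-of-the-envelope calculation in Python, using only the numbers cited above and ignoring the $10 billion R&D bill, packaging, software, and support:

```python
# Rough per-chip gross margin implied by the figures cited above:
# Raymond James' ~$6,000 production cost against Huang's stated
# $30,000-$40,000 price range. Illustrative only; it leaves out
# R&D amortization and every other cost of doing business.

production_cost = 6_000
for price in (30_000, 40_000):
    gross_margin = (price - production_cost) / price
    print(f"At ${price:,}: gross margin of roughly {gross_margin:.0%}")
# At $30,000: gross margin of roughly 80%
# At $40,000: gross margin of roughly 85%
```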
The final retail price of the GPU will vary depending on whether it is purchased directly from Nvidia or through a third-party seller, but customers are unlikely to buy the chips on their own in any case.
Nvidia has already unveiled three variants of its Blackwell AI accelerator with different memory configurations: the B100, the B200, and the GB200, which pairs two B200 Tensor Core GPUs with a Grace CPU. However, Nvidia’s strategy is focused on selling $1 million AI supercomputers, such as the multi-node, liquid-cooled NVIDIA GB200 NVL72 rack-scale system, DGX B200 servers with eight Blackwell GPUs, or DGX B200 SuperPODs.
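For a sense of how those rack-scale systems add up, the NVL72 name reflects the GPU count. A short sketch, using Nvidia’s published 36-superchip configuration for the NVL72 (a figure not stated in this article):

```python
# How the GB200 NVL72 rack adds up, based on the GB200 configuration
# described above. The 36-superchip count is Nvidia's published spec
# for the NVL72, cited here from memory rather than from this article.

gb200_superchips = 36
gpus = gb200_superchips * 2   # each GB200 pairs two B200 GPUs...
cpus = gb200_superchips * 1   # ...with one Grace CPU
print(f"NVL72 rack: {gpus} Blackwell GPUs + {cpus} Grace CPUs")  # 72 + 36
```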