- HBM4 chips ready to power Tesla’s advanced AI ambitions
- Dojo supercomputer to integrate powerful HBM4 chips
- Samsung and SK Hynix are competing for Tesla’s AI memory chip orders
As the high-bandwidth memory (HBM) market continues to grow and is expected to reach $33 billion by 2027, the competition between Samsung and SK Hynix is heating up.
Tesla is adding fuel to the fire: it has reportedly contacted Samsung and SK Hynix, two of South Korea’s largest memory chip makers, seeking samples of their next-generation HBM4 chips.
Now a report from The Korea Economic Daily claims that Tesla plans to evaluate these samples for possible integration into its custom-built Dojo supercomputer, a crucial system designed to support the company’s AI ambitions, including its self-driving vehicle technology.
Tesla’s ambitious AI and HBM4 plans
The Dojo supercomputer, powered by Tesla’s proprietary D1 AI chip, helps train the neural networks behind the company’s Full Self-Driving (FSD) feature. This latest request suggests that Tesla is gearing up to replace older HBM2e chips with the more advanced HBM4, which offers significant improvements in speed, power efficiency and overall performance. The company is also expected to integrate HBM4 chips into its AI data centers and future self-driving cars.
Samsung and SK Hynix, long-time rivals in the memory chip market, are both preparing prototypes of HBM4 chips for Tesla. These companies are also aggressively developing customized HBM4 solutions for major US technology companies such as Microsoft, Meta and Google.
According to industry sources, SK Hynix remains the current leader in the high-bandwidth memory (HBM) market, supplying HBM3e chips to NVIDIA and holding a significant market share. However, Samsung is quickly closing the gap by forging partnerships with companies like Taiwan Semiconductor Manufacturing Company (TSMC) to produce key components for its HBM4 chips.
SK Hynix appears to have made progress with its HBM4 chip. The company claims its solution delivers 1.4 times the bandwidth of HBM3e while consuming 30% less power. With a bandwidth expected to exceed 1.65 terabytes per second (TB/s) and lower power consumption, the HBM4 chips provide the performance and efficiency needed to train massive AI models on Tesla’s Dojo supercomputer.
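The 1.4x and 1.65 TB/s figures can be roughly sanity-checked against an assumed HBM3e baseline of a 9.2 Gb/s per-pin data rate over a 1024-bit stack interface (illustrative assumptions, not from the report):

```python
# Rough sanity check of the reported HBM4 bandwidth figures.
# Assumed HBM3e baseline: 9.2 Gb/s per pin, 1024-bit interface per stack
# (both numbers are illustrative assumptions).

pin_rate_gbps = 9.2      # assumed HBM3e per-pin data rate in Gb/s
bus_width_bits = 1024    # assumed bits per HBM stack interface

# Convert Gb/s * bits -> TB/s: divide by 8 (bits to bytes), then by 1000.
hbm3e_tbps = pin_rate_gbps * bus_width_bits / 8 / 1000
hbm4_tbps = hbm3e_tbps * 1.4  # the claimed 1.4x uplift over HBM3e

print(f"HBM3e per-stack bandwidth: ~{hbm3e_tbps:.2f} TB/s")
print(f"HBM4 at 1.4x uplift:       ~{hbm4_tbps:.2f} TB/s")
```

Under these assumptions, a 1.4x uplift over a ~1.18 TB/s HBM3e stack lands right at the ~1.65 TB/s figure cited for HBM4, so the two claims are consistent with each other.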
The new HBM4 chips are also expected to feature a logic die at the base of the memory stack, which acts as a control unit for the memory chips. This logic die design allows for faster data processing and better energy efficiency, making HBM4 ideal for Tesla’s AI-driven applications.
Both companies are expected to accelerate their HBM4 development timelines, with SK Hynix aiming to deliver the chips to customers by the end of 2025. Samsung, on the other hand, is pushing its production plans with its advanced 4-nanometer (nm) foundry process, which could help the company secure a competitive advantage in the global HBM market.
Via TrendForce