South Korean memory giant SK Hynix has made a number of big announcements in recent months, including its plans to build the world’s largest chip factory and the creation of a mobile storage chip that could make phones and laptops run faster.
The company has also started working with Taiwanese semiconductor foundry TSMC to develop and produce the next generation of High Bandwidth Memory, known as HBM4, which will significantly improve HPC and AI performance and could end up in Nvidia's rumored H300 GPU.
At the recent 2024 IEEE 16th International Memory Workshop (IMW 2024), held in Seoul, South Korea, SK Hynix revealed more details about its plans for HBM4, which is expected to be generally available in 2026 (more on that later). As the largest maker of HBM, the company obviously had a lot to say on the subject.
Accelerating HBM development
The company's development roadmap shows that HBM4 will have the same die density as HBM3E (24 Gb) but with 16 layers instead of 12. It will also offer a bandwidth of 1.65 TB/s, up from HBM3E's 1.18 TB/s. Capacity will rise from 36 GB to 48 GB, with a total of 2,048 I/O pins per cube, double that of its predecessor. SK Hynix claims the chip's power consumption will be cut by about half.
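Those headline figures hang together arithmetically. A quick back-of-envelope check (the variable names below are illustrative, not part of any official spec sheet) confirms that 16 layers of 24 Gb dies yield the quoted 48 GB stack, and implies a per-pin data rate of roughly 6.4 Gb/s across the 2,048-pin interface:

```python
# Sanity-check the HBM4 figures quoted above.
DIE_DENSITY_GBIT = 24    # per-die density in gigabits (same as HBM3E)
LAYERS = 16              # HBM4 stack height (vs 12 for HBM3E)
BANDWIDTH_TBPS = 1.65    # total bandwidth in TB/s
IO_PINS = 2048           # I/O pins per cube

# Stack capacity: 24 Gb x 16 dies = 384 Gb = 48 GB
capacity_gb = DIE_DENSITY_GBIT * LAYERS / 8
print(capacity_gb)  # 48.0

# Implied per-pin data rate: 1.65 TB/s * 8 bits / 2048 pins
per_pin_gbps = BANDWIDTH_TBPS * 1e12 * 8 / IO_PINS / 1e9
print(round(per_pin_gbps, 1))  # 6.4
```

In other words, despite doubling the pin count, the per-pin signaling rate stays in the same ballpark as current HBM generations; the bandwidth gain comes mostly from the wider interface.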
According to PC Watch, "The keynote speech also covered the next-generation HBM4E module. Commercialization is planned for 2028. The maximum input/output bandwidth will likely exceed 2 TB/s. Details such as storage capacity and DRAM chips are unknown."
What's really interesting is that PC Watch also reports that, at the end of the keynote, a slide stated that the company's production schedule will be accelerated and that "the commercialization of the 'HBM4' module will be brought forward to 2025 and the 'HBM4E' module to 2026."
If that’s the case, it’s likely that SK Hynix is responding to the threat from its archrival Samsung, which is developing its own HBM4 module expected to debut next year.