AMD, Meta are working on revolutionary tech that could recycle petabytes' worth of RAM
Meta has teamed up with AMD to demonstrate a type of memory that can be added to servers using Compute Express Link (CXL) technology.
CXL is an open standard for fast processor-to-device and processor-to-memory interconnects, and attaching memory over it can lead to much more efficient use of memory, saving hyperscalers money and improving performance.
The demo board, built around an AMD EPYC 9004 "Genoa" processor, had four dual in-line memory module (DIMM) slots around it, in addition to a heat sink and fan. It also had a PCIe x16 connector, according to Serve the Home.
Ushering in the era of CXL 2.0 memory
The demo of CXL 2.0 memory expansion at the Open Compute Project (OCP) Global Summit 2023 was unusual in that it was built around an AMD chip, as opposed to Intel’s Xeon chip built on the Sapphire Rapids architecture.
One of the key promises of CXL memory is the ability for hyperscalers to reuse DRAM: memory controllers that can bridge between DDR4 or DDR5 RAM and CXL could lead to major cost savings.
Essentially, now that DDR4 memory is being phased out in favor of DDR5 RAM, the transition can entail a significant investment for hyperscalers managing their data centers. In fact, it may be one of their biggest costs.
Using CXL memory could be a way for these companies to keep the DDR4 RAM modules they would otherwise phase out in service, repurposing them as additional memory to expand newer systems and improve their server configurations.
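On Linux hosts, CXL-attached memory expanders of this kind typically show up as CPU-less NUMA nodes, so software can place data on the "recycled" DRAM explicitly. The snippet below is a minimal sketch, assuming such a setup and using libnuma (link with -lnuma); the node ID used for the expander is an assumption and should be checked with `numactl --hardware` on the actual system.

```c
/* Minimal sketch: place an allocation on a CXL memory expander that the
 * kernel exposes as a CPU-less NUMA node. The node ID (1) is an assumption. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int cxl_node = 1;            /* assumed NUMA node ID of the CXL expander */
    size_t size = 1UL << 30;     /* 1 GiB test allocation */

    /* Ask the kernel to back this allocation with pages from the CXL node. */
    void *buf = numa_alloc_onnode(size, cxl_node);
    if (!buf) {
        fprintf(stderr, "allocation on node %d failed\n", cxl_node);
        return 1;
    }

    memset(buf, 0, size);        /* touch the pages so they are actually placed */
    printf("1 GiB placed on NUMA node %d (CXL-attached memory)\n", cxl_node);

    numa_free(buf, size);
    return 0;
}
```

In practice, hyperscalers are more likely to let the kernel's memory-tiering logic demote cold pages to the slower CXL tier automatically, but explicit placement like this illustrates how the expansion memory is addressed.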
Multiple configurations of CXL memory are in development, with Serve the Home also reporting on the aforementioned Intel chip displayed next to an Astera Labs CXL memory expansion card at SC22.
FADU also showed off a device last month that can expand a server with additional memory via CXL 2.0. Its Apollo CXL 2.0 switch is built to reduce latency and power consumption during use.
Last year, Samsung also released a new version of its CXL DRAM, built with an ASIC CXL controller and 512GB of DDR5 DRAM to give servers a capacity boost.