Yes, AMD has a secret weapon to fight off Nvidia's AI armada — no, it has absolutely nothing to do with GPUs and everything to do with HBM

AMD will rely on advances in high-bandwidth memory (HBM) in its bid to dethrone Nvidia as the market leader in the components that power generative AI systems.

Continuing the processing-in-memory (PIM) theme, AMD-owned Xilinx showcased its Virtex XCVU7P card, in which each FPGA had eight of SK Hynix's accelerator-in-memory (AiM) modules. The company demonstrated the setup at OCP Summit 2023, alongside SK Hynix's HBM3E memory, according to ServeTheHome.

Performing compute operations directly in memory largely eliminates the need to shuttle data between a system's components, which boosts performance and makes the overall system more energy efficient. In the demonstration, using PIM with SK Hynix's AiM resulted in ten times lower server latency, five times lower energy consumption, and half the cost for AI inference workloads.
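To see why cutting data movement matters so much, here is a rough back-of-the-envelope sketch in Python. The per-bit energy figures and the 100GB traffic number are illustrative assumptions for the sake of the arithmetic, not SK Hynix or AMD measurements.

```python
# Back-of-the-envelope sketch of why processing-in-memory saves energy.
# All figures below are illustrative assumptions, not vendor data:
# off-chip DRAM transfers are often quoted in the 10-20 pJ/bit range,
# while operating on data inside the memory die can cost far less.

PJ_PER_BIT_OFFCHIP = 15.0   # assumed: move a bit from DRAM to the processor and back
PJ_PER_BIT_IN_MEM = 1.0     # assumed: operate on the bit where it already lives

def inference_energy_joules(gigabytes_touched: float, pj_per_bit: float) -> float:
    """Energy to touch `gigabytes_touched` of model data at a given cost per bit."""
    bits = gigabytes_touched * 8e9
    return bits * pj_per_bit * 1e-12

# A hypothetical inference pass that streams 100 GB of weights and activations.
traffic_gb = 100
conventional = inference_energy_joules(traffic_gb, PJ_PER_BIT_OFFCHIP)
pim = inference_energy_joules(traffic_gb, PJ_PER_BIT_IN_MEM)

print(f"Off-chip data movement: {conventional:.1f} J")
print(f"In-memory processing:   {pim:.1f} J  ({conventional / pim:.0f}x less)")
```

Under these assumed numbers, the energy bill for an inference pass is dominated by moving data rather than computing on it, which is exactly the overhead PIM designs aim to remove.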

The latest twist in the ongoing AI arms race

Nvidia and AMD together make the majority of the best GPUs, and it's fair to assume that efforts to improve the quality of these components are critical to improving AI performance. But it's actually by tinkering with the relationship between compute and memory that these companies are seeing huge gains in performance and energy efficiency.

Nvidia is also racing ahead with its own plans to integrate HBM technology into its line of GPUs, including the A100, H100 and GH200, which are among the best graphics cards available. The company struck a deal with Samsung last month to use the memory maker's HBM3 in its GPUs, for example, and will likely expand this with the newer HBM3E units.

PIM is something that several companies have been working on in recent months. Samsung, for example, presented its processing-near-memory (PNM) technology in September. The CXL-PNM module is a 512 GB card with a bandwidth of up to 1.1 TB/s.

This follows a prototype HBM-PIM card created in collaboration with AMD. Adding such a card increased performance by 2.6 times and energy efficiency by 2.7 times over existing GPU accelerators.
