“Predatory pre-announcement” – The brain behind the largest CPU ever calls out Nvidia for spreading ‘FUD’ amidst surprise updated GPU roadmap announcement

Nvidia is using deceptive practices and abusing its market dominance to destroy competition, Cerebras Systems CEO Andrew Feldman said after the company unexpectedly announced its latest GPU product roadmap in October 2023.

Nvidia outlined new graphics cards planned for annual release between 2024 and 2026 to complement the industry-leading A100 and H100 GPUs that are currently in such high demand, with organizations across the industry gobbling them up for generative AI workloads.

But Feldman labeled this news a ‘pre-announcement’ in comments to HPCWire, emphasizing that the company is under no obligation to deliver the components it is teasing. He argues the move will only confuse the market, especially given that Nvidia was a year late with the H100 GPU, for example. And he doubts that Nvidia can sustain this strategy, or even wants to.

Nvidia is just throwing sand in the air

Nvidia teased annual architecture leaps in its announcement, with the Hopper-Next GPU following the Hopper GPU in 2024, followed by the Ada Lovelace-Next GPU, a successor to the Ada Lovelace graphics card, due in 2025.

“Companies have been making chips for a long time, and no one has ever been able to achieve success in a year, because factories don’t change in a year,” Feldman told HPCWire.

“In many ways it was a terrible block of time for Nvidia. Stability AI said they would use Intel. Amazon said Anthropic would run on them. We announced a monster deal that would provide enough computing power that it would be clear that you could build large clusters with us.

“(Nvidia’s) response, not surprisingly to me, in terms of strategy, is not a better product. It’s… throwing sand in the air and moving your hands a lot. And you know, Nvidia was a year late with the H100.”

Feldman’s company designed the world’s largest AI chip, the Cerebras Wafer-Scale Engine 2, which measures 46,226 square millimeters and contains 2.6 trillion transistors across 850,000 cores.

He told the New Yorker that huge chips are better than smaller ones because cores communicate faster when they’re on the same chip, rather than spread out across a server room.
