Just when we thought we were safe, ChatGPT is coming for our graphics cards


Everyone seems to be talking about ChatGPT these days thanks to Microsoft Bing, but given the nature of large language models (LLMs), a gamer would be forgiven for feeling a certain sense of déjà vu.

See, even though LLMs run on massive cloud servers, they rely on dedicated GPUs for all the training they need. Usually that means feeding a downright obscene amount of data through neural networks running on arrays of GPUs with advanced tensor cores, which requires not only a lot of power but also a lot of physical GPUs to do at scale.
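To make that concrete, here's a minimal sketch of the kind of GPU training loop involved, written in PyTorch with a toy model standing in for a real transformer. This is not OpenAI's actual code, just an illustration of the pattern: mixed-precision math keeps the tensor cores busy, and real LLM training runs loops like this across thousands of server GPUs at once.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy stand-in for a transformer; a real LLM has billions of parameters.
model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(1000):
    batch = torch.randn(32, 512, device=device)    # stand-in for tokenized text
    target = torch.randn(32, 512, device=device)
    optimizer.zero_grad()
    # fp16 autocast routes the heavy matrix math through the GPU's tensor cores
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = nn.functional.mse_loss(model(batch), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```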

This looks a lot like crypto mining, but it isn't. Crypto mining has nothing to do with machine learning algorithms, and unlike machine learning, the only value it produces is a highly speculative digital commodity called a token, which some people think is worth something and so are willing to spend real money on.
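For contrast, here's a toy sketch of what proof-of-work mining actually computes: a brute-force search for a nonce whose hash clears a difficulty target. (This is a simplification for illustration, not any real coin's implementation.) There's no model and nothing is learned; it's pure number crunching, which is why the workload happens to love GPUs too.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

print(mine("example block header"))
```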

That led to a crypto bubble and a two-year GPU shortage, as cryptominers bought up every Nvidia Ampere graphics card they could from 2020 to 2022, leaving gamers out in the cold. That bubble has since burst, and GPU stock has stabilized.

But with the emergence of ChatGPT, are we going to see a repeat of the past two years? It’s unlikely, but not impossible either.

Your graphics card won't drive large language models


While you might think the best graphics card money can buy is exactly the kind of thing machine learning types would want for their setups, you'd be wrong. Unless you're a student researching machine learning algorithms, a consumer graphics card isn't going to be enough to drive the kind of models we're talking about.

Most LLMs, and the other generative AI models that produce images or music, really emphasize the first L: Large. ChatGPT has processed an unfathomably large amount of text, and a consumer GPU simply isn't as well suited to that task as the industrial GPUs running on server-class infrastructure.
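Some rough arithmetic shows why. OpenAI hasn't published ChatGPT's exact size, so assume a GPT-3-scale model of 175 billion parameters as a stand-in; even storing the weights in half precision blows far past any consumer card's VRAM before a single token is generated.

```python
params = 175e9        # assumption: GPT-3-scale parameter count
bytes_per_param = 2   # fp16 weights, ignoring activations and optimizer state

weights_gb = params * bytes_per_param / 1e9
print(f"Weights alone: {weights_gb:.0f} GB")                      # ~350 GB
print(f"RTX 4090 VRAM: 24 GB ({weights_gb / 24:.0f}x too small)")
```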

Those are the GPUs that will be in high demand, and that's what makes Nvidia so excited about ChatGPT: not that ChatGPT will help people, but that running it will require pretty much all of Nvidia's server-grade GPUs, which means Nvidia is poised to monetize the excitement around ChatGPT.

ChatGPT runs in the cloud, not on local hardware


Unless you’re Google or Microsoft, you don’t have your own LLM infrastructure. You use someone else’s in the form of cloud services. That means you won’t have a bunch of startups buying up all the graphics cards to develop their own LLMs.

It's more likely that we'll see LLMaaS, or Large Language Models as a Service: Microsoft Azure or Amazon Web Services data centers with huge server farms full of GPUs, ready to rent out for your machine learning workloads. This is something startups love; they hate buying equipment that isn't a ping pong table or a bean bag chair.
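In practice, renting an LLM looks like an API call: the model lives in someone else's data center and you pay per request. Here's a brief sketch using the OpenAI Python client as it looked around ChatGPT's launch; any cloud LLM service follows the same basic shape.

```python
import openai

openai.api_key = "sk-..."  # your rented slice of someone else's GPUs

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Why do LLMs need server-grade GPUs?"}],
)
print(response.choices[0].message.content)
```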

That means that as ChatGPT and other AI models proliferate, they won't run locally on consumer hardware, even when the people running them are a small team of developers. They'll run on server-grade hardware, so no one is coming for your graphics card.

Gamers are not out of the woods yet

So, nothing to worry about then? Well…

The thing is, while your RTX 4090 may be safe, the question becomes how many RTX 5090s Nvidia will make when it has only a limited amount of silicon at its disposal, and using that silicon for server-grade GPUs can be significantly more profitable than using it for a GeForce graphics card.

If there's anything to fear from ChatGPT's rise, it's the prospect of fewer consumer GPUs being made as shareholders demand that more server-grade GPUs be produced to maximize profits. That's not an idle threat either: the rules of capitalism, as currently written, often oblige companies to do whatever maximizes shareholder returns, and the cloud will always be more profitable than selling graphics cards to gamers.

On the other hand, this is really an Nvidia problem. Team Green may go all in on server GPUs and leave a smaller stock of consumer graphics cards, but Nvidia isn't the only company making graphics cards.

AMD's RDNA 3 graphics cards just introduced AI hardware, but it's nowhere near the tensor cores found in Nvidia's cards, making Nvidia the de facto choice for machine learning work. That means AMD could become the default card maker for gamers while Nvidia moves on to something else.

It's certainly possible, and unlike with crypto, AMD cards probably won't become second-tier LLM cards that are still good enough if you can't buy an Nvidia card. AMD really isn't equipped for machine learning at all, especially at the level LLMs require, so AMD simply doesn't play a role here. That means there will always be consumer-grade graphics cards for gamers, and good ones too; there just may not be as many Nvidia cards as there once were.

Partisans of Team Green may not like that future, but it’s the most likely one given the rise of ChatGPT.
