AI marketing is doing us a disservice, especially when it comes to CPUs

Artificial intelligence is making itself felt in more and more areas of our lives, especially since the launch of ChatGPT. Depending on your view, it’s either the big bad boogeyman that will steal jobs and cause widespread copyright infringement, or a gift with the potential to catapult humanity into a new age of enlightenment.

What has already been achieved with the new technology, from Midjourney and LLMs to smart algorithms and data analytics, is nothing short of radical. It is a technology that, like most silicon-based breakthroughs that preceded it, has a lot of potential. It can do a lot of good, but many also fear a lot of harm, and which of those outcomes we get depends entirely on how it is managed and regulated.

It’s not surprising, given how quickly AI has found its way into the zeitgeist, that tech companies and their sales teams are leaning just as hard into the technology, working its various iterations into their latest products, all with the goal of encouraging us to buy their latest hardware.

Look at this new AI-powered laptop, that motherboard that uses AI to overclock your CPU to the limit, those new webcams with AI deep-learning technology. You get the point. You just know that from Silicon Valley to Shanghai, shareholders and business leaders are asking their marketing teams, “How can we get AI into our products?” in time for the next CES or the next Computex, no matter how modest the actual value to us consumers might be.

My biggest problem comes in the form of the latest generation of CPUs being launched by the likes of AMD, Intel and Qualcomm. These are not bad products by a long shot. Qualcomm is making huge leaps into the desktop and laptop chip market, and the performance of both Intel and AMD’s latest chips is nothing short of impressive. Generation after generation, we’re seeing higher performance scores, better efficiency, broader connectivity, lower latencies, and ridiculous power savings (here’s looking at you, Snapdragon), along with a slew of innovative design changes and choices. To most of us mere mortals, it is magical, far beyond the basic zeros and ones.

Despite this, we still get AI slapped onto everything, regardless of whether it actually adds anything useful to a product. We’ve added new neural processing units (NPUs) to chips: co-processors designed to accelerate the neural-network math that AI workloads depend on. These are then dropped into low-powered laptops so they can run AI features like Microsoft’s Copilot assistant and tick that AI checkbox, as if local silicon makes much difference to a predominantly cloud-based solution.
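If you’re curious whether your laptop’s NPU is even visible to your software, here’s a minimal sketch using ONNX Runtime’s Python API. The provider names and the model.onnx path are illustrative assumptions; what actually shows up depends on your hardware and the ONNX Runtime build you have installed.

```python
# Minimal sketch: check which ONNX Runtime execution providers are
# available and prefer an NPU-backed one, falling back to the CPU.
# "model.onnx" is a hypothetical placeholder model file.
import onnxruntime as ort

available = ort.get_available_providers()
print("Available providers:", available)

# Qualcomm NPUs typically surface as QNNExecutionProvider; many Windows
# NPUs and GPUs go through DmlExecutionProvider (DirectML). The CPU
# provider is always present as a fallback.
preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in available]

session = ort.InferenceSession("model.onnx", providers=providers)
print("Actually running on:", session.get_providers()[0])
```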

The thing is, though, when it comes to AI, CPU performance is insignificant. Like, seriously unimportant, to the point where it’s not even remotely relevant. It’s like trying to launch NASA’s James Webb Space Telescope with a bottle of Coke and some Mentos.

Nowadays everything is AI-powered this or that (Image credit: Future)

The emperor’s new clothes?

I’ve been testing a range of laptops and processors over the past month, looking specifically at how they handle artificial intelligence tasks and apps. Using UL’s Procyon benchmark suite (from the makers of 3DMark), you can run its Computer Vision inference test, which spits out a tidy score for each part. Intel Core i9-14900K? 50. AMD Ryzen 9 7900X? 56. Ryzen 9 9900X? 79 (that’s a 41% performance increase gen-on-gen, by the way, which is seriously huge).

But here’s the point: run a GPU through that same test, like Nvidia’s RTX 4080 Super, and it scores 2,123. That’s a 2,587% performance increase over that Ryzen 9 9900X, and that’s without even taking advantage of Nvidia’s own TensorRT SDK, which scores higher still.
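If you want to sanity-check that percentage maths yourself, it’s a one-liner: uplift is (new − old) / old × 100. Here’s a quick sketch using the scores quoted above:

```python
# Back-of-the-envelope check on the Procyon Computer Vision scores
# quoted in the article. Percentage uplift = (new - old) / old * 100.
scores = {
    "Ryzen 9 7900X": 56,
    "Ryzen 9 9900X": 79,
    "RTX 4080 Super": 2123,
}

def uplift(old, new):
    return (new - old) / old * 100

print(f"7900X -> 9900X: {uplift(scores['Ryzen 9 7900X'], scores['Ryzen 9 9900X']):.0f}%")
print(f"9900X -> RTX 4080 Super: {uplift(scores['Ryzen 9 9900X'], scores['RTX 4080 Super']):,.0f}%")
```

That prints 41% and 2,587%, matching the numbers above.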

The simple fact is that AI demands parallel processing performance like nothing else, and right now nothing delivers that better than a graphics card. Elon Musk knows this: he just had 100,000 Nvidia H100 GPUs installed in xAI’s latest AI training system. That’s more than $1 billion worth of graphics cards in a single supercomputer.
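To see what that parallelism gap looks like in practice, here’s a rough sketch timing a single large matrix multiply (the bread-and-butter operation of neural networks) on CPU versus GPU with PyTorch. The matrix size is arbitrary and the speed-up will vary wildly with your hardware, so treat any numbers as illustrative only.

```python
# Rough sketch: time one large matrix multiply, the core operation of
# neural-network inference, on the CPU and (if present) a CUDA GPU.
import time
import torch

n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

t0 = time.perf_counter()
torch.matmul(a, b)
cpu_s = time.perf_counter() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()              # make sure transfers are done
    t0 = time.perf_counter()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()              # wait for the kernel to finish
    gpu_s = time.perf_counter() - t0
    print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s  ~{cpu_s / gpu_s:.0f}x faster")
else:
    print(f"CPU: {cpu_s:.3f}s (no CUDA GPU found)")
```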

Obscured by clouds

To make matters worse, the vast majority of popular AI tools today require cloud computing to fully function anyway.

LLMs (large language models) such as ChatGPT and Google Gemini require so much processing power and storage space that running them on a local machine is simply impossible. Even Adobe’s Generative Fill and the AI smart filter technology in the latest versions of Photoshop rely on cloud computing to process images.

Large language models simply require too much processing power to function on your home setup, sorry (Image credit: Google)

It’s just not feasible to run the vast majority of today’s popular AI programs on your own home computer. There are exceptions, of course; certain AI image generation tools are much easier to run on a local machine, but even then, 99% of the time you’ll be better off letting cloud computing handle it.
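As a sketch of what one of those exceptions looks like, here’s roughly how local image generation works with Hugging Face’s diffusers library, assuming a CUDA GPU with a few gigabytes of VRAM. The checkpoint ID is just one popular example, not a recommendation, and the prompt is obviously mine.

```python
# Sketch: Stable Diffusion-style image generation running entirely on a
# local machine via the diffusers library. Requires a CUDA GPU with
# enough VRAM; the checkpoint ID below is one commonly used example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a CPU wearing the emperor's new clothes").images[0]
image.save("local_generation.png")
```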

The one major exception to this rule is localized upscaling and supersampling: things like Nvidia’s DLSS and Intel’s XeSS. Otherwise, you’re basically out of luck.

Yet here we are. Another week, another AI-powered laptop, another AI chip, and much of it, in my opinion, amounts to much ado about nothing.
