AI PCs: Is it all hype, should you make the switch, and if so, when?

The emerging landscape of AI PCs will transform the way we interact with our devices, desktops and laptops alike. This new technology brings both benefits and drawbacks, and before embarking on an adoption programme it is important to know the facts.

What is an AI PC?

AI PCs feature a dedicated Neural Processing Unit (NPU) within the System on Chip (SoC) that handles AI applications and experiences on the device itself, whether that compute powers language models, targeted tasks, security, or privacy. One of the biggest benefits is low latency, which enables greater personalisation and meets the growing demand for autonomy.

AI PCs represent a new variant of edge computing, where computation is performed close to the data source or the end user rather than relying solely on the cloud. This blended approach combines the power of the cloud for intensive tasks with the speed and privacy benefits of local processing. AI PCs demonstrate this by using local hardware such as GPUs and NPUs for AI tasks, reducing latency, saving bandwidth, and improving data security by limiting the amount of sensitive data sent to the cloud. The overall effect is a better user experience that supports a range of real-time analytics and AI development.
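As an illustration of this blended model, a local AI runtime might decide per request whether data can leave the device. The sketch below is purely illustrative: the function name and the routing rules are assumptions, not a real vendor API.

```python
# Minimal sketch of a privacy-aware routing policy for a blended
# cloud/local AI setup. The rules here are illustrative assumptions.

def route_request(contains_personal_data: bool, needs_heavy_compute: bool) -> str:
    """Decide where an AI request should run."""
    if contains_personal_data:
        # Sensitive data stays on the device's NPU/GPU: lower latency,
        # and nothing leaves the machine.
        return "local"
    # Non-sensitive but intensive jobs can still use cloud scale.
    return "cloud" if needs_heavy_compute else "local"

print(route_request(True, True))    # local
print(route_request(False, True))   # cloud
print(route_request(False, False))  # local
```

The key design point is that sensitivity, not workload size, is checked first: personal data never leaves the device, while everything else is placed wherever it runs best.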

Anna Keef

Regional Director – UK & Ireland, Kingston Technology EMEA.

Analyst rating

Given that industry commentators have identified 2024 as the year of the AI PC, it is interesting to view the landscape through the lens of the analysts. Gartner predicts 54.4 million AI PCs will ship this year, while IDC cites 50 million, and Canalys, using a slightly different metric, believes one in five shipments will be an AI PC. Looking ahead, Gartner estimates that 43% of all PC shipments will be AI PCs by 2025, while both IDC and Canalys predict the figure will rise to 60% by 2027. This represents a clear market shift towards AI PCs.

AI PC Chipset Advancement

The evolution of AI PCs has relied on bringing hardware and processor together to support AI applications at the PC level; an early example of this system-on-chip approach was the A11 Bionic processor in the iPhone. Now, with the introduction of chiplet designs such as the Intel Core Ultra processor, we have seen a new style of CPU whose parts can be suited to different purposes. Instead of the traditional monolithic CPU, we now have a tile-based CPU, which allows one tile to be dedicated to the GPU, another to the compute cores of the processor, and an SoC tile – which includes the NPU – to support the AI engine. Chip manufacturers are now developing and releasing their solutions, making AI PCs a realistic prospect for users.

Importance of combining CPU, GPU and NPU

Modern computing tasks require many different computing capabilities that are best fulfilled by the combination of CPU, GPU and NPU. The CPU, or central processing unit, is a general-purpose processor designed for sequential processing; it runs the operating system and the conventional apps we all use on our laptops. The GPU, or graphics processing unit, was originally created for graphics rendering but is equally effective at parallel computation, making it ideal for the matrix and vector operations that are essential to AI and deep learning. The NPU, or neural processing unit, is a specialised processor designed specifically for AI tasks, efficiently accelerating neural network calculations while maintaining low power consumption.

This triumvirate enables flexible computing, where each type of processor can be assigned the tasks it handles best, leading to significant improvements in performance and energy efficiency. And these aren't designed only for PCs and laptops: CPUs, GPUs, NPUs and SoCs that combine all three are enabling a growing number of devices, from smartphones to embedded systems in sectors such as manufacturing, to realise the potential of AI.
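The division of labour described above can be sketched as a simple dispatch rule. The workload categories and the battery-saving rule below are simplified assumptions for illustration, not how any real scheduler or vendor runtime is implemented.

```python
# Illustrative sketch of routing workloads across CPU, GPU and NPU.
# Categories and rules are simplified assumptions, not a real API.

def choose_processor(workload: str, battery_powered: bool = False) -> str:
    """Pick the processor best suited to a class of workload."""
    if workload == "sequential":
        # OS and conventional apps: the general-purpose CPU
        return "CPU"
    if workload in ("graphics", "matrix-math"):
        # Bulk matrix/vector work suits the GPU's parallel lanes,
        # but on battery the low-power NPU may be preferred
        return "NPU" if battery_powered else "GPU"
    if workload == "neural-inference":
        # Sustained AI tasks at low power: the NPU's speciality
        return "NPU"
    return "CPU"  # safe default for anything unrecognised

print(choose_processor("sequential"))                          # CPU
print(choose_processor("neural-inference"))                    # NPU
print(choose_processor("matrix-math", battery_powered=True))   # NPU
```

Real schedulers weigh far more signals (thermal headroom, driver support, queue depth), but the principle is the same: match the shape of the work to the silicon.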

What role do memory and storage play?

One of the biggest challenges to confidently adopting AI PCs is the lack of information – and the myths – surrounding how much memory is needed to run laptops and PCs built on AI PC chiplets. As it stands there are no minimum specs, and it's common to see systems with 8GB, 16GB or 32GB of memory. However, as applications evolve and the use of AI PCs becomes more demanding, we expect to see a shift in memory requirements.
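As a rough illustration of why memory headroom matters, the footprint of a locally run language model can be estimated from its parameter count and quantisation level. The rule of thumb below is a back-of-envelope assumption for sizing, not a vendor specification.

```python
# Back-of-envelope memory estimate for a local language model.
# Assumption: parameters * (bits / 8) bytes of weights, plus roughly
# 20% overhead for activations and runtime buffers.

def model_memory_gb(params_billion: float, bits: int, overhead: float = 1.2) -> float:
    """Approximate memory footprint in GB for a quantised model."""
    return params_billion * (bits / 8) * overhead

# A 7B-parameter model at 4-bit quantisation fits comfortably in 8GB...
print(round(model_memory_gb(7, 4), 1))   # 4.2
# ...but the same model at 16-bit precision already strains a 16GB system.
print(round(model_memory_gb(7, 16), 1))  # 16.8
```

By this estimate, an 8GB machine limits you to small, heavily quantised models, while 32GB leaves room for larger models alongside the OS and everyday applications, which is why anticipating future requirements matters.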

The same goes for storage. Some systems have 256GB of SSD storage, while others have 1TB or 2TB. It’s important to think beyond your needs today, or even next year and beyond, and anticipate what future applications may require and what your storage and memory requirements will be.

Current use cases

Examples of where AI PC is being used are growing by the day. Microsoft Copilot is groundbreaking in business productivity, but equally popular are solutions such as Zoom, Webex and Slack for project management. Jasper is a popular sales and marketing tool, while the Adobe suite is ideal for media and creative tasks, Audacity for audio and GIMP for creative design.

It’s clear that these tools are focused on communication and creativity, and they reflect the early stages of AI adoption. They are high-demand applications and a clear starting point for the benefits of AI, where it makes an immediate difference in collaboration and content creation. For many users, the first approach will involve using AI PCs not in isolation but with cloud AI counterparts still part of the mix. As the autonomy and security benefits of running AI applications on local hardware become more important, this balance will shift.

As the landscape evolves, technology will become more advanced and accessible and applications will diversify enormously. We should see the current focus areas as a testing ground for the capabilities of AI in terms of user acceptance. There will be a learning curve as users adopt AI, but during this learning curve the foundations of AI are being laid across multiple industries and use cases.

Why local is good

The biggest advantage of running AI models on AI PCs is that all processing takes place locally, increasing security and privacy and allowing users to avoid the risks of moving or storing sensitive data in the cloud – or sending it to public AI models. AI PCs have the potential to reduce the likelihood of data breaches or unauthorised access, and give organisations greater control over compliance with data protection regulations such as GDPR simply by keeping data on-site.

Additionally, locally operated models are more resilient to network-related issues, ensuring that essential AI functionality remains accessible even if cloud services go down due to connectivity issues or cyberattacks on the cloud infrastructure.
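This resilience pattern amounts to graceful degradation: prefer the cloud when it is reachable, but always keep a local model available as a fallback. The sketch below uses stand-in model functions (the names and behaviour are assumptions, not a real vendor API) to show the shape of the logic.

```python
# Sketch of graceful degradation for AI functionality. The model
# functions are stand-ins that simulate a cloud outage; they are
# assumptions for illustration, not a real API.

def cloud_model(prompt: str) -> str:
    raise ConnectionError("cloud unreachable")  # simulate an outage

def local_model(prompt: str) -> str:
    return f"[local NPU] answer to: {prompt}"

def run_inference(prompt: str) -> str:
    """Try the cloud first; fall back to the on-device model."""
    try:
        return cloud_model(prompt)
    except ConnectionError:
        # Network issue or cloud outage: essential AI stays available
        return local_model(prompt)

print(run_inference("summarise this report"))
```

Because the fallback path never touches the network, essential AI features keep working through connectivity failures or attacks on cloud infrastructure.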

Of course, local AI devices still need strong security measures to protect against local cyber threats such as malware or physical tampering. A comprehensive approach must be taken to secure model training, data encryption, good access control, and continuous monitoring for potential threats.

Preparing for change

Before you make the decision to migrate to AI PCs, consider what your organisation needs now, what is available today to meet that need, what applications are needed for specific functions, and where you are in your renewal cycle. For example, if you are willing to be an early adopter and are fully aware that applications for the AI PC are currently limited but meet your needs, then you are well placed to make the switch. However, if you are not due to refresh your hardware in the next three to four years, it may be worth waiting until the technology and applications have matured.

By keeping a close eye on AI PC chiplets from key manufacturers like AMD and Intel, and understanding how memory and storage are evolving to keep pace (e.g. DDR4 vs. DDR5 memory), you can determine the right time to implement AI PCs in terms of applications, performance, and cost.

Another important factor is internal readiness. Staff must be trained to fully optimize AI PC systems and operate them within a cyber-secure environment. AI technology is changing rapidly and adoption requires a comprehensive strategy. One of the biggest challenges today is the lack of skilled professionals who understand the implications of AI from all perspectives. Rather than rushing to manage AI regulatory compliance once AI PCs are implemented, the best approach is to be upfront about the policies and practices that will be needed, and to understand the resources that will be needed internally.

A final word

As with any new technology, there are subtle trade-offs in the opportunities and risks of adoption. Early adopters who can take advantage of the AI PC applications currently available may gain first-mover advantages over their competitors. Other organisations will want to ensure they have the right systems and policies in place to support AI PC adoption. It is also worth considering a more nuanced approach and ‘buying’ yourself time by upgrading key components – since this technology is developing rapidly – rather than committing fully today and changing everything at once.

But if you buy an AI PC today, it’s best to ensure that you can upgrade your storage or memory in the future. This means your hardware is better equipped to run AI PC applications that can work alongside existing AI applications in the cloud.


This article was produced as part of Ny BreakingPro’s Expert Insights channel, where we showcase the best and brightest minds in the technology sector today. The views expressed here are those of the author and do not necessarily represent those of Ny BreakingPro or Future plc. If you’re interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
