The tech tightrope: protecting privacy in an AI-powered world
One of the four main themes at this year’s World Economic Forum in Davos was ‘Artificial Intelligence (AI) as a driving force for the economy and society’. Between 10 and 15 sessions dealt at least in part with AI, if not exclusively with this highly influential technology. While many of these panels highlighted the potential benefits of generative AI and large language models (LLMs) for sectors such as fintech, health research and climate science, an even greater emphasis was placed on concerns about the widespread reach of AI into our personal lives and its potential consequences.
For AI to be effective, it needs enormous amounts of data on which to train. Positive outcomes include deep learning models that can recognize complex patterns and make accurate predictions, enabling biometrics for homeland security and protection against financial fraud, for example. The most common uses of emerging AI today are big data algorithms for targeted advertising and real-time translation applications, which improve continuously as their data pools expand.
However, a line must be drawn between publicly available information used to train AI systems and the personal and proprietary data that is increasingly exploited and analyzed without user consent, despite its sensitive nature. An example of this is biometric security, which, while great for securing our borders, is also deeply personal and can easily be used nefariously if it falls into the wrong hands.
This raises another concerning topic in AI: the potential for leaks and breaches. Unfortunately, most existing AI and LLM platforms and apps (such as ChatGPT) are riddled with vulnerabilities, so much so that many large enterprises have banned their use to protect their trade secrets, a trend we see growing in scale and scope.
Therefore, one of the most common topics at Davos was the urgent need to regulate and limit the reach of AI, both now and in the future, especially when it comes to privacy. Many data-related regulations already exist, such as HIPAA, GDPR and CCPA/CPRA, but such legislation only requires companies to be transparent about their use of private information, or allows consumers to opt out of programs that would otherwise use their personal data. That’s effective at encouraging accountability, but regulations and policies can’t actually protect data from leaks or attacks.
Challenges in secure data processing
The only way to truly secure our privacy is to proactively adopt the highly secure new technological measures available to us, measures that place a strong emphasis on privacy and data encryption while still giving breakthrough technologies, such as generative AI models and cloud computing tools, full access to large amounts of data to unleash their full potential.
Protecting data at rest (i.e. in storage) or in transit (i.e. moving across or between networks) is already a solved, ubiquitous practice: the data is encrypted, which is generally enough to keep it safe from unwanted access. The overwhelming challenge is how to secure data while it is in use (that is, while it is being processed and analyzed).
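To make that gap concrete, here is a minimal Python sketch using the widely available cryptography package (the record and the processing step are hypothetical, for illustration only): encrypting data for storage or transit is a one-liner, but the moment the data has to be computed on, it must first be decrypted.

```python
from cryptography.fernet import Fernet

# Encryption at rest / in transit: standard and well solved.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"patient_id=123;blood_pressure=140/90"  # hypothetical sensitive record
ciphertext = fernet.encrypt(record)               # safe to store or transmit

# Encryption in use: here the model breaks down. To analyze the record,
# it must first be decrypted, exposing the plaintext to the host machine.
plaintext = fernet.decrypt(ciphertext)
print(plaintext)  # the sensitive data is now in the clear during processing
```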
The leading privacy-enhancing technology in wide use today is Confidential Computing, which attempts to protect a company’s IP and sensitive data by creating a special enclave, called a Trusted Execution Environment (TEE), in the server’s CPU, inside which sensitive data is processed. Access to the TEE is restricted so that when the data within it is decrypted for processing, it is not accessible to any computing resources other than those inside the TEE.
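Conceptually, the flow resembles the following plain-Python simulation. This is a toy illustration, not real enclave code: actual TEEs such as Intel SGX or AMD SEV are provisioned through vendor SDKs and hardware attestation. The point is that ciphertext goes in, ciphertext comes out, and the plaintext exists only inside the enclave boundary.

```python
from cryptography.fernet import Fernet

class SimulatedTEE:
    """Toy stand-in for a Trusted Execution Environment: the decryption
    key and the plaintext live only inside this boundary."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)  # key provisioned into the enclave

    def process(self, ciphertext: bytes) -> bytes:
        # Decryption happens *inside* the enclave; the plaintext never
        # leaves this method. This is also the residual risk: the data
        # is momentarily in the clear within the TEE.
        plaintext = self._fernet.decrypt(ciphertext)
        result = plaintext.upper()       # placeholder for real analysis
        return self._fernet.encrypt(result)

key = Fernet.generate_key()
tee = SimulatedTEE(key)
sealed_input = Fernet(key).encrypt(b"sensitive record")
sealed_output = tee.process(sealed_input)  # the host only ever sees ciphertext
```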
A major problem with Confidential Computing is that it doesn’t scale: a TEE must be created and defined for each specific use case, so the time, effort and cost of covering every possible AI model and every possible cloud instance quickly become prohibitive.
However, the bigger problem with Confidential Computing is that it is not infallible. The data in the TEE still needs to be decrypted before it can be processed, raising the possibility that attack vectors, including future quantum ones, could exploit vulnerabilities in the environment. If the data is decrypted at any point in its lifecycle, it can be exposed. Furthermore, once the AI or computing tools access personal data, even inside the TEE, all anonymity is lost the moment it is decrypted.
A revolution in data privacy
The only post-quantum technology for privacy is lattice-based Fully Homomorphic Encryption (FHE), which allows data to be processed while it remains encrypted throughout its lifecycle, including during processing. This ensures that no breaches or leaks can occur and that the anonymity of the data in use is guaranteed.
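As an illustration of that property, lattice-based FHE libraries such as TenSEAL, a Python wrapper around Microsoft SEAL, allow arithmetic directly on ciphertexts. The sketch below uses hypothetical patient readings, and the scheme parameters are typical example values rather than a vetted production configuration.

```python
import tenseal as ts  # pip install tenseal

# CKKS: a lattice-based FHE scheme for approximate arithmetic on reals.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

readings = [120.0, 135.5, 140.2]         # hypothetical patient data
enc = ts.ckks_vector(context, readings)  # encrypted on the client side

# The server computes directly on the ciphertext: scale and shift
# each value without ever seeing the underlying numbers.
enc_result = enc * 0.5 + 10

# Only the key holder can decrypt; CKKS results are approximate by design.
print(enc_result.decrypt())  # ~[70.0, 77.75, 80.1]
```

The data stays encrypted end to end: the party running the computation never holds a decryption key, which is exactly the guarantee Confidential Computing cannot make.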
The benefits of FHE are felt both in the effectiveness of AI and cloud computing tools and in the assurance of security for individuals and the companies charged with protecting their data. For example, imagine how much more effective an AI model for early cancer detection could be if it had access to millions of patient records instead of thousands, while all of that data remained securely encrypted so it could not be breached or leaked, and no patient was ever identifiable to the model. Confidentiality is maintained at all times.
The only obstacle that has prevented FHE from being widely adopted so far is the enormous processing burden it entails, with deep memory, compute and bandwidth bottlenecks to overcome. It is estimated that implementing FHE in a hyperscale cloud data center would require acceleration roughly a million times beyond current-generation CPUs and GPUs. A growing number of software-based solutions have emerged in recent years, but they have struggled to scale to the demands of machine learning, deep learning, neural networks and heavy algorithmic workloads in the cloud.
Only a dedicated architecture can address these specific bottlenecks and enable real-time FHE at a TCO comparable to that of processing unencrypted data, so that the end user cannot tell the difference between processing on a CPU and on a dedicated processor. As such, it is becoming increasingly clear why OpenAI CEO Sam Altman is investing $1 billion in developing a dedicated hardware processor for private LLMs, and why the hyperscale cloud service providers are following suit.
Privacy: the next frontier
As generative AI has become a focus at Davos and other global forums, it is receiving the attention it deserves, both for its potential benefit to society and for its shortcomings. Any analysis of the challenges posed by AI inevitably turns to privacy as a textbook example.
So privacy is quickly becoming the next huge technology industry. As more and more technological breakthroughs emerge that capitalize on our personal data, and as data is created and processed at an exponential pace, the demand for security measures to safeguard our privacy increases.
Regulations cannot protect us. Only a technological solution can address a technological problem. And when it comes to privacy, only a specific post-quantum solution will prevail.
This article was produced as part of TechRadar Pro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro