The Security Implications of AI Integration – Azeria Labs CEO Explores the Future of AI and Its Threat Landscape

More needs to be done to address the skills and resources gap around AI integration and security, Maria Markstedeter, CEO and founder of Azeria Labs, told the audience at the recent Dynatrace Perform 2024 conference in Las Vegas.

To combat the risks of new innovations such as AI agents and composite AI, security teams and data scientists must improve their communication and collaboration.

Having experienced the frustrations that a lack of resources brings through her experience reverse engineering ARM processors, Markstedter believes better collaboration and understanding are needed to minimize the threats posed by AI integrations.

“You can’t find vulnerabilities in a system you don’t fully understand”

The increasing size and complexity of data processed by AI models is pushing the boundaries of what security teams are able to model threats, especially when security professionals don’t have the tools to understand them.

New attacks and new vulnerabilities “require that you understand data science and how AI systems work, but at the same time also have a very deep understanding of security, threat modeling and risk management,” Markstedter said.

This is especially true when it comes to new multimodal AI systems that can process multiple data inputs, such as text, audio, and images, simultaneously. Markstedter points out that while unimodal and multimodal AI systems differ greatly in the data they can process, the overall call-and-response nature of human-AI interaction remains largely the same.

“This transactional nature is just not the silver bullet we were hoping for. This is where AI agents come into the picture.”

AI agents provide a solution to this highly transactional nature by essentially having the ability to think about their task and come up with a unique end result depending on the information available to them at the time.

This poses a significant and unprecedented threat to security teams because “the idea of ​​access and identity management needs to be reevaluated because we are essentially entering a world where we have a non-deterministic system that can access a large number of corporate data and apps, and has the power to perform non-deterministic actions.”

Markstedter states that because these AI agents need to access internal and external data sources, there is a significant risk of these agents obtaining malicious data that would otherwise not appear malicious to a security assessor.

“This processing of external data will become even more difficult with multimodal AI, because the malicious instructions now do not have to be part of text on a website or part of an email, but can be hidden in images and audio files.”

It’s not all bad news, however. The evolution of composite systems that combine multiple AI technologies into a single product can “create tools that give us a much more interactive and dynamic analytics experience.”

By combining threat modeling with composite AI, and encouraging security teams to work more closely with data scientists, it is possible to not only significantly reduce the risks of AI integrations, but also improve the skills of security teams.

More from Ny Breaking

Related Post