The EU AI Act: What do CISOs need to know to strengthen AI security?

It’s been a few months since the EU AI Act – the world’s first comprehensive legal framework for artificial intelligence (AI) – came into force.

Its purpose? To ensure the responsible and safe development and use of AI in Europe.

It marks an important moment for AI regulation, coming in response to the rapid adoption of AI tools in critical sectors such as financial services and government, where the consequences of a compromised or misused system could be catastrophic.

The new law is part of an emerging regulatory framework that reinforces the need for robust cybersecurity risk management, alongside the European Cyber Resilience Act (CRA) and the Digital Operational Resilience Act (DORA). Together, these will push cybersecurity transparency and effective risk management higher up the business agenda, but they also add further layers of complexity to compliance and operational resilience.

For CISOs, navigating this sea of regulation is a significant challenge.

Stefanus de Vries

Key provisions of the EU AI Act

The AI Act introduces a new regulatory layer of AI governance on top of existing legal frameworks such as data privacy, intellectual property and anti-discrimination law.

Key requirements include establishing a robust risk management system, a security incident response policy and technical documentation demonstrating compliance with transparency obligations. The Act also bans certain types of AI system outright, for example emotion recognition or social scoring systems, with the aim of curbing algorithmic bias.

It’s also about compliance throughout the supply chain. It is not just the primary providers of AI systems that must adhere to these regulations, but all parties involved, including those integrating General Purpose AI (GPAI) and third-party foundation models.

Failure to comply with these new rules could result in fines of up to €35 million or 7% of a company’s total worldwide annual turnover for the previous financial year, whichever is higher – though the penalty varies depending on the type of violation and the size of the company.

Companies will therefore have to comply with these new regulations if they want to do business in the EU, but they should also draw on other available guidance, such as the UK National Cyber Security Centre (NCSC) guidelines for secure AI system development, to promote a culture of responsible software development.

Threats targeted by the law

AI can streamline workflows and improve productivity, but a compromised AI system can expose critical vulnerabilities and lead to extensive data breaches.

As AI technology becomes more sophisticated and businesses grow increasingly dependent on it to support complex tasks, threat actors are also evolving their techniques to hijack AI models and steal data. This could drive an increased frequency of high-impact breaches and data leaks, such as the recent Snowflake and MOVEit attacks that affected millions of end users.

With this new EU AI law, both foundation model providers and organizations using AI are responsible for identifying and mitigating these risks. By looking at the broader AI lifecycle and supply chain, the law aims to strengthen the overall cybersecurity and resilience of AI used in business – and in life.

But it’s important to remember that it’s not just EU countries that are affected. Companies abroad must also comply if they offer AI systems on the EU market, or if their AI systems impact individuals within the EU. With the law requiring compliance across the entire supply chain – not just AI providers – this is a truly global imperative.

How can companies adapt to all these new rules?

Adopting Secure by Design principles

Meeting these requirements will be much easier if security is built into the design phase of software development, rather than bolted on as an afterthought. Threat modeling – the rigorous analysis of software at the design stage – is one way teams can comply more effectively with these new regulations.

Embedding Secure by Design principles into the AI development process helps identify the types of threats that could harm an organization, and prompts companies to think about security risks specific to machine learning systems, such as data poisoning, input manipulation, and data extraction (see the sketch below). It also creates a collaborative environment between security and development teams, ensuring security is prioritized from the start, in line with the new regulations.
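To make this concrete, here is a minimal, illustrative threat-model sketch in Python for a simple ML pipeline. The components, threat names and mitigations are assumptions chosen for illustration, not an authoritative or exhaustive taxonomy.

```python
# Minimal, illustrative threat-model sketch for an ML pipeline.
# The components, threats, and mitigations below are examples for
# discussion, not an exhaustive or authoritative taxonomy.

from dataclasses import dataclass

@dataclass
class Threat:
    component: str   # where in the ML lifecycle the threat applies
    name: str        # e.g. data poisoning, input manipulation
    mitigation: str  # design-stage control to record and test

THREAT_MODEL = [
    Threat("training data", "data poisoning",
           "validate provenance; checksum and review ingested datasets"),
    Threat("inference API", "input manipulation (adversarial inputs)",
           "schema-validate and sanitize inputs; rate-limit callers"),
    Threat("model artifact", "data extraction",
           "throttle queries; monitor for scraping patterns"),
]

def report(threats: list[Threat]) -> None:
    """Print the model so security and dev teams can review it together."""
    for t in threats:
        print(f"[{t.component}] {t.name}\n  mitigation: {t.mitigation}")

if __name__ == "__main__":
    report(THREAT_MODEL)
```

Recording the threat model as data in this way makes it easy for security and development teams to review, extend and test it together during design reviews.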

In the US, the Cybersecurity and Infrastructure Security Agency (CISA) has urged manufacturers of software used by the federal government to adhere to Secure by Design principles. While those guidelines address technology implementation more broadly, the same Secure by Design approach applies to AI development and helps foster a culture of responsible software building. In the UK, meanwhile, the Ministry of Defence has already implemented Secure by Design principles, setting a standard for other industries to follow.

For CISOs, this shift fosters a culture that anticipates regulatory requirements such as the EU AI Act, allowing companies to meet compliance standards proactively as they build AI solutions.

Key lessons for CISOs

AI is changing the game for businesses worldwide, so CISOs must take a proactive approach to cybersecurity.

They should look to leverage Secure by Design principles to bring security and developer teams closer together, and give AI software developers the techniques needed to ensure that AI applications are secure at every stage of their development. By building a threat model of the system and exercising it against hostile inputs, developers can stress-test and mitigate vulnerabilities during the design phase, ensuring their products comply with the new regulations from the very beginning. One such design-stage check is sketched below.
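One of the mitigations from the earlier threat model can be exercised directly at the design stage. The sketch below, illustrative rather than prescriptive, stress-tests a hypothetical input validator for an inference endpoint against malformed and adversarial inputs; the schema and limits are assumptions made for the example.

```python
# Illustrative design-stage check: validate inference inputs before they
# reach the model, one concrete mitigation from the threat model above.
# The schema and limits are assumptions for the sketch, not a standard.

import math

MAX_FEATURES = 32
FEATURE_RANGE = (-1e6, 1e6)

def validate_features(features):
    """Reject inputs that fall outside the contract the model expects."""
    if not isinstance(features, list) or len(features) > MAX_FEATURES:
        raise ValueError("unexpected input shape")
    for x in features:
        if not isinstance(x, (int, float)) or not math.isfinite(x):
            raise ValueError("non-numeric or non-finite feature")
        lo, hi = FEATURE_RANGE
        if not lo <= x <= hi:
            raise ValueError("feature outside accepted range")
    return features

# Design-stage stress tests: each hostile input should be rejected.
for bad in ([float("nan")], [float("inf")], ["1; DROP TABLE"], [0.0] * 1000):
    try:
        validate_features(bad)
        print("MISSED:", bad)
    except ValueError:
        print("rejected:", bad)
```

Checks like this are cheap to run in continuous integration, turning the threat model from a document into an enforced, testable contract.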

It’s not just EU companies that will have to comply with the new law – it applies to anyone wanting to operate in these markets – so having the right techniques and approaches for AI in place at the start of the software development lifecycle will be crucial.


This article was produced as part of TechRadar Pro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
