CISOs are nervous that using Gen AI could lead to more security breaches

Chief Information Security Officers (CISOs) are increasingly concerned that the growing use of generative AI tools could lead to more cybersecurity incidents.

A new report from security firm Metomic, which surveyed more than 400 CISOs in the UK and the US, found that security breaches related to generative AI are a concern for almost three-quarters (72%) of respondents.

But that’s not the only generative AI risk on CISOs’ minds. The report warns that they also fear employees will feed sensitive corporate data into the Large Language Models (LLMs) that power these tools. Sharing data this way poses a security risk, as there is a theoretical possibility that a malicious third party could obtain it.

Spotting malware

CISOs have every right to be concerned. Data breaches and similar cybersecurity incidents have increased year after year, and since the introduction of generative AI tools, some researchers say these attacks have become even more sophisticated.

For example, clumsy writing, poor grammar, and typos were once the easiest way to recognize a phishing attack. Today, many hacking groups use AI to write convincing phishing emails for them, which not only makes the messages harder to spot but also significantly lowers the barrier to entry.

Another example is writing malicious code. Whether it’s a fake landing page or outright malware, hackers are constantly finding new ways to abuse these tools. Generative AI developers are fighting back with guardrails that restrict such misuse, but so far threat actors have managed to find ways around those roadblocks.

The good news is that AI can also be used in defense, and many organizations have already deployed advanced AI-based security solutions.
