In a new briefing issued this week, software giant Microsoft claims that US rivals such as Iran, Russia and North Korea are preparing to ramp up their cyberwarfare efforts using modern generative AI. The problem is exacerbated by a chronic shortage of skilled cybersecurity personnel: the briefing cites the 2023 ISC2 Cybersecurity Workforce Study, which estimates that approximately 4 million additional cybersecurity workers will be needed to counter the coming attacks. Microsoft’s own studies from 2023 show that the number of password attacks has increased dramatically in two years, from 579 per second to over 4,000 per second.
The company’s response was the introduction of Copilot for Security, an AI tool designed to detect, identify and block these threats faster and more effectively than humans can. A recent test bore this out: generative AI helped security analysts, regardless of expertise level, to operate 44% more accurately and 26% faster across all types of threats. Eighty-six percent also said AI made them more productive and reduced the effort required to complete their tasks.
Unfortunately, as the company acknowledges, the use of AI is not limited to the good guys. The explosive rise of the technology is fueling an arms race, as threat actors seek to use the new tools to cause as much damage as possible. Hence the publication of this threat briefing to warn of the coming escalation. The briefing confirms that OpenAI and Microsoft are working together to identify and address these bad actors and their tactics as they emerge.
The impact of generative AI on cyber attacks is widespread. Darktrace researchers discovered a 135% increase in email-based, so-called ‘new cyber attacks’ between January and February 2023, coinciding with the widespread adoption of ChatGPT. They also found an increase in phishing attacks that were linguistically complex, using a larger vocabulary, longer sentences and more punctuation. All this led to a 52% increase in email account takeover attempts, with attackers convincingly impersonating the IT team in victims’ organizations.
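As an illustration only, and not the method used by Darktrace or Microsoft, the kind of surface-level linguistic signals described above (word count, average sentence length, punctuation density) can be sketched as simple text features in a few lines of Python; all function and field names here are hypothetical:

```python
import re

def linguistic_features(text: str) -> dict:
    """Toy extraction of the surface signals researchers associated
    with AI-written phishing: vocabulary size, sentence length,
    and punctuation density. Illustrative only."""
    # Split into sentences on terminal punctuation, drop empties.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Count words (simple alphabetic tokens).
    words = re.findall(r"[A-Za-z']+", text)
    # Count common punctuation marks.
    punctuation = re.findall(r"[,;:.!?\-]", text)
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "punctuation_density": len(punctuation) / max(len(words), 1),
    }

# Example: a longer, more heavily punctuated message scores higher
# on these features than a terse one.
terse = linguistic_features("Click the link now!")
verbose = linguistic_features(
    "Dear colleague, following our recent security audit, we must ask "
    "you, as a matter of urgency, to verify your credentials; please "
    "use the portal below."
)
```

A real detector would of course combine many more signals with a trained model; this sketch only shows why longer sentences and denser punctuation are measurable properties of an email's text.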
The report outlines three key areas in which threat actors are likely to make increasing use of AI in the near future: enhanced reconnaissance of targets and weaknesses, improved malware written with advanced AI coding tools, and assistance with learning and planning. The enormous computing resources required mean that the early adopters of the technology will almost certainly be nation states.
Several such cyber threat entities are mentioned specifically. Strontium (also tracked as APT28) is a highly active cyber espionage group that has been operating from Russia for twenty years. It is known by a number of labels and is expected to dramatically increase its use of advanced AI tools as they become available.
North Korea also has a huge cyber espionage presence. Some reports suggest that more than 7,000 personnel have been running threat programs against the West for decades, with a 300% increase in activity since 2017. One such group is the Velvet Chollima (or Emerald Sleet) operation, which focuses mainly on academic and NGO targets. Here, AI is increasingly used to improve phishing campaigns and probe for vulnerabilities.
The briefing highlights two other major players in the global cyberwarfare arena: Iran and China. These two countries have also increasingly used large language models (LLMs), primarily to explore opportunities and understand potential areas of future attack. In addition to these geopolitical attacks, Microsoft’s briefing outlines the increased use of AI in more conventional criminal activities, such as ransomware, fraud (particularly through the use of voice cloning), email phishing and general identity manipulation.
As the war intensifies, we can expect Microsoft and partners like OpenAI to develop an increasingly sophisticated set of tools combining threat detection, behavioral analytics, and other methods to identify and stop attacks quickly and decisively.
The report concludes: “Microsoft expects AI will develop social engineering tactics, creating more sophisticated attacks, including deepfakes and voice cloning. Prevention is key to combating all cyber threats, both traditional and AI-enabled.”