State-sponsored hackers are having their way with LLMs – Microsoft and OpenAI warn new tactics could do more damage than ever before

Hackers are increasingly turning to LLMs and AI tools to refine the tactics, techniques, and procedures (TTPs) they use in their campaigns, new reports warn.

A new research paper released by Microsoft in partnership with OpenAI has revealed how threat actors are using the latest technical innovations to keep defenders on their toes.

Microsoft and OpenAI have detected and disrupted attacks from threat actors backed by Russia, North Korea, Iran, and China who have used LLMs to refine their hacking playbooks.

AI refines hackers’ edge

State-backed hackers have taken advantage of LLMs' built-in language capabilities to fine-tune how they target foreign adversaries and to make themselves appear more legitimate when carrying out social engineering campaigns. They can use this language processing to build apparently legitimate professional relationships with their victims.

Microsoft also says it has observed hackers gathering intelligence by using LLMs to research the industries and locations in which their victims live and work, and to learn more about their personal relationships.

In one example, Microsoft and OpenAI observed Forest Blizzard, a group linked to Russia's GRU Unit 26165, using LLMs to research in very specific detail how satellites operate and communicate. The group has also been observed using AI to refine its scripting skills, most likely to automate or increase the efficiency of its technical activities.

The North Korea-linked group Emerald Sleet has been observed using LLMs to learn how to exploit publicly reported critical software vulnerabilities, to generate content for use in spear phishing campaigns, and to identify organizations collecting information about North Korea's nuclear and defense capabilities.

In all of these cases, Microsoft and OpenAI identified and disabled all accounts used by these threat actors, with Microsoft stating: “AI technologies will continue to evolve and be studied by various threat actors.

“Microsoft will continue to monitor threat actors and malicious activity that abuse LLMs, and will work with OpenAI and other partners to share information, improve protection for customers, and assist the broader security community.”
