The FBI says AI is making it easier for hackers to write malware

The FBI has stated that artificial intelligence aids almost every aspect of cybercrime activity, from development to deployment, and the trend seems to be going in only one direction.

In a recent media call, an FBI official said that free, customizable open-source models are becoming increasingly popular among hackers looking to distribute malware, conduct phishing attacks, and run other scams.

There has also been a significant increase in the number of hacker-built AI tools designed specifically to target vulnerable internet users.

AI could be responsible for increasing cyber-attacks

Generative AI can help with almost any aspect of a cyber-attack, not least thanks to its powerful coding capabilities. Dozens of models are now trained to help write and fix code, making malware development accessible to those who may previously have lacked the skill.

The FBI and other organizations have also seen these tools used to generate scam content, such as phishing emails and fraudulent websites.

Furthermore, with the launch of multimodal models like GPT-4, hackers can create convincing deepfakes to pressure victims into handing over sensitive information, making payments, and more.

Earlier this year, Meta said its new speech-generating tool, Voicebox, would not be made available without the necessary precautions, for fear that it could cause serious harm.

Despite promises to work with companies on protecting vulnerable citizens, including suggestions such as watermarking AI-generated content, many remain concerned that protective measures are developing far more slowly than AI tools across the board.

Just last week, the White House announced voluntary commitments from leading AI companies – namely Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – as part of its agenda for safe and responsible AI.

Via PCMag
