AI tools are increasingly being misused to carry out cyber attacks

New research claims that a growing number of cyber attacks are being launched using artificial intelligence (AI) and large language models (LLMs).

In a new report from Imperva, the company's Threat Research team analyzed thousands of attacks between April and September 2024 and found that retail sites collectively experience more than 500,000 AI-powered attacks every day.

These attacks, the researchers explain, often originate from AI tools such as ChatGPT or Gemini, as well as from bots designed to scrape websites for LLM training data. Cybercriminals mainly use these tools for business logic exploitation, DDoS attacks, bad bot attacks, and API violations.

Business logic attacks

Business logic misuse was described as the most common AI-driven attack, accounting for almost a third (30.7%) of all incidents. It involves abusing the legitimate functions of apps and APIs to carry out cyber attacks. DDoS comes a close second (30.6%), while bad bot attacks account for a fifth (20.8%). These bots are designed to scrape pricing data, perform credential stuffing, and hoard inventory.

“In previous years, we have seen security threats such as Grinch bots and DDoS attacks cause major disruptions during the holiday shopping season, impacting both retailers and consumers. Now, with the widespread availability of generative AI tools and LLMs, retailers are grappling with a new wave of advanced cyber threats,” said Nanhi Singh, General Manager of Application Security at Imperva.

Singh added that retail businesses need robust defenses and a comprehensive strategy, or they risk losing customers’ sensitive personal information, including credit card details, addresses, and other account data. Identity theft and similar attacks can lead to a tarnished reputation, lost revenue, lawsuits, and regulatory fines.
