Researchers create AI

A team of researchers has created a self-replicating computer worm that is snaking through the internet and targeting Gemini Pro, ChatGPT 4.0 and LLaVA AI-powered apps.

The researchers developed the worm to demonstrate the risks and vulnerabilities of AI applications, specifically how the links between generative AI systems can help spread malware.

In their reportthe researchers, Stav Cohen of the Israel Institute of Technology, Ben Nassi of Cornell Tech and Ron Bitton of Intuit, came up with the name ‘Morris II’, inspired by the original worm that wreaked havoc on the Internet in 1988.

wZero-click worms unleashed on AI

The worm was developed with three main goals in mind. The first is to ensure that the worm can recreate itself. By using adversarial, self-replicating prompts that prompt the AI ​​applications to execute the original prompt themselves, the AI ​​will automatically replicate the worm every time it uses the prompt.

The second purpose was to deliver a payload or perform a malicious activity. In this case, the worm was programmed to perform one of a number of actions. From stealing sensitive information to crafting emails that were insulting and abusive to sow toxicity and spread propaganda.

Finally, the worm had to be able to jump between hosts and AI applications to propagate itself through the AI ​​ecosystem. The first method targets AI-enabled email applications using retrieval augmented generation (RAG) by sending a poisoned email that is then stored in the target’s database. When the recipient tries to reply to the email, the AI ​​assistant automatically generates a response using the poisoned data and then propagates the self-replicating prompt throughout the ecosystem.

The second method requires input to be output by the generative AI model, which then creates an output that prompts the AI ​​to spread the worm to new hosts. When the next host is infected, it immediately spreads the worm beyond itself.

In tests the researchers conducted, the worm was able to steal Social Security numbers and credit card information.

The researchers sent their paper to Google and OpenAI to raise awareness about the potential dangers of these worms, and while Google did not comment, an OpenAI spokesperson told Wired that “they appear to have found a way to exploit prompt injection-type vulnerabilities by relying on user input that has not been checked or filtered.”

Worms like these highlight the need for more research, testing, and regulation when it comes to rolling out generative AI applications.

More from Ny Breaking

Related Post