LLM services are being hit by hackers looking to sell private data
Using cloud-hosted large language models (LLMs) can be quite expensive, which is why hackers have apparently started stealing and selling credentials for the tools.
Cybersecurity researchers from the Sysdig Threat Research Team recently spotted one such campaign and dubbed it LLMjacking.
In its report, Sysdig said it had observed a threat actor exploiting a vulnerability in the Laravel Framework, tracked as CVE-2021-3129. The flaw allowed the attackers to access the network and scan it for Amazon Web Services (AWS) credentials for LLM services.
New methods of abuse
“Once initial access was obtained, they exfiltrated the cloud credentials and gained access to the cloud environment, where they attempted to access on-premises LLM models hosted by cloud providers,” the researchers explain in the report. “In this case, a local Claude (v2/v3) LLM model from Anthropic was targeted.”
The researchers were able to recover the tools the attackers used to generate the requests that called the models. Among them was a Python script that checked credentials for ten AI services and determined which ones were usable. The services include AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter and GCP Vertex AI.
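Sysdig has not published the attackers' script, but the behavior described, iterating over a list of providers and probing each key to see whether it works, can be sketched roughly as below. Every endpoint, header name and status-code interpretation here is an assumption for illustration, not the attackers' actual code.

```python
# Hypothetical sketch of a multi-provider API-key checker.
# Endpoints and header formats are placeholder assumptions.

# Provider -> (probe URL, auth-header builder). Illustrative values only.
PROVIDERS = {
    "openai":    ("https://api.openai.com/v1/models",
                  lambda key: {"Authorization": f"Bearer {key}"}),
    "anthropic": ("https://api.anthropic.com/v1/models",
                  lambda key: {"x-api-key": key}),
}

def classify(status_code: int) -> str:
    """Interpret the HTTP status from a lightweight probe request."""
    if status_code in (200, 204):
        return "valid"
    if status_code in (401, 403):
        return "invalid"
    if status_code == 429:
        # The key authenticates but has hit a rate or quota limit --
        # still useful information for an attacker.
        return "valid-but-throttled"
    return "unknown"

def triage(results: dict[str, int]) -> dict[str, str]:
    """Given {provider: status_code} from probe calls, label each key."""
    return {provider: classify(code) for provider, code in results.items()}
```

A checker like this would make one cheap request per provider and sort the stolen keys by usefulness, which matches the triage behavior the researchers describe.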
They also found that the attackers did not run legitimate LLM queries during the verification phase, but rather did “just enough” to learn what the credentials were capable of, and what quotas existed.
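For AWS Bedrock, Sysdig's write-up reportedly describes this "just enough" trick as sending a deliberately malformed InvokeModel request (such as a negative max_tokens_to_sample), so the error returned reveals entitlement without billing any tokens. The mapping below is a reconstruction using standard AWS error-code names, not the attackers' code.

```python
# Sketch of interpreting errors from an intentionally invalid
# InvokeModel call. Reconstruction for illustration; the exact
# error semantics are assumptions based on common AWS error codes.

def interpret_probe_error(error_code: str) -> str:
    """Map the error from a malformed model call to what it reveals."""
    if error_code == "ValidationException":
        # The request body was parsed: these credentials can reach the
        # model, but the invalid parameter means nothing was generated
        # or billed.
        return "model enabled for these credentials"
    if error_code == "AccessDeniedException":
        return "credentials valid but model not enabled"
    if error_code in ("UnrecognizedClientException", "InvalidSignatureException"):
        return "credentials invalid"
    return "inconclusive"
```

The design point is the asymmetry: a validation error is good news for the attacker, because it proves access while keeping the probe free and quiet.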
In its report on the findings, The Hacker News says the research is evidence that hackers are finding new ways to weaponize LLMs, beyond the usual prompt injections and model poisoning, by monetizing access to LLMs while sending the bill to the victim.
The bill, the researchers pointed out, could be steep: up to $46,000 per day in LLM usage costs.
“Using LLM services can be expensive depending on the model and the amount of tokens added to it,” the researchers added. “By maximizing quota limits, attackers can also prevent the compromised organization from using models legitimately, disrupting business operations.”
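The arithmetic behind a figure like $46,000 a day is simple token math: sustained request volume multiplied by per-token pricing. The rates and volumes below are purely illustrative placeholders, not any provider's actual pricing or the researchers' calculation.

```python
# Illustrative cost model for sustained LLM abuse.
# All prices and request rates are made-up placeholder values.

def daily_cost(requests_per_sec: float,
               in_tokens: int, out_tokens: int,
               price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimated spend over 24 hours of continuous requests."""
    per_request = (in_tokens / 1000) * price_in_per_1k \
                + (out_tokens / 1000) * price_out_per_1k
    return requests_per_sec * 86_400 * per_request

# Even one modest request per second adds up quickly: 1,000 tokens in
# and out at placeholder rates of $0.008 / $0.024 per 1k tokens comes
# to roughly $2,765 a day -- and an attacker maxing out quotas on a
# premium model would run far higher.
```

This is also why the quota-exhaustion angle matters: an attacker who saturates the limits is simultaneously running up the bill and locking legitimate workloads out.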