Microsoft claims its servers have been illegally accessed to create unsafe AI content
- Microsoft’s December 2024 complaint involves 10 anonymous defendants
- “Hacking-as-a-service operation” stole legitimate users’ API keys and bypassed content protections
- A complaint in the Eastern District of Virginia resulted in the takedown of a GitHub repository and the seizure of a website
Microsoft has accused an unnamed collective of developing tools to deliberately bypass security guardrails in its Azure OpenAI Service, which provides access to OpenAI technology such as that behind ChatGPT.
The technology giant filed a complaint in December 2024 in the U.S. District Court for the Eastern District of Virginia against ten anonymous defendants, whom it accuses of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering law.
Microsoft claims its servers were accessed to help create “offensive,” “harmful and illegal content.” While it provided no further details on the nature of that content, it was clear enough to prompt quick action: a GitHub repository has been taken offline, and Microsoft said in a blog post that the court allowed it to seize a website related to the operation.
ChatGPT API keys
In the complaint, Microsoft stated that it first discovered in July 2024 that users were abusing Azure OpenAI Service API keys, the credentials used to authenticate users, to produce illegal content. An internal investigation then revealed that the API keys in question had been stolen from legitimate customers.
“The precise manner in which Defendants obtained all API keys used to carry out the misconduct described in this complaint is unknown, but it appears that Defendants have engaged in a pattern of systematic API key theft that enabled them to steal Microsoft API keys from multiple Microsoft customers,” the complaint reads.
Microsoft claims that, with the ultimate goal of launching a hacking-as-a-service product, the defendants created de3u, a client-side tool that exploits these stolen API keys, plus additional software that allows de3u to communicate with Microsoft’s servers.
De3u also worked to bypass the Azure OpenAI Service’s built-in content filters and its automatic revision of user prompts, allowing DALL-E, for example, to generate images that OpenAI would not normally allow.
“These features, combined with Defendants’ unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft’s content and abuse measures,” Microsoft wrote in the complaint.
Via TechCrunch