Microsoft claims its servers have been illegally accessed to create unsafe AI content


  • Microsoft’s December 2024 complaint involves 10 anonymous defendants
  • “Hacking-as-a-service operation” stole legitimate users’ API keys and bypassed content protections
  • The complaint, filed in the Eastern District of Virginia, resulted in the removal of a GitHub repository and a website

Microsoft has accused an unnamed collective of developing tools to deliberately bypass the security guardrails in its Azure OpenAI Service, which provides access to the AI technology behind tools such as ChatGPT.

The technology giant filed a complaint in December 2024 in the U.S. District Court for the Eastern District of Virginia against ten anonymous defendants, whom it accuses of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering law.