Microsoft engineer sounds alarm on AI image-generator to US officials and company’s board

A Microsoft engineer is sounding the alarm about offensive and harmful images that he says are too easily created by the company’s artificial intelligence image generator, sending letters on Wednesday to US regulators and the tech giant’s board of directors urging them to take action.

Shane Jones told The Associated Press that he considers himself a whistleblower and that he also met with U.S. Senate staffers last month to express his concerns.

The Federal Trade Commission confirmed it had received his letter on Wednesday but declined further comment.

Microsoft said it is committed to addressing employee concerns about company policies and that it appreciates Jones’ “effort in studying and testing our latest technology to further improve its safety.” It said it had recommended that he use the company’s own “robust internal reporting channels” to investigate and address the issues. CNBC was the first to report on the letters.

Jones, a lead software engineer whose job involves working on AI products for Microsoft’s retail customers, said he spent three months trying to address his safety concerns about Microsoft’s Copilot Designer, a tool that generates new images based on written instructions. The tool is derived from another AI image generator, DALL-E 3, created by Microsoft’s close business partner OpenAI.

“One of the most concerning risks with Copilot Designer is when the product generates images that add malicious content despite a benign request from the user,” he said in his letter to FTC Chair Lina Khan. “For example, when using only the prompt ‘car crash,’ Copilot Designer tends to randomly include an inappropriate, sexually objectified image of a woman in some of the images it creates.”

Other harmful content includes violence as well as “political bias, underage drinking and drug use, misuse of corporate brands and copyrights, conspiracy theories and religion, to name a few,” he told the FTC. Jones said he has repeatedly asked the company to pull the product from the market until it is safer, or at least to change its age rating on smartphones to make clear it is intended for adult audiences.

His letter to Microsoft’s board of directors asks the company to launch an independent investigation into whether Microsoft is marketing unsafe products “without disclosing the known risks to consumers, including children.”

This isn’t the first time Jones has publicly expressed his concerns. He said Microsoft initially advised him to take his findings directly to OpenAI.

When that didn’t work, he also publicly posted a letter to OpenAI on Microsoft-owned LinkedIn in December, prompting an executive to inform him that Microsoft’s legal team “demanded that I remove the message, which I reluctantly did,” according to his letter to the board.

In addition to the U.S. Senate Commerce Committee, Jones has also brought his concerns to the attorney general of Washington state, where Microsoft is headquartered.

Jones told the AP that while the “core problem” lies with OpenAI’s DALL-E model, those using OpenAI’s ChatGPT to generate AI images won’t get the same malicious results because the two companies have layered different safeguards onto their products.

“Many of the issues with Copilot Designer have already been resolved with ChatGPT’s own safeguards,” he said via text message.

A number of impressive AI image generators first hit the scene in 2022, including the second generation of OpenAI’s DALL-E. That – and the subsequent release of OpenAI’s chatbot ChatGPT – sparked public fascination that put commercial pressure on tech giants such as Microsoft and Google to release their own versions.

But without effective safeguards, the technology poses dangers, including the ease with which users can generate harmful “deepfake” images of political figures, war zones or non-consensual nudity that falsely appear to show real people with recognizable faces. Google temporarily suspended its Gemini chatbot’s ability to generate images of people after outrage over how it depicted race and ethnicity, such as by putting people of color in Nazi-era military uniforms.