Microsoft reins in Bing AI’s Image Creator – and the results don’t make much sense
You may have noticed that Bing AI got a major upgrade to its image creation tool last week (among other recent improvements), but it seems that after taking this big step forward, Microsoft is now taking a step back.
In case you missed it, Bing’s image creation system has been upgraded to a brand new version – DALL-E 3 – which is much more powerful. So much so that Microsoft noted that the supercharged DALL-E 3 was generating a lot of interest and traffic, and so might be slow initially.
However, there is another problem with DALL-E 3 because, as Windows Central noted, Microsoft has reined in the tool significantly since its recent refresh.
Now, we were already aware that the image creation tool would use a ‘content moderation system’ to prevent inappropriate images from being generated, but it seems the censorship imposed is heavier than expected. This may be a response to the type of content Bing AI users have been trying to get the system to create.
As Windows Central notes, there has been a lot of controversy over a generated image of Mickey Mouse carrying out the September 11 attacks (not surprisingly).
The problem, however, is that beyond these kinds of extreme requests, as the article makes clear, some users are finding that harmless image creation requests are rejected. Windows Central tried to get the chatbot to create an image of a man breaking a server rack with a sledgehammer, but was told this violated Microsoft’s terms for using Bing AI.
Meanwhile, the author of the article noted that just last week they could create violent zombie apocalypse scenarios featuring popular (copyrighted) characters without Bing AI raising any objection.
Analysis: arbitrary censorship
The point is that the censorship here looks like an overreaction – or apparently so according to reports, we should add. Microsoft left the rules too loose when the tool was first implemented, it seems, but has now gone ahead and tightened things up too much.
What this really illustrates is that Bing AI even censors itself, as someone on Reddit points out. Bing Image Creator has a ‘surprise me’ button that generates a random image (the equivalent of Google’s ‘I’m Feeling Lucky’ button, which returns a random search). But here’s the kicker: the AI goes ahead, creates an image, and then censors it immediately.
Well, we suppose that is a surprise, to be fair – and one that seems to aptly demonstrate that Microsoft’s censorship of the Image Creator may have gone too far, limiting its usability at least to some extent. As we said at the beginning, it’s a matter of taking a step forward and then a quick step back.
Windows Central notes that it has been able to replicate this scenario of Bing’s self-censorship, and that it is not even a rare occurrence; it reportedly happens about a third of the time. It seems like it’s time for Microsoft to do some more refinement in this area – although honestly, as new capabilities are rolled out, tweaks will likely continue for a while, so perhaps that work is already underway.
The danger of Microsoft erring too much on the ‘better safe than sorry’ side of the equation is that it limits the usefulness of a tool that is, after all, intended to explore creativity.
We’ve reached out to Microsoft to check what’s going on with Bing AI in this regard, and will update this story if we hear back.