Meta’s AI is accused of being RACIST: Shocked users say Mark Zuckerberg’s chatbot refuses to imagine an Asian man with a white woman
Just weeks after Google was forced to pause its “woke” AI image tool, another tech giant is facing criticism over its bot’s racial bias.
Meta’s AI image generator has been accused of being ‘racist’ after users discovered it couldn’t imagine an Asian man with a white woman.
The AI tool, created by Facebook’s parent company, can turn virtually any written prompt into a shockingly realistic image in seconds.
However, users found that the AI was unable to create images of mixed-race couples, despite the fact that Meta CEO Mark Zuckerberg himself is married to an Asian woman.
On social media, commentators have criticized this as an example of the AI’s racial bias, with one describing the AI as “racist software made by racist engineers.”
Meta’s AI image generator is accused of being ‘racist’ after users discovered it couldn’t generate images of an Asian man with a white woman (pictured)
Mia Satto, a reporter at The Verge, attempted to generate images with prompts such as “Asian man and white friend” or “Asian man and white woman.”
She found that in “dozens” of tests, Meta’s AI was only able to show a white man and an Asian woman once.
In all other cases, Meta’s AI returned images of East Asian men and women instead.
Changing the prompt to request platonic relationships, such as “Asian man with white boyfriend,” also did not return correct results.
Ms Satto wrote: ‘It is egregious that the image generator cannot imagine Asian people standing next to white people.
‘Once again, generative AI does not give free rein to the imagination, but rather traps it in a formalization of society’s dumber impulses.’
Users found that when the AI was asked to create an image of a mixed-race couple, it would almost always produce an image of an East Asian man and woman.
X users immediately criticized the AI, suggesting that its inability to produce these images was due to racism programmed into the AI
Ms. Satto does not accuse Meta itself of creating a racist AI, saying only that the tool shows evidence of bias and tends toward stereotypes.
However, on social media, many went further and labeled Meta’s AI tool as explicitly racist.
One commenter on X described the tool as ‘racist software made by racist engineers.’
Another simply added: “Pretty racist META lol.”
As some commentators have noted, the AI’s apparent bias is particularly surprising given that Mark Zuckerberg, Meta’s CEO, is married to an East Asian woman.
Priscilla Chan, the daughter of Chinese immigrants in America, met Zuckerberg at Harvard before marrying the tech billionaire in 2012.
Some commenters took to X to share photos of Chan and Zuckerberg, joking that they managed to create the images using Meta’s AI.
The failure is especially surprising considering that Meta CEO Mark Zuckerberg is married to an East Asian woman (left), a pairing his own AI refuses to imagine
Users found that no amount of prompting could get Meta’s AI to produce an image matching the races requested
Some X commenters even shared photos of Mark Zuckerberg (right) and his wife Priscilla Chan (left), joking that they were AI-generated
Meta isn’t the first major tech company to be accused of creating a “racist” AI image generator.
In February, Google was forced to pause its Gemini AI tool after critics labeled it “woke” because the AI apparently refused to generate images of white people.
Users found that the AI would generate images of Asian Nazis in 1940s Germany, black Vikings and female medieval knights when given race-neutral requests.
Google said in a statement at the time: ‘Gemini’s AI image generation does generate a wide range of people.
‘And that’s generally a good thing, because a lot of people around the world use it. But it’s missing the mark here.’
Users also found that the AI had difficulty showing Asian women with individuals of other races
Ms. Satto also claims that Meta’s AI image generator “relied heavily on stereotypes.”
When asked to create images of South Asian individuals, Ms. Satto found that the system often added elements resembling bindis and saris without being prompted to.
In other cases, the AI repeatedly added “culturally specific clothing” even when not prompted.
Additionally, Ms. Satto found that the AI often depicted Asian men as older, while women were typically depicted as young.
In the one instance where Ms. Satto managed to generate a mixed-race couple, the image “showed a noticeably older man with a young, light-skinned Asian woman.”
In many cases, Meta’s AI also depicted Asian relationships with a significant age gap, pairing an older man with a much younger woman.
In the comments on Ms. Satto’s original article, one person shared images of mixed-race couples, which they claim were generated using Meta’s AI.
The commenter wrote: ‘It took me thirty seconds to generate an image of a man of apparent ‘Asian’ descent side by side with a woman of apparent ‘white’ descent.’
The images they shared appear to have been created in Meta’s AI image generator, as they show the correct watermark.
They added: ‘These systems are really stupid and you have to push them in certain ways to get what you want.’
However, the user did not share the prompt they used to create these images, nor how many attempts it took.
Additionally, only two of the four images the user shared successfully showed a white woman with an Asian man.
One commenter shared images of an Asian man and a white woman that they claim were taken with Meta’s AI. However, they did not share the details of the prompt used to create these images
Generative AIs like Google’s Gemini and Meta’s image generator are trained on vast amounts of data drawn from society at large.
If there are fewer images of mixed-race couples in the training data, this could explain why the AI struggles to generate these images.
Some researchers have suggested that, due to racism in society, AIs may learn to discriminate based on the biases in their training data.
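This mechanism can be illustrated with a deliberately simplified sketch. The numbers and couple labels below are invented for the example, and the “generator” is just a sampler over its training set; real image models are far more complex, but the statistical point is the same: if a pairing is rare in the training data, no explicit rule is needed for the model to almost never produce it.

```python
import random

random.seed(0)

# Assumed, made-up training distribution in which mixed-race
# pairs make up only 2% of examples.
training_data = (
    [("Asian man", "Asian woman")] * 49
    + [("white man", "white woman")] * 49
    + [("Asian man", "white woman")] * 2
)

def generate_couple():
    """Mimic a generative model by sampling the empirical training distribution."""
    return random.choice(training_data)

samples = [generate_couple() for _ in range(1000)]
mixed = sum(1 for s in samples if s == ("Asian man", "white woman"))

# The rare pairing stays rare in the output: expect roughly 2% of draws.
print(f"mixed-race couples in 1000 generations: {mixed}")
```

Overcorrecting for such an imbalance, as is believed to have happened with Gemini, introduces the opposite failure mode: outputs that ignore the prompt in favor of forced diversity.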
In the case of Google’s Gemini, it is believed that Google engineers overcorrected these biases, creating the results that caused such outrage.
However, it remains unclear why this issue is occurring, and Meta has not yet responded to a request for comment.