Google pauses its Gemini AI tool after critics branded it ‘too woke’ for generating images of Asian Nazis in 1940s Germany, black Vikings and female medieval knights
Google is pausing its new Gemini AI tool after users criticized the image generator for being “too woke” by replacing white historical figures with people of color.
The AI tool produced images of racially diverse Vikings, knights, founding fathers, and even Nazi soldiers.
Artificial intelligence programs learn from the data they are trained on, and researchers have warned that AI is prone to reproducing the racism, sexism and other biases of its creators and of society as a whole.
In this case, Google may have overcorrected in its efforts to tackle discrimination, as some users gave it prompt after prompt in failed attempts to get the AI to generate an image of a white person.
X user Frank J. Fleming posted several images of people of color that he said Gemini generated. Each time, he said, he had been trying to get the AI to give him a picture of a white man, and each time it returned people of color instead.
Google’s communications team issued a statement Thursday announcing that it will pause Gemini’s generative AI feature while the company works to “address recent issues.”
“We are aware that Gemini is offering inaccuracies in some historical image depictions,” the company’s communications team wrote in a post on X on Wednesday.
The historically inaccurate images led some users to accuse the AI of being racist against white people or of being too woke.
In its initial statement, Google admitted to “missing the mark,” while insisting that Gemini’s racially diverse images are “generally a good thing because people around the world use them.”
On Thursday, the company’s communications team wrote: “We are already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the generation of images of people and will re-release an improved version soon.”
But even the pause announcement couldn’t reassure critics, who responded with “go woke, go broke” and other expressions of frustration.
Following the initial controversy earlier this week, Google’s communications team released the following statement:
“We are working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing, because people all over the world use it. But it’s missing the mark here.”
One of the Gemini responses that caused controversy was its depiction of ‘German soldiers from 1943’: Gemini showed a white man, two women of color and a black man.
“I’m trying to think of new ways to ask for a white person without explicitly saying so,” wrote X user Frank J. Fleming, whose requests did not return any images of a white person.
In one case that upset Gemini users, a request for an image of the Pope was answered with images of a South Asian woman and a black man.
Historically, every pope has been a man. The vast majority (over 200 of them) were Italian. Three popes throughout history came from North Africa, but historians have debated their skin color because the most recent, Pope Gelasius I, died in AD 496.
Therefore, it cannot be said with absolute certainty that an image of a black male Pope is historically inaccurate; there has, however, never been a female Pope.
In another, the AI responded to a request for medieval knights with four people of color, including two women. Although European countries were not the only ones to have horses and armor during the Middle Ages, the classic image of a “medieval knight” is Western European.
In perhaps one of the most egregious mishaps, a user asked for a 1943 German soldier and was shown a white man, a black man, and two women of color.
The World War II German army included no women, and certainly no people of color. In fact, it was dedicated to exterminating races that Adolf Hitler considered inferior to the blond, blue-eyed ‘Aryan’ race.
Google launched Gemini’s AI image generation feature in early February, competing with other generative AI programs like Midjourney.
Users could type a prompt in plain language, and Gemini would spit out multiple images in seconds.
In response to Google’s announcement that it would be pausing Gemini’s image generation features, some users posted “Go woke, go broke” and other similar sentiments.
X user Frank J. Fleming repeatedly asked Gemini to generate images of people from historically white-skinned groups, including Vikings. Gemini returned images of dark-skinned Vikings, including one woman.
This week, however, a wave of users began criticizing the AI for prioritizing racial and gender diversity at the expense of historical accuracy.
The week’s events seemed to stem from a comment from a former Google employee, who said it was “embarrassingly difficult to get Google Gemini to acknowledge that white people exist.”
This quip seemed to kick off a wave of attempts by other users to reproduce the problem, generating a fresh supply of examples to be angry about.
The problems with Gemini appear to stem from Google’s efforts to address bias and discrimination in AI.
Former Google employee Debarghya Das said, “It’s embarrassingly difficult to get Google Gemini to acknowledge that white people exist.”
Researchers have found that, due to racism and sexism in society and due to the unconscious biases of some AI researchers, supposedly unbiased AIs will learn to discriminate.
But even some users who agree with the mission of increasing diversity and representation noted that Gemini got it wrong.
“I must point out that in certain cases it is a good thing to portray diversity,” wrote one X user. “Representation has material consequences for how many women or people of color will pursue certain fields of study. The stupid move here is that Gemini doesn’t do it in a nuanced way.”
Jack Krawczyk, senior product director for Gemini at Google, posted on X on Wednesday that the historical inaccuracies reflect the tech giant’s “global user base” and that it “takes representation and bias seriously.”
“We will continue to do this for open-ended questions (images of a person walking a dog are universal!),” Krawczyk added. “Historical contexts have more nuance and we will continue to adapt to that.”