The Gemini bias fiasco reminds us that AI is not as smart as we make it out to be
If an AI doesn’t know the history, you can’t blame the AI. It always comes down to the data, the programming, the training, the algorithms and every other piece of human-built technology. It’s all of that on one side, and our perception of the AI’s ‘intentions’ on the other.
When Google’s newly rebranded Gemini (formerly Bard) started generating images of people of color to represent white historical figures, people quickly concluded that something was wrong. Google acknowledged the error and removed Gemini’s ability to generate images of people until it could come up with a solution.
It wasn’t that hard to figure out what was happening here. Since the early days of AI, and by that I mean 18 months ago, we’ve been talking about the inherent, ingrained biases that creep in, often unintentionally, when programmers train large language and image models on data that reflects their own experiences and perhaps not those of the wider world. Sure, you have a smart chatbot, but it probably has significant blind spots, especially when you consider that the majority of programmers are still white and male (one 2021 study put the percentage of white programmers at 69% and found that only 20% of all programmers were women).
Still, we’ve learned enough about the potential for bias in training and in AI results that companies have become much more proactive about preventing it from showing up in a chatbot or in generated images. Adobe told me earlier this year that it programmed its Firefly generative AI tool to take into account where someone lives and the racial makeup and diversity of their region, to ensure image results reflect their reality.
Doing too much good
Which brings us to Google. It likely programmed Gemini to be racially sensitive, but did so in a way that overcompensated. If there was a weighting system for historical accuracy versus racial sensitivity, Google put its thumb on the scale for the latter.
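To make that “thumb on the scale” metaphor concrete, here is a minimal, purely hypothetical sketch of how such a trade-off could be weighted inside an image pipeline. The function, scores and weights below are my own assumptions for illustration, not anything Google has described about Gemini.

```python
# Hypothetical illustration only: a toy decision rule showing how an
# overweighted "diversity" term can outvote a "historical accuracy" term
# when a pipeline decides whether to rewrite a user's image prompt.

def should_rewrite_prompt(historical_accuracy: float,
                          diversity_need: float,
                          w_accuracy: float = 0.3,
                          w_diversity: float = 0.7) -> bool:
    """Return True if the weighted trade-off favors rewriting the prompt.

    Both scores are assumed to lie in [0, 1]. With the diversity weight
    set this high, even a clearly historical prompt gets rewritten.
    """
    return w_diversity * diversity_need > w_accuracy * historical_accuracy


# A prompt like "the US founding fathers" scores high on historical
# accuracy, but the heavier diversity weight still wins: 0.42 > 0.27.
print(should_rewrite_prompt(historical_accuracy=0.9, diversity_need=0.6))  # True
```

Rebalancing those weights, or letting clearly historical prompts bypass the rewrite entirely, is presumably the kind of lever-pulling Google now has to do.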
The example I’ve seen thrown around is that Google Gemini gives a multicultural view of the founding fathers of the US. Unfortunately, men and women of color were not represented in the group that drafted the American Declaration of Independence. In fact, we know that some of those men were slave owners. I’m not sure how Gemini could have accurately depicted these white men while adding that footnote. Still, the programmers got the bias training wrong, and I applaud Google for not simply leaving Gemini’s people-generation capabilities in place to upset even more people.
However, I think it’s worth examining the significant backlash Google received for this blunder. On X (the dumpster fire formerly known as Twitter), people, including X owner Elon Musk, decided that this was Google trying to enforce some kind of anti-white bias. I know, it’s ridiculous. Pushing a bizarre political agenda would never serve Google, home of the search engine for the masses, whatever their political or social leanings.
What people don’t understand, despite how often developers get it wrong, is that we are still very early in the generative AI cycle. The models are incredibly powerful and in some ways surpass our ability to understand them. We are running wild science experiments every day with no idea what kind of results we’ll get.
When developers release a new generative AI model into the world, I think they only understand about 50% of what it can do, partly because they can’t account for every possible prompt, conversation, and image request.
More wrong ahead – until we get it right
If there is one thing that separates humans from AI, it is that we have almost limitless and unpredictable creativity. AI’s creativity is based solely on what we feed it, and while we may be surprised by its results, I think we are better able to surprise the programmers and the AI with our prompts.
However, this is how AI and the developers behind it learn. We have to make these mistakes. AI needs to create a hand with eight fingers before it can learn that we only have five. AI will sometimes hallucinate, misinterpret the facts and even offend.
If and when that happens, it’s no reason to pull the plug. The AI has no emotions, intentions, opinions, political positions or axes to grind. It is trained to give you the best possible result. That won’t always be the right one, but in the end it will get far more right than wrong.
Gemini produced a poor result, which was a mistake by the programmers, who will now go back and push and pull different levers until Gemini understands the difference between political correctness and historical accuracy.
If they do their job right, a future Gemini will give us an accurate picture of the all-white founding fathers, with that crucial footnote about their stance on the enslavement of other people.