Biased, hallucination-prone AI models can produce unfair results

“Code a treasure hunt game for me.” “Cover Psy’s ‘Gangnam Style’ in the style of Adele.” “Create a photorealistic close-up video of two pirate ships battling each other while sailing into a cup of coffee.” Even that last prompt isn’t an exaggeration — today’s best AI tools can create all of this and more in minutes, making AI seem like a genuine form of modern magic.

Of course, we know it’s not magic. A huge amount of work, instruction, and information goes into the models that power GenAI and produce its output. AI systems must be trained to learn patterns from data: GPT-3, the model that ChatGPT was originally built on, was trained on roughly 45TB of Common Crawl text, the equivalent of about 45 million 100-page PDF documents. Much as we humans learn from experience, training helps AI models better understand and process information. Only then can they make accurate predictions, perform useful tasks, and improve over time.
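To make "learning patterns from data" concrete, here is a minimal sketch of what training means at its core: a model repeatedly adjusts its parameters to reduce error on example data. This is a toy one-parameter model with made-up data, not how GPT-3 itself was trained, but the adjust-to-reduce-error loop is the same idea at a vastly smaller scale.

```python
def train(examples, lr=0.01, epochs=200):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0  # the model's single parameter, starting with no knowledge
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            # Gradient of (pred - y)^2 with respect to w is 2 * (pred - y) * x;
            # nudge w in the direction that shrinks the error.
            w -= lr * 2 * (pred - y) * x
    return w

# Toy "dataset": the hidden pattern is y = 2x. After training,
# the learned parameter w ends up close to 2.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
```

A real language model runs the same kind of loop with billions of parameters over terabytes of text, which is exactly why the quality and balance of that data matter so much.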