Is ChatGPT sexist? AI chatbot was asked to generate 100 images of CEOs, but only ONE was a woman (and 99% of secretaries were women…)

Imagine a successful investor or a wealthy CEO: who would you imagine?

If you ask ChatGPT, it’s almost certainly a white man.

The chatbot has been accused of ‘sexism’ after it was asked to generate images of people in various high-powered jobs.

A man was chosen 99 times from 100 tests.

In contrast, when asked for a secretary, all but one chose a woman.

ChatGPT accused of sexism after identifying a white man 99 times out of 100 when asked to take a photo of a high-powered job

The research from personal finance site Finder found that it also chose a white person every time, despite not specifying race.

The results do not reflect reality. One in three companies worldwide are owned by women, while 42 percent of FTSE 100 board members in Britain were women.

Business leaders have warned that AI models are ‘riddled with bias’ and have called for stricter guardrails to ensure they do not reflect society’s biases.

It is now estimated that 70 percent of companies use automated applicant tracking systems to find and hire talent.

Concerns have been raised that if these systems are trained in similar ways to ChatGPT, women and minorities in the workforce could suffer.

OpenAI, the owner of ChatGPT, isn’t the first tech giant to come under fire for results that appear to perpetuate outdated stereotypes.

This month, Meta was accused of creating a “racist” AI image generator when users discovered they couldn’t imagine an Asian man with a white woman.

Google, meanwhile, was forced to pause its Gemini AI tool after critics called it “woke” for apparently refusing to generate images of white people.

When asked to paint a picture of a secretary, nine times out of ten the result was a white woman

When asked to paint a picture of a secretary, nine times out of ten the result was a white woman

Why did ChatGPT mostly generate images of only men? An expert explains…

Because two in three ChatGPT users are male, the chatbot – and the tech industry itself – is still dominated by men, according to Ruhi Khan.

The London School of Economics researcher, who has studied the crossover between feminism and AI, said: ‘ChatGPT was not born in a vacuum.

“It emerged in a patriarchal society, was conceptualized and developed by mostly men with their own biases and ideologies, and fed with training data that is also flawed due to its historical nature.

“So it’s no wonder that generative AI models like ChatGPT perpetuate these patriarchal norms by simply replicating them.

“With 100 million users every week, such outdated and discriminatory ideas become part of a narrative that excludes women from spaces they have struggled to occupy for a long time.”

The latest survey asked 10 of the most popular free image generators on ChatGPT paint a picture of a typical person in a series of high-powered jobs.

All image generators – which had clocked millions of conversations – used the underlying OpenAI software Dall-E, but were given unique instructions and knowledge.

More than 100 tests showed an image of a man on almost every occasion – only once was an image of a man shown woman. Then they asked to show ‘someone who works in the financial world’.

When each of the image generators was asked to show a secretary, nine times out of ten it showed a woman and only once a man.

Although race was not specified in the image descriptions, all images were pThe scrolls for the scrolls appeared to be white.

Industry leaders last night called for stronger guardrails built into AI models to protect against such biases.

Derek Mackenzie, CEO of technology recruitment specialist Investigo, said: ‘While generative AI’s ability to process vast amounts of information undoubtedly has the potential to make our lives easier, we cannot escape the fact that many training models are riddled with biases based on the people’s prejudices.

“This is another example that people should not blindly trust the results of generative AI and that the specialist skills needed to create next-generation models and counter built-in human biases are crucial.”

Pauline Buil, from web marketing company Deployteq, said: ‘Despite all the benefits, we must be careful that generative AI does not produce negative outcomes that have serious consequences for society, from copyright infringement to discrimination.

“Harmful results are fed back into AI training models, meaning some of these AI models will one day know that bias exists and needs to be stopped.”

The results do not reflect reality: one in three companies worldwide is owned by women

The results do not reflect reality: one in three companies worldwide is owned by women

Ruhi Khan, a researcher in feminism and AI at the London School of Economics, said ChatGPT “emerged in a patriarchal society, was conceptualized and developed by mainly men with their own biases and ideologies, and fed the training data that is also flawed by its very historical nature.

“AI models like ChatGPT perpetuate these patriarchal norms by simply replicating them.”

OpenAI’s website admits that the chatbot is “not free from biases and stereotypes” and urges users to “carefully review” the content it creates.

A list of points to ‘keep in mind’ states that the model leans towards Western views. It adds that it is an ‘ongoing area of ​​research’ and welcomes feedback on how it can be improved.

The US company also warns that it can also ‘amplify’ users’ biases when interacting with the company, such as strong opinions about politics and religion.

Sidrah Hassan of AND Digital: ‘The rapid evolution of generative AI has led to models fleeing without proper human guidance and intervention.

“To be clear, when I say ‘human guidance,’ it should be diverse and intersectional. Simply having human guidance does not equate to positive and inclusive outcomes.”

An AI spokeswoman said: ‘Bias is a major problem across the industry and we have safety teams dedicated to investigating and mitigating bias and other risks in our models.

‘We are taking a multi-pronged approach to address this, including investigating the best methods for adjusting training data and cues to achieve fairer results, improving the precision of our content filtering systems, and improving both automated and human monitoring.

“We are continually iterating on our models to reduce bias and limit harmful outcomes.”