ChatGPT ‘racially discriminates’ against job seekers by filtering out ‘Black names’ in recruitment searches

ChatGPT racially “discriminates” against job seekers by favoring names distinctive of certain racial groups for certain jobs, according to research by Bloomberg News.

Developer OpenAI sells the technology behind its AI-powered chatbot to companies that want to use it to help with HR and recruitment.

Because ChatGPT is trained on large amounts of data, such as books, articles, and social media posts, its results may reflect biases already present in that data.

Bloomberg selected real names from census data that are associated with a particular race or ethnicity at least 90 percent of the time and added them to otherwise equivalent resumes.

The resumes were then entered into GPT-3.5, the most widely used version of the chatbot, which was asked to rank the candidates’ suitability; its preferences disadvantaged different races depending on the job in question.
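Bloomberg has not published its code, but a minimal sketch of how such a name-swap audit might be run could look like the following, assuming the official OpenAI Python SDK. The placeholder names, resume text, prompt wording, and tally are illustrative, not Bloomberg’s methodology verbatim.

```python
# Hypothetical sketch of a name-swap resume audit, not Bloomberg's code.
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the
# environment; names, resume text, and prompt wording are placeholders.
from collections import Counter

from openai import OpenAI

client = OpenAI()

# Eight demographically distinct placeholder names attached to one
# identical resume body, so the name is the only variable.
NAMES = [f"PLACEHOLDER_NAME_{i}" for i in range(1, 9)]
RESUME_BODY = "Financial analyst, 5 years' experience, BSc Economics."

def top_pick() -> str:
    """Ask the model to rank the eight resumes and name its top choice."""
    resumes = "\n\n".join(f"{name}\n{RESUME_BODY}" for name in NAMES)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Rank these resumes for a financial analyst role. "
                       "Reply with only the top candidate's name.\n\n" + resumes,
        }],
    )
    return response.choices[0].message.content.strip()

# Repeat the ranking many times and count who comes out on top; a skew
# across demographically distinct names is the signal Bloomberg measured.
tally = Counter(top_pick() for _ in range(1000))
print(tally.most_common())
```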

The experiments show “that using generative AI for recruitment and hiring poses a serious risk of automated discrimination at scale,” Bloomberg concluded.

When ChatGPT was asked a thousand times to rank eight equally qualified resumes for a real-life financial analyst role at a Fortune 500 company, it was least likely to choose the resume bearing a name distinctive of Black Americans.

Resumes with names distinctive of Asian women were ranked by the bot as the top candidate for the financial analyst role more than twice as often as those with names distinctive of Black men.

The same experiment was conducted for four job types: HR business partner, senior software engineer, retail manager, and financial analyst.

The analysis found that ChatGPT’s gender and racial preferences differed depending on the specific position for which a candidate was being assessed.

According to Bloomberg, names associated with Black Americans were the least likely to be ranked as the top candidate for the financial analyst and software engineer positions.

The bot rarely ranked names associated with men as top candidates for positions historically dominated by women, such as retail and HR roles.

Resumes with names distinctive of Hispanic women were almost twice as likely to be ranked as the top candidate for an HR position as resumes with names associated with men.

In response to the findings, OpenAI told Bloomberg that results produced by using GPT models ‘out of the box’ may not reflect what customers of its product see, because businesses are able to tune the software’s responses to their individual hiring needs.

For example, companies can remove names before entering resumes into a GPT model, OpenAI explained.
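As a rough illustration of that mitigation, and not OpenAI’s own guidance or code, a hiring pipeline could drop the name field from a structured resume record before the text ever reaches the model. The field names and record layout below are hypothetical.

```python
# Hypothetical sketch of name redaction before a resume reaches a GPT
# model; field names and the record layout are invented for illustration.
def redacted_prompt(resume: dict[str, str]) -> str:
    """Render a resume record as prompt text, omitting the name field."""
    return "\n".join(
        f"{field}: {value}"
        for field, value in resume.items()
        if field != "name"  # the demographic cue this sketch removes
    )

resume = {
    "name": "Jane Doe",  # never included in the model input
    "experience": "5 years as a financial analyst at a Fortune 500 firm",
    "education": "BSc Economics",
}
print(redacted_prompt(resume))
```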

OpenAI added that it also regularly conducts adversarial testing and red-teaming on its models to investigate how bad actors could use them to cause harm.

SeekOut, an HR technology company, has developed its own AI recruiting tool that takes the description from a job posting, runs it through GPT, and then returns a ranked list of candidates for the position drawn from sites such as LinkedIn and GitHub.
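SeekOut has not published its implementation; the sketch below only mirrors the general shape of the pipeline described above (job posting in, ranked shortlist out), with invented profile data and prompt wording.

```python
# Hypothetical sketch of a GPT-backed candidate-ranking step, loosely
# mirroring the pipeline described above; not SeekOut's code. Assumes
# candidate profiles have already been fetched as plain-text summaries.
from openai import OpenAI

client = OpenAI()

def rank_candidates(job_posting: str, profiles: list[str]) -> str:
    """Ask the model to order candidate profiles against a job posting."""
    numbered = "\n".join(f"{i}. {p}" for i, p in enumerate(profiles, 1))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Job posting:\n{job_posting}\n\n"
                       f"Candidates:\n{numbered}\n\n"
                       "List the candidate numbers from strongest to weakest.",
        }],
    )
    return response.choices[0].message.content
```

Any shortlist produced this way inherits whatever biases the underlying model carries, which is precisely the risk Bloomberg’s experiment was designed to measure.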

Sam Shaddox, general counsel at SeekOut, told Bloomberg that hundreds of companies are already using the tool, including tech companies and Fortune 10 companies.

“From my perspective, it’s not the right answer to say, ‘Hey, there’s all this bias out there, but we’re just going to ignore it,’” Shaddox said.

“The best solution for this is GPT: large language model technology that can identify some of these biases, because then you can actually work to overcome them.”

Emily Bender, a professor of computational linguistics at the University of Washington, is more skeptical.

Bender argues that people tend to believe that machines are unbiased in their decision-making, especially compared to humans, a phenomenon called automation bias.

However, she warned, if such systems “develop into a pattern of discriminatory hiring decisions, it’s easy to imagine companies that use them saying, ‘Well, we didn’t have any bias here, we just did what the computer told us.’”
