Biased bots? US lawmakers take on the ‘Wild West’ of AI recruitment

After unsuccessfully applying for nearly 100 jobs through the HR platform Workday, Derek Mobley noticed a suspicious pattern.

“I was getting all these rejection emails at two or three in the morning,” he told Reuters. “I knew it had to be automated.”

Mobley, a 49-year-old Black man with a degree in finance from Morehouse College in Georgia, had previously worked as a commercial loan officer, among other jobs in finance. He applied for mid-level jobs in a variety of industries, including energy and insurance, but when he used the Workday platform, he said, he didn’t get a single interview or callback and often had to settle for gig work or warehouse shifts to make ends meet.

Mobley believes he was discriminated against by Workday’s artificial intelligence algorithms.

In February, he filed what his lawyers described as the first class action lawsuit of its kind against Workday, alleging that the pattern of rejection he and others experienced pointed to an algorithm that discriminates against applicants who are Black, disabled or older than 40. In a statement to Reuters, Workday said Mobley’s lawsuit was “completely devoid of factual allegations” and said the company was committed to “responsible AI.”

The question of what “responsible AI” might look like goes to the heart of an increasingly powerful backlash against the unfettered use of automation in the US recruitment market.

In the United States, state and federal authorities are grappling with how to regulate the use of AI in hiring and protect against algorithmic bias.

About 85 percent of major US employers now use some form of automated tool or AI to screen or rank candidates.

These include resume screeners that automatically scan applicants’ submissions, assessment tools that gauge an applicant’s suitability for a job based on an online test, and facial or emotion recognition tools that analyze video interviews.

In May, the Equal Employment Opportunity Commission (EEOC), the federal agency that enforces civil rights laws in the workplace, released new guidance to help employers prevent discrimination when using automated hiring processes.

In August, the EEOC settled its first-ever automation-based case, with iTutorGroup agreeing to pay $365,000 over its use of software that automatically rejected applicants over the age of 40.

City and state authorities are also stepping in. “Right now, it’s the Wild West out there,” says Matt Scherer, an attorney at the Center for Democracy and Technology (CDT).


Algorithmic blackballing

Technology-enabled bias is a risk because AI uses algorithms, data and computer models to mimic human intelligence. It relies on ‘training data’, which is often historical; if that data contains bias, an AI program can reproduce it. For example, in 2018, Amazon abandoned an AI resume-screening product that automatically demoted applicants whose resumes included the word “women’s”, as in “captain of the women’s chess club.” The models had been trained to vet applicants by observing patterns in resumes submitted to the company over a decade, most of which came from men.
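
The mechanism is easy to demonstrate. The sketch below, written in Python with scikit-learn, trains a toy resume screener on invented historical hiring decisions; the resumes, labels and numbers are hypothetical and have nothing to do with Amazon’s or Workday’s actual systems. Because the past decisions it learns from penalized resumes mentioning “women’s” activities, the model ends up with a negative weight on that token even though no one programmed it to discriminate.

```python
# Minimal, hypothetical sketch: a model fit to biased historical
# decisions encodes that bias as a "predictive" signal.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented historical data: resumes mentioning "women's" activities
# were rejected in the past (1 = hired, 0 = rejected).
resumes = [
    "captain of the chess club, finance degree",
    "captain of the women's chess club, finance degree",
    "led the debate team, commercial lending experience",
    "led the women's debate team, commercial lending experience",
    "varsity soccer player, energy sector internship",
    "women's varsity soccer player, energy sector internship",
]
past_decisions = [1, 0, 1, 0, 1, 0]

# Standard bag-of-words screening model: the classifier only ever
# sees word counts, never an explicit "gender" field.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, past_decisions)

# The learned weight on the token "women" comes out negative: the
# model has absorbed the bias baked into the training labels.
idx = vectorizer.vocabulary_["women"]
print(f"learned weight for 'women': {model.coef_[0][idx]:.2f}")
```

In a production system the bias is rarely this easy to spot: the signal tends to be spread across many correlated features rather than sitting on one obvious token, which is the “black box” opacity that regulators point to.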

This is the kind of discrimination that worries Brad Hoylman-Sigal, a New York state senator. “Many of these tools have been proven to unlawfully violate employee privacy and discriminate against women, people with disabilities and people of color,” he said.

In April, the Federal Trade Commission (FTC) and three other federal agencies, including the EEOC, said in a statement that they were looking at potential discrimination arising from the data sets that train AI systems and from opaque “black box” models that make combating bias difficult.

Some AI advocates acknowledge the risk of bias, but say it can be controlled.