Selection processes have always had personal and ideological biases. AI is not going to solve them


There was a time when more than one scholar was convinced that analyzing a person's skull in detail was enough to work out their character. Whether you were violent, patient, thoughtful, a good candidate for a roommate or an enemy best avoided: all of it was within reach of anyone who knew how to read the bumps on your skull. The doctrine is known as phrenology, and although it enjoyed a certain prestige in the 19th century, today it is considered a pseudoscience.

With hindsight it may look like nineteenth-century eccentricity, but researchers at the University of Cambridge have just warned that, in mid-2022, some of the ways we use artificial intelligence to recruit staff may not be much better.

Neither more objective nor fairer. In their study, published in Philosophy and Technology, the Cambridge researchers take a closer look at two advantages often attributed to AI recruitment tools. First, that they evaluate candidates objectively, free of biases tied to gender or race. Second, that thanks to this, their selections are fairer and encourage meritocracy.

AI is assumed to be impartial, so applying it should mean that only the best-suited candidates fill the vacancies and any possible bias is erased. The experts' conclusion, however, does not point in that direction: "These claims are misleading." And they offer several reasons to back it up.

When it's oversimplified. That, in short, is the first risk the Cambridge researchers warn about. Throughout their study they explain how, in trying to disregard race and gender, AI-based systems often "misunderstand" both factors, treating them as "isolable attributes", labels that, once muted in a candidate's file, cease to have any influence, rather than as traits embedded in "wider systems of power".
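The idea that a protected attribute can simply be deleted from the file is sometimes called "fairness through unawareness", and its failure mode is easy to sketch: other features can act as proxies for the attribute that was removed. The following illustration is entirely invented (the "club" feature, the probabilities, and the scoring rule are assumptions, not taken from the study):

```python
import random

random.seed(0)

# Synthetic applicants: 'group' is a protected attribute; 'club' is a
# hypothetical proxy feature that happens to correlate with group membership.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Proxy: group A joins the club 80% of the time, group B only 20%.
    club = random.random() < (0.8 if group == "A" else 0.2)
    applicants.append({"group": group, "club": club})

# "Blind" screening: the protected attribute is never consulted, but the
# rule favours club members, so the bias survives the blinding.
def blind_score(applicant):
    return 1 if applicant["club"] else 0

group_a = [a for a in applicants if a["group"] == "A"]
group_b = [a for a in applicants if a["group"] == "B"]
rate_a = sum(blind_score(a) for a in group_a) / len(group_a)
rate_b = sum(blind_score(a) for a in group_b) / len(group_b)

print(f"pass rate, group A: {rate_a:.2f}")  # close to 0.80
print(f"pass rate, group B: {rate_b:.2f}")  # close to 0.20
```

The screening rule never "sees" the group label, yet the pass rates diverge sharply, which is the researchers' point: removing a label does not remove the structure it belongs to.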

A related danger is that AI ends up shifting the focus. If an HR department's objective is to foster a better company culture and it delegates much of that task to an algorithm, it risks neglecting the essential: addressing the systemic problems inside the company itself. The experts speak of an "outsourcing" of the search for diversity and warn that it can lead to the opposite result: a "strengthening of the culture of inequality and discrimination."

The "ideal candidate"? Another question the study raises is what exactly it means, and implies, for AI to help find the "ideal candidate", defined, of course, according to the employer's criteria. "In its attempt to 'not see' race and gender, the AI may actually be less equipped to understand a candidate's skills, because it has been trained to 'see' and observe from the perspective of the employer, who has previously communicated a set of predetermined characteristics and traits that denote 'good employees'," the study notes.

The risk is that, by narrowing the candidate pool, the AI achieves the opposite of what was intended: instead of increasing diversity, it promotes uniformity. "According to the researchers, those with the right information and background could 'beat the algorithms' by replicating the behaviors the AI is programmed to identify," the university adds.

Automated pseudoscience. The risk, the team from the Cambridge Centre for Gender Studies adds, is that certain uses of AI in hiring processes end up being "little better than an 'automated pseudoscience' reminiscent of physiognomy or phrenology". A far cry from the rigor and bias-free selection these tools are supposed to bring to the process.

The authors of the Philosophy and Technology study go further, even calling this "a dangerous example of 'techno-solutionism'": an attempt to use technology for quick fixes to deep problems that require investment and changes in business culture.

A practical demonstration. To reinforce their argument, the researchers built an AI-powered tool, available online (you can try it at this link), that supposedly helps profile a candidate's personality to assess their suitability for a position.

The problem? The tool shows how small, arbitrary changes, such as an altered facial expression, different clothing, or even different lighting and background, yield completely different readings. "They could make the difference between rejection and progression," the team notes. The researchers acknowledge, in any case, that personnel-selection tools are usually proprietary, which makes it very difficult to know exactly how they work.
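Because commercial tools are opaque, we cannot show their actual scoring logic, but the brittleness the researchers demonstrate can be sketched with a deliberately crude stand-in. Everything below (the brightness-based "personality score", the threshold, the frames) is invented for illustration; real tools are far more complex, though the fragility is analogous:

```python
# A toy stand-in for an opaque video-screening tool: it derives a
# "personality score" from a superficial image statistic (mean brightness).
def personality_score(pixels):
    """Return a score in [0, 1] from mean pixel brightness (0-255 scale)."""
    return sum(pixels) / (255 * len(pixels))

def verdict(pixels, threshold=0.5):
    """Binary hiring decision driven entirely by the superficial score."""
    return "progress" if personality_score(pixels) >= threshold else "reject"

# The same 'candidate', filmed under slightly different lighting.
frame_dim    = [120] * 100   # darker room
frame_bright = [135] * 100   # a lamp switched on

print(verdict(frame_dim))     # -> reject
print(verdict(frame_bright))  # -> progress
```

A change of lighting no human interviewer would consider meaningful flips the outcome from "reject" to "progress", which is exactly the kind of arbitrariness the Cambridge demo exposes.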

An increasingly popular resource. The researchers' warning goes beyond theory and targets tools that are increasingly in use, at least according to the figures they cite. A 2020 study of 500 organizations across five countries found that 24% had turned to AI for hiring and that 56% of recruiters planned to adopt it within months. Another survey from the same year found that 86% of the organizations studied were incorporating new technologies into the process.

"The trend was already there when the pandemic started, and the accelerated shift to remote work caused by COVID-19 is likely to mean greater deployment of AI tools in HR departments in the future," says Kerry Mackereth. Together with study co-author Eleanor Drage, she points out that many companies use AI to analyze candidate videos and interpret their personalities, and the experts warn that some of these tools are being applied with little regulation.

The debate is open. The British team is not the first to question the benefits of artificial intelligence in recruitment, at least in certain cases. In 2018, Amazon scrapped an algorithm for this kind of process after finding that it was not neutral but showed a sexist bias. There are also those who stress the advantages of AI in personnel selection, such as saving time and money, and those who point out that these tools are still at an early stage and their adoption among employers remains very low.

"Artificial intelligence can efficiently help increase an organization's diversity by filtering from a larger pool of candidates, but it can also lose many good candidates if the training rules and data are incomplete or inaccurate," Hayfa Mohzaini, of the Chartered Institute of Personnel and Development, told the BBC.
