New Book Shows How 99% of Fortune 500 Companies Use AI to 'Watch' Interviews and 'Read' Resumes, Making Hiring Decisions Without Human Oversight
AI has taken over the job market by reading resumes and reviewing interviews to provide human executives with the best candidates, a new book shows.
The book, titled 'The Algorithm', has pulled back the curtain on how the world of recruitment is becoming a 'Wild West', where unregulated AI algorithms make decisions without human supervision.
Artificial intelligence already determines who gets hired and who gets fired, monitoring everything from what people post on social media to the tone of their voice in interviews, the book's author Hilke Schellmann told DailyMail.com.
Algorithms can now determine not only who gets a job interview but also, thanks to constant monitoring in the workplace, who gets promoted or fired (and they can even alert your boss if you're getting divorced).
Schellmann said ZipRecruiter's CEO told her a few years ago that the technology screened at least 75 percent of resumes.
“That was 2021; it's probably 100 percent now. We know that 99 percent of Fortune 500 companies are already using AI tools in hiring,” she said.
Schellmann said it's all but inevitable that if you apply for a job today, your resume will be screened by AI long before it reaches a human, and she offered tips on how to get noticed in an AI-driven workplace.
The AI tools recruiters rely on are unreliable, she said, and even the recruiters themselves don't know how they work. The AI-powered application process is rife with discrimination, she explained, and screens for “obviously weird” keywords.
The AI tools are often 'black boxes': even the recruiters who use them cannot see how they work.
The technology can develop unusual ideas about a candidate's likelihood of success, predicting it from signals such as church attendance or nationality, creating a process rife with discrimination.
This means, for example, that women or people with disabilities may be discriminated against during the hiring process, but because applicants do not know which AI tools were used, it is difficult for them to challenge the outcome.
“The vendors who build these AI tools do not want to be scrutinized and are reluctant to talk about any problems,” Schellmann said.
“They want to talk about it in fancy marketing terms, right? How great it is at finding the best people. But they don't want to open the black box for testing or transparency.
“The companies that use these AI tools often don't want to talk about them because they feel they will suffer reputational damage if applicants are angry that AI is being used and that no humans are looking at their applications.”
And she said machines account for the majority of rejections.
Matthew Scherer, a former employment lawyer whom Schellmann spoke with, said the tools used are “not ready for prime time.”
That's because the technology is “very basic” and cannot fully predict real-world outcomes, such as a person's success at a job.
Schellmann described many technologies used to filter resumes as “snake oil.”
“We know it saves money. We know it saves labor. But we haven't seen evidence yet that it chooses the most qualified candidates,” she said.
Many organizations are also using AI to review video interview recordings, looking for things like the “wrong tone,” Schellmann said.
“Unfortunately, this is largely legal,” she continued.
“The European Union is a bit stricter with the General Data Protection Regulation (GDPR) and other laws, but the United States is still the Wild West in this regard, aside from some local laws we see.
“There's one in Illinois where you have to let people know when AI is being used in video interviews. But in general there is not much regulation in this area yet.”
AI tools used in interviews highlight “biomarkers” (such as tone of voice or movements) that supposedly correspond to emotions.
“When you and I are talking, this tool can detect whether you are anxious or depressed based on your facial expression and the intonation of your voice,” Schellmann said.
“What does a facial expression during a job interview even mean?
“It doesn't make you good or bad at a job. We use these technological signals because we can, but they often don't have much meaning.”
Employers now also routinely scan social media networks like X and LinkedIn using AI algorithms, looking for details such as references to songs with violent lyrics.
“That could mean you're labeled as a violent person and someone who shouldn't be hired,” Schellmann said.
Many companies do this as part of applicant screening, but others run such scans on current employees continuously.
“Some of these tools also detect if you are prone to self-harm,” the author revealed.
“That could be illegal in the United States, because you can't ask people about medical conditions before hiring them.
“There's also the question of why a company would want to know whether you are prone to self-harm. Is that actually useful? Do they help these people, their employees? Or do they punish them?”
Companies also use AI algorithms to assess people's personalities from their social media posts, analyzing their language to determine what they are “really like,” Schellmann said.
“Companies want to look under people's hoods, right? They want to know who you are before they hire you,” Schellmann explained.
Schellmann said the ability to “see under the hood” of people is something organizations have coveted for decades, driving them to rely on untested or bogus technologies such as handwriting analysis.
AI algorithms that mine job interview videos now do the same work.
Organizations looking to hire someone who “learns quickly” (so they can adapt to a changing technological world) often rely on such technologies to predict who will be a good fit, Schellmann said.
But relying on personality, and on untested AI algorithms to identify people with that specific trait, is a mistake, Schellmann argues.
Schellmann said: “What we know from science is that personality is about five percent predictive of job success. So that is very, very little.
“We often overcome our personality. I'm quite shy and I need to work on that. When I go to receptions and parties, I have to work on approaching strangers. We can overcome our personality at work and in other places.
“The question is really whether we should use it. But it's easy to use. It's super cheap. It's just an easy way to do it, and that's why they do it.”