Artificial intelligence (AI) is now so advanced that we can no longer tell the difference between fake faces and photos of real people, a new study warns.
In experiments with US adults, AI-generated faces were judged to be human more often than photographs of real people.
Experts are concerned that such ‘hyper-realistic’ images could fuel online disinformation and identity theft by being used to create authentic-looking fake profiles.
In the study, the researchers compared five AI faces with five human faces.
Can you tell which of these people are real? Scroll down for the answers.
The new research was led by Dr Amy Dawel, a cognitive and clinical psychologist at the Australian National University (ANU) in Canberra.
AI faces fool people into thinking they are real because they mirror our ideals of what typical human faces look like, the authors said.
“It turns out that there are still physical differences between AI and human faces, but people tend to misinterpret them,” says Dr Dawel.
“For example, white AI faces tend to be more proportionate, and people mistake this for a sign of being human.
“However, we cannot rely on these physical cues for long.
“AI technology is developing so quickly that the differences between AI and human faces will likely disappear soon.”
For the study, Dr Dawel and colleagues recruited 124 US residents, all of whom were white and between the ages of 18 and 50.
They were shown 100 real faces and 100 AI faces generated using StyleGAN2, an AI tool created by the American company Nvidia.
After deciding whether each face was AI-generated or human, participants rated their confidence in that judgment on a scale from 0 (not at all confident) to 100 (completely confident).
White AI faces are rated as human noticeably more often than images of real people. Perceptual properties of faces that contribute to this phenomenon of ‘hyperrealism’ include facial proportions, familiarity and memorability
Worryingly, four of the five faces most often rated as human by participants were actually AI.
Meanwhile, four of the five faces most often rated as AI were actually human.
“White (but not non-white) AI faces are markedly more likely to be judged as human than images of real people,” the authors say.
“We point to the perceptual properties of faces that contribute to this phenomenon of hyperrealism, including facial proportions, familiarity and memorability.”
What is more, the participants who most often mistook AI faces for real ones were also the most confident that their judgments were correct.
“This means that people who mistake AI impostors for real people do not know they are being deceived,” said co-author Elizabeth Miller of ANU.
Ironically, the task of reliably telling AI faces from real ones may ultimately fall to machines themselves, the authors suggest.
“Since humans can no longer detect AI faces, society needs tools that can accurately identify AI impostors,” says Dr Dawel.
“Educating people about the perceived realism of AI faces can help make audiences appropriately skeptical about the images they see online.”
The authors are also concerned about racial bias: because AI algorithms are trained disproportionately on white faces, the white faces they generate can appear more realistic than AI faces of other races.
AI-generated images of black faces may be less accurate, although this would also make them easier to distinguish from photos of real black people.
“If white AI faces are consistently perceived as more realistic, this technology could have serious consequences for people of color by ultimately reinforcing racial biases online,” says Dr Dawel.
“This problem is already visible in current AI technologies used to create professional-looking portrait photos.
“When used on people of color, the AI changes their skin and eye color to that of white people.”
The research was published today in the journal Psychological Science.