Four of these faces are produced entirely by AI… can YOU tell who’s real? New research shows that almost 40% of people were wrong

Recognizing the difference between a real photo and an AI-generated image is becoming increasingly difficult as deepfake technology becomes more realistic.

Researchers from the University of Waterloo in Canada wanted to determine whether humans can distinguish AI images from real ones.

They asked 260 participants to label ten images collected by a Google search and ten images generated by Stable Diffusion or DALL-E – two AI programs used to create deepfake images – as real or fake.

The researchers noted that they expected 85 percent of participants to be able to identify the images accurately, but only 61 percent of people guessed correctly.

Researchers asked 260 participants to determine whether an image was real or fake, but almost 40 percent of people got it wrong

The research, published in Springer Link, found that people most often judged images as real or fake by looking at details such as the eyes and hair, while other, more general reasons were that the photo simply ‘looked strange’.

Participants were allowed to look at the photos without restriction and focus on the small details, something they probably wouldn’t do if they were just casually scrolling online.

However, the survey asked participants not to overthink their answers, encouraging them to “pay the attention you would pay to a photo with a news headline.”

“People are not as adept at making the distinction as they think,” says Andreea Pocol, a doctoral candidate in computer science at the University of Waterloo and lead author of the study.

Researchers chose 10 FAKE AI-generated images

The researchers said they were motivated to conduct the study because not enough research had been done on the topic. So they published a survey asking people to identify the real versus AI-generated images on Twitter, Reddit, and Instagram, among others.

In addition to labelling the images, participants could explain why they thought each one was real or fake before submitting their answers.

The study found that nearly 40 percent of participants misclassified the images, showing “that people are not good at separating real images from fake ones, making it easy to spread false and potentially dangerous stories.”

They also divided the participants by gender (male, female, or other) and found that female participants performed the best, with an accuracy of about 55 to 70 percent, while male participants had an accuracy of 50 to 65 percent.

Researchers chose 10 REAL images

Meanwhile, those who identified as “other” had a narrower accuracy range, correctly identifying fake versus real images 55 to 65 percent of the time.

Participants were also divided into age groups, and the researchers found that those aged 18 to 24 had an accuracy rate of 62 percent. As participants got older, the odds of guessing correctly decreased, falling to just 53 percent for people aged 60 to 64.

According to the study, this research is important because “deepfakes have become more sophisticated and easier to create in recent years,” leading to concerns about their potential impact on society.

The research comes at a time when AI-generated images, or deepfakes, are becoming more common and realistic, affecting not only celebrities but also everyday people, including teenagers.

For years, celebrities have been the target of deepfakes, with fake sexual videos of Scarlett Johansson appearing online in 2018, and two years later, actor Tom Hanks was targeted by AI-generated images.

Then, in January this year, pop star Taylor Swift was targeted with fake pornographic deepfake images that went viral online and were viewed 47 million times on X before being taken down.

Deepfakes also surfaced at a New Jersey high school when a male teen shared fake pornographic photos of his female classmates.

“Disinformation is not new, but the tools for disinformation are constantly evolving,” says Pocol.

‘It can reach a point where people, no matter how trained they are, will still have difficulty distinguishing real images from fakes.

‘That is why we must develop tools to identify and counter this. It’s like a new AI arms race.’