Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI’s power to mislead

WASHINGTON — Among the images of Gaza’s bombed houses and devastated streets, some stood out for the utter horror: bloodied, abandoned babies.

These images, which have been viewed millions of times online since the start of the war, are deepfakes created with the help of artificial intelligence. If you look closely, you can see clues: fingers curling strangely, or eyes glistening in an unnatural light – all telltale signs of digital deception.

However, the outrage that the images were intended to provoke is all too real.

Images from the war between Israel and Hamas have vividly and painfully illustrated the potential of AI as a propaganda tool, used to create lifelike images of carnage. Since the start of the war last month, digitally altered images spread on social media have been used to make false claims about responsibility for casualties or to mislead people about atrocities that never happened.

Although most of the false claims circulating online about the war did not require artificial intelligence and came from more conventional sources, technological advances are arriving with increasing frequency and little oversight. That has highlighted AI’s potential to become another form of weapon, and offered a glimpse of what could come during future conflicts, elections and other major events.

“It’s going to get worse — a lot worse — before it gets better,” said Jean-Claude Goldenstein, CEO of CREOpoint, a technology company based in San Francisco and Paris that uses AI to assess the validity of online claims. The company has created a database of the most viral deepfakes that have emerged from Gaza. “Photos, video and audio: with generative AI it becomes an escalation like never before.”

In some cases, photos from other conflicts or disasters have been repurposed and passed off as new. In other cases, generative AI programs have been used to create images from scratch, such as one of a baby crying amid bombing wreckage that went viral in the early days of the conflict.

Other examples of AI-generated footage include videos showing alleged Israeli rocket attacks, tanks rolling through destroyed neighborhoods, and families searching the rubble for survivors.

In many cases, the fakes appear intended to provoke a strong emotional response by involving the bodies of babies, children or families. In the bloody first days of the war, supporters of both Israel and Hamas claimed that the other side had victimized children and babies; deepfake images of crying infants provided photographic “evidence” that was quickly cited as proof.

The propagandists who create such images are adept at tapping into people’s deepest impulses and fears, said Imran Ahmed, CEO of the Center for Countering Digital Hate, a nonprofit that has tracked war-related disinformation. Whether it is a deepfake baby, or an actual image of a baby from another conflict, the emotional impact on the viewer is the same.

The more disturbing the image, the more likely a user is to remember and share it, inadvertently spreading the disinformation further.

“People are now being told: look at this picture of a baby,” Ahmed said. “The disinformation is designed to engage you.”

Similarly, misleading AI-generated content began to spread after Russia invaded Ukraine in 2022. An altered video appeared to show Ukrainian President Volodymyr Zelensky ordering Ukrainians to surrender. Such claims have continued to circulate over the past week, demonstrating how persistent even easily debunked misinformation can be.

Every new conflict or election season presents new opportunities for disinformation peddlers to demonstrate the latest AI advances. That has many AI experts and political scientists warning of the risks next year, when several countries hold major elections, including the US, India, Pakistan, Ukraine, Taiwan, Indonesia and Mexico.

The risk that AI and social media could be used to spread lies among American voters has alarmed lawmakers from both parties in Washington. At a recent hearing on the dangers of deepfake technology, U.S. Rep. Gerry Connolly, Democrat of Virginia, said the U.S. should invest in funding the development of AI tools designed to counter other AI.

“We as a nation need to get this right,” Connolly said.

Around the world, a number of startup tech companies are working on new programs that can detect deepfakes, watermark images to prove their origins, or scan text to flag misleading claims that may have been inserted by AI.

“The next wave of AI will be: how do we verify the content that’s out there? How can you detect misinformation, how can you analyze text to determine whether it is reliable?” said Maria Amelie, co-founder of Factiverse, a Norwegian company that created an AI program that can scan content for inaccuracies or biases introduced by other AI programs.

Such programs would be of immediate interest to educators, journalists, financial analysts and others interested in rooting out falsehoods, plagiarism or fraud. Similar programs are designed to detect doctored photos or videos.

While this technology holds promise, those who use AI to lie are often one step ahead, according to David Doermann, a computer scientist who led an effort at the Defense Advanced Research Projects Agency to respond to the national security threats posed by AI-manipulated images.

Doermann, who is now a professor at the University at Buffalo, said an effective response to the political and social challenges of AI disinformation will require better technology as well as better regulation, voluntary industry standards and extensive investments in digital literacy programs to help internet users figure out ways to distinguish truth from fantasy.

“Every time we release a tool that detects this, our adversaries can use AI to obscure that trace evidence,” Doermann said. “Detecting and trying to take this stuff down is no longer the answer. We need a much bigger solution.”