Google is dramatically increasing its efforts to combat the appearance of explicit AI-generated images and videos in search results. The company wants to make it clear that non-consensual AI-generated deepfakes are not welcome in its search engine.
Whatever form the offending images take, Google's approach is to remove this kind of material, and to bury it far from page-one results when removal isn't possible. Notably, Google has experimented with using its own AI to generate images for search results, but those images don't contain real people and certainly don't contain anything racy. Google has also partnered with experts in the field and with people who have been targeted by non-consensual deepfakes to make its response system more robust.
Google has allowed individuals to request the removal of explicit deepfakes for a while, but the proliferation and improvement of generative AI image makers mean more needs to be done. The system for requesting removals has been streamlined to make it easier to submit requests and to speed up the response. When a request is received and confirmed as valid, Google's algorithms will also work to filter out similarly explicit results related to the individual.
The victim also won’t have to manually sift through every variation of a search query that could return the content. Google’s systems will automatically scan for and remove duplicates of that image, and the effort won’t be limited to one specific image file: Google will proactively put a lid on related content as well. This is especially important given the nature of the internet, where content can be duplicated and distributed across multiple platforms and websites. Google already does this for real but non-consensual images, and the system will now cover deepfakes as well.
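Google hasn’t said how its duplicate detection works, but a common building block for spotting near-identical copies of an image is perceptual hashing. The sketch below is only an illustration of that general idea, not Google’s system: it assumes the Pillow library is installed, the file names are made-up placeholders, and the similarity threshold is arbitrary.

# Illustrative sketch only: a simple "average hash" for spotting near-duplicate
# images. Assumes Pillow; file names and threshold are hypothetical.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink the image to a tiny grayscale grid and encode each pixel as a bit
    depending on whether it is brighter than the grid's mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests a near-duplicate."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    known = average_hash("reported_image.png")      # image named in a takedown request
    candidate = average_hash("candidate_image.png") # newly crawled image
    if hamming_distance(known, candidate) <= 5:     # threshold chosen for illustration
        print("Likely duplicate of a removed image")

In practice, a system at Google’s scale would rely on far more robust matching than this, but the principle of fingerprinting an image once and then flagging close matches automatically is what spares victims from hunting down every copy themselves.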
The approach also bears some similarities to Google’s recent efforts to crack down on unauthorized deepfakes, explicit or otherwise, on YouTube. Previously, YouTube would simply label such content as either created by AI or potentially deceptive, but now the depicted person or their attorney can file a privacy complaint, and YouTube will give the video owner a few days to remove it themselves before YouTube considers the complaint.
Deepfakes buried deep
Content removal isn’t 100% perfect, as Google knows all too well. That’s why the hunt for explicit deepfakes in search results also includes an updated ranking system. The new ranking pushes back against search terms that are likely to surface explicit deepfakes. Google Search will now attempt to reduce the visibility of explicit fake content, and of websites associated with spreading it, especially when the search query contains someone’s name.
For example, let’s say you were looking for a news article about how a specific celebrity’s deepfakes went viral, and they testified before lawmakers about the need for regulation. Google Search will try to make sure you see those news stories and related articles about the issue, and not the deepfakes that are under discussion.
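Google hasn’t published the mechanics of this demotion, but the general idea can be sketched as a re-ranking pass: when a query looks like it names a person, results from sites flagged for spreading explicit deepfakes get pushed far down the list. Everything in the sketch below (the site list, the name heuristic, the scoring) is an invented placeholder, not Google’s actual system.

# Illustrative sketch only: demote results from flagged sites when a query
# appears to name a person. All data and heuristics here are hypothetical.
from dataclasses import dataclass

FLAGGED_SITES = {"deepfake-host.example"}  # hypothetical list of known offenders

@dataclass
class Result:
    url: str
    site: str
    score: float

def looks_like_person_query(query: str) -> bool:
    # Crude stand-in for name detection: at least two capitalized words,
    # e.g. "Jane Doe deepfake".
    return sum(1 for w in query.split() if w[:1].isupper()) >= 2

def rerank(query: str, results: list[Result]) -> list[Result]:
    if looks_like_person_query(query):
        for r in results:
            if r.site in FLAGGED_SITES:
                r.score *= 0.1  # push flagged sites far from page one
    return sorted(results, key=lambda r: r.score, reverse=True)

The effect, under these assumptions, is the one Google describes: reputable news coverage about a deepfake incident keeps its ranking, while the sites actually hosting the material sink out of view.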
Google is not alone
Given the complex and evolving nature of generative AI technology and its potential for abuse, tackling the spread of harmful content requires a multifaceted approach. And Google isn’t alone in facing this problem or working on solutions. Explicit deepfakes have appeared on Facebook, Instagram, and other Meta platforms, and the company has updated its policies as a result; its Oversight Board recently recommended that Meta amend its guidelines to directly cover explicit AI-generated content and improve its appeals process.
Lawmakers are also responding to the problem, with the New York state legislature passing a bill targeting AI-generated non-consensual pornography as part of its “revenge porn” laws. Nationally, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024 (NO FAKES Act) was introduced in the U.S. Senate this week to address both explicit content and non-consensual uses of deepfake images and voices. Similarly, Australia’s legislature is working on a bill to criminalize the creation and distribution of non-consensual explicit deepfakes.
Still, Google can already point to some success in the fight against explicit deepfakes. The company claims that initial tests with these changes have managed to reduce the appearance of explicit deepfakes by more than 70%. However, Google has not yet declared victory over explicit deepfakes.
“These changes are important updates to our protections in Search, but there is more work to do to address this issue and we continue to develop new solutions to help people affected by this content,” Google product manager Emma Higham explained in a blog post. “And as this challenge goes beyond search engines, we will continue to invest in industry partnerships and expert engagement to address this as a society.”