AI-generated and edited images will soon be labeled in Google search results

Google has announced that it will be rolling out a new feature to help users “better understand how a particular piece of content was created and modified.”

This comes after the company joined the Coalition for Content Provenance and Authenticity (C2PA) – a group of major brands that aims to combat the spread of misleading information online – and helped develop its latest Content Credentials standard. Amazon, Adobe and Microsoft are also members of the committee.

Google says it will use the current Content Credentials guidelines – that is, an image’s metadata – within Search to label images that have been generated or edited by AI, providing more transparency for users. This metadata includes information such as the image’s origin and when, where and how it was created.
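To make this concrete, here is a minimal Python sketch of the kind of check such labeling could rest on: inspecting a Content Credentials (C2PA) manifest for an assertion that the image was created or edited with AI. This is an illustration, not Google’s actual pipeline – the manifest is assumed to have already been extracted and cryptographically verified by a C2PA SDK and handed over as a plain dict, and the `classify_image` helper is hypothetical. The `digitalSourceType` URIs, however, are the real IPTC terms C2PA uses to mark synthetic media.

```python
# Sketch: deciding whether a (already extracted and verified) Content
# Credentials manifest claims AI involvement. The dict layout below is a
# simplified stand-in for a real C2PA manifest.

# IPTC digital source types that C2PA uses to flag AI-generated content
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def classify_image(manifest: dict) -> str:
    """Return a rough label: 'ai-generated', 'ai-edited', or 'no-ai-claim'."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType", "") in AI_SOURCE_TYPES:
                # 'c2pa.created' means the asset originated from a model;
                # other actions (e.g. 'c2pa.edited') imply AI-assisted edits.
                if action.get("action") == "c2pa.created":
                    return "ai-generated"
                return "ai-edited"
    return "no-ai-claim"

# Example manifest fragment, similar to what an AI image generator might embed
example = {
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ]
}

print(classify_image(example))  # -> ai-generated
```

The catch, as the rest of this piece notes, is that this only works when the generator or editing tool writes the metadata in the first place, and when nobody has stripped it along the way.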

However, the C2PA standard, which allows users to track the provenance of different media types, has not been adopted by many AI developers, such as Black Forest Labs, the company behind the Flux model that X’s (formerly Twitter’s) Grok uses to generate images.

This AI labeling will be implemented through the existing “About this image” window, which means it will also be available to users via Google Lens and Android’s Circle to Search feature. When it’s live, users will be able to click the three dots above an image and select “About this image” to see whether it was generated by AI – so it won’t be as prominent as we’d hoped.

Is this enough?

While Google had to do something about AI images in search results, the question remains whether a buried label is enough. If the feature works as advertised, users will have to take additional steps to check whether an image was created with AI before Google acknowledges it. Those who aren’t already aware of the About This Image feature may not even realize a new tool is available to them.

While there are examples of deepfakes in videos – such as earlier this year, when a finance worker was scammed into paying out $25 million to fraudsters posing as the company’s CFO – AI-generated images are almost as problematic. Donald Trump recently posted digitally rendered images of Taylor Swift and her fans that falsely suggested she supported his presidential campaign, and Swift was also the victim of image-based sexual abuse when AI-generated nude photos of her went viral.

While it’s easy to complain that Google isn’t doing enough, even Meta isn’t eager to let the cat out of the bag. The social media giant recently updated its policy to make labels less visible, moving the relevant information to a post’s menu.

(Image credit: Ny Breaking/Sharmishta Sarkar)

While this upgrade to the About This Image tool is a positive first step, more aggressive measures are needed to keep users informed and protected. More companies, such as camera makers and AI tool developers, will also need to adopt and use C2PA watermarks for the system to be as effective as possible, since Google will be relying on that data. A few camera models, such as the Leica M11-P and Nikon Z9, have Content Credentials features built in, while Adobe has implemented a beta version in both Photoshop and Lightroom. Even then, though, it will be up to users to turn on those features and provide accurate information.

In a study by the University of Waterloo, only 61% of people could tell the difference between AI-generated and real images. If those numbers hold, a label tucked away in a menu won’t be providing more transparency to the more than a third of people who can’t spot the difference on their own. Still, it’s a positive step for Google in the fight against online misinformation, but it would be nice to see the tech giants make these labels far more accessible.
