Meta will start flagging AI-generated images on Facebook, Instagram and Threads in an effort to maintain online transparency.
The tech giant already labels content created by its Imagine AI engine with a visible watermark. In the future, it will do something similar for images coming from third-party sources like OpenAI, Google, and Midjourney, to name a few. It's not yet clear exactly what these labels will look like, although, judging by the announcement post, they may simply consist of the words 'AI Info' next to the generated content. Meta states that this design is not final, which suggests it could change once the update officially launches.
In addition to visible labels, the company is also working on tools to "identify invisible markers" in images from third-party generators. Meta's own Imagine AI does something similar already, embedding watermarks in the metadata of its content. The purpose is to include a unique tag that editing tools cannot easily strip out. Meta states that other platforms plan to do the same, and it wants a system in place to detect the tagged metadata.
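Meta hasn't published the details of its detection scheme, but the IPTC metadata standard already defines a `DigitalSourceType` property whose value `trainedAlgorithmicMedia` marks fully AI-generated media, and that gives a sense of how a metadata-level tag could be checked. Below is a minimal sketch in Python; the byte layout and helper function are illustrative stand-ins, not Meta's actual implementation:

```python
import re

# Toy bytes standing in for a JPEG carrying an embedded XMP packet.
# "trainedAlgorithmicMedia" is a real IPTC DigitalSourceType value for
# AI-generated media; the surrounding structure here is simplified.
SAMPLE_IMAGE = (
    b"\xff\xd8\xff\xe1"  # JPEG SOI marker + APP1 segment (where XMP lives)
    b"<x:xmpmeta xmlns:x='adobe:ns:meta/'>"
    b"<rdf:Description Iptc4xmpExt:DigitalSourceType="
    b"'trainedAlgorithmicMedia'/>"
    b"</x:xmpmeta>"
    b"\xff\xd9"          # end-of-image marker
)

# IPTC values that indicate AI involvement in creating the media.
AI_SOURCE_TYPES = {
    b"trainedAlgorithmicMedia",
    b"compositeWithTrainedAlgorithmicMedia",
}

def looks_ai_generated(data: bytes) -> bool:
    """Return True if embedded metadata declares an AI provenance tag."""
    match = re.search(rb"DigitalSourceType=['\"]?(\w+)", data)
    return match is not None and match.group(1) in AI_SOURCE_TYPES

print(looks_ai_generated(SAMPLE_IMAGE))         # True
print(looks_ai_generated(b"\xff\xd8\xff\xd9"))  # False
```

A simple byte scan like this is exactly why metadata alone isn't enough: the tag disappears if the file is re-encoded, which is why Meta also talks about invisible watermarks baked into the pixels themselves.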
Audio and video labeling
So far this has all been about images, but what about AI-generated audio and video? Google's Lumiere is capable of creating incredibly realistic clips, and OpenAI is working on implementing video creation in ChatGPT. Is there anything in place to detect these more complex forms of AI content? Yeah, sort of.
Meta admits that there is currently no way to detect AI-generated audio and video at the same level as images. The technology just isn't there yet. However, the industry is "working on this possibility". Until then, the company will rely on the honor system: users must disclose whether the video clip or audio file they want to upload has been produced or edited by artificial intelligence. Failure to do so will result in a 'penalty'. Additionally, if a piece of media is so realistic that it risks misleading the public, Meta will add "a more prominent label" with important details.
Future updates
As for its own platforms, Meta is also working to improve first-party tools.
The company's AI research laboratory FAIR is developing a new type of watermarking technology called Stable Signature. Apparently, it is possible to strip invisible markers from the metadata of AI-generated content; Stable Signature should stop this by making watermarking an integral part of the image generation process. Additionally, Meta has started training various LLMs (Large Language Models) on its Community Standards so the AIs can determine whether content violates its policies.
Expect the labels to roll out across the social media platforms in the coming months. The timing should come as no surprise: 2024 is a key election year for many countries, especially the United States, and Meta is trying to limit the spread of misinformation on its platforms as much as possible.
We reached out to the company to learn more about the penalties a user could face for not adequately flagging their post, and whether it plans to mark images from third-party sources with a visible watermark. This story will be updated if we hear back.
Until then, check out Ny Breaking’s list of the best AI image generators for 2024.