Meta is introducing a tool to identify AI-generated images shared on its platforms, amid a global rise in synthetic content spreading misinformation.
Because AI images are created with different systems across the internet, Mark Zuckerberg’s company wants to extend labeling to content made with tools from other companies such as Google, OpenAI, Microsoft and Adobe.
Meta said it will fully roll out the labeling feature in the coming months and plans to add an option that will allow users to flag AI-generated content.
However, the US presidential race is in full swing, leading some to wonder whether the labels will appear in time to prevent the spread of fake content.
The move comes after Meta’s Oversight Board urged the company to take steps to label manipulated audio and video that could mislead users.
Meta is introducing a tool to identify AI-generated content created on its platform
Meta rolled out an AI image generator in September last year and will label all images created with it
“The Board’s recommendations go further in that they advised the company to expand its manipulated media policy to include audio, clearly identify the harm it is trying to reduce, and begin labeling these types of posts more broadly than had been announced,” Oversight Board spokesperson Dan Chaison told Dailymail.com.
He continued, “Labeling allows Meta to leave more content up and protect free speech.
‘However, it is important that the company clearly defines the issues it wants to address, as not all altered content is objectionable or poses an imminent risk of real-world harm.
“That harm could include inciting violence or misleading people about their voting rights.”
Meta said Tuesday it is working with industry partners on technical standards that will make it easier to identify images and ultimately video and audio generated by artificial intelligence tools.
What remains to be seen is how well it will work at a time when it’s easier than ever to create and distribute AI-generated images that can cause harm — from election misinformation to non-consensual fake celebrity nudes.
AI-generated images have become increasingly worrying.
Thousands of internet users are being tricked into sharing fake images of French President Emmanuel Macron protesting Donald Trump’s arrest by police in New York City.
Nick Clegg, Meta’s president of global affairs, said it is important to roll out these labels now, at a time when elections taking place around the world could be targeted with misleading content.
“As the distinction between human and synthetic content blurs, people want to know where the line is,” said Clegg.
“People are often encountering AI-generated content for the first time, and our users have told us they appreciate transparency around this new technology.
“So it’s important that we help people know when the photorealistic content they see was created using AI.”
Clegg also explained that Meta will work to tag “images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock as they implement their plans to add metadata to images created with their tools.”
Several fake images contain misleading and sometimes dangerous information that could incite violence if left unchecked.
The Oversight Board said Meta’s current manipulated media policy “lacks compelling justification, is incoherent and confusing to users, and fails to clearly specify the harms it seeks to prevent.”
“As it stands now, the policy makes little sense,” Michael McConnell, co-chairman of the board, told Bloomberg.
“It bans altered videos of people saying things they are not saying, but does not ban posts depicting a person doing something they did not do. It only applies to video created via AI, but excludes other fake content.”
A misleading image of Donald Trump’s arrest went viral, sparking outrage from people who believed the image was real
Last year, one image showed former President Donald Trump being arrested outside a New York City courthouse, sparking an outpouring of people who believed the image was real.
Meta’s Oversight Board said the move to label AI-generated images is a win for media literacy and will give users the context they need to identify misleading content.
The board is still in discussions with Meta about expanding the labels to video and audio, and is calling on the company to clearly identify the harms associated with misleading media.
Meta did not respond to the Oversight Board’s request for the company to implement additional labels to identify any changes to posted content.
The idea is that by labeling misleading content, Meta does not have to remove the posts, which in turn can protect people’s right to free speech and their right to express themselves.
However, manipulations like the robocall that imitated President Joe Biden’s voice telling New Hampshire voters not to vote during the primaries would still warrant removal of the content.
To combat misleading information, Meta is exploring the development of technology that automatically detects AI-generated content.
“This work is especially important as this is likely to become an increasingly hostile area in the coming years,” Clegg said.
‘People and organizations that actively want to mislead people with AI-generated content will look for ways to circumvent the security measures put in place to detect it.
‘In our sector and in society in general, we will have to continue to look for ways to stay one step ahead.’
Dailymail.com has contacted Meta for comment.