YouTube can recognize the AI-generated faces and songs you see everywhere
YouTube is responding to the onslaught of AI-generated content appearing on the platform with a new set of tools to spot when AI-generated people, voices and even music appear in videos. The newly upgraded Content ID system expands from checking for copyright infringement to also sniffing out synthetic voices performing songs. There are also new ways to spot when deepfake faces pop up in videos.
The “synthetic singing” voice identification tool for the Content ID system is fairly simple: it automatically detects AI-generated imitations of singing voices and alerts the affected users so they can manage that content. Google plans to roll out a pilot version of the system early next year before it goes live more widely.
In the visual content space, YouTube is testing a way for content creators to detect AI-generated videos that feature their faces without their consent. The idea is to give artists and public figures more control over how AI versions of their faces are deployed, particularly on the video platform. Ideally, this would prevent the spread of deepfakes, or unauthorized manipulations.
Both features build on a policy quietly added to YouTube’s Terms of Service in July to address AI-generated impersonation. Affected individuals can request the removal of videos that contain deepfake versions of themselves through YouTube’s privacy request process. That was a major step beyond simply labeling videos as AI-generated or deceptive content, extending the removal policy to cover AI impersonation directly.
“These two new capabilities build on our track record of developing technology-driven approaches to address rights issues at scale,” YouTube Vice President of Creator Products Amjad Hanif wrote in a blog post. “We’re committed to bringing this same level of protection and empowerment into the AI era.”
YouTube’s AI Infusion
The flip side of AI detection tools involves creators whose videos have been scraped to train AI models without their consent. Some YouTube creators are upset that their work has been picked up for training by OpenAI, Apple, Nvidia, and Google itself without any request or compensation. YouTube's response is still in the early stages of development, but it will likely address Google's own scraping at the very least.
“We will continue to take steps to ensure that third parties respect [YouTube’s Terms of Service], including continued investment in the systems that detect and prevent unauthorized access, up to and including blocking access for those who scrape,” Hanif wrote. “That said, as the generative AI landscape continues to evolve, we recognize that creators may want more control over how they work with third-party companies to build AI tools. That’s why we’re developing new ways to give YouTube creators choice over how third parties can use their content on our platform.”
The announcements are part of YouTube’s push to make AI both a deeply integrated part of the platform and something its users trust. That’s why these protection announcements often come right before or after plans like YouTube’s Brainstorm with Gemini tool for generating inspiration for a new video. And that’s without even mentioning anticipated features like an AI music generator, which will itself pair nicely with the new tool for removing copyrighted music from a video without deleting the video entirely.