The FCC wants the AI voice that calls you to say it's a deepfake

AI can mimic a human voice so well that deepfakes can trick many people into thinking they're hearing a real person. Inevitably, AI voices have been abused to automate phone calls. The U.S. Federal Communications Commission (FCC) is trying to combat the most malicious of these schemes with a proposal aimed at strengthening consumer protections against unwanted and illegal AI-generated robocalls.

The FCC's plan would help define AI-generated phone calls and text messages, allowing the commission to then set limits and rules. For example, the rules could mandate that AI voices disclose that they are artificial when making calls.

AI's usefulness in less savory communications and activities makes it no surprise that the FCC is busy regulating it. The proposal is also part of the FCC's broader effort to combat robocalls, both as a nuisance and as a vehicle for fraud. Because AI makes these schemes harder to detect and avoid, the proposal would require public disclosure of AI-generated voices and words: a conversation would have to start with the AI explaining the artificial origins of both what it's saying and the voice used to say it. Any group that fails to do so would face a hefty fine.

The new plan supplements the FCC’s Declaratory Ruling from earlier this year, which found that voice cloning technology in robocalls is illegal without the consent of the person being called. That ruling stemmed from the intentional confusion created by a deepfake voice clone of President Joe Biden combined with caller ID spoofing to spread misleading information to New Hampshire voters ahead of the January 2024 primary.

Asking AI for help

In addition to pursuing the sources of AI-generated calls, the FCC said it also wants to roll out tools to alert people when they're receiving AI-generated robocalls and robotexts, particularly unwanted or illegitimate ones. That could include better call filters that block such calls before they reach the recipient, AI-powered detection algorithms, or improved caller ID that identifies and flags AI-generated calls. For consumers, the FCC's proposed regulations provide a welcome layer of protection against the increasingly sophisticated tactics used by scammers. By requiring transparency and improving detection tools, the FCC aims to reduce the risk of consumers falling victim to AI-generated scams.
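To illustrate one way such flagging might work in principle, here is a minimal Python sketch. Everything in it is hypothetical: the `synthetic_voice_score` detector, the `InboundCall` type, and the 0.8 threshold are illustrative assumptions, not anything the FCC or carriers have actually specified.

```python
# Hypothetical sketch of flagging an inbound call whose audio an
# AI-voice detector scores as likely synthetic. The detector itself
# is assumed; the stub below returns a fixed score for demonstration.
from dataclasses import dataclass


@dataclass
class InboundCall:
    caller_id: str
    audio_sample: bytes  # e.g., the first few seconds of call audio


def synthetic_voice_score(audio: bytes) -> float:
    """Stub for a hypothetical AI-voice detection model; returns the
    probability (0.0-1.0) that the voice is machine-generated."""
    return 0.92  # placeholder value for illustration only


def flag_call(call: InboundCall, threshold: float = 0.8) -> str:
    """Label a call for the user based on the detector's score."""
    score = synthetic_voice_score(call.audio_sample)
    if score >= threshold:
        return f"{call.caller_id}: likely AI-generated (score {score:.2f})"
    return f"{call.caller_id}: no AI voice detected (score {score:.2f})"


if __name__ == "__main__":
    print(flag_call(InboundCall(caller_id="+1-555-0100", audio_sample=b"")))
```

In practice, any real filter would run a trained audio classifier on live call audio and combine its output with caller-ID verification; this toy version only shows the flag-and-label step.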

Artificial voices created with AI have also been used for many positive purposes. For example, they can give people who have lost their voices the ability to speak again and open up new communication options for people with visual impairments. The FCC acknowledged those benefits in its proposal, even as it cracks down on the harm the tools can cause.

"Faced with a rising tide of misinformation, nearly three-quarters of Americans say they are concerned about misleading AI-generated content. That's why the Federal Communications Commission has focused its work on AI by grounding it in a key principle of democracy: transparency," FCC Chair Jessica Rosenworcel said in a statement. "The concerns about these technological developments are real. And rightly so. But if we focus on transparency and take swift action when we find fraud, I think we can look beyond the risks of these technologies and seize the benefits."
