Michigan to join state-level effort to regulate AI political ads as federal legislation remains pending

LANSING, MI — Michigan is joining an effort to curb deceptive use of artificial intelligence and manipulated media through state-level policies, as Congress and the Federal Election Commission continue to debate more sweeping regulations ahead of the 2024 elections.

State and federal campaigns would be required to clearly disclose which political ads airing in Michigan were created using artificial intelligence under legislation expected to be signed in the coming days by Gov. Gretchen Whitmer, a Democrat. The legislation would also ban the use of AI-generated deepfakes within 90 days of an election unless accompanied by a separate disclosure identifying the media as manipulated.

Deepfakes are fabricated media that depict someone doing or saying something they never did. They are created using generative artificial intelligence, a type of AI that can produce convincing images, videos or audio clips in seconds.

There are growing concerns that generative AI will be used in the 2024 presidential race to deceive voters, impersonate candidates and undermine elections on a scale and speed never seen before.

Candidates and committees in the race are already experimenting with the rapidly advancing technology, which can create convincing fake images, video and audio clips in seconds and has become cheaper, faster and easier for the public to use in recent years.

The Republican National Committee released a fully AI-generated ad in April intended to show the future of the United States if President Joe Biden is re-elected. Disclosed in small print as AI-generated, the ad featured fake but realistic images of boarded-up storefronts, armored military patrols in the streets, and a surge in immigration sparking panic.

In July, Never Back Down, a super PAC supporting Republican Florida Gov. Ron DeSantis, used an AI voice-cloning tool to imitate former President Donald Trump’s voice, making it appear as though he had narrated a social media post he wrote, even though he never said the statement aloud.

Experts say this is just a glimpse of what could happen if campaigns or third-party actors decide to use AI deepfakes in more malicious ways.

So far, states like California, Minnesota, Texas and Washington have passed laws regulating deepfakes in political ads. Similar legislation has been introduced in Illinois, Kentucky, New Jersey and New York, according to the nonprofit Public Citizen.

Under the Michigan legislation, any person, committee or other entity distributing an ad for a candidate would be required to state clearly whether generative AI was used. The disclosure would have to be in the same font size as the majority of text in print ads, and in television ads “must appear for at least four seconds in letters the size of the majority of any text,” according to a legislative analysis from the State House Fiscal Agency.

Deepfakes used within 90 days of an election would require a separate disclaimer informing the viewer that the content has been manipulated to depict speech or behavior that did not occur. If the medium is a video, the disclaimer would have to be clearly visible and appear throughout the video.

Campaigns could face a misdemeanor punishable by up to 93 days in jail, a fine of up to $1,000, or both for a first violation of the proposed laws. The attorney general or a candidate harmed by the deceptive media could also seek damages in the appropriate court.

Federal lawmakers on both sides of the aisle have emphasized the importance of regulating deepfakes in political advertising and have held meetings to discuss it, but Congress has yet to pass anything.

A recent bipartisan bill in the Senate, co-sponsored by Democratic Sen. Amy Klobuchar of Minnesota, Republican Sen. Josh Hawley of Missouri and others, would ban “materially misleading” deepfakes related to federal candidates, with exceptions for parody and satire.

Michigan Secretary of State Jocelyn Benson flew to Washington, D.C., in early November to participate in a bipartisan discussion on AI and elections and called on senators to pass Klobuchar and Hawley’s federal deceptive AI legislation. Benson said she also encouraged senators to return home and lobby their state legislatures to pass similar laws that make sense for their states.

Federal law is limited in its ability to regulate AI at the state and local level, Benson said in an interview, adding that states also need federal funds to address the challenges of AI.

“All of this will become a reality if the federal government gives us money to hire someone to just handle AI in our states, and similarly teach voters how to spot deepfakes and what to do if you find them,” Benson said. “That solves a lot of the problems. We can’t do it alone.”

In August, the Federal Election Commission took a procedural step toward potentially regulating AI-generated deepfakes in political ads under its existing rules against “fraudulent misrepresentation.” Although the commission has held a public comment period on the petition, filed by Public Citizen, it has not yet issued a ruling.

Social media companies have also announced some guidelines aimed at curbing the spread of harmful deepfakes. Meta, which owns Facebook and Instagram, announced earlier this month that it will require political ads on the platforms to disclose whether they were created using AI. Google unveiled a similar AI labeling policy for political ads played on YouTube or other Google platforms in September.


Swenson reported from New York. Associated Press writer Christina A. Cassidy contributed from Washington.


The Associated Press receives support from several private foundations to improve its explanatory reporting on elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.