Deepfake video of Biden in drag promoting Bud Light goes viral, as experts warn of tech’s risks

Deepfake videos of President Joe Biden and Republican frontrunner Donald Trump show how the 2024 presidential race could be the first serious test of American democracy’s resilience to artificial intelligence.

Videos of Biden dressed as trans star Dylan Mulvaney promoting Bud Light and Trump teaching tax evasion in a quiet Albuquerque nail salon show that even the country’s most powerful figures are not safe from AI identity theft.

Experts say that while these counterfeits are relatively easy to spot today, the technology is evolving so quickly that spotting them will be impossible within a few years.

There have already been glimpses of the real harm of AI. Earlier this week, an AI-crafted image of black smoke emanating from the Pentagon sent shockwaves through the stock market before media fact-checkers were finally able to correct the record.

A deepfake of Biden pre-gaming in drag, posted by @drunkamerica on Instagram, received 223,107 likes in the past five days. Experts believe the uncanny accuracy of AI-generated voices and faces means it will be ‘increasingly difficult to identify disinformation’

“It is becoming increasingly difficult to identify disinformation, especially sophisticated AI-generated deepfakes,” said Cayce Myers, a professor at Virginia Tech’s School of Communication.

“To detect this disinformation, users need to be more media literate and more adept at researching the truth of any claim,” said Myers, who has studied deepfake technology and its increasing prevalence.

“The cost barrier for generative AI is also so low that now almost anyone with a computer and the internet has access to AI,” said Myers.

Myers emphasized the role both tech companies and the average citizen will have to play to prevent these waves of creepy, believable counterfeits from overwhelming American democracy in 2024.

“Surveying sources, understanding the warning signs of disinformation, and being diligent about what we share online is a personal way to combat the spread of disinformation,” Myers said. “But that won’t be enough.”

“Companies producing AI content and social media companies where disinformation is being spread will need to implement some level of guardrails to prevent disinformation from spreading widely.”

There are fears that videos of politicians uttering words they never said could be used as a powerful disinformation tool to influence voters.

Notorious troll farms in Russia and other parts of the world hostile to the US are being used to sow dissent on social media.

It’s been five years since BuzzFeed and director and comedian Jordan Peele produced a creepy deepfake satire of former President Barack Obama to draw attention to the technology’s alarming potential.

“They could make me say things like, I don’t know, [Marvel supervillain] ‘Killmonger was right,’ or ‘Ben Carson is in the sunken place,’” Peele said in his expert Obama impression.

A deepfake spoof of former President Trump placed his voice and likeness on AMC Network’s shady attorney Saul Goodman from the Breaking Bad and Better Call Saul series. The video, from YouTube channel CtrlShiftFace, has garnered 24,000 likes since posting

Or, how about this: “Simply put, President Trump is a total and complete jerk.”

But it’s not just academics, comedians and news outlets making these claims.

Leading policy experts have reiterated their concerns in recent years with increasing urgency.

“A well-timed and thoughtfully scripted deepfake or series of deepfakes could influence an election,” experts wrote for the Council on Foreign Relations in 2019.

WHAT ARE DEEPFAKES?

The technology behind deepfakes, known as a generative adversarial network, was developed in 2014 by Ian Goodfellow, a leader in the field who later became director of machine learning at Apple’s Special Projects Group.

The word is a combination of the terms ‘deep learning’ and ‘fake’, and the technique is a form of artificial intelligence.

The system studies a target person in photos and videos, allowing it to capture multiple angles and mimic their behavior and speech patterns.
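For technically minded readers, the following is a minimal sketch of the adversarial idea Goodfellow introduced: two networks trained against each other, one forging samples and one judging them. It uses PyTorch and a toy one-dimensional "target" in place of face images; the network sizes, learning rates, and step count are invented placeholders, not values from any real deepfake tool.

```python
# Minimal, illustrative sketch of a generative adversarial network (GAN).
# A toy 1-D Gaussian stands in for face images; all numbers are arbitrary.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "authentic" samples of the target
    fake = generator(torch.randn(64, 8))    # the generator's forgeries

    # The discriminator learns to tell real samples from forgeries...
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # ...while the generator learns to fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, the forgeries' mean should approach the real data's mean (3.0).
with torch.no_grad():
    print(f"mean of forgeries: {generator(torch.randn(1000, 8)).mean().item():.2f}")
```

Real deepfake systems apply this same tug-of-war to millions of face images rather than a single number, which is how they learn a target’s angles, expressions, and speech patterns.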

The technology gained attention during election season, as many feared developers would use it to undermine the reputation of political candidates.

The Council on Foreign Relations experts also warned that deepfakes could soon “provoke violence in a city primed for civil unrest, amplify insurgent narratives of an enemy’s alleged atrocities, or exacerbate political divisions in a society.”

While Virginia Tech’s Myers acknowledges that programs like Photoshop have been capable of lifelike fakes for years, he says the difference now is that AI-driven disinformation can be mass-produced with ever-increasing sophistication.

“Photoshop allows fake images,” said Myers, “but AI can create modified videos that are very compelling. Since disinformation is now a widespread source of online content, this kind of fake news content can reach a much wider audience, especially if the content goes viral.”

Just as the “Better Call Trump” and Biden Bud Light videos have.

Myers has argued that in the near future we will see much more disinformation, both visual and written, serious and comedic.

But help — in the form of government regulation of any kind — doesn’t seem to be on the way.

On Wednesday, former Google CEO Eric Schmidt, a longtime White House adviser who recently co-chaired the US National Security Commission on AI, said he doubts the US will establish a new regulatory body to rein in AI.

“The problem is lawmakers don’t want to make a new law regulating AI before we know where the technology is going,” Myers said.

Dozens of verified accounts, such as WarMonitors, BloombergFeed and RT, shared the photo showing black smoke rising from the ground next to a white building

HOW TO SPOT A DEEPFAKE

1. Unnatural eye movements. Eye movements that don’t look natural — or a lack of eye movement, such as an absence of blinking — are huge red flags. It’s hard to replicate blinking in a way that looks natural, and just as hard to mimic a real person’s eye movements, which typically follow the person they’re talking to.

2. Unnatural facial expressions. If something doesn’t look right on a face, it could indicate facial distortion. This happens when one image is stitched over another.

3. Awkward positioning of facial features. If someone’s face is pointing one way and their nose is pointing another, you should be skeptical about the video’s authenticity.

4. A lack of emotion. You can also spot what’s known as “facial morphing” or image splicing when someone’s face doesn’t show the emotion that should go along with what they’re supposedly saying.

5. Awkward-looking body or posture. Another sign is if a person’s body shape doesn’t look natural, or if the head and body are positioned awkwardly or inconsistently. This is one of the easier inconsistencies to spot, because deepfake technology usually focuses on facial features rather than the whole body.

6. Unnatural body movement or body shape. If someone looks distorted or off when they turn to the side or move their head, or if their movements are jerky and disjointed from one frame to the next, you should suspect the video is fake.

7. Unnatural colors. Abnormal skin tone, discoloration, weird lighting, and misplaced shadows are all signs that what you’re seeing is probably fake.

8. Hair that doesn’t look real. You won’t see any frizz or flyaway hairs, because fake images can’t generate those individual characteristics.

9. Teeth that don’t look real. Algorithms may not be able to generate individual teeth, so the lack of outlines of individual teeth may be a clue.

10. Blurring or misalignment. If the edges of images are blurry or images are misaligned, such as where someone’s face and neck meet their body, you know something is wrong.

11. Inconsistent noise or audio. Deepfake creators usually spend more time on the video than on the audio. The result can be poor lip-syncing, robotic-sounding voices, strange word pronunciation, digital background noise, or even the absence of audio.

12. Images that look unnatural when slowed down. If you’re watching a video on a screen larger than your smartphone or if you have video editing software that can slow down the playback of a video, you can zoom in and view images more closely. For example, if you zoom in on the lips, you can see if they’re actually talking or if it’s a bad lip sync.

13. Hash discrepancies. A cryptographic algorithm lets creators prove that their videos are authentic: hashes are computed at set points in a video, and if those hashes no longer match, you should suspect manipulation (a minimal sketch of the idea appears after this list).

14. Digital fingerprints. Blockchain technology can also create a digital fingerprint for videos. While not foolproof, this blockchain-based verification can help establish authenticity: when a video is created, its content is registered in a ledger that cannot be changed, so any later alteration can be detected.

15. Reverse image search. A search for the original image, or a reverse image search with the help of a computer, can find similar videos online to help determine whether an image, audio clip, or video has been altered. While reverse video search technology is not yet publicly available, investing in such a tool could be worthwhile (a simple frame-comparison sketch appears below).
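To make items 13 and 14 concrete, here is a minimal sketch of chunk-by-chunk hash verification in Python. The file names are hypothetical, and SHA-256 stands in for whatever cryptographic algorithm a real creator or ledger-based service might actually use.

```python
# Hedged sketch of hash-based video verification (items 13 and 14 above).
# A creator publishes digests of fixed-size chunks of the file; a viewer
# recomputes them, and any edit changes at least one digest.
import hashlib

def chunk_hashes(path: str, chunk_size: int = 1 << 20) -> list[str]:
    """Return a SHA-256 digest for each fixed-size chunk of a video file."""
    digests = []
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digests.append(hashlib.sha256(chunk).hexdigest())
    return digests

# The creator publishes these digests (or registers them in an append-only
# ledger, as in item 14); a viewer recomputes them on the copy they received.
published = chunk_hashes("original.mp4")    # hypothetical file names
received = chunk_hashes("downloaded.mp4")
print("authentic" if published == received else "possibly manipulated")
```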
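And for item 15, here is a toy version of the frame comparison that underlies reverse image search, using a simple “average hash.” The frame file names are placeholders, the Pillow imaging library is assumed to be installed, and real search services use far more robust features than this.

```python
# Toy perceptual-hash comparison of two video frames (item 15 above).
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """64-bit perceptual hash: bit i is 1 if pixel i is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A small distance suggests the suspect frame is a near-copy of the original;
# a large one suggests the frames come from different sources.
dist = hamming(average_hash("original_frame.png"),   # hypothetical file names
               average_hash("suspect_frame.png"))
print(f"hamming distance: {dist} ({'near-identical' if dist <= 5 else 'different'})")
```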