AI Joe Rogan promotes libido booster for men in ‘illegal’ deepfake video
Joe Rogan is known to read a few sponsored ads at the beginning of his podcast, but a video of him promoting a libido booster for men is a deepfake – and people fear this is the start of new scams and a wave of misinformation.
The ‘eerily real’ clip shows Rogan discussing the brand Alpha Grind with guest Professor Andrew D. Huberman on The Joe Rogan Experience podcast, stating the product is all over TikTok and is available for purchase on Amazon.
The 28-second video does look realistic, but some segments reveal it was created by artificial intelligence – parts of the commentary jump abruptly instead of flowing naturally.
Huberman responded to the video on Twitter, saying: ‘They created a false conversation we never had. We were talking about something very different.’
The deepfake has sparked an uproar on Twitter, with many users noting that it is illegal to recreate someone with artificial intelligence to promote a product.
One user, amazed by the deepfake ad, said: ‘Moderation for deepfakes will become more prevalent within the advertising realm soon. Bullish on advertisement monitoring software.’
The clip shows Rogan and Huberman talking about the male enhancer that claims to increase testosterone, noting it ‘increases the size and makes a difference down there.’
The video pans to Amazon to show users where they can find Alpha Grind – and the clip shows a 15 percent off coupon for the enhancer.
DailyMail.com has contacted Rogan and Huberman for comment.
Jimmy Farley shared the video on Twitter, noting it was his first time seeing a deepfake ad on TikTok.
‘How tf is this legal?’ he wrote in the tweet.
Rob Freund, a lawyer who represents brands, chimed in to say that the video is not legal.
‘Your right of publicity protects against use of your name or likeness in an ad without your permission,’ Freund shared in a tweet.
‘And at least in California, you don’t have to be a model or celebrity – everyone has a statutory right to recover for violations of their publicity rights. Separately, under Section 43(a) of the Lanham Act, false endorsement occurs when a person’s identity is connected with a product or service in such a way that consumers are likely to be misled about that person’s sponsorship or approval of the product or service.
‘Here, Huberman’s and Rogan’s likenesses are used to endorse a product that they don’t actually endorse, so both publicity and false endorsement issues are at play here.’
The deepfake of Rogan is just one of many released to the masses – one in 2022 showed Meta CEO Mark Zuckerberg thanking Democrats for their ‘service and inaction’ on antitrust legislation.
The eerie, convincing clip is the work of advocacy group Demand Progress Action, which used deepfake technology to turn an actor into Zuckerberg – who thanks Democratic leaders Nancy Pelosi and Chuck Schumer for holding up two significant pieces of antitrust legislation this year.
‘Over the past five years, Congress has held over 30 hearings designed to hold Big Tech accountable,’ fake Zuckerberg says in the ad, which the liberal group plans to use for television ads in New York and Washington, D.C.
‘Sometimes you land a punch.’
And in March 2021, deepfake Tom Cruise took over TikTok.
An account dubbed ‘deeptomcruise’ appeared on the app, featuring a number of videos depicting Cruise doing a magic trick, playing golf and reminiscing about the time he met the former President of the Soviet Union.
While those videos are for entertainment, deepfakes also pose security threats.
Dr Tim Stevens, director of the Cyber Security Research Group at King’s College London, said deepfake AI – which can create hyper-realistic images and videos of people – had the potential to undermine democratic institutions and national security.
Stevens said states like Russia could exploit the widespread availability of these tools to ‘troll’ target populations to achieve foreign policy objectives and ‘undermine’ the national security of countries.
He added: ‘The potential is there for AIs and deepfakes to affect national security.
‘Not at the high level of defence and interstate warfare but in the general undermining of trust in democratic institutions and the media.
‘They could be exploited by autocracies like Russia to decrease the level of trust in those institutions and organizations.’