YouTube videos made with AI are spreading malware
YouTube has seen a recent surge in videos whose descriptions carry links to infostealer malware, with many using AI-generated personas to trick viewers into trusting them.
Cyber intelligence firm CloudSEK reports that, since November 2022, there has been a massive 200-300% increase in content uploaded to the video hosting website that dupes viewers into installing well-known malware such as Vidar, RedLine and Raccoon.
The videos pose as tutorials showing how to download pirated copies of popular paid-for design software, such as Adobe Photoshop, Premiere Pro, Autodesk 3ds Max, and AutoCAD.
Appearing trustworthy
The tutorial videos have grown in sophistication: early examples were simple screen recordings or audio-only walkthroughs, but many now use AI to generate a realistic depiction of a person guiding the viewer through the process, all in an attempt to appear more trustworthy.
CloudSEK notes that AI-generated videos in general are on the rise, used for legitimate educational, recruitment and promotional purposes, but now for nefarious ends as well.
Infostealers, as the name suggests, infiltrate a user’s system and steal valuable personal information, such as passwords and payment details, which is then uploaded to the threat actor’s server. They spread via malicious downloads and links, such as those placed in video descriptions in this campaign.
CloudSEK notes that, with 2.5 billion monthly users, YouTube is a prime target for threat actors who, in order to evade the platform’s automated content review process, work to deceive its algorithm in various ways.
These include using region-specific tags, adding fake comments to make videos seem legitimate, and simply flooding the platform with uploads to compensate for any videos that are removed or banned. CloudSEK found that five to ten of these malicious videos are uploaded every hour.
To optimize the videos for search, the uploaders also add hidden links and random keywords in various languages, so that the YouTube algorithm ends up recommending them.
And to mask the malicious nature of the links themselves, the attackers use link-shortening services such as bit.ly, as well as links to file-hosting services such as MediaFire.
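One simple precaution, beyond what CloudSEK's report spells out, is to expand a shortened link and inspect its final destination before trusting it. The sketch below does this with Python's requests library; the bit.ly URL is a placeholder, not a real link from the campaign.

```python
# Resolve a shortened URL to its final destination without downloading the body.
# A minimal sketch; some servers reject HEAD requests, in which case a GET with
# stream=True is a common fallback.
import requests

def resolve_final_url(short_url: str, timeout: float = 5.0) -> str:
    """Follow redirects and return the URL the link ultimately points to."""
    resp = requests.head(short_url, allow_redirects=True, timeout=timeout)
    return resp.url

# Hypothetical usage: the promised "free Photoshop" link may really point
# to a file host serving an infostealer.
# print(resolve_final_url("https://bit.ly/example"))
```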
“The threat of infostealers is rapidly evolving and becoming more sophisticated,” said CloudSEK researcher Pavan Karthick. “In a concerning trend, these threat actors are now utilizing AI-generated videos to amplify their reach, and YouTube has become a convenient platform for their distribution.”
CloudSEK suggests that “traditional string-based rules will prove ineffective against malware that dynamically generates strings and/or uses encrypted strings.”
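To illustrate CloudSEK's point, here is a minimal Python sketch, using purely hypothetical indicators, of why a literal string match catches a plainly embedded indicator but misses the same indicator once it is stored encoded on disk and only decoded at runtime.

```python
# Hypothetical indicator a string-based rule might look for; not a real domain.
SIGNATURE = b"stealer.example.com"

def static_scan(data: bytes) -> bool:
    """Naive string-based rule: flag the sample if the literal bytes appear."""
    return SIGNATURE in data

# Sample 1: the indicator sits in the file in plain text -- the rule fires.
plain_sample = b"...config..." + SIGNATURE + b"...payload..."
assert static_scan(plain_sample)

# Sample 2: the same indicator is XOR-encoded on disk and would only be
# decoded in memory at runtime -- the literal bytes never appear in the file.
KEY = 0x5A
encoded = bytes(b ^ KEY for b in SIGNATURE)
obfuscated_sample = b"...config..." + encoded + b"...payload..."
assert not static_scan(obfuscated_sample)  # the static rule misses it
```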
Instead of relying on static signatures, it recommends that firms adopt a more hands-on approach, closely monitoring threat actors’ tactics and techniques in order to identify threats correctly.
In addition, CloudSEK suggests running awareness campaigns that share simple advice, such as refraining from clicking on unknown links and using multi-factor authentication to shore up accounts, ideally with an authenticator app.
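For context on why an authenticator app is the preferred option, here is a minimal sketch of the time-based one-time password (TOTP) scheme such apps implement, using the third-party pyotp library (pip install pyotp); the secret shown is generated on the spot for illustration, not drawn from CloudSEK's report.

```python
import pyotp

# In practice the service provisions this secret (often via a QR code);
# here we generate a throwaway one for demonstration.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()          # six-digit code that rotates every 30 seconds
print(code)
print(totp.verify(code))   # True: the server derives the same code from
                           # the shared secret and the current time window
```

Because each code expires within seconds, a stolen password alone is no longer enough for an infostealer's operator to take over the account.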