Creating fake social media accounts to deceive people is hardly a new tactic, but there’s something sinister about this new campaign that makes it stand out from the crowd.
An in-depth analysis posted on the KrebsOnSecurity blog claims cybercriminals have used artificial intelligence (AI) to create profile pictures of non-existent people and pair them with job descriptions stolen from real people on LinkedIn.
That way, they create fake profiles that are nearly impossible for most people to identify as fake.
Numerous usage scenarios
Users have noticed a growing number of suspicious accounts trying to join various invite-only LinkedIn groups. Group owners and admins often only realize something is wrong after receiving dozens of such requests at once and noticing that nearly all the profile pictures look alike (same angle, same face size, same smile, and so on).
The researchers say they’ve reached out to LinkedIn’s customer support, but so far the platform has found no silver bullet. One way it has addressed the problem is to ask certain companies for a full employee list and then ban every account that falsely claims to work there.
Beyond being unable to determine who is behind this campaign of fake professionals, the researchers are also struggling to understand its purpose. Most of the accounts appear dormant: they don’t post content and don’t respond to messages.
Cybersecurity firm Mandiant believes that hackers are using these accounts to gain roles in cryptocurrency companies as the first stage in a multi-stage attack that aims to drain the company’s funds.
Others think this is part of the classic romance scam, in which victims are lured by attractive photos into investing in fake crypto projects and trading platforms.
Furthermore, there is evidence that groups such as Lazarus use fake LinkedIn profiles to distribute infostealers and other malware to job seekers, especially in the cryptocurrency industry. Finally, some believe the bots could later be used to amplify fake news.
Commenting on KrebsOnSecurity’s research, LinkedIn said it is considering domain verification to address the growing problem: “This is an ongoing challenge and we are constantly improving our systems to stop fakes before they come online,” LinkedIn said in a written statement.
“We stop the vast majority of fraudulent activity we detect in our community – about 96% of fake accounts and about 99.1% of spam and scams. We are also exploring new ways to protect our members, such as extending email domain verification. Our community is about authentic people having meaningful conversations and always increasing the legitimacy and quality of our community.”
Via: KrebsOnSecurity