Bing AI chat messages are being hijacked by ads pushing malware
Malvertising has found its way into Bing’s chatbot/search engine. Cybersecurity researchers at Malwarebytes recently observed a malicious ad being served as part of the ChatGPT-powered response to a search query.
Malvertising is a simple practice: hackers trick ad networks into serving ads that look legitimate but are actually malicious. It is a game of impersonation, in which the advertisements, the sites they link to, and the content offered there all appear to be something they are not (usually software, streaming services, or cryptocurrency-related tools).
Until now, malvertising has mostly been seen on the usual search engines, Google, Bing, and the like, despite those companies making gigantic efforts to keep their search results clean, for obvious reasons. However, the rise of ChatGPT – especially since its integration into Bing – has changed things.
New dog, old tricks
Microsoft integrated ChatGPT into Bing earlier this year and even started monetizing it a few months ago, much in the same way other search engines monetize their digital real estate. When a user types a search query, they get a result combined with a few sponsored links, clearly labeled as such. Bing Chat, in all its AI-powered glory, is no different.
In this particular case, when Malwarebytes researchers asked Bing Chat for the Advanced IP Scanner tool, they were given a link that ultimately redirected them to “advenced-ip-scanner(.)com” (note the “e” instead of the “a”), where victims would download an installer. That installer was meant to retrieve the final payload, but it appears to no longer exist, as the researchers were unable to obtain the actual malware.
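Lookalike domains like this one are typically a single character away from the name they impersonate, which is exactly what edit-distance checks catch. Below is a minimal, illustrative sketch of how a defender might flag such typosquats; the allow-list, the threshold of 2, and the function names are all assumptions for the example, not part of any real product.

```python
# Illustrative sketch: flag domains that sit within a small edit
# distance of a known-good domain. The allow-list and threshold
# are assumptions chosen for this example.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

# Hypothetical allow-list of legitimate domains.
KNOWN_GOOD = ["advanced-ip-scanner.com"]

def looks_like_typosquat(domain: str, max_distance: int = 2) -> bool:
    """True if the domain is close to, but not exactly, a known-good one."""
    return any(0 < levenshtein(domain, good) <= max_distance
               for good in KNOWN_GOOD)

print(looks_like_typosquat("advenced-ip-scanner.com"))  # True: one letter off
print(looks_like_typosquat("advanced-ip-scanner.com"))  # False: exact match
```

A real deployment would also need to handle homoglyphs (e.g. Cyrillic lookalike characters) and added hyphens or TLD swaps, which plain edit distance does not fully capture.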
“Threat actors continue to use search ads to redirect users to malicious sites hosting malware,” the researchers warned. “While Bing Chat is a different search experience, it will serve some of the same ads as a traditional Bing search.”
Via BleepingComputer