More security flaws have been found in popular AI chatbots – and they could let hackers get at your secrets

If a hacker can monitor the internet traffic between a target and the target's cloud-based AI assistant, they can easily pick up the conversation – and if that conversation contains sensitive information, it ends up in the attacker's hands as well.

This is according to a new analysis by researchers at the Offensive AI Research Lab at Ben-Gurion University in Israel, who found a way to mount side-channel attacks against users of every major Large Language Model (LLM) assistant except Google Gemini.
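The weakness the researchers describe is not broken encryption but a side channel: most assistants stream their replies one token at a time, each in its own packet, so the size of each encrypted packet betrays the length of the token inside it. The sketch below illustrates that idea under simplified assumptions – the packet sizes, the fixed overhead value, and the helper function are all hypothetical, and a real attack would capture traffic with a standard sniffing tool and account for the specific cipher suite in use.

```python
# Minimal sketch of the token-length side channel described above.
# All values are illustrative assumptions: real record overhead varies
# by cipher suite and framing, and real captures would come from a
# packet-sniffing tool rather than a hard-coded list.

# Hypothetical sizes (in bytes) of successive encrypted records observed
# while the assistant streamed its reply, one token per record.
observed_record_sizes = [95, 97, 94, 99, 96, 101, 93]

# Assumed fixed per-record overhead (TLS header, auth tag, JSON framing).
ASSUMED_OVERHEAD = 92

def token_length_sequence(record_sizes, overhead):
    """Recover the approximate character length of each streamed token
    by stripping the constant framing overhead from each record."""
    return [size - overhead for size in record_sizes]

lengths = token_length_sequence(observed_record_sizes, ASSUMED_OVERHEAD)
print(lengths)  # [3, 5, 2, 7, 4, 9, 1]
# The researchers report feeding sequences like this to a language model
# trained to reconstruct plausible plaintext from token-length patterns.
```

Note that nothing in this sketch decrypts anything – it only measures ciphertext sizes, which is why the attack is available to anyone passively positioned on the network path, such as on an open Wi-Fi network or a compromised router.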