More security flaws are being found in popular AI chatbots – and they could mean hackers can get to all your secrets
If a hacker can monitor the internet traffic between a target and the target's cloud-based AI assistant, they can easily pick up the conversation. And if that conversation contains sensitive information, it ends up in the attacker's hands as well.
This is according to a new analysis by researchers at the Offensive AI Research Lab at Ben-Gurion University in Israel, who have found a way to mount side-channel attacks against users of every major Large Language Model (LLM) assistant except Google Gemini.
That includes OpenAI’s powerhouse, ChatGPT.
The “padding” technique
“Currently, anyone can read private chats sent by ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab, told Ars Technica.
“This includes malicious actors on the same Wi-Fi or LAN network as a client (for example, the same coffee shop), or even a malicious actor on the Internet: anyone who can observe the traffic. The attack is passive and can occur without the knowledge of OpenAI or their client. OpenAI encrypts their traffic to prevent these types of eavesdropping attacks, but our research shows that the way OpenAI uses encryption is flawed and thus the contents of the messages are exposed.”
Basically, in an effort to make the tool respond as quickly as possible, the developers opened the door to attackers who can pick up the content. When the chatbot starts sending back its response, it doesn’t send it all at once. It streams small fragments, in the form of tokens, to speed up the process. These tokens may be encrypted, but because they are sent one at a time, attackers can analyze them as they are generated.
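To see why streaming matters, here is a minimal sketch (in Python, with made-up numbers) of what a passive observer gets for free: modern TLS stream ciphers preserve plaintext length up to a fixed per-record overhead, so each packet's size reveals the length of the token inside it. The packet sizes and overhead value below are illustrative assumptions, not measurements of any real service's traffic.

```python
# Illustrative sketch of the side channel, not the researchers' actual tooling.
# Assumption: ciphertext length = plaintext length + a fixed per-record overhead,
# so each streamed record's size leaks the length of the token it carries.

# Hypothetical capture: sizes (in bytes) of successive encrypted records
# observed on the wire, one record per streamed token.
observed_packet_sizes = [29, 27, 31, 28, 33]

TLS_RECORD_OVERHEAD = 22  # assumed fixed framing/auth-tag overhead per record

# Recover the sequence of plaintext token lengths from packet sizes alone.
token_lengths = [size - TLS_RECORD_OVERHEAD for size in observed_packet_sizes]
print(token_lengths)  # [7, 5, 9, 6, 11] -- the fingerprint an attacker analyzes
```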
The researchers analyzed the size of the tokens, the order in which they arrive, and more. The analysis and subsequent refinement yielded decoded responses that were virtually identical to those seen by the victim.
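A toy illustration of the matching idea, under heavy simplifying assumptions: here a "token" is just a word (a naive stand-in for a real LLM tokenizer), and candidate responses are kept only if their token-length sequence matches the fingerprint recovered from the traffic. The researchers' actual refinement pipeline is far more sophisticated than this sketch.

```python
# Toy matching step: filter candidate responses by whether their
# token-length sequence matches the one recovered from packet sizes.

def length_fingerprint(text: str) -> list[int]:
    """Naive stand-in for a real tokenizer: one token per word."""
    return [len(word) for word in text.split()]

observed = [3, 6, 2, 7]  # hypothetical lengths recovered from the traffic

candidates = [
    "the secret is exposed",
    "you should see a doctor",
    "all quiet on the wire",
]

matches = [c for c in candidates if length_fingerprint(c) == observed]
print(matches)  # ['the secret is exposed']
```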
The researchers suggested developers do one of two things: either stop sending tokens one at a time, or pad them all to the length of the largest possible packet, making analysis impossible. This technique, known as ‘padding’, was adopted by OpenAI and Cloudflare.
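A minimal sketch of the padding idea, assuming a fixed pad size of 32 bytes (a made-up value; real deployments differ): once every chunk is the same length on the wire, packet sizes carry no information about the tokens inside.

```python
# Sketch of the padding mitigation: every streamed chunk is padded to the
# same fixed size before encryption, so packet lengths no longer reveal
# token lengths. PAD_TO and the padding byte are assumptions, not the
# actual OpenAI or Cloudflare implementation.

PAD_TO = 32  # pad every token chunk to the largest expected packet size

def pad_token(token: str) -> bytes:
    data = token.encode("utf-8")
    if len(data) > PAD_TO:
        raise ValueError("token exceeds padding size")
    return data + b"\x00" * (PAD_TO - len(data))  # uniform length on the wire

for token in ["Hello", ",", " world", "!"]:
    print(len(pad_token(token)))  # always 32 -- the side channel is closed
```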