Chatbot vs chatbot – researchers train AI chatbots to hack each other, and they can even do it automatically

28 countries agree to develop AI safely and responsibly at

Normally, AI chatbots have security measures in place to prevent them from being used maliciously. This may include banning certain words or phrases or limiting answers to certain questions.

However, researchers now claim that they have been able to train AI chatbots to 'jailbreak' each other, allowing them to bypass protections and answer malicious questions.