Microsoft’s own ‘evil’ AI red team ‘attacked’ over 100 generative AI products: here’s what they learned


  • Microsoft created its AI Red Team in 2018, anticipating the rise of AI
  • A red team plays the role of the adversary, probing systems the way a hostile attacker would
  • The team’s latest whitepaper aims to address common vulnerabilities in AI systems and LLMs

For the past seven years, Microsoft has been tackling risks in artificial intelligence systems through its dedicated AI “red team.”

Created to anticipate and address the growing challenges of advanced AI systems, this team takes on the role of threat actors, with the ultimate goal of identifying vulnerabilities before they can be exploited in the real world.