Microsoft’s new AI tool aims to find and correct AI-generated text that is factually incorrect

Microsoft has unveiled a new tool that can detect and revise AI-generated content that is factually incorrect. These factual errors are commonly known as hallucinations.

The new Correction feature builds on Microsoft’s existing “groundedness detection,” which cross-references AI-generated text against a supporting document supplied by the user. The tool will be available as part of Microsoft’s Azure AI Content Safety API and can be used with any text-generating AI model, such as OpenAI’s GPT-4o and Meta’s Llama.
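To make the workflow concrete, the sketch below assembles the kind of request a developer might send to a groundedness-detection endpoint: the model’s output plus the user-supplied grounding document. The endpoint path, API version, header names, and payload fields here are assumptions modeled on Azure’s public preview documentation, not a confirmed specification for the Correction feature.

```python
import json

# Assumed API version string; the real preview version may differ.
API_VERSION = "2024-09-15-preview"

def build_groundedness_request(endpoint, api_key, ai_text, grounding_sources,
                               want_correction=True):
    """Assemble an HTTP request asking the service to check `ai_text`
    against `grounding_sources` and optionally return a corrected rewrite.
    All field names below are illustrative assumptions."""
    url = (f"{endpoint}/contentsafety/text:detectGroundedness"
           f"?api-version={API_VERSION}")
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,  # standard Azure key header
        "Content-Type": "application/json",
    }
    body = {
        "domain": "Generic",
        "task": "Summarization",
        "text": ai_text,                        # model output to verify
        "groundingSources": grounding_sources,  # trusted reference documents
        "correction": want_correction,          # ask for a fix, not just flags
    }
    return url, headers, json.dumps(body)

url, headers, body = build_groundedness_request(
    "https://my-resource.cognitiveservices.azure.com",
    "<API_KEY>",
    "The Eiffel Tower was completed in 1899.",
    ["The Eiffel Tower was completed in 1889 for the World's Fair."],
)
# Send with any HTTP client, e.g. requests.post(url, headers=headers, data=body).
```

The key idea is that detection is relative to the grounding sources: the service flags (and, with correction enabled, rewrites) only claims unsupported by the supplied documents, so the quality of the check depends entirely on the reference material the user provides.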