Microsoft on Tuesday launched a new artificial intelligence (AI) capability that identifies and corrects instances where an AI model generates incorrect information. Called "Correction," the feature is integrated into Azure AI Content Safety's groundedness detection system. Since it is only available through Azure, it is likely aimed at the tech giant's enterprise customers. The company is also working on other methods to reduce AI hallucinations. The feature can additionally display an explanation of why a segment of text was flagged as incorrect information.
In a blog post, the Redmond-based tech giant detailed a new feature it claims combats AI hallucinations, a phenomenon in which an AI responds to a query with incorrect information and fails to recognize its inaccuracy.
The feature is available through Microsoft's Azure services. The Azure AI Content Safety system includes a tool called groundedness detection, which identifies whether a generated response is grounded in reality. While that tool detects hallucinations in several different ways, the Correction feature works in a specific one.
For Correction to work, users must connect Azure to the grounding documents used in document-based question-answering, summarization, and retrieval-augmented generation (RAG) scenarios. Once connected, users can enable the feature. After that, whenever an ungrounded or incorrect sentence is generated, the feature triggers a correction request.
To put it simply, grounding documents can be understood as the guidelines an AI system must follow while generating an answer. They can be the source material for a query or a larger database.
The feature then evaluates the flagged statement against the grounding document; if it is found to contain incorrect information, it is filtered out. If the content is consistent with the grounding document, however, the feature can rewrite the sentence to ensure it is not misinterpreted.
Additionally, users have the option to enable reasoning when first setting up the feature. With reasoning enabled, the feature adds an explanation of why it judged the information to be incorrect and in need of correction.
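The workflow described above — a generated answer, the grounding documents to check it against, and optional reasoning and correction flags — can be sketched as a request payload. This is an illustrative sketch only: the field names (`text`, `groundingSources`, `reasoning`, `correction`) and the `build_request` helper are assumptions for clarity, not the exact Azure AI Content Safety API.

```python
import json

def build_request(answer: str, sources: list[str], correct: bool = True) -> dict:
    """Assemble a hypothetical groundedness-check body: the model's
    generated answer, the grounding documents it must stay faithful to,
    and flags asking the service to explain and rewrite ungrounded text.
    All field names here are illustrative assumptions."""
    return {
        "text": answer,               # the generated response to verify
        "groundingSources": sources,  # documents the answer is checked against
        "reasoning": True,            # explain why a sentence was flagged
        "correction": correct,        # rewrite ungrounded sentences
    }

body = build_request(
    "The report was published in 2021.",
    ["The annual report was released in March 2023."],
)
print(json.dumps(body, indent=2))
```

In this sketch the ungrounded date in the answer conflicts with the grounding source, which is exactly the kind of sentence the service would flag and, with correction enabled, rewrite.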
A company spokesperson told The Verge that the Correction feature uses small language models (SLMs) and large language models (LLMs) to match outputs to grounding documents. "It's important to note that groundedness detection does not solve 'accuracy,' but helps match generative AI outputs to grounded documents," the publication quoted the spokesperson as saying.