Amazon Web Services (AWS) Launches Automated Reasoning Checks in Preview to Combat AI Hallucinations


Amazon Web Services (AWS) launched a new service at its ongoing re:Invent conference that is designed to help enterprises reduce instances of artificial intelligence (AI) hallucination. Launched on Monday, the Automated Reasoning checks tool is available in preview and can be found within Amazon Bedrock Guardrails. The company claimed that the tool mathematically validates the accuracy of responses generated by large language models (LLMs) and prevents factual errors caused by hallucinations. It is similar to the Grounding with Google Search feature, which is available in both the Gemini API and Google AI Studio.

AWS Automated Reasoning Checks

AI models often generate responses that are incorrect, misleading, or fictional. This is known as AI hallucination, and it undermines the credibility of AI models, especially in enterprise settings. While companies can partially mitigate the issue by training an AI system on high-quality organisational data, flaws in the pre-training data and model architecture can still cause it to hallucinate.

AWS detailed its solution to AI hallucination in a blog post. The Automated Reasoning checks tool has been introduced as a new safeguard and added in preview to Amazon Bedrock Guardrails. Amazon explained that it uses “mathematical, logic-based algorithmic verification and reasoning processes” to verify the information generated by LLMs.

The process is straightforward. Users upload documents that describe their organisation's rules to the Amazon Bedrock console. Bedrock automatically analyses these documents and creates an initial Automated Reasoning policy, which converts the natural-language rules into a mathematical format.

Once that is done, users can move to the Automated Reasoning menu under the Safeguards section. There, a new policy can be created, and users can attach existing documents containing the information the AI should learn. Users can also manually set processing parameters and the policy's intent. Sample questions and answers can be added as well, to help the AI understand a typical interaction.
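In the preview, the policy setup described above is driven through the Bedrock console, but guardrails themselves can also be defined programmatically. Below is a minimal sketch of a CreateGuardrail request payload as it would be passed to boto3; the guardrail name, description, thresholds, and messaging strings are illustrative placeholders, and attaching an Automated Reasoning policy is assumed to remain a console step during the preview.

```python
# Sketch of an Amazon Bedrock CreateGuardrail request payload.
# Name, description, thresholds, and messages are placeholders.
create_guardrail_request = {
    "name": "org-policy-guardrail",  # hypothetical guardrail name
    "description": "Guardrail for validating chatbot answers against org rules",
    # Contextual grounding checks: an existing Bedrock Guardrails safeguard
    # that scores whether a response is grounded in the supplied source text.
    "contextualGroundingPolicyConfig": {
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.85},
            {"type": "RELEVANCE", "threshold": 0.5},
        ]
    },
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputsMessaging": "Sorry, I can't provide that answer.",
}

# With AWS credentials configured, the call would look like:
# import boto3
# bedrock = boto3.client("bedrock", region_name="us-west-2")
# response = bedrock.create_guardrail(**create_guardrail_request)
```

The payload is built as a plain dictionary so the guardrail definition can be reviewed or version-controlled before being sent to the service.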

Once all of this is done, the AI is ready to be deployed, and the Automated Reasoning checks tool will automatically flag any incorrect responses the chatbot provides. Currently, the tool is available in preview only in the US West (Oregon) AWS Region. The company plans to roll it out to other regions soon.
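At deployment time, a Bedrock guardrail is attached to each model invocation, so its checks run on every response. Here is a hedged sketch of a Converse API request as it would be passed to boto3; the guardrail identifier and model ID are placeholders, not values from the article.

```python
# Sketch of attaching a guardrail to a Bedrock Converse call.
# guardrailIdentifier and modelId below are placeholders.
converse_request = {
    "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    "messages": [
        {"role": "user", "content": [{"text": "What is our refund policy?"}]}
    ],
    # Referencing the guardrail makes Bedrock apply its safeguards
    # to both the input and the generated output.
    "guardrailConfig": {
        "guardrailIdentifier": "gr-EXAMPLE123",  # placeholder guardrail ID
        "guardrailVersion": "DRAFT",
    },
}

# With credentials configured (the preview runs only in US West (Oregon)):
# import boto3
# runtime = boto3.client("bedrock-runtime", region_name="us-west-2")
# response = runtime.converse(**converse_request)
# print(response["output"]["message"]["content"][0]["text"])
```

Because the guardrail is referenced per request, the same deployed model can be called with or without the checks, which is convenient while the feature is still in preview.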

