Google explains bizarre AI Overviews responses, reveals measures to improve feature

Google on Thursday (May 30) released an explanation for the debacle caused by its artificial intelligence (AI) search tool, AI Overviews, which generated incorrect answers to multiple queries. The AI search feature was unveiled at Google I/O 2024 on May 14 but reportedly faced scrutiny soon after for providing bizarre answers to search queries. In a detailed explanation, Google revealed the likely causes of the problem and the steps taken to resolve it.

Google’s answer

In a blog post, Google began by explaining how the AI Overviews feature works differently from other chatbots and large language models (LLMs). According to the company, AI Overviews do not simply generate results based on training data. Instead, the feature is integrated into Google's "core web ranking systems" and is intended to perform traditional "search" tasks from the index. Google also claimed that its AI-powered search tool "generally does not hallucinate."

“Because accuracy is paramount in Search, AI Overviews are built to show only information that is backed up by top web results,” the company said.

So what happened? According to Google, one of the reasons was the inability of the AI Overviews feature to filter out satirical and nonsensical content. Referring to the search query “How many rocks should I eat”, which returned results suggesting a person consume one rock a day, Google said that before the incident “virtually no one asked that question”.

This, according to the company, created a “data gap” where high-quality content on the topic is limited. What little content did address the query was satirical. “So when someone asked that question in Search, an AI Overview appeared that faithfully linked to one of the few websites that addressed that question,” Google explained.

The company also admitted that AI Overviews drew on forum posts, which, while “a great source of authentic first-hand information”, can lead to “less useful advice”, such as using glue to make cheese stick to pizza. In other cases, the search feature misinterpreted the language on web pages, leading to incorrect answers.

Google said it “worked quickly to address these issues, either through improvements to our algorithms or through processes in place to remove responses that do not comply with our policies.”

Steps taken to improve AI Overviews

Google has taken the following steps to improve the responses generated by its AI Overviews feature:

  1. It has built better detection mechanisms for nonsensical queries, limiting the inclusion of satirical and nonsensical content.
  2. The company says it has also updated systems to limit the use of user-generated content in responses that could offer misleading advice.
  3. AI Overviews will not appear for hard news topics, where “freshness and factuality” are key.

Google also said it monitors feedback and external reports for the small number of AI Overviews responses that violate its content policies. It said the chances of such a violation were “less than one in every 7 million unique queries”.

