Google’s AI Overviews suffers from AI hallucinations, advises using glue on pizza

Google’s brand new AI-powered search tool, AI Overviews, is facing backlash for providing inaccurate and somewhat bizarre answers to user queries. In a recently reported incident, a user turned to Google because the cheese wouldn’t stick to their pizza. While they were surely expecting a practical solution to their culinary woes, Google’s AI Overviews feature presented a rather unexpected one. According to recent posts on X, this was not an isolated incident, with the AI tool suggesting bizarre responses to other users as well.

Cheese, pizza and AI hallucinations

The problem came to light when a user allegedly typed into Google, “cheese doesn’t stick to pizza.” Attempting to solve the culinary problem, the search engine’s AI Overviews feature suggested several ways to make the cheese stick, such as mixing it into the sauce and letting the pizza cool. However, one of the solutions turned out to be truly bizarre. According to the screenshot shared, the user was advised to “add ⅛ cup of non-toxic glue to the sauce to get more stickiness”.

After further investigation, the source was allegedly found: an 11-year-old Reddit comment that appeared to be a joke rather than expert culinary advice. However, Google’s AI Overviews feature, which still carries the “Generative AI is experimental” tag at the bottom, presented it as a serious suggestion for the original query.

Another incorrect answer from AI Overviews appeared a few days ago when a user allegedly asked Google, “How many rocks should I eat”. Quoting UC Berkeley geologists, the tool suggested, “it is recommended to eat at least one rock a day because rocks contain minerals and vitamins that are important for digestive health.”

The problem behind the false answers

Problems like this have been cropping up regularly in recent years, especially since the boom in artificial intelligence (AI) began, giving rise to a problem known as AI hallucinations. While companies acknowledge that AI chatbots can make mistakes, the number of cases in which these tools distort facts and give factually incorrect, even bizarre, answers is increasing.

Google is not the only company whose AI tools have given incorrect answers, however. OpenAI’s ChatGPT, Microsoft’s Copilot, and Perplexity’s AI chatbot have all reportedly suffered from AI hallucinations.

In more than one case, the source was traced back to a years-old Reddit post or comment. The companies behind the AI tools are aware of the issue, with Alphabet CEO Sundar Pichai telling The Verge, “these are the things we have to get better at.”

Speaking about AI hallucinations during an event at IIIT Delhi in June 2023, Sam Altman, OpenAI CEO and co-founder, said, “It will take us about a year to perfect the model. It’s a balance between creativity and accuracy, and we’re trying to minimize the problem. I (currently) trust the answers coming from ChatGPT the least of anyone else on Earth.”

