Google just broke search

AI Overviews being shown in Google Search. Google

Google AI Overviews were announced a couple of weeks ago at Google I/O, and they’ve already proven to be rather controversial. They aim to provide high-quality answers to your questions, summarized from the web, but a series of recent X (formerly Twitter) threads shows how big of a fail the feature already is.

The response that went viral involves a very dubious pizza recipe. As reported, when prompting Google about “cheese not sticking to pizza,” the AI Overview suggests adding nontoxic glue to your pizza to keep the cheese from sliding off. The exact words the AI Overview gave are as follows: “You can also add about 1/8 cup of non-toxic glue to the sauce to give it more tackiness.” Where did the AI Overview get that information? An 11-year-old Reddit comment from this thread that was clearly a joke.

https://t.co/W09ssjvOkJ pic.twitter.com/6ALCbz6EjK

— SG-r01 (@heavenrend) May 22, 2024

The search “cheese not sticking to pizza” generated this unexpected and funny response, and the internet is having a field day with it. The answer has spread far beyond X, with someone even making a glue pizza just to prove the point.

It’s worth noting that we’ve seen a massive uptick in Reddit and forum posts showing up higher in Google searches, and that Reddit recently signed a $60 million deal to let Google train its AI models on Reddit content. It’s not hard to connect the dots on how this might have happened.

It’s not just Reddit, though. Another AI Overview posted online answers the question “how many rocks should I eat each day” by pulling information directly from The Onion.

her pic.twitter.com/FGbvO923gk

— Tim Onion (@oneunderscore__) May 23, 2024

Part of the problem is the absolute conviction with which AI Overviews deliver their answers. The feature doesn’t surface a link to an Onion article and let you judge for yourself. Instead, it treats every source as if it were Wikipedia and delivers the information with complete confidence.

Google claims that its AI Overviews give users high-quality information and that such errors are uncommon. Here is the official response provided to Digital Trends by Google: “The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback. We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

It seems as if some of these AI Overviews will be hard to correct if they can’t be reproduced. The “broader improvements” will need to be the solution, and since Google says they’re already rolling out, hopefully we’ll start seeing better results soon. We’ll have to see whether Google responds to the situation beyond that statement. After receiving negative feedback about Gemini’s image generation earlier this year, Google had to apologize and pull the feature while it fixed the issues.

For now, though, these are both good reminders of how careful we need to be when trusting AI engines for information. Google AI Overviews started rolling out to everyone in the U.S. earlier this month, with more countries coming soon. But with answers like these, there may be more people reaching for a way to turn the feature off than Google expected.
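If you’re among those looking for an off switch, the most widely shared workaround is Google’s “Web” results filter, reachable by adding a udm=14 parameter to the search URL. Here’s a minimal sketch in Python of what that looks like; note that udm=14 is an observed behavior of Google Search rather than an officially documented setting.

from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    # Build a Google search URL that uses the "Web" filter (udm=14), which
    # returns plain web results without an AI Overview. The parameter is an
    # observed workaround, not a documented API.
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("cheese not sticking to pizza"))
# https://www.google.com/search?q=cheese+not+sticking+to+pizza&udm=14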

Judy Sanhz
Judy Sanhz is a Digital Trends computing writer covering all computing news. Loves all operating systems and devices.
Google Gemini is good, but this update could make it downright sci-fi
Google Gemini running on an Android phone.

Ever since seeing the "Welcome home, sir" scene in Iron Man 2, many of us have wanted a smart setup with a Jarvis-like assistant. While some may have hoped that Alexa would provide that kind of functionality, so far, the assistant is just too limited. That might change with the launch of Gemini 2.0 and Google's Project Jarvis, though.

In a sense, this new project is Jarvis. The system works by taking stills of your screen and interpreting the information on them, including text, images, and even sound. It can auto-fill forms or press buttons for you, too. The project was first hinted at during Google I/O 2024, and according to 9to5Google, it’s designed to automate web-based tasks. Jarvis is an AI agent with a narrower focus than a large language model like ChatGPT: an AI that demonstrates human-like powers of reasoning, planning, and memory.
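To make that description more concrete, here is a rough, hypothetical sketch of how such a capture-interpret-act loop could be structured. It is purely illustrative, not based on Google’s actual Project Jarvis code, and the Action type and function parameters are invented for the example.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    kind: str          # e.g. "click", "type", or "done"
    target: str = ""   # description of the on-screen element to act on
    text: str = ""     # text to enter when kind == "type"

def agent_loop(
    capture_screen: Callable[[], bytes],               # returns a still image of the screen
    plan_next_action: Callable[[bytes, str], Action],  # model reads the image plus the goal
    execute: Callable[[Action], None],                 # performs the click or typing
    goal: str,
    max_steps: int = 20,
) -> None:
    # Repeatedly screenshot the screen, ask a model for the next step, and act on it.
    for _ in range(max_steps):
        frame = capture_screen()
        action = plan_next_action(frame, goal)
        if action.kind == "done":
            break
        execute(action)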

Read more
One of the hottest AI apps just came to the Mac (and it’s not ChatGPT)
The Perplexity desktop app.

Perplexity announced Thursday the release of a new native app for Mac that will put its "answer engine" directly on the desktop, with no need for a web browser.

Currently available through the Apple App Store, the Perplexity desktop app promises a variety of features "exclusively for Mac." These include Pro Search, which is a "guided AI search for deeper exploration," the capability for both text and voice prompting, and "cited sources" for every answer.

Read more
Google’s AI detection tool is now available for anyone to try
Gemini running on the Google Pixel 9 Pro Fold.

Google announced via a post on X (formerly Twitter) on Wednesday that SynthID is now available to anybody who wants to try it. The authentication system for AI-generated content embeds imperceptible watermarks into generated images, video, and text, enabling users to verify whether a piece of content was made by humans or machines.

“We’re open-sourcing our SynthID Text watermarking tool,” the company wrote. “Available freely to developers and businesses, it will help them identify their AI-generated content.”
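For developers who want to try it, SynthID Text also ships with a Hugging Face Transformers integration. The snippet below is a minimal sketch based on that integration; the model checkpoint and watermarking keys are placeholders, so check the SynthID and Transformers documentation for exact usage.

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_name = "google/gemma-2-2b-it"  # placeholder: any causal language model checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The keys seed the watermark; real deployments keep them private. Values here are arbitrary.
watermark_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,
)

inputs = tokenizer("Write a short note about watermarking.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermark_config,  # embeds the SynthID watermark during sampling
    do_sample=True,
    max_new_tokens=100,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))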

Read more