Google has been taking heat for some of the inaccurate, funny, and downright weird answers that it’s been providing via AI Overviews in search.
AI Overviews are the AI-generated search results that Google started rolling out more broadly earlier this month, with mixed results — apparently, a user looking for help getting cheese to stick to their pizza was told to add glue (advice pulled from an old Reddit post), and someone else was told to eat “one small rock per day” (a line lifted from The Onion).
Don’t be disappointed if you don’t get those answers yourself, or if you can’t replicate other viral searches, as Google is working to remove inaccurate results — a company spokesperson said in a statement that the company is taking “swift action” and is “using these examples to develop broader improvements to our systems.”
“The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web,” the spokesperson said. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback.”
So yes, it’s probably safe to assume that these results will get better over time, and that some of the screenshots you’re seeing on social media were created for laughs.
But seeing all these AI search results made me wonder: What are they actually for? Even if everything were working perfectly, how would they be better than regular web search?
Clearly, Google is trying to bring users the answers they need without making them scroll through multiple web pages. In fact, the company wrote that in early tests of AI Overviews, “people use Search more, and are more satisfied with the results.”
But the idea of killing the “10 blue links” is an old one. And while Google has already made them less central, I think it would be premature to bury those blue links for good.
Let’s take a very self-serving search: “what is techcrunch” gave me a summary that’s mostly accurate, but weirdly padded, like a student trying to meet a page-count minimum, with traffic numbers that seemed to come from a Yale career website. Moving on to “how do i get a story in techcrunch,” the overview quotes an outdated article about how to submit guest columns (which we no longer accept).
The point isn’t just to find even more ways AI Overviews get things wrong, but to suggest that many of their errors will be less spectacular and entertaining, and more mundane. And although — to Google’s credit — the Overviews do include links to the pages that provided the source material for the AI answers, figuring out which answer comes from which source takes us back to lots of clicking.
Google also says the inaccurate results getting called out on social media often involve data voids — subjects where there’s not a lot of accurate information online. That’s fair, but it underlines the fact that AI, like regular search, needs a healthy open web full of accurate information.
Unfortunately, AI could be an existential threat to that same open web. After all, there’s much less incentive to write an accurate how-to article or break a big investigative story if people are just going to read an AI-generated summary, accurate or otherwise.
Google says that with AI Overviews, “people are visiting a greater diversity of websites for help with more complex questions” and that “the links included in AI Overviews get more clicks than if the page had appeared as a traditional web listing for that query.” I’d very much like that to be true. But if it isn’t, then no amount of technical improvements would make up for vast swaths of the web that could disappear.