Google responded to the bad press surrounding its recently released AI Overviews with a new blog post by its new head of Search, Liz Reid. In it, Google explained how AI Overviews work, where the strange overviews came from, and the improvements Google has made and will continue to make to AI Overviews.
That said, Google reported that searchers “are more satisfied with their search results and are asking longer and more complex questions that they know Google can now help with” — in short, AI Overviews aren’t going anywhere.
As good as featured snippets. Google said that AI Overviews are “highly effective” and that, according to its internal tests, the “accuracy rate of AI Overviews is on par with featured snippets.” Featured snippets also use AI, Google noted several times.
No hallucinations. AI Overviews generally don’t hallucinate, wrote Google’s Liz Reid. They don’t “invent things the way other LLM products might,” she added. AI Overviews only go wrong when Google “misinterprets queries, misinterprets a nuance of language on the web, or doesn’t have a lot of information available,” she wrote.
Why the “strange results.” Google explained that it tested AI Overviews “extensively” before launch and felt comfortable releasing the feature. But Google said people deliberately tried to provoke AI Overviews into producing strange results. “We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results,” Google wrote.
Google also wrote that many of the circulating examples were fakes: manipulated screenshots showing made-up AI responses. “Those AI Overviews never appeared,” Google said.
Some strange examples did surface, and Google says it will make improvements in such cases. Rather than manually adjusting individual AI Overviews, Google will improve its models so they handle a much broader set of queries. “We don’t simply ‘fix’ queries one by one, but work on updates that can help a broad set of queries, including new ones we haven’t seen yet,” Google wrote.
Google also talked about “data voids,” a topic we’ve covered many times here. Its example: “How many rocks should I eat?” was a query that practically no one had searched for before, and there was little real content about it. Google explained: “However, in this case, there is satirical content about this topic… which was republished on a geological software vendor’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that addressed the question.”
Improvements to AI Overviews. Google shared some of the improvements it has made to AI Overviews and said it will continue making more in the future. Here’s what Google said it has done so far:
- Built better detection mechanisms for nonsensical queries that shouldn’t show an AI Overview, and limited the inclusion of satire and humor content.
- Updated its systems to limit the use of user-generated content in responses that could offer misleading advice.
- Added triggering restrictions for queries where AI Overviews were not proving helpful.
- For topics like news and health, Google said it already has strong guardrails in place. For example, Google said it aims not to show AI Overviews for hard news topics, where freshness and factuality are important. For health, Google said it rolled out additional refinements to enhance its quality protections.
Finally, Liz Reid concluded: “We’ll continue to improve when and how we display AI Overviews and strengthen our protections, including for edge cases, and we’re very grateful for the ongoing feedback.”
Why we care. It looks like AI Overviews aren’t going anywhere; Google will continue to show them to searchers and roll them out to more countries and users in the future. You can expect them to improve over time as Google continues to listen to feedback and refine its systems.
Until then, I’m sure we’ll keep seeing examples of inaccurate and sometimes humorous AI Overviews, similar to those we saw when the feature launched.