Google’s Gary Illyes and others answered many AI-related questions at Google Search Central Live Tokyo 2023, sharing new insights into Google’s approaches and recommendations for AI-generated content.
Japanese search marketing expert Kenichi Suzuki presented at Search Central Live Tokyo 2023 and subsequently published a blog post in Japanese summarizing the main ideas from the event.
Some of what was shared is already well known and documented, such as the fact that Google doesn't care whether content is AI-generated or not.
For both AI-generated content and translated content, what matters most to Google is the quality of the content.
How Google treats AI-generated content
AI-generated content tagging
What may be less well known is whether or not Google distinguishes AI-generated content from human-written content.
The Googler, presumably Gary Illyes, responded that Google does not tag AI-generated content.
Should publishers label AI-generated content?
Currently, the EU asks social media companies to voluntarily flag AI-generated content in order to combat fake news.
Google currently recommends (but does not require) that publishers tag AI-generated images using IPTC image metadata, adding that AI image companies will begin adding it automatically in the near future.
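As an illustration of that recommendation, the sketch below uses ExifTool (called from Python) to write the IPTC digital source type property that marks an image as created by generative AI. The field name and the "trainedAlgorithmicMedia" term come from the IPTC controlled vocabulary, but treat the exact invocation and the file name as assumptions rather than a Google-prescribed workflow.

# Minimal sketch: mark an image as AI-generated via IPTC metadata.
# Assumes ExifTool is installed and that the XMP-iptcExt DigitalSourceType
# property with the "trainedAlgorithmicMedia" term is the appropriate tag;
# verify against current IPTC and Google guidance before relying on it.
import subprocess

# IPTC Digital Source Type term for content created by a trained AI model.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def tag_ai_generated(image_path: str) -> None:
    """Write the IPTC digital source type tag into the image file."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcExt:DigitalSourceType={TRAINED_ALGORITHMIC_MEDIA}",
            "-overwrite_original",
            image_path,
        ],
        check=True,
    )

if __name__ == "__main__":
    tag_ai_generated("hero-image.jpg")  # hypothetical file name

After tagging, the metadata travels with the image file itself, so the label survives even if the image is downloaded or republished elsewhere.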
But what about the content of the text?
Are publishers required to label their text content as AI-generated?
Surprisingly, the answer is no, it is not required.
Kenichi Suzuki wrote that as far as Google is concerned, there is no need to explicitly tag AI content.
The Googler said that they leave it up to publishers to make the judgment call on whether or not labeling the content provides a better user experience.
The English translation of what Kenichi wrote in Japanese is:
“From Google’s point of view, it is not necessary to explicitly label AI-generated content as AI-generated content, as we evaluate the nature of the content.
If you consider it necessary from the user’s point of view, you can specify it.”
He also wrote that Google warned against publishing AI content as is without a human editor reviewing it before publishing.
They also recommended taking the same approach with translated content, which should also be reviewed by a human before publishing.
Natural content ranks first
One of the most interesting comments from Google was a reminder that their algorithms and signals are based on human content, so they will rank natural content at the top.
The English translation of the original Japanese is:
“ML (machine learning) based algorithms and signals are learning from content written by humans for humans.
So they understand natural content and show it at the top.”
How does Google handle AI content and EEAT?
EEAT is an acronym that stands for experience, expertise, authoritativeness, and trustworthiness.
It’s something that was first mentioned in Google’s Search Quality Evaluator Guidelines, recommending that evaluators look for evidence that the author is writing from a position of subject matter expertise.
An artificial intelligence, at this time, cannot claim expertise in any subject or product.
Therefore, it seems impossible for an AI to meet the quality threshold of certain types of content that require expertise.
The Googler replied that they are having internal discussions about it and have not yet arrived at a policy.
They said they will announce a policy once it is settled.
AI policies are evolving
We live in a time of transition due to the availability of AI and its lack of reliability.
Mainstream media companies that rushed to test AI-generated content have quietly slowed down to reevaluate.
ChatGPT and similar generative AI like Bard were not specifically trained to create content.
So perhaps it’s no surprise that Google currently recommends that publishers continue to monitor the quality of their content.
Read the original article by Kenichi Suzuki:
What I learned on Google #SearchCentralLive Tokyo 2023
Featured image by Shutterstock/takayuki