Google responds to evidence of review algorithm bias

Google has responded to a small publisher whose article offered a step-by-step explanation of how large corporate publishers are gaming Google’s review system algorithm and getting away with it, demonstrating what appears to be a bias towards big brands that negatively affects small independent publishers.

HouseFresh’s exposé of Google’s review algorithm

The story begins with a post titled How Google Is Killing Indie Sites Like Ours, published on the HouseFresh website. It presented what it claimed was evidence that several corporate review sites gamed Google’s algorithm by creating the appearance of hands-on product testing for what HouseFresh contends were not real reviews.

For example, HouseFresh noted how many of these sites recommended an expensive air purifier that HouseFresh (and Consumer Reports) reviewed and found performed worse than less expensive alternatives, used more energy, and required spending $199.98 a year on replacement filters. Yet the big brand sites gave the product positive reviews, presumably because the high price translates into higher affiliate income.

Remarkably, HouseFresh showed how product photos from different big brand publishers were taken by the same photographer in what appears to be the exact same location, strongly implying that the individual publishers did not each test the product themselves.

HouseFresh provided a detailed takedown of what it insists are instances of Google rewarding fake reviews.

Here is a partial list of sites that HouseFresh identifies as publishing low-quality reviews that nevertheless rank well:

Better Homes & Gardens
Real Simple
Dotdash Meredith
BuzzFeed
Reddit (a spam link posted by a user with a suspended account)
Popular Science

HouseFresh published a lucid and rational account showing how Google’s review system algorithms supposedly give big brands a pass, while small independent websites that post honest reviews consistently lose traffic with each successive algorithm update.

Google responds

Google’s SearchLiaison posted a response on X (formerly Twitter) that took the allegations seriously.

The following points stand out in the response:

Google does not perform manual checks on claims made on web pages (except as part of a reconsideration request following a manual action).

Google’s algorithms do not use phrases that imply hands-on testing as a ranking signal.

SearchLiaison tweeted:

“Thank you. I appreciated the thoughtfulness of the post, the concerns, and the detail.

I’ve passed this on to our search team along with my thoughts that I’d like to see us do more to ensure we’re showing a better diversity of results that includes both small and large publications.

A note on an otherwise excellent piece of writing: the article suggests that we do some sort of “manual checking” of claims made by pages. We don’t. That reference and link is about manual reviews we do if a page has a manual *spam* action and submits a reconsideration request. This is completely different from how our automated ranking systems look to reward content.

Somewhat related, just making a claim and talking about a “rigorous testing process” and following an “EEAT checklist” does not guarantee a higher ranking or somehow automatically make a page perform better.

We talk about EEAT because it’s a concept that aligns with how we try to rank good content. But our automated systems don’t look at a page, see a statement like “I tested this!” and rank it better for that alone. Rather, the things we talk about with EEAT are related to what people find useful in content. Doing things generally for people is what our automated systems aim to reward, using different signals.

More here: developers.google.com/search/docs/fundamentals/creating-helpful-content#eat

Thanks again for the post. I hope we do better in the future with these kinds of issues.”

Does Google show preference for big brands?

I’ve worked in SEO for 25 years, and there was a time in the early 2000s when Google showed a bias towards big corporate brands based on how much PageRank a web page had. Google subsequently reduced the influence of PageRank scores, which in turn reduced the number of irrelevant big brand sites cluttering the search engine results pages (SERPs).

It wasn’t an example of Google preferring big brands as trustworthy. It was an instance of their algorithms not working as intended.

There may be signals in Google’s algorithm that inadvertently favor big brands.

If I had to guess what types of signals are responsible, I would guess signals related to user preferences. Recent testimony about Navboost in Google’s antitrust lawsuit made it clear that user interactions are an important ranking signal.

This is my speculation on what might be going on: Google’s reliance on user signals is having an inadvertent result, something I’ve been pointing out for years (read about Google’s Froot Loops algorithm).

Read the discussion on Twitter:

What do BuzzFeed, Rolling Stone, Forbes, PopSci and Real Simple have in common?

Read the HouseFresh article:

How Google is killing independent sites like ours

Featured image by Shutterstock/19 STUDIO

FAQ

Does introducing a rigorous content testing process influence Google rankings?

While presenting a rigorous testing process and making claims of thoroughness may benefit user perception, it alone does not influence Google rankings. Google’s answer clarifies this:

Algorithms focus on factors related to the usefulness of the content as perceived by users, beyond claims of in-depth testing. Claims of a “rigorous testing process” are not in themselves ranking signals. Content creators should focus on genuinely serving the needs of their audience and providing value, as this aligns with Google’s ranking principles.

What steps does Google take to verify the accuracy of website claims?

Google does not perform manual checks on the factual accuracy of claims made by web pages. Its algorithms evaluate the quality and relevance of content using automated ranking systems. Google’s EEAT concept is designed to align with ranking useful content, but it does not involve manual review unless a site submits a reconsideration request following a manual spam action. This separates fact-checking from automated ranking mechanisms.


