I noticed that an emphatic comment by Google Search VP Hyung-Jin Kim at SMX Next in November 2022 went largely undiscussed in the SEO community until today.
He said (my emphasis):
“EAT is a template for how we value an individual site. We do this with every query and every result. It’s pervasive in everything we do.” He added, “EAT is a core part of our metrics.”
EAT, and subsequently EEAT, come up in SEO discussions all the time. Most SEOs are quick to say these concepts are not part of any Google ranking system, and Google spokespeople have confirmed as much. They are quality concepts given to human quality raters, whose reports are used to confirm that the ranking systems are delivering the best results in the SERPs. These raters work from a copy of Google’s Search Quality Evaluator Guidelines.
I shared the SMX Next quote in about five forums and chat groups, each with a potential audience of hundreds to thousands. I focused on the second statement: that EAT is a core part of Google’s metrics.
How could it apply to “every query and every result” if it’s not part of a ranking system?
I figured it must be a quality assurance process that runs after a SERP is served. The process might work as follows:
An AI process examines each indexed page for evidence of expertise, authoritativeness and trustworthiness (and perhaps, by now, experience). This evaluation runs continuously as Google tracks the site and the other sites that cite or link to it, and each factor gets a numerical score that can change with each scan. Every element in a SERP (snippet, carousel image, URL result) would carry this score, which should be high relative to the results that follow and to the subsequent “pages” of continuous scrolling. The SERP results themselves are obviously assembled by separate ranking systems, so I’m speculating that EEAT serves as quality control after the fact; that way, it doesn’t slow down the delivery of SERPs. If an adverse trend is observed, it is analyzed in detail, and either a classification system is modified or the EEAT factors are tweaked.
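To make that speculation concrete, here is a minimal Python sketch of such an after-the-fact quality check. Everything in it (the class, the equal weighting, the floor value) is my own hypothetical construction, not anything Google has described.

```python
from dataclasses import dataclass

@dataclass
class EEATScore:
    """Hypothetical per-result E-E-A-T sub-scores; all names are my invention."""
    experience: float
    expertise: float
    authoritativeness: float
    trust: float

    def composite(self) -> float:
        # Equal weighting is an assumption; a real system would be far more nuanced.
        return (self.experience + self.expertise
                + self.authoritativeness + self.trust) / 4

def serp_needs_review(results: list[EEATScore], floor: float = 0.5) -> bool:
    """After-the-fact QA: flag a SERP whose top results score below a floor
    or below the average of the results that follow them."""
    composites = [r.composite() for r in results]
    top = composites[:3] or [0.0]
    rest = composites[3:] or top
    top_avg = sum(top) / len(top)
    rest_avg = sum(rest) / len(rest)
    return top_avg < floor or top_avg < rest_avg
```

Because a check like this runs on SERPs that have already been served, it adds no latency to query handling, which fits the “quality control after the fact” reading above.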
My guess may be totally off base, but interpreting the Googler’s words is not the purpose of this article.
Of the hundreds of SEOs who could have seen my invitation to discuss, only about five responded seriously. They are old friends, and I have met three of them in person many times. The other responses were weak humor, sarcasm or blanket skepticism of anything Google says. Then crickets.
Where is SEO curiosity headed?
I’m surprised that such an unusual statement from a Googler hasn’t led to further discussion, even when I tried to raise the topic again recently.
What happened to the legendary SEO curiosity of guessing the “200 ranking factors”? More than one author would survey the SEO community to find and rank the top factors, and we loved adding our own observations to the body of knowledge.
A lot of energy and curiosity goes into building brilliant tools with Python, especially with a good dose of AI. Part of this work seems to be reinventing the wheel.
There is a vigorous discussion about SEO tools every day. Is there a single best keyword research tool, as the marketers would have us believe? Can AI writing tools really benefit every SEO niche?
There is no shortage of experts who grow their mailing lists by inviting us to steal their “secrets”. A lot of misinformation is being passed around as fact.
We’ve lost the early explorers who dissected search engine patents and tried to correlate them with their SERP observations. I miss pioneers like Ted Ulle and Bill Slawski, who would analyze algorithm updates and try to identify ways to avoid getting caught in Google’s net.
Be more curious
SEO curiosity isn’t completely dead. Staying with the EEAT example: many say these factors are not part of Google’s ranking systems, and it’s fine to hold a healthy skepticism about anything search engine spokespeople say.
Channel your curiosity into an investigation. You may not want to share the results if they aren’t meaningful. We just saw Cyrus Shepard examine 50 sites to find correlations between on-site features and the winners and losers of a Google algorithm update.
Shepard found that “experience” was one of the characteristics of “winning” websites. But haven’t SEOs echoed the mantra that EEAT is not part of ranking algorithms?
Maybe not in a direct way, but any algorithm that evaluates experience can send a positive signal to a ranking algorithm. And since relatively few pages are product or site reviews, it makes sense to keep an experience algorithm separate from the ranking algorithm.
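Here is a minimal sketch of that decoupling idea, again entirely my own speculation: a standalone experience classifier whose output becomes just one feature fed to a ranker, and which only review-style pages need to pass through.

```python
def experience_score(page_text: str) -> float:
    """Hypothetical standalone classifier: a crude stand-in for an ML model
    that estimates evidence of first-hand experience on a 0-1 scale."""
    cues = ("i tested", "i used", "in my experience", "we measured")
    text = page_text.lower()
    return min(sum(cue in text for cue in cues) / len(cues), 1.0)

def rank_score(base_relevance: float, page_text: str, is_review: bool) -> float:
    # Only review-style pages pay for the experience pass, which is one
    # reason to keep the two algorithms separate.
    bonus = 0.1 * experience_score(page_text) if is_review else 0.0
    return base_relevance + bonus
```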
I have had the privilege of watching a curious SEO, Daniel K. Cheung, build an EEAT attribute matrix for auditing a page. So far, he has found it necessary to give each attribute a numerical value so that some can be shown to have a greater impact on a page than others.
For example, an attribute might be the presence of a video of the author using the reviewed product. This could have a greater impact than a still image of the same scene. It doesn’t matter if the actual method used by Google is much more nuanced. This curiosity gives us ideas to try.
A checklist for assessing EEAT (shared with permission)
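As a rough illustration of how such a weighted matrix might work, here is a sketch in the spirit of that checklist. Every attribute name and weight below is my own guess, not Cheung’s actual matrix.

```python
# Hypothetical attribute weights; a video of the author using the product
# is weighted above a still image of the same scene, per the example above.
EEAT_ATTRIBUTES = {
    "author_video_using_product": 3.0,
    "author_photo_using_product": 1.5,
    "author_bio_with_credentials": 2.0,
    "cites_primary_sources": 2.0,
    "contact_and_policy_pages": 1.0,
}

def audit_page(found: set[str]) -> float:
    """Sum the weights of the attributes present on a page, normalized
    to 0-1 so pages can be compared against each other."""
    total = sum(EEAT_ATTRIBUTES.values())
    return sum(w for a, w in EEAT_ATTRIBUTES.items() if a in found) / total

print(audit_page({"author_video_using_product", "cites_primary_sources"}))  # ~0.53
```

It doesn’t matter that Google’s real method is surely different; scoring pages this way forces you to decide which attributes you believe matter most, which is exactly the kind of testable idea curiosity produces.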
Be skeptical
You could argue that Shepard’s sample of 50 isn’t large enough. Fair enough. One of the big SEO tool makers could have their crawlers look at a million websites and tell us whether his findings hold.
Don’t wait for a tool company to do the study – pick 100 or more sites and do your own testing. Rinse and repeat until you’re ready to announce your findings.
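If you do run such a study, a simple chi-square test of independence is one way to check whether a feature correlates with “winning” an update. This sketch uses simulated data; swap in your own crawl results. The feature, the proportions and the sample are, of course, made up for illustration.

```python
import random
from scipy.stats import chi2_contingency  # chi-square test of independence

# Simulated audit: for 100 sites, did the site gain visibility ("winner"),
# and was a hypothetical feature (e.g., first-hand experience signals) present?
random.seed(1)
sites = [{"winner": random.random() < 0.5} for _ in range(100)]
for s in sites:
    # Build in an association for demonstration purposes only.
    s["has_feature"] = random.random() < (0.7 if s["winner"] else 0.3)

# 2x2 contingency table: rows = feature present/absent, columns = winner/loser.
table = [[0, 0], [0, 0]]
for s in sites:
    table[0 if s["has_feature"] else 1][0 if s["winner"] else 1] += 1

chi2, p, _, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # a small p suggests a real correlation
```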
The views expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.