For more than a century, people have been trying to steal the Coca-Cola formula. Only slightly less legendary? The formula behind Google's search algorithm. After all, Google placement is vital: it generates organic traffic, gets more eyes on your content, and helps determine how many people end up finding your business.
Recently, some of the secrets behind this algorithm leaked, with Mike King at iPullRank noting:
The internal documentation of the Google Search Content Warehouse API has been leaked. Google's internal microservices appear to mirror what Google Cloud Platform offers, and the internal version of the deprecated Document AI Warehouse documentation was accidentally published publicly in a code repository for the client library.
Wow.
As you can imagine, these leaks are reverberating throughout the marketing world. For years, SEO specialists have tried to reverse engineer what was really going on with Google’s search results. How much of what Google claimed was true? What wasn’t? And what did the algorithm actually take into account to help drive SEO success?
Now it looks like we have some answers. So let's break them down and see what we can find out about how Google actually determines your placement.
What’s in the SEO Algorithm Leak?
According to iPullRank, there are “2,596 modules” in the API documentation, spanning more than 14,000 attributes.
In this context, a module relates to a component such as YouTube or video search. Google’s code is stored in one large shared repository, meaning that any machine on Google’s network can access and run it.
This gives readers an idea of how Google’s overall structure works. But perhaps even more compelling is that, according to the article, the “API docs reveal some remarkable lies from Google.” Here are some specific claims the leak addresses, point by point:
Domain authority. Domain authority refers to a search engine score that predicts how likely a website is to appear in results based on the overall strength of that domain. Google has long been cagey about domain authority; some representatives even denied that any site-wide authority score could affect search results, although many SEOs suspected otherwise. But the leak includes an attribute called “siteAuthority”. Exactly what it means is open to interpretation, but it suggests some kind of site-level authority score is built into the system.

Clicks for ranking. Does Google rank search results based on how users click on links? Google certainly has that data on hand, and past testimony from Google experts has hinted at some sort of click-based ranking system, though others have denied it. In the leaked documentation, there appear to be attributes for “badClicks” and “goodClicks” that could promote or demote a link based on the quality of its clicks.

The sandbox. Some have claimed that if a domain shows poor signals (a lack of trust, a young domain age), it gets put in a “sandbox”, a bit like a penalty box, even though Google has denied any such mechanism. Yet “hostAge” is an attribute included in the documentation. That may not explain the long, drawn-out “sandbox” experiences many have reported, but there does seem to be some truth to the claims.

Using Chrome. Google, of course, owns Chrome, which could feed the search engine a flood of information about browsing behavior. You might think it only natural for Google to use that data in its algorithm, right? Well, that too has been controversial. iPullRank points to a module “that appears to be related to sitelink generation” with a Chrome-related attribute. In other words, Chrome data could feed into Google’s rankings as well.
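To make those attribute names a little more concrete, here is a minimal, purely illustrative sketch in Python. Only the attribute names (siteAuthority, goodClicks, badClicks, hostAge) come from the leak coverage; the grouping, types, weights, and the score_hint function are assumptions invented for illustration and do not reflect Google’s actual schema or scoring.

```python
from dataclasses import dataclass

# Purely illustrative: the attribute names mirror those reported in the leak
# coverage, but the grouping, types, and scoring logic below are invented
# assumptions, not Google's schema.

@dataclass
class PageSignals:
    site_authority: float   # "siteAuthority": a site-level authority score
    good_clicks: int        # "goodClicks": clicks treated as positive engagement
    bad_clicks: int         # "badClicks": clicks treated as negative engagement
    host_age_days: int      # "hostAge": how long the host has been known

def score_hint(s: PageSignals) -> float:
    """Toy illustration of how such signals *could* combine; weights are made up."""
    click_ratio = s.good_clicks / max(1, s.good_clicks + s.bad_clicks)
    sandbox_penalty = 0.5 if s.host_age_days < 90 else 1.0  # hypothetical "sandbox" effect
    return s.site_authority * click_ratio * sandbox_penalty

# Example with made-up numbers:
print(score_hint(PageSignals(site_authority=0.72, good_clicks=840, bad_clicks=60, host_age_days=400)))
```

The point of the sketch is only that the leaked attribute names read like per-site and per-page signals that some scoring system can consume; how they are actually weighted, if at all, is not something the leak reveals.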
Those are some of the key points. But the post also steps back to ask: what is this monolithic thing known as “Google’s algorithm,” anyway? “It’s a series of microservices,” the post explains; it’s not really a monolith at all. Google’s results come from the interaction of different systems, such as the crawling system and the ranking system, and understanding that all of these elements play into a website’s overall ranking is key to good SEO.
What are the takeaways?
Given everything we’ve learned about the algorithm, what are the revelations that may affect the future of SEO?
Links still matter. Over the years, people have speculated that links have become less relevant to Google. “To get a quick background, Google’s index is stratified into tiers where the most important, regularly updated and accessed content is stored in flash memory,” writes Mike King. In other words, Google has to prioritize content, and the tier a linking page sits in appears to affect how much its links count, which suggests that links from higher-quality, frequently accessed pages still carry real weight.

Google remembers… a lot. Google’s storage system is huge and impressive, a sort of Wayback Machine for the Internet, and the leaked documentation seems to bear that out. However, when working with a page’s history, Google appears to consider only the twenty most recent versions, which suggests that more frequent updates can make a real difference for someone doing an SEO refresh (see the sketch after this list).

Trust is still important. Homepage trust is still a key factor: high-quality, relevant links matter more than high volumes of links. This is not a game changer for SEOs, but the leak appears to confirm it.
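As a rough illustration of the “twenty most recent versions” point above, here is a minimal sketch, assuming a simple bounded history buffer. The PageHistory class and its behavior are invented for illustration; they do not describe how Google actually stores page versions, only the general idea of a capped history.

```python
from collections import deque

# Illustrative only: a bounded history that keeps just the most recent versions
# of a page, loosely mirroring the "last twenty versions" detail reported from
# the leak. Nothing here reflects Google's actual storage design.

class PageHistory:
    def __init__(self, max_versions: int = 20):
        self._versions = deque(maxlen=max_versions)  # oldest versions fall off automatically

    def add_version(self, content: str) -> None:
        self._versions.append(content)

    def remembered_versions(self) -> list[str]:
        return list(self._versions)

history = PageHistory()
for i in range(25):
    history.add_version(f"revision {i}")

# Only the last 20 revisions remain; earlier ones are gone.
print(len(history.remembered_versions()))   # 20
print(history.remembered_versions()[0])     # "revision 5"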
Jason Barnard of Search Engine Land also highlighted key takeaways from the leak, especially the signals tied to personal identity. For example, an attribute named “isAuthor” can indicate whether the entity in question (such as a person or organization) is also the author of the document, which appears to help rankings for news articles. Barnard therefore recommends a three-tiered approach to SEO: optimize website content (traditional SEO), claim a website as its owner, and then claim individual pages as their author.
This means doubling down on a “personal brand,” which people tend to trust. And if people tend to trust it, Google’s algorithm, as the leak suggests, probably isn’t far behind.