In a recent statement on LinkedIn, Google analyst Gary Illyes shared his mission for the year: to figure out how to crawl the web even less.
This comes on the heels of a Reddit post discussing the perception that Google is crawling less than in previous years.
While Illyes clarifies that Google is crawling roughly the same amount as before, he stresses the need for smarter scheduling and a focus on the URLs that are most likely to deserve crawling.
Illyes’ statement aligns with the ongoing discussion among SEO professionals about “crawl budget,” the idea that search engines allot each site a limited number of pages they will crawl per day, and that staying within that limit is necessary to get pages indexed.
However, Google’s Search Relations team recently debunked this misconception in a podcast, explaining how Google prioritizes crawling based on several factors.
Crawl prioritization and search demand
In a podcast published two weeks ago, Illyes explained how Google decides how much to crawl:
“If the search demand goes down, that also correlates with the crawl limit going down.”
While he didn’t provide a clear definition of “search demand,” it likely refers to how much users are searching for a given topic. In other words, if searches for a particular topic decline, Google may have less reason to crawl websites related to that topic.
Illyes also emphasized the importance of convincing search engines that a website’s content is worth fetching.
“If you want to increase how much we crawl, you have to somehow convince search that your stuff is worth fetching, which is basically what the scheduler is listening to.”
While Illyes didn’t explain how to achieve this, one interpretation could be to ensure that content remains relevant to user trends and stays up-to-date.
Focus on quality
Google previously clarified that a fixed “crawl budget” is largely a myth.
Instead, search engine crawling decisions are dynamic and driven by content quality.
As Illyes said:
“The scheduling is very dynamic. As soon as we get signals back from search indexing that the quality of the content has increased across this many URLs, we would just start turning up demand.”
The way forward
Illyes’ mission to improve crawling efficiency, by crawling less and sending fewer bytes over the wire, is a step toward a more sustainable and efficient web.
Seeking input from the community, Illyes invites suggestions for interesting drafts or Internet standards from the IETF and other standards bodies that could contribute to this effort.
“Reducing crawling without sacrificing crawl quality would benefit everyone,” he concludes.
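One concrete mechanism that fits this goal is conditional requests: Googlebot sends an If-Modified-Since header on revisits, and a server that answers with 304 Not Modified lets the crawler confirm a page is unchanged without re-downloading its body. Below is a minimal sketch using only Python’s standard library; the page body, timestamp, and port are hypothetical placeholders, not anything from Illyes’ statement.

```python
# Minimal sketch of a server that honors conditional GET requests.
# Answering "304 Not Modified" lets a crawler revalidate a page
# without re-downloading the body, reducing bytes on the wire.
# PAGE_BODY, LAST_MODIFIED, and the port are placeholder assumptions.
from email.utils import formatdate, parsedate_to_datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE_BODY = b"<html><body>Example page</body></html>"  # placeholder content
LAST_MODIFIED = 1700000000.0  # Unix timestamp of the last content change

class ConditionalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ims = self.headers.get("If-Modified-Since")
        if ims:
            try:
                since = parsedate_to_datetime(ims).timestamp()
                if LAST_MODIFIED <= since:
                    # Content unchanged since the client's copy: headers only.
                    self.send_response(304)
                    self.end_headers()
                    return
            except (TypeError, ValueError):
                pass  # Malformed date header: fall through to a full response.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Last-Modified", formatdate(LAST_MODIFIED, usegmt=True))
        self.send_header("Content-Length", str(len(PAGE_BODY)))
        self.end_headers()
        self.wfile.write(PAGE_BODY)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ConditionalHandler).serve_forever()
```

Most static-file servers and CDNs already emit Last-Modified and ETag headers and answer conditional requests automatically; the sketch simply makes the revalidation logic explicit.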
Why SEJ cares
Illyes’ statement about crawl reduction reinforces the need to focus on quality and relevance. SEO is not just about technical optimizations, but also about creating valuable, user-centric content that meets search demand.
By understanding the dynamic nature of Google’s crawling decisions, we can all make more informed decisions when optimizing our websites and allocating resources.
How this can help you
With the knowledge shared by Illyes, there are several actionable steps you can take:
Prioritize quality: Focus on creating high-quality, relevant and engaging content that meets user intent and matches current search demand.
Keep content current: Regularly refresh and update your content to ensure it remains valuable to your target audience.
Monitor search demand trends: Adapt your content strategy to address emerging trends and topics, ensuring your website remains relevant and crawlable.
Implement good technical practices: Make sure your website has a clean, well-structured architecture and a strong internal linking strategy to facilitate efficient crawling and indexing (a crawl-monitoring sketch follows this list).
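To see whether any of this changes how often Googlebot visits, you can measure crawl activity directly from your server logs (Google Search Console’s Crawl Stats report offers a similar view). The sketch below is a rough heuristic that assumes a Common Log Format access log at a hypothetical path; user-agent strings can be spoofed, so verified Googlebot traffic may be lower than what it counts.

```python
# Rough sketch: count Googlebot requests per day from an access log.
# Assumes Common Log Format, where the timestamp sits in [brackets],
# e.g. [10/Oct/2024:13:55:36 +0000]. LOG_PATH is a hypothetical path.
import re
from collections import Counter

LOG_PATH = "access.log"  # adjust for your server setup
DATE_RE = re.compile(r"\[(\d{2}/\w{3}/\d{4})")

hits_per_day = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Googlebot" not in line:
            continue  # Heuristic match on the user-agent string.
        match = DATE_RE.search(line)
        if match:
            hits_per_day[match.group(1)] += 1

for day, count in sorted(hits_per_day.items()):
    print(f"{day}: {count} Googlebot requests")
```

Tracking this over weeks makes trends visible: a sustained drop after a content cleanup, for example, may simply mean Google is spending its crawling where it matters most.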
As you refine your SEO strategies, remember the key points from Illyes’ statements and the insights provided by the Google Search Relations team.
With these insights, you’ll be equipped to succeed if and when Google reduces its crawling frequency.
Featured Image: Skorzewiak/Shutterstock