Google has updated its emergency and non-emergency image removal guide with added details that bring new clarity to the documentation.
Removing images from the search index
Google offers several ways to remove images from the search index in both an emergency and non-emergency manner.
There are several relatively trivial changes, but these topics received more substantive updates:

- How to quickly remove images.
- What to do when there is no access to the CDN hosting the images, or when the CMS does not provide a way to block indexing.
- More details on using robots.txt for images.
- How to use wildcards in robots.txt.
- A warning about using the robots noimageindex tag.
How to quickly remove images from the index
The first addition to the documentation is the following paragraph:
“For emergency image removal
To quickly remove images hosted on your site from Google search results, use the Removals tool. Please note that unless you also remove the images from your site or otherwise block them as described in the non-emergency image removal section, the images may reappear on Google search results after the removal request expires.
When there is no access to the CDN or the CMS can’t block images
The next scenario is when an image is hosted on a CDN but cannot be accessed, or the CMS does not provide a way to block it from indexing.
This is the added paragraph:
“If you don’t have access to the site hosting your images (e.g., a CDN) or your CMS doesn’t provide a way to block images with the noindex X-Robots-Tag HTTP header or robots.txt, you may need to delete the images from your site.”
Images and robots.txt
The following changes are small additions to two paragraphs that, as a whole, make the message clearer, with the addition of the phrase “for example” and some other relatively trivial additional words.
Changed the following passage about the robots.txt structure:
“Rules can include special characters for more flexibility and control. The * character matches any sequence of characters, and patterns can end in $ to indicate the end of a path.
To this:
“Rules can include special characters for more flexibility and control. Specifically, the * character matches any sequence of characters, which lets you match multiple image paths with a single rule.”
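For illustration, a robots.txt rule combining both special characters might look like the following. The path and the image-specific user agent here are hypothetical examples, not taken from Google’s documentation:

```
User-agent: Googlebot-Image
# "*" matches any sequence of characters;
# a trailing "$" anchors the end of the path
Disallow: /images/animal-picture-*.jpg$
```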
Changes to the robots.txt wildcards guidance
The next change is more substantial because it provides more detail on how to use wildcards. In this context, a wildcard is the * character, which matches any sequence of characters.
This part:
“# Wildcard in filename for
# images that share a common suffix:”
It becomes this:
“# Wildcard in filename for
# images that share a common suffix. For example,
# animal-picture-UNICORN.jpg and
# animal-picture-SQUIRREL.jpg
# in the “images” directory
# will match this pattern.”
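The wildcard behavior quoted above can be sketched with a short matcher. This is an illustrative approximation, not Google’s actual implementation; the function name and the pattern-to-regex translation rules are my own:

```python
import re

def robots_pattern_to_regex(pattern: str) -> re.Pattern:
    """Translate a robots.txt path pattern into a regex.

    Illustrative sketch only: "*" matches any sequence of
    characters, and a trailing "$" anchors the end of the path.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters in the literal parts,
    # then expand each "*" into ".*"
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.compile("^" + regex + ("$" if anchored else ""))

rule = robots_pattern_to_regex("/images/animal-picture-*.jpg$")
print(bool(rule.match("/images/animal-picture-UNICORN.jpg")))      # True
print(bool(rule.match("/images/animal-picture-SQUIRREL.jpg")))     # True
print(bool(rule.match("/images/animal-picture-UNICORN.jpg.bak")))  # False
```

Note how the trailing $ in the pattern keeps the rule from matching paths that merely contain `.jpg` somewhere in the middle.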
New paragraph about the Noimageindex Robots tag
The last of the significant changes is a passage that provides a warning about using noimageindex.
This is the new passage:
“Note that adding the robots noimageindex tag to a particular page will also prevent images embedded on that page from being indexed. However, if the same images also appear on other pages, they may be indexed via these pages. To ensure that a particular image is blocked regardless of where it appears, use the noindex X-Robots-Tag HTTP response header.”
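As a rough sketch of the header-based approach Google recommends, a server could attach the noindex X-Robots-Tag to every image response. The helper below is hypothetical (its name, the extension list, and the dict-based header shape are my own assumptions) and presumes you control your server’s response headers:

```python
import mimetypes

# Hypothetical list of image extensions to block from indexing
IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif", ".webp")

def image_response_headers(path: str) -> dict:
    """Hypothetical helper: build response headers so the image at
    `path` is blocked from indexing regardless of which page embeds it."""
    content_type, _ = mimetypes.guess_type(path)
    headers = {"Content-Type": content_type or "application/octet-stream"}
    if path.lower().endswith(IMAGE_EXTENSIONS):
        # The X-Robots-Tag header blocks the file itself, unlike a
        # robots noimageindex tag, which only covers one embedding page
        headers["X-Robots-Tag"] = "noindex"
    return headers
```

A non-image path such as `/about.html` gets no X-Robots-Tag header from this helper, while any path ending in one of the listed extensions does.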
Google updates Search Central documentation
This is the latest in a series of ongoing updates to Google’s documentation. Long web pages are edited to make them more concise. Others, like this web page, are edited to make them clearer.
Read the newly updated guide on how to remove images from Google’s index:
Remove images hosted on your site from search results
Featured image by Shutterstock/Piotr Swat