SEO, in its most basic sense, relies on one thing above all others: search engine crawlers crawling and indexing your website.
But nearly every website has pages that you don’t want included in this exploration.
In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst case, they could be diverting traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it’s a plain text file that lives in your site’s root and follows the Robots Exclusion Protocol (REP).
Robots.txt gives crawlers instructions about the site as a whole, while meta robots tags carry instructions for specific pages.
Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
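As a quick illustration, a meta robots tag combining two of these directives might look like this in a page’s head section (a generic sketch, not tied to any particular site):

```html
<!-- Keep this page out of the index, but still follow its links -->
<meta name="robots" content="noindex, follow">
```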
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
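For example, a response carrying the tag might look like this (a hypothetical HTTP response shown purely for illustration):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
X-Robots-Tag: noindex
```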
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”
While you can set robots meta tag directives in the headers of an HTTP response via the X-Robots-Tag, there are specific situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are crawled and indexed.
- You want to serve directives site-wide instead of at the page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or to use a comma-separated list of directives.
Maybe you don’t want a certain page to be cached, and you want it to be unavailable after a certain date. You can use a combination of “noarchive” and “unavailable_after” tags to instruct search engine bots to follow these instructions.
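In that case, the response headers for the page might look something like this (the date is just a placeholder; the unavailable_after format shown follows Google’s documented examples):

```http
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2023 15:00:00 PST
```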
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply parameters on a larger, global level.
To help you understand the difference between these directives, it’s helpful to categorize them by type: are they crawler directives or indexer directives?
Here’s a handy cheat sheet to explain:
|Crawler Directives|Indexer Directives|
|---|---|
|Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on a site search engine bots are and are not allowed to crawl.|Meta robots tag – allows you to specify and prevent search engines from showing particular pages of a site in search results.|
| |Nofollow – allows you to specify links that should not pass on authority or PageRank.|
| |X-Robots-Tag – allows you to control how specified file types are indexed.|
Where Do You Put The X-Robots-Tag?
Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let’s take a look.
Let’s say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
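A minimal .htaccess sketch (assuming Apache’s mod_headers module is enabled) could be:

```apache
<Files ~ "\.pdf$">
  # Apply noindex, nofollow to every PDF the server returns
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```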
In Nginx, it would look like the below:
```nginx
location ~* \.pdf$ {
  # Send noindex, nofollow for every PDF the server returns
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
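An Apache sketch for this case (again assuming mod_headers is enabled; extend the extension list as needed) might be:

```apache
<Files ~ "\.(png|jpe?g|gif)$">
  # Keep common image formats out of the index
  Header set X-Robots-Tag "noindex"
</Files>
```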
Please note that understanding how these directives work, and the impact they have on one another, is crucial.
For example, what happens when crawler bots discover a URL where both an X-Robots-Tag and a meta robots tag are present?
If that URL is blocked in robots.txt, the crawler never sees the page’s indexing and serving directives, and so cannot follow them.
If you want indexing directives to be followed, the URLs containing them must not be disallowed from crawling.
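To illustrate: with a robots.txt rule like the hypothetical one below, crawlers never fetch anything under /pdfs/, so any X-Robots-Tag: noindex header on those files goes unseen.

```
User-agent: *
Disallow: /pdfs/
```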
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way is to install a browser extension that will show you X-Robots-Tag information for any URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
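You can also check from the command line. The sketch below simulates the header output you would get from a real request (in practice: `curl -sI https://example.com/file.pdf`, where example.com is a placeholder for your own URL) and filters for the tag:

```shell
# Simulated response headers, so the filtering step is reproducible.
# In practice, fetch them with: curl -sI "https://example.com/file.pdf"
headers='HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow'

# Filter for the X-Robots-Tag header, case-insensitively.
printf '%s\n' "$headers" | grep -i '^x-robots-tag'
```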
Another method that can be used for scaling, in order to pinpoint issues on websites with a million pages, is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do exactly that.

Just be aware: It’s not without its dangers. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer/