Robots.txt is a text file that sits at the root of your domain and tells search engine crawlers which pages or directories of your site they may crawl and which they may not.

There are certain parts of your blog, like the admin folder, that you might want to keep search engines from indexing. Blogs that offer printer-friendly versions of their articles, so visitors can print them easily, should also use the robots.txt file to block those copies, and RSS feeds and comment feeds should be taken care of in the same way.

You should only allow search engines to index one version of your content, as multiple copies of the same article may be treated as duplicate content by search engines.

All of this can be achieved with the help of the robots.txt file.
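For example, if your blog serves its printer-friendly copies under a /print/ path (that path is just an assumption here; use whatever path your theme actually generates), a rule like this would keep crawlers out of them:

User-agent: *
Disallow: /print/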

To find your robots.txt file, simply look up www.yourdomain.com/robots.txt in your browser.

You can also check its status in Google Webmaster Tools under Tools > Analyze robots.txt.

To exclude all robots from indexing your admin folder, feeds and comment feeds, your robots.txt file would look something like this:

User-agent: *
Disallow: /wp-
Disallow: /feed/
Disallow: /comments/feed/
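Note that Disallow: /wp- is a prefix match: it blocks every URL whose path starts with /wp-, which covers /wp-admin/ and /wp-includes/ but also /wp-content/, where WordPress stores your uploaded images. If you would rather keep those images crawlable, a sketch like this (assuming the standard WordPress directory layout) lists the directories individually instead:

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /feed/
Disallow: /comments/feed/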

Are you using a robots.txt file on your blog? Let us know in the comments.