Troubleshoot Robots.txt Index Coverage in Search Console

Lately, Google has again been actively and periodically updating its products. The changes go beyond appearance and touch many aspects; PageSpeed Insights, for example, changed how it works.

These changes also affect us Blogger users ...

One example: over the last few weeks, almost all Blogger users have been receiving an email about a new Index coverage issue, more or less like this:

Dear owner of, 

Search Console has identified that your site is affected by 1 new issue related to Index coverage. This means that Index coverage can be adversely affected in Google Search results. We recommend that you review and consider fixing this issue. 

New issue found:

Indexed, though blocked by robots.txt

The email you received probably looks more or less like that. This time I want to discuss the index coverage issue above, from what it means to how to solve it. Let us start with what it means.

What Is The Index Coverage Problem?

Index coverage issues are problems Google's robot encounters when crawling pages on your blog. Most of them are caused by your blog's robots.txt file. To check the robots.txt on your blog, open the following address:

https://your-domain.com/robots.txt

Replace your-domain.com with your blog's URL; just append /robots.txt to it.
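As a quick sketch of that step (the blog address below is a placeholder, not a real site), a small Python helper can build the robots.txt address from a blog URL and download the file:

```python
import urllib.request


def robots_url(blog_url):
    """Build the robots.txt address for a blog's base URL."""
    return blog_url.rstrip("/") + "/robots.txt"


def fetch_robots_txt(blog_url, timeout=10):
    """Download and return the robots.txt contents as text."""
    with urllib.request.urlopen(robots_url(blog_url), timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")


# Placeholder blog address for illustration
print(robots_url("https://example.blogspot.com/"))
# https://example.blogspot.com/robots.txt
```

Opening the same address in your browser works just as well; the helper is only a convenience if you want to check several blogs at once.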

This issue appears because Googlebot cannot crawl part of your blog. In other words, your blog is still indexed and shows up in Google search, but some elements of the blocked pages, such as their descriptions, are not fully indexed, which may seem bad for SEO at first glance.

Troubleshoot Robots.txt Index Coverage in Search Console

Not all website owners get this email, but I can be sure 99% of Blogger users will. Why is that?

Because by default, Blogger's robots.txt blocks crawlers from entering, crawling, and indexing certain pages. Take a look at the following robots.txt example:

User-agent: Mediapartners-Google

User-agent: *
Disallow: /search
Allow: /


In the robots.txt above, notice the Disallow: /search line: any page whose URL path begins with /search is disallowed, which means search pages will not be indexed by search engines. The search pages I mean include post-label pages, search-result pages, and the older/newer-post navigation pages.
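To see these rules in action, Python's standard urllib.robotparser can evaluate the robots.txt above against sample URLs (the blog address here is a placeholder):

```python
import urllib.robotparser

# The default Blogger robots.txt rules quoted above
ROBOTS_TXT = """\
User-agent: Mediapartners-Google

User-agent: *
Disallow: /search
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Label and search-result pages live under /search, so they are blocked...
print(rp.can_fetch("Googlebot", "https://example.blogspot.com/search/label/SEO"))
# False

# ...while ordinary post pages remain crawlable.
print(rp.can_fetch("Googlebot", "https://example.blogspot.com/2018/03/some-post.html"))
# True
```

This matches what Search Console is reporting: only the /search pages are blocked, while regular posts and pages are crawled normally.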

This is not without reason: search pages do not need to be indexed, because having them indexed is bad for both SEO and advertising. For more details, I have already discussed this in my earlier article on the dangers of letting search engines index search pages.

Well, this is what the email from Google Search Console means: Google's robot cannot crawl the search pages on your blog. That is not a problem; it does not hurt your blog's SEO, and it even helps it.

So, what should I do?

The right step is simply to ignore the email and not change robots.txt at all. Leave it at the default, because it is fine as it is.
