Adding your Blogger sitemap to the robots.txt file
First of all, you should know what robots.txt is. It is a text file containing a few lines of simple code. It is saved on your website or blog's server, and it instructs web crawlers how to crawl and index your blog in the search results. With it you can restrict any web page on your blog, such as your blog's label pages, from web crawlers so that it doesn't get indexed in search engines. Always remember that search crawlers scan the robots.txt file before crawling any web page.
Normally, each blog hosted on Blogger has a default robots.txt file, which looks something like this:
User-agent: Mediapartners-Google
Disallow:
User-agent: *
Disallow: /search
Allow: /
Sitemap: http://example.blogspot.com/feeds/posts/default?orderby=UPDATED

The code above is normally divided into three sections.
1. User-agent: Mediapartners-Google
This code is for the Google AdSense robots, and it helps them serve better ads on your blog. Whether or not you are using Google AdSense on your blog, simply leave it as it is.
2. User-agent: *
This section is for all robots, marked with the asterisk (*). In the default settings, our blog's label links are restricted from being indexed by search crawlers, which means the web crawlers will not index our label page links because of the code below.
Disallow: /search
That means links with the keyword search just after the domain name will be ignored. See the example below, which is the link of a label page named SEO Tricks.
http://freedomforbloggingtricks.blogspot.co.uk/search/label/SEO Tricks
And if we remove Disallow: /search from the above code, then crawlers will access our entire blog and index and crawl all of its content and web pages.
Here, Allow: / refers to the homepage, which means web crawlers can crawl and index our blog's homepage.
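If you want to double-check how these rules behave, here is a small Python sketch (not part of the original tutorial) that tests a label URL and a post URL against the rules above using the standard library's urllib.robotparser; the blog address is just the same placeholder used in the examples.

import urllib.robotparser

# Load the same rules as the default Blogger robots.txt (placeholder blog address).
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /search",
    "Allow: /",
])

# Label pages live under /search, so generic crawlers are told to skip them.
print(rp.can_fetch("*", "http://example.blogspot.com/search/label/SEO%20Tricks"))  # False
# Normal post pages are allowed.
print(rp.can_fetch("*", "http://example.blogspot.com/2013/11/post-url.html"))  # True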
How to Disallow a Particular Post
Now, suppose we want to exclude a particular post from indexing. Then we can add the line below to the code.
Disallow: /yyyy/mm/post-url.html
Here, yyyy and mm refer to the publishing year and month of the post respectively. For example, if we published a post in November 2013, then we have to use the format below.
Disallow: /2013/11/post-url.html
Alternatively, you can simply copy the post URL and remove the blog address from the beginning.
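As a rough illustration of that shortcut, the part you need for the Disallow line is simply the path that comes after the blog's domain. A small Python sketch (the post URL is only a placeholder) could extract it like this:

from urllib.parse import urlparse

# Placeholder post URL copied from the browser's address bar.
post_url = "http://example.blogspot.com/2013/11/post-url.html"

# Keep only the path, i.e. drop the blog address from the beginning.
disallow_line = "Disallow: " + urlparse(post_url).path
print(disallow_line)  # Disallow: /2013/11/post-url.html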
How to Disallow a Particular Page
If we need to disallow a particular page, we can use the same method as above. Simply copy the page URL and remove the blog address from it, which will look something like this:
Disallow: /p/page-url.html
3. Sitemap: http://example.blogspot.com/feeds/posts/default?orderby=UPDATED
This line points search crawlers to the sitemap of our blog, which is simply the posts feed. If you want the sitemap to cover more posts, you can page through the posts feed with the start-index and max-results parameters instead, like this:
Sitemap: http://example.blogspot.com/atom.xml?redirect=false&start-index=1&max-results=500
Sitemap: http://example.blogspot.com/atom.xml?redirect=false&start-index=500&max-results=1000
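These paged sitemap lines follow a simple pattern, so if your blog has many posts you can generate them instead of typing them out. Here is a rough Python sketch; the blog address and the post count are placeholders, and the chunks here start at 1, 501, 1001 and so on so they do not overlap.

# Placeholder values: replace with your own blog address and an estimate of your post count.
blog = "http://example.blogspot.com"
total_posts = 1200
chunk = 500  # Blogger feeds are commonly cited as returning at most 500 entries per request

for start in range(1, total_posts + 1, chunk):
    print(f"Sitemap: {blog}/atom.xml?redirect=false&start-index={start}&max-results={chunk}")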
OK, if you think this is very complicated, don't worry; just follow the steps below and use the code I give.

Step 1: Log in to Blogger >> Dashboard >> Settings.
Step 2: Select "Search preferences" under the Settings category.
Step 3: Find "Custom robots.txt" and select the "Yes" radio button.
Step 4: Copy and paste the code below.
User-agent: *
Disallow: /search
Allow: /
Sitemap: http://sureyea.blogspot.in/atom.xml?redirect=false&start-index=1&max-results=500
Step 5: Save your changes.
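To confirm the change went live, you can open your blog's /robots.txt address in a browser, or fetch it with a small Python sketch like this (the blog address is a placeholder; use your own):

import urllib.request

# Placeholder blog address; replace with your own blog's domain.
url = "http://example.blogspot.com/robots.txt"

with urllib.request.urlopen(url) as response:
    print(response.read().decode("utf-8"))  # should show the custom rules you just saved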
Check the video below for further clarification.