This is a guest post by Kelly Wilson.

There are times when you don’t want Google to index some of your website’s pages. While this may sound contradictory to the usual search engine optimization strategy, where the goal is to get your pages indexed for search, there are pages on your website that you simply don’t want Google to index, for a variety of reasons.

There are good reasons why people don’t want Google indexing some of their web pages. A page may contain so many links that the search engine could treat it as spam, or you may be testing a different design or platform for the time being.

You may also want to avoid duplicate content, which can hurt your Google ranking.

Whatever the case, these five ways to prevent Google from indexing your web pages will be useful whenever you want to keep content out of the search results.

1. Using the header status code to block Google

HTTP status codes let you control what both your website visitors and Googlebot receive when they request a page. A 403 Forbidden response tells the requester that the server refuses to serve the page, while a 301 Moved Permanently response redirects any request for the page to a different URL.

The latter is more favorable for your search engine optimization strategy, as it redirects your visitors to a new page on your website while signaling to Google that the old URL should be dropped from its index.
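In practice these status codes are set in your web server configuration, but as a rough illustration, here is a minimal sketch using Python's built-in http.server; the /old-page, /new-page and /private paths are placeholders, not anything from this post:

# Minimal sketch: return 301 for a retired page and 403 for a blocked one.
# /old-page, /new-page and /private are placeholder paths for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old-page":
            # 301 Moved Permanently: browsers and Googlebot follow the new URL.
            self.send_response(301)
            self.send_header("Location", "/new-page")
            self.end_headers()
        elif self.path == "/private":
            # 403 Forbidden: the server refuses to serve the page at all.
            self.send_response(403)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Public page</h1>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()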

2. Use the meta robots noindex tag to block individual pages on your website

Using the meta robots noindex tag is the most straightforward way of telling Google not to index a web page. It only blocks Google from indexing that page; your visitors can still visit and view it. The noindex tag prevents the search engine crawler from indexing the specific page it is placed on, not all of your pages.

It also has no significant impact on the viewing experience of your website visitors. To exclude a particular web page from the search engine, add this code to the page’s <head>:

<meta name="robots" content="noindex, nofollow">

This will block Google from indexing the page as well as following the links on it. You can monitor your website’s search performance using ranking tools (check SE Ranking) to confirm that the rest of your site still performs well after taking this action.
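If you want to double-check that the tag is actually being served, a quick sketch like the following can fetch a page and look for it; the URL is just a placeholder:

# Quick check: fetch a page and confirm the noindex meta tag is present.
# The URL below is a placeholder; point it at your own page.
import urllib.request

url = "https://example.com/private-page"
html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
if 'name="robots"' in html and "noindex" in html:
    print("noindex tag found")
else:
    print("noindex tag missing")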

3. Make your content accessible only through JavaScript and cookies

You can also keep your content out of Google’s index by making it accessible only to browsers that run JavaScript and accept cookies, which keeps most search engine bots from crawling the page.

Requiring more complicated JavaScript to run before the content appears is even more effective, as most search engine crawlers are not capable of executing it.
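As a rough sketch of the cookie approach, a small Python http.server handler could serve the real content only when a cookie is present, and otherwise return a stub page whose JavaScript sets the cookie and reloads; all names here are placeholders for illustration:

# Sketch: serve the full content only to clients that run JavaScript and
# accept cookies. Simple crawlers that do neither only ever see the stub page.
from http.server import BaseHTTPRequestHandler, HTTPServer

STUB = b"""<html><body>
<script>
  document.cookie = "js_ok=1; path=/";
  location.reload();
</script>
</body></html>"""

CONTENT = b"<html><body><h1>Content hidden from simple crawlers</h1></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        if "js_ok=1" in self.headers.get("Cookie", ""):
            self.wfile.write(CONTENT)   # cookie present: real content
        else:
            self.wfile.write(STUB)      # no cookie: JavaScript stub only

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()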

4. Using the robots.txt disallow directive

A robots.txt Disallow rule can keep content that has never been indexed from being picked up by Google, without preventing your website visitors from viewing the page. The directive signals the search engine not to crawl the files and folders you specify.

While Google can no longer index the page, this has no impact on your visitors’ ability to view and navigate it.
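For example, a robots.txt file placed at the root of your site might contain the following; the /private-folder/ path is only an illustration:

User-agent: *
Disallow: /private-folder/

You can verify that the rule covers a given URL with Python’s standard urllib.robotparser module (again, the URLs are placeholders):

# Verify that the Disallow rule actually blocks a given URL.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()
print(parser.can_fetch("*", "https://example.com/private-folder/page.html"))  # expect False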

5. Protect your content with a password

Password-protected web pages can only be accessed by users who have the password; not even the search engine bots can reach them. However, this measure should be balanced against the experience you want your visitors to have, because password-protecting your content will restrict access for some of them.
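Password protection is usually handled by your web server or CMS, but as a rough sketch of the idea, here is HTTP Basic authentication in Python’s http.server; the username and password are placeholders:

# Sketch: require a username and password before serving the page.
# Crawlers that cannot authenticate only ever receive 401 responses.
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder credentials for illustration only.
EXPECTED = "Basic " + base64.b64encode(b"user:secret").decode()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Authorization") == EXPECTED:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Password-protected page</h1>")
        else:
            # Ask the browser to prompt for credentials.
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="Private"')
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()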
