The Robots.txt File For Your Website

For a search engine to keep its listings up to date and present the most accurate results, it performs an action known as a ‘crawl’, visiting your robots.txt file on its way. During a crawl, the search engine sends out a ‘bot’ (sometimes known as a ‘spider’) to crawl the Internet. The bot finds new pages, updated pages, and pages it did not previously know existed. The end result is that the search engine results are updated to include all of the pages found on the last crawl. It is simply a method of finding sites on the Internet.

However, there may be some instances where you have a page on your website that you do not want included in search engine results. For example, you may be in the process of building a page and do not want it listed until it is completed. In these instances, you can use a file known as robots.txt to tell a search engine bot to ignore your chosen pages.
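As an illustration, a robots.txt file placed at the root of your site might look like the following. The `/drafts/` path is just a hypothetical example of a section you want bots to skip:

```
User-agent: *
Disallow: /drafts/
```

The `User-agent: *` line means the rule applies to all bots, and each `Disallow` line names a path bots are asked not to crawl.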

What Does The Robots.txt File Do?

Robots.txt is basically a way of telling a search engine “don’t come in here, please”. When a bot finds a robots.txt file, it ‘reads’ it and duly ignores all the URLs disallowed within, so those pages do not appear in search results. It isn’t a failsafe: robots.txt is a request for bots to ignore the pages rather than a complete block, but most bots will obey it. When you are ready for a page to be included in search results, simply modify your robots.txt file and remove the rule covering that page.
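To see how a well-behaved bot interprets the file, here is a minimal sketch using Python’s standard `urllib.robotparser`. The domain and the `/drafts/` rule below are hypothetical examples, not part of any real site:

```python
from urllib.robotparser import RobotFileParser

# Rules a site might serve at https://example.com/robots.txt
rules = """\
User-agent: *
Disallow: /drafts/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A well-behaved bot asks before fetching each URL:
print(parser.can_fetch("*", "https://example.com/drafts/new-page.html"))  # False
print(parser.can_fetch("*", "https://example.com/index.html"))            # True
```

This mirrors the behaviour described above: the bot is not physically blocked from the disallowed page; it simply checks the rules and chooses not to crawl it.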

You can create a robots.txt file for your website with our Robots.txt File Generator.
