There are many blog posts out there detailing the advantages and features of robots.txt files, but does your site even need one? And does it have any impact on your search engine optimization strategy?
Robots.txt is used to block search engine spiders from crawling certain pages or directories. On most sites, private information is already inaccessible to crawlers because it sits behind a login and is delivered over SSL, so there is no point in having a robots.txt file.
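For context, blocking is done with simple User-agent and Disallow directives in a plain-text file at the site root. A minimal sketch, assuming a hypothetical /private/ directory you wanted to keep out of the crawl, would look like:

```
User-agent: *
Disallow: /private/
```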
“But Steve,” you say, “Google recommends using a robots.txt file to ‘tell the crawlers which directories can or cannot be crawled.’” Well, sure, but if you have no problem with Googlebot crawling everything, there is no need for a robots.txt file, right? Why tell it to do what it is already going to try to do: crawl everything!
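In fact, a robots.txt that permits everything is the functional equivalent of having no file at all. An empty Disallow rule means "block nothing":

```
User-agent: *
Disallow:
```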
There are really only two cases in which I recommend using a robots.txt file. The first (and I think the Google Webmaster guidelines say it best) is “to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines.” The second is when you cannot verify your site's XML sitemap for some reason (something that should really be fixed); in that case, you can use robots.txt to specify the sitemap's location on your server.
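A single file can cover both cases. As a sketch, assuming your auto-generated search results live under a hypothetical /search/ path and your sitemap sits at the site root, it might look like:

```
# Keep auto-generated search result pages out of the crawl
User-agent: *
Disallow: /search/

# Tell crawlers where the XML sitemap lives
Sitemap: https://www.example.com/sitemap.xml
```

Note that the Sitemap line takes a full URL, not a relative path, and applies to all crawlers regardless of the User-agent sections above it.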