The robots exclusion protocol, commonly known as robots.txt, is a standard used by websites to communicate with web crawlers and other web robots. It specifies how to inform a robot which areas of the website should not be processed or scanned. Robots are often used by search engines to categorize websites and their pages. Not all robots cooperate with the standard; email harvesters, spambots, malware, and robots that scan for security vulnerabilities may even start with the portions of the website where they have been told to stay out. The standard is different from, but can be used in conjunction with, Sitemaps, a robot inclusion standard for websites.
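
For illustration, a minimal robots.txt sketch is shown below. The file sits at the root of the site (for example, https://example.com/robots.txt); the paths and the crawler name "BadBot" are hypothetical placeholders, not part of any real site.

    # Rules for all crawlers: stay out of two hypothetical directories
    User-agent: *
    Disallow: /private/
    Disallow: /tmp/

    # Rules for one specific (hypothetical) crawler: exclude it from the whole site
    User-agent: BadBot
    Disallow: /

A cooperating crawler requests this file before fetching other pages and skips any path matching a Disallow rule for its user agent; robots that ignore the standard simply fetch whatever they like.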