Manual:Robots.txt

The Robots Exclusion Standard allows advising web robots by means of the file /robots.txt, e.g. for this project /robots.txt.

Nice robot
In your robots.txt file, you would be wise to deny access to the script directory, and hence to diffs, old revisions, contribs lists, and so on, which could otherwise severely raise the load on the server.

Using URL rewriting
If you use a setup like Wikipedia's, where plain pages are reached via /wiki/Some_title and everything else via /w/index.php?title=Some_title&someoption=blah, it's easy:

User-agent: *
Disallow: /w/

Be careful, though! If you put this line by accident:

Disallow: /w

you'll block access to the /wiki directory as well, because robots.txt rules match path prefixes, and search engines will drop your wiki!
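This prefix-matching behaviour is easy to check locally. A quick sketch using Python's standard urllib.robotparser (the example.com hostname is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

# "Disallow: /w" is a prefix rule: it matches /w/, /wiki/, /whatever...
bad = RobotFileParser()
bad.parse(["User-agent: *", "Disallow: /w"])

# "Disallow: /w/" only matches paths under the /w/ directory.
good = RobotFileParser()
good.parse(["User-agent: *", "Disallow: /w/"])

print(bad.can_fetch("*", "http://example.com/wiki/Main_Page"))   # False: blocked!
print(good.can_fetch("*", "http://example.com/wiki/Main_Page"))  # True
print(good.can_fetch("*", "http://example.com/w/index.php"))     # False
```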

Not using URL rewriting
If not using URL rewriting, this is difficult to do very cleanly. Here we choose an aggressive example of keeping robots' noses out of non-core namespaces:

User-agent: *
Disallow: /index.php?diff=
Disallow: /index.php?oldid=
Disallow: /index.php?title=Help
Disallow: /index.php?title=Image
Disallow: /index.php?title=MediaWiki
Disallow: /index.php?title=Special:
Disallow: /index.php?title=Template
Disallow: /skins

(The lines without colons at the end also cover the corresponding Talk pages. Note that non-English wikis may, in addition, need to add various translations of the above, in various character encodings.) We also tack on

Disallow: /*&

as some robots, like Googlebot, accept this wildcard extension to the robots.txt standard, which indeed stops most of what we don't want robots sifting through, much like the /w/ solution above.

Problems
Unfortunately, there are three big problems with robots.txt:

Rate control
You can only specify which paths a bot is allowed to spider. Even allowing only the plain page area can be a huge burden when a single spider requests two or three pages per second across two hundred thousand pages; at that rate, one full crawl takes roughly a day of sustained load.

Some bots honor a custom extension for this: Inktomi responds to a "Crawl-delay" line, which specifies the minimum delay in seconds between hits. (Their default is 15 seconds.)
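For example, to slow Inktomi's crawler (which identifies itself as Slurp) to one hit per 30 seconds, you would add a per-agent section like this (the agent name and delay are illustrative):

```
User-agent: Slurp
Crawl-delay: 30
```

Note that Crawl-delay is a non-standard extension; bots that don't recognize it simply ignore the line.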

Bots that don't behave well by default could be forced into line with some sort of request throttling.

Don't index vs don't spider

 * Note: it seems this section may be outdated.

Most search engine spiders will consider a match on a robots.txt 'Disallow' entry to mean that they should not return that URL in search results. Google is a rare exception; its behaviour is technically within spec but very annoying: it will index such URLs and may return them in search results, albeit without being able to show the content, title, or anything other than the URL itself.

This means that sometimes "edit" URLs will turn up in Google results, which is very VERY annoying.

The only way to keep a URL out of Google's index is to let Google crawl the page and see a robots meta tag specifying noindex. Although this meta tag is already present in the edit-page HTML template, Google does not spider the edit pages (because they are forbidden by robots.txt) and therefore never sees the tag.
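The tag in question is the standard robots meta element, emitted in the page's head; a minimal example (the exact content value MediaWiki emits may differ):

```html
<!-- Tells compliant crawlers not to index this URL. -->
<meta name="robots" content="noindex">
```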

With our current system, this would be difficult to special case. It would be technically possible to exclude the edit pages from the disallow line in robots.txt, but this would require reworking some functions.

Evil bots
Sometimes a custom-written bot isn't very smart or is outright malicious and doesn't obey robots.txt at all (or obeys the path restrictions but spiders very fast, bogging down the site). It may be necessary to block specific user-agent strings or individual IPs of offenders.

Consider also request throttling.

Probably the best option in the case of 'bad bots' is to set up a spider trap. The idea is to Disallow a directory such as yourdomain.tld/trap/ in robots.txt, then have a small script log the IP address of any client that requests /trap/ anyway and add that IP to a block list (for example, a Deny rule in the parent directory's .htaccess). Thus, any robot that ignores robots.txt is IP-banned permanently!
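A minimal sketch of the banning step in Python; the file name, function names, and IPv4-only check are assumptions, and a real trap would also need the web-server side that routes /trap/ requests into this code:

```python
import re

def deny_rule(ip: str) -> str:
    """Return an Apache 2.2-style deny line for one IPv4 address."""
    if not re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", ip):
        raise ValueError(f"not an IPv4 address: {ip!r}")
    return f"Deny from {ip}"

def ban_ip(ip: str, htaccess_path: str) -> None:
    """Append a deny rule to .htaccess unless it is already present."""
    rule = deny_rule(ip)
    try:
        with open(htaccess_path) as f:
            if rule in f.read().splitlines():
                return  # already banned, keep the file idempotent
    except FileNotFoundError:
        pass  # no .htaccess yet; it will be created below
    with open(htaccess_path, "a") as f:
        f.write(rule + "\n")
```

Checking for an existing rule before appending keeps repeat offenders from bloating the file.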

A somewhat outdated description of a spider trap is available here.

Blocking via .htaccess
If the robot does not obey robots.txt, we may still, for example, enforce the above Disallow: /*& line via Apache's .htaccess file:

RewriteEngine on
RewriteCond %{QUERY_STRING} &
RewriteCond %{HTTP_USER_AGENT} http://.*\.com [OR]
RewriteCond %{REMOTE_ADDR} ^124\.115\.0
RewriteRule . - [F]

Here we block access to any page with a "&" in the query string, for requests whose User-Agent contains an embedded URL or that come from the specific nasty IP range. We are guessing that every HTTP_USER_AGENT with a URL embedded in it is a robot (although the reverse is not true). One could likewise guess that every HTTP_USER_AGENT containing an email address is a robot, and match on "@" too.

Revenge via .htaccess
Some go further and actually take revenge via .htaccess. However, such schemes might end up becoming springboards for further abuse.