Manual:Robots.txt/ja

robots.txt files are part of the Robots Exclusion Standard. They tell web robots which parts of a site they may crawl and index. A robots.txt file must be placed in the web root of a domain.

Prevent all indexing
The rules below prevent all bots from indexing any page on your site. If you only want to block a certain spider, replace the asterisk with that spider's user agent string.
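For example, the standard two-line robots.txt that blocks every compliant crawler from the whole site (nothing here is site-specific):

  User-agent: *
  Disallow: /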

Prevent indexing of non-article pages
MediaWiki generates many pages that are only useful for live humans: old revisions and diffs tend to duplicate content found in articles. Edit pages and most special pages are dynamically generated, which makes them useful only to human editors and relatively expensive to serve. If not directed otherwise, spiders may try to index thousands of similar pages, overloading the webserver.

With short URLs
It is easy to prevent spiders from indexing non-article pages if you are using Wikipedia-style short URLs, with articles accessible through /wiki/Some_title and everything else through /w/index.php?title=Some_title&someoption=blah: in that case, simply disallow the /w/ script path, as sketched below. Be careful, though! If you accidentally leave the trailing slash off that rule, you'll block access to the /wiki directory as well, and search engines will drop your wiki!
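A sketch of the corresponding rules, assuming the standard /w/ script path. Note the trailing slash, which keeps the rule from also matching /wiki:

  User-agent: *
  Disallow: /w/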

Without short URLs
If you are not using short URLs, restricting robots is a bit harder. If you are running PHP as CGI and have not beautified URLs, articles are accessible through /index.php?title=Some_title; if you are running PHP as an Apache module, they are accessible through /index.php/Some_title. Rules for both setups are sketched below; the lines without colons at the end also restrict those namespaces' talk pages.
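The following are sketches rather than complete lists: they assume the wiki lives at the web root and name a few typical namespaces to exclude (Help, MediaWiki, Special, Template); adjust them to your own wiki. For PHP as CGI:

  User-agent: *
  Disallow: /index.php?diff=
  Disallow: /index.php?oldid=
  Disallow: /index.php?title=Help
  Disallow: /index.php?title=MediaWiki
  Disallow: /index.php?title=Special:
  Disallow: /index.php?title=Template

For PHP as an Apache module (the first rule also catches diffs, old revisions and other query-string views):

  User-agent: *
  Disallow: /index.php?
  Disallow: /index.php/Help
  Disallow: /index.php/MediaWiki
  Disallow: /index.php/Special:
  Disallow: /index.php/Template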

Non-English wikis may need to add various translations of the above lines.

You can also try a wildcard rule, as sketched below, because some robots like Googlebot accept this wildcard extension to the robots.txt standard. It stops most of what we don't want robots sifting through, just like the /w/ solution above.
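One possible form of such a rule, a sketch that blocks every URL containing a query string (diffs, old revisions, edit views and most special pages):

  User-agent: *
  Disallow: /*?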

Allow indexing of raw pages by the Internet Archiver
You may wish to allow the Internet Archiver to index raw pages so that the raw wikitext of pages will be on permanent record. This way, if the wiki ever goes down, it will be easier for people to put the content on another wiki. You would use something like the rules sketched below.
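A sketch: ia_archiver is the user agent of the Internet Archive's crawler, action=raw is MediaWiki's raw-wikitext view, and the path pattern assumes action=raw appears after the title in a query-string URL, so adjust it to your URL layout.

  # Let the Internet Archive crawler fetch raw wikitext
  User-agent: ia_archiver
  Allow: /*&action=raw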

Rate control
robots.txt only lets you specify which paths a bot is allowed to spider, not how fast it may request them. Even allowing just the plain page area can be a huge burden when one spider requests two or three pages per second across two hundred thousand pages.

Some bots have a custom specification for this; Inktomi responds to a "Crawl-delay" line which can specify the minimum delay in seconds between hits. (Their default is 15 seconds.)
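For crawlers that honor it, the directive looks like this (a sketch; the value is the minimum number of seconds between requests to your site):

  User-agent: *
  Crawl-delay: 15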

Evil bots
Sometimes a custom-written bot isn't very smart or is outright malicious and doesn't obey robots.txt at all (or obeys the path restrictions but spiders very fast, bogging down the site). It may be necessary to block specific user-agent strings or individual IPs of offenders.

More generally, request throttling can stop such bots without requiring your repeated intervention.

An alternative or complementary strategy is to deploy a spider trap.

Spidering vs. indexing
While robots.txt stops (non-evil) bots from downloading the URL, it does not stop them from indexing it. This means that they might still show up in the results of Google and other search engines, as long as there are external links pointing to them. (What's worse, since the bots do not download such pages, noindex meta tags placed in them will have no effect.) For single wiki pages, the __NOINDEX__ magic word might be a more reliable option for keeping them out of search results.