Manual:Robots.txt/de

robots.txt files are part of the Robots Exclusion Standard and can help with search engine optimization. They tell web crawlers how a site should be crawled. A robots.txt file must be placed in the web root of a domain.

Examples


Prevent all crawling
This code prevents all bots from indexing any pages on your site:
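The standard disallow-everything form, per the Robots Exclusion Standard:

```
User-agent: *
Disallow: /
```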

If you only want to block a certain spider, replace the asterisk with the spider's user agent.



Crawlen von Nicht-Artikelseiten verhindern
MediaWiki generates many pages that are only useful for live humans: old revisions and diffs tend to duplicate content found in articles. Edit pages and most special pages are dynamically generated, which makes them useful only to human editors and relatively expensive to serve. If not directed otherwise, spiders may try to index thousands of similar pages, overloading the webserver.



Mit Short-URLs
It is easy to prevent spiders from crawling non-article pages if you are using Wikipedia-style short URLs. Assuming articles are accessible through /wiki/ and everything else is available through /w/:
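With that layout, blocking the script path is sufficient (the /w/ path here assumes the standard MediaWiki layout; adjust it if your wiki differs):

```
User-agent: *
Disallow: /w/
```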

But be careful! If you accidentally set this line instead:
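That is, the same rule with the trailing slash missing (robots.txt rules are plain prefix matches, so /w also matches /wiki):

```
User-agent: *
Disallow: /w
```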

you'll block access to the /wiki directory, and search engines will drop your wiki!

Note that this solution also causes CSS, JavaScript, and image files to be blocked, so search engines such as Google cannot render previews of wiki articles. To work around this, instead of blocking the entire /w/ directory, only index.php need be blocked:
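A sketch of that narrower rule, assuming the default /w/index.php entry point:

```
User-agent: *
Disallow: /w/index.php
```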

This works because CSS and JavaScript are retrieved via /w/load.php. Alternatively, you could do it as it is done on the Wikimedia farm:
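The Wikimedia-style rules take roughly this form, using the nonstandard Allow directive that major crawlers support (a sketch, not a verbatim copy of Wikimedia's robots.txt):

```
User-agent: *
Allow: /w/load.php?
Disallow: /w/
```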



Without short URLs
If you are not using short URLs, restricting robots is a bit harder. If you are running PHP as CGI and you have not beautified URLs, so that articles are accessible through URLs like /index.php?title=Some_title:
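One possible set of rules is sketched below. The namespace names are the English defaults and should be adjusted for your wiki's language; the lines block diffs, old revisions, and non-article namespaces:

```
User-agent: *
Disallow: /index.php?diff=
Disallow: /index.php?oldid=
Disallow: /index.php?title=Help
Disallow: /index.php?title=Image
Disallow: /index.php?title=MediaWiki
Disallow: /index.php?title=Special:
Disallow: /index.php?title=Template
Disallow: /skins/
```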

If you are running PHP as an Apache module and you have not beautified URLs, so that articles are accessible through URLs like /index.php/Some_title:
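A corresponding sketch for the path-style URLs (again using the English default namespaces; adapt as needed):

```
User-agent: *
Disallow: /index.php?
Disallow: /index.php/Help
Disallow: /index.php/MediaWiki
Disallow: /index.php/Special:
Disallow: /index.php/Template
Disallow: /skins/
```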

The lines without the colons at the end restrict those namespaces' talk pages.

Non-English wikis may need to add various translations of the above lines.

You may wish to omit the /skins/ restriction, as this will prevent images belonging to the skin from being accessed. Search engines which render preview images, such as Google, will show articles with missing images if they cannot access the /skins/ directory.

You can also try:
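For instance, rules along these lines (a sketch; the exact patterns are an assumption, using the * wildcard to match query parameters that mark diffs, old revisions, and action pages):

```
User-agent: *
Disallow: /*action=
Disallow: /*diff=
Disallow: /*oldid=
```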

because some robots like Googlebot accept this wildcard extension to the robots.txt standard, which stops most of what we don't want robots sifting through, just like the /w/ solution above. This does, however, suffer from the same limitation in that it blocks access to CSS, preventing search engines from correctly rendering preview images. It may be possible to solve this by adding another Allow line for the stylesheet path; however, at the time of writing this is untested.



Allow the Internet Archiver to index raw pages
You may wish to allow the Internet Archiver to index raw pages so that the raw wikitext of pages will be on permanent record. This way, it will be easier, in the event the wiki goes down, for people to put the content on another wiki. You would use:
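A rule of this shape (a sketch using the wildcard and Allow extensions; ia_archiver is the Internet Archive's crawler, and the action=raw parameter returns a page's raw wikitext):

```
User-agent: ia_archiver
Allow: /*&action=raw
```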

Problems


Rate control
You can only specify what paths a bot is allowed to spider. Even allowing only the plain page area can be a huge burden when a single spider requests two or three pages per second across two hundred thousand pages.

Some bots have a custom specification for this; Inktomi responds to a "Crawl-delay" line, which can specify the minimum delay in seconds between hits. (Their default is 15 seconds.)
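For example, to request a minimum delay of 15 seconds between hits (Crawl-delay is a nonstandard extension and not all crawlers honor it; the value shown is illustrative):

```
User-agent: *
Crawl-delay: 15
```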



Bad bots
Sometimes a custom-written bot isn't very smart or is outright malicious and doesn't obey robots.txt at all (or obeys the path restrictions, but spiders very fast, bogging down the site). It may be necessary to block specific user-agent strings or individual IPs of offenders.

More generally, request throttling can stop such bots without requiring your repeated intervention.

An alternative or complementary strategy is to deploy a spider trap.



Spidering vs. Indexing
While robots.txt stops (non-evil) bots from downloading the URL, it does not stop them from indexing it. This means that they might still show up in the results of Google and other search engines, as long as there are external links pointing to them. (What's worse, since the bots do not download such pages, noindex meta tags placed in them will have no effect.) For single wiki pages, the __NOINDEX__ magic word might be a more reliable option for keeping them out of search results.