Manual:robots.txt/ru

The robots.txt file is part of the Robots Exclusion Standard and can help with search engine optimization. It tells web robots how to crawl and index a website. The robots.txt file must be placed in the root directory of the domain.

Examples

Prevent crawling of the entire site

This code prevents all bots from crawling any page on your site:

User-agent: *
Disallow: /

If you want to block only a particular spider (crawler), replace the asterisk with that spider's user-agent name.
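
For example, a minimal sketch that blocks only Googlebot (used here purely as an illustration) while leaving all other crawlers unrestricted:

# Block only Googlebot; every other crawler falls through to the "*" group
User-agent: Googlebot
Disallow: /

# No restrictions for anyone else
User-agent: *
Disallow: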

Prevent crawling of everything except the articles themselves

MediaWiki generates many pages that are useful only to humans: old revisions of pages and the diffs between them merely duplicate the content found in the current articles, while edit pages and most special pages are generated dynamically, which makes them useful only to human readers. Unless told otherwise, crawlers will try to index thousands of such pages and put a heavy load on the web server.

With short URLs

It is easy to prevent spiders from crawling non-article pages if you are using Wikipedia-style short URLs. Assuming articles are accessible through /wiki/Some_title and everything else is available through /w/index.php?title=Some_title&someoption=blah:

User-agent: *
Disallow: /w/

But be careful! If you accidentally put this line instead:

Disallow: /w

you will also block access to the /wiki directory, because crawlers treat the rule as a prefix match, and search engines will stop crawling and indexing your wiki!

Be aware that this solution will also cause CSS, JavaScript and image files to be blocked, so search engines like Google will not be able to render previews of wiki articles. To work around this, instead of blocking the entire /w directory, only index.php needs to be blocked:

User-agent: *
Disallow: /w/index.php?

This works because CSS and JavaScript are retrieved via /w/load.php. Alternatively, you could do it the way it is done on the Wikimedia farm:

User-agent: *
Allow: /w/load.php?
Disallow: /w/

Without short URLs

If you are not using short URLs, restricting robots is a bit harder. If you are running PHP as CGI and have not beautified your URLs, so that articles are accessible through /index.php?title=Some_title:

User-agent: *
Disallow: /index.php?diff=
Disallow: /index.php?oldid=
Disallow: /index.php?title=Help
Disallow: /index.php?title=Image
Disallow: /index.php?title=MediaWiki
Disallow: /index.php?title=Special:
Disallow: /index.php?title=Template
Disallow: /skins/

If you are running PHP as an Apache module and you have not beautified URLs, so that articles are accessible through /index.php/Some_title:

User-agent: *
Disallow: /index.php?
Disallow: /index.php/Help
Disallow: /index.php/MediaWiki
Disallow: /index.php/Special:
Disallow: /index.php/Template
Disallow: /skins/

The lines that do not end with a colon (:) also restrict those namespaces' talk pages; for example, /index.php?title=Help matches Help_talk: pages as well as Help: pages.

Non-English wikis may need to add various translations of the above lines.
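
For example, a German-language wiki would also need to cover the localized namespace names. A sketch for the CGI-style URLs above, assuming the default German namespace names Hilfe, Datei, Spezial and Vorlage (these may vary by MediaWiki version and configuration):

User-agent: *
Disallow: /index.php?title=Hilfe
Disallow: /index.php?title=Datei
Disallow: /index.php?title=Spezial:
Disallow: /index.php?title=Vorlage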

You may wish to omit the /skins/ restriction, since blocking /skins/ prevents images belonging to the skin from being accessed. Search engines which render preview images, such as Google, will show articles with missing images if they cannot access the /skins/ directory.

You may also try

Disallow: /*&

because some robots, such as Googlebot, accept this wildcard extension to the robots.txt standard, which stops most of what we don't want robots sifting through, just like the /w/ solution above. It does, however, suffer from the same limitation in that it blocks access to CSS, preventing search engines from correctly rendering preview images. It may be possible to solve this by adding another line, Allow: /load.php, but at the time of writing this is untested.
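
Combining the two ideas, a sketch (untested, as noted above) that keeps load.php reachable while blocking the wildcard pattern:

# Untested: allow CSS/JavaScript delivery, block everything with a query-string "&"
User-agent: *
Allow: /load.php
Disallow: /*&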

Allow indexing of raw pages by the Internet Archiver

You may wish to allow the Internet Archiver to index raw pages so that the raw wikitext of pages will be on permanent record. This way, it will be easier, in the event the wiki goes down, for people to put the content on another wiki. You would use:

# Allow the Internet Archiver to index action=raw and thereby store the raw wikitext of pages
User-agent: ia_archiver
Allow: /*&action=raw

Problems

Rate control

In robots.txt you can only specify which paths a bot is allowed to spider, not how fast it may request them. Even allowing just the plain article pages can be a huge burden when a single spider requests two or three pages per second across two hundred thousand pages.

Some bots have a custom specification for this; Inktomi responds to a "Crawl-delay" line which can specify the minimum delay in seconds between hits. (Their default is 15 seconds.)
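
A minimal sketch of such a rule; the 30-second value is only an illustration, and only crawlers that support the non-standard Crawl-delay directive will honour it:

# Ask supporting crawlers to wait at least 30 seconds between requests
User-agent: *
Crawl-delay: 30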

Evil bots

Sometimes a custom-written bot isn't very smart or is outright malicious and doesn't obey robots.txt at all (or obeys the path restrictions but spiders very fast, bogging down the site). It may be necessary to block specific user-agent strings or individual IPs of offenders.

More generally, request throttling can stop such bots without requiring your repeated intervention.

An alternative or complementary strategy is to deploy a spider trap.

Spidering vs. indexing

While robots.txt stops (non-evil) bots from downloading the URL, it does not stop them from indexing it. This means that they might still show up in the results of Google and other search engines, as long as there are external links pointing to them. (What's worse, since the bots do not download such pages, noindex meta tags placed in them will have no effect.) For single wiki pages, the __NOINDEX__ magic word might be a more reliable option for keeping them out of search results.
