Manual:Maxlag parameter

From MediaWiki.org

If you are running MediaWiki on a replicated database cluster (as Wikimedia does), then high edit rates may cause the replica servers to lag. One way to mitigate replication lag is to have all bots and maintenance tasks automatically stop whenever lag goes above a certain value. MediaWiki 1.10 introduced the maxlag parameter, which allows the same thing to be done by client-side scripts. In 1.27, it was changed to work only for api.php requests.

The maxlag parameter can be passed to api.php through a URL parameter or POST data. It is an integer number of seconds. For example, this link shows metadata about the page "MediaWiki" only if the lag is less than 1 second, while this one (with -1 at the end) shows the actual lag without the metadata.
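As a sketch, a maxlag value can be appended to an action API request URL using only the Python standard library. The helper name below is illustrative, not part of any MediaWiki client library:

```python
from urllib.parse import urlencode

# Illustrative helper: build an action API query URL that includes maxlag.
def build_query_url(endpoint, title, maxlag=5):
    params = {
        "action": "query",
        "titles": title,
        "format": "json",
        "maxlag": maxlag,  # integer number of seconds
    }
    return endpoint + "?" + urlencode(params)

url = build_query_url("https://www.mediawiki.org/w/api.php", "MediaWiki", maxlag=1)
```

Sending the request itself (e.g. with `urllib.request.urlopen`) is unchanged; maxlag is just another query parameter.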

If the specified lag is exceeded at the time of the request, a 503 status code is returned (or 200 for API requests, see T33156), with a response body in the following format:

{
    "error": {
        "code": "maxlag",
        "info": "Waiting for $host: $lag seconds lagged",
        "*": "See https://www.mediawiki.org/w/api.php for API usage"
    }
}

The following HTTP headers are set:

  • Retry-After: a recommended minimum number of seconds that the client should wait before retrying
  • X-Database-Lag: The number of seconds of lag of the most lagged replica
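A client might use these headers to decide how long to back off before retrying. A minimal sketch, assuming the response headers are available as a plain dict (the helper name and the 5-second floor are illustrative):

```python
# Hypothetical helper: pick a wait time from the response headers above.
# Honours Retry-After but never waits less than a fixed minimum.
def backoff_seconds(headers, minimum=5):
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return max(int(retry_after), minimum)
    return minimum
```

For example, `backoff_seconds({"Retry-After": "8"})` yields 8, while a missing header falls back to the 5-second minimum recommended below.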

Recommended usage for Wikimedia wikis is as follows:

  • Use maxlag=5 (5 seconds). This is an appropriate non-aggressive value, and is the default in Pywikibot. Higher values mean more aggressive behaviour; lower values are nicer.
  • If you get a lag error, pause your script for at least 5 seconds before trying again. Be careful not to go into a busy loop.
  • It's possible that with this value, you may get a low duty cycle at times of high database load. That's OK, just let it wait for off-peak. We give humans priority at times of high load because we don't want to waste their time by rejecting their edits.
  • Unusually high or persistent lag should be reported to #wikimedia-tech on irc.freenode.net.
  • Interactive tasks (where a user is waiting for the result) may omit the maxlag parameter. Noninteractive tasks should always use it. See also API:Etiquette#Use maxlag parameter.
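The waiting advice above can be sketched as a retry loop. Everything here is illustrative: `do_request` is a placeholder callable returning a status code and body, and `sleep` is injectable so the loop can be exercised without real delays:

```python
import json
import time

def is_maxlag_error(body):
    # api.php may return status 200 with a maxlag error body (see T33156),
    # so inspect the body rather than relying on the status code alone.
    try:
        return json.loads(body).get("error", {}).get("code") == "maxlag"
    except ValueError:
        return False

def request_with_retries(do_request, max_attempts=4, pause=5, sleep=time.sleep):
    for _ in range(max_attempts):
        status, body = do_request()
        if is_maxlag_error(body):
            sleep(pause)  # wait at least 5 seconds; never busy-loop
            continue
        return status, body
    return status, body  # give up after repeated lag errors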

Note that the caching layer (Varnish or squid) may also generate error messages with a 503 status code, due to timeout of an upstream server. Clients should treat these errors differently, because they may occur consistently when you try to perform a long-running expensive operation. Repeating the operation on timeout would use excessive server resources and may leave your client in an infinite loop. You can distinguish between cache-layer errors and MediaWiki lag conditions using any of the following:

  • X-Database-Lag header is distinctive to replication lag errors in MediaWiki
  • No Retry-After in Varnish errors
  • X-Squid-Error header should be present in squid errors
  • The response body in replication lag errors will match the regex /Waiting for [^ ]*: [0-9.-]+ seconds? lagged/
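The checks above can be combined into a small classifier. This is a sketch under the assumption that response headers arrive as a plain dict; only the header names and the regex come from the list above, and the function name is hypothetical:

```python
import re

# Regex from the manual for replication-lag response bodies.
LAG_BODY = re.compile(r"Waiting for [^ ]*: [0-9.-]+ seconds? lagged")

def classify_503(headers, body):
    # X-Database-Lag (or the body pattern) is distinctive to MediaWiki lag.
    if "X-Database-Lag" in headers or LAG_BODY.search(body):
        return "replication-lag"
    # Squid errors carry X-Squid-Error; Varnish errors lack Retry-After.
    if "X-Squid-Error" in headers or "Retry-After" not in headers:
        return "cache-layer"
    return "unknown"
```

A replication-lag result is safe to retry after a pause; a cache-layer timeout on a long-running expensive request generally is not.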

For testing purposes, you can deliberately make your software's requests fail by passing a negative value, for example in the following URL: //www.mediawiki.org/w/api.php?action=query&titles=MediaWiki&format=json&maxlag=-1.

The maxlag parameter is checked in MediaWiki.php, and also applies to the action API.