Manual:Maxlag parameter/zh

If you are running MediaWiki on a replicated database cluster (as Wikimedia does), a high edit rate may cause the slave servers to lag. One way to mitigate slave lag is to have all bots and maintenance tasks automatically stop whenever lag goes above a certain value. In MediaWiki 1.10, the maxlag parameter was introduced, which allows the same thing to be done in client-side scripts. In MediaWiki 1.27, it was updated to only work on API requests.

The maxlag parameter can be passed to api.php through a URL parameter or POST data. It is an integer number of seconds. For example, [/w/api.php?action=query&titles=MediaWiki&format=json&maxlag=1 this link] shows metadata about the page "MediaWiki" unless the lag is greater than 1 second, while [/w/api.php?action=query&titles=MediaWiki&format=json&maxlag=-1 this one] (with -1 at the end) shows the actual lag without metadata.
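As an illustration, here is a minimal Python sketch that builds such a request URL; the endpoint URL and parameter values are examples only, and any MediaWiki api.php endpoint works the same way.

```python
from urllib.parse import urlencode

# Example endpoint (an assumption; substitute your wiki's api.php).
API_ENDPOINT = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "titles": "MediaWiki",
    "format": "json",
    "maxlag": 1,  # ask the server to refuse the request if lag > 1 second
}

# maxlag is an ordinary request parameter, so it works equally in
# URL query strings and in POST data.
url = API_ENDPOINT + "?" + urlencode(params)
print(url)
```

The same `params` dict could be sent as POST data instead; maxlag is honoured either way.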

If the specified lag is exceeded at the time of the request, a 503 status code is returned (or 200 for API requests; see T33156), with a response body of the following format:

 Waiting for $host: $lag seconds lagged

The following HTTP headers are set:


 * Retry-After: a recommended minimum number of seconds that the client should wait before retrying
 * X-Database-Lag: The number of seconds of lag of the most lagged slave
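A small Python sketch of how a client might read these headers from a refused response; the header values and the helper name `handle_lag_response` are illustrative, not part of any API.

```python
def handle_lag_response(status, headers):
    """Given a refused response, decide how long to wait before retrying.

    `headers` is a plain dict of response header fields as documented:
    Retry-After (recommended minimum wait) and X-Database-Lag
    (lag of the most lagged slave, in seconds).
    """
    retry_after = int(headers.get("Retry-After", 5))
    lag = float(headers.get("X-Database-Lag", 0))
    print(f"Server lagged by {lag}s; retrying in {retry_after}s")
    return retry_after

# Example refused response (status and values are illustrative):
wait = handle_lag_response(503, {"Retry-After": "7", "X-Database-Lag": "6.2"})
```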

Recommended usage for Wikimedia wikis is as follows:


 * Use maxlag=5 (5 seconds). This is an appropriate non-aggressive value, used as the default in Pywikibot. Higher values mean more aggressive behaviour; lower values are nicer.
 * If you get a lag error, pause your script for at least 5 seconds before trying again. Be careful not to go into a busy loop.
 * It's possible that with this value, you may get a low duty cycle at times of high database load. That's OK, just let it wait for off-peak. We give humans priority at times of high load because we don't want to waste their time by rejecting their edits.
 * Unusually high or persistent lag should be reported on irc.freenode.net.
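The recommendations above can be sketched as a simple retry loop in Python. The `do_request` callable is a hypothetical stand-in for whatever function performs one API call; the constants mirror the recommended values.

```python
import time

MAXLAG = 5     # recommended non-aggressive default (the Pywikibot default)
MIN_PAUSE = 5  # pause at least this many seconds after a lag error

def request_with_maxlag(do_request, max_attempts=10, sleep=time.sleep):
    """Retry a request until the server accepts it or attempts run out.

    `do_request(maxlag)` performs one API call and returns a tuple
    (ok, retry_after); this indirection is for illustration only.
    """
    for _ in range(max_attempts):
        ok, retry_after = do_request(MAXLAG)
        if ok:
            return True
        # Respect Retry-After, but wait at least MIN_PAUSE seconds
        # so the client never drops into a busy loop.
        sleep(max(retry_after, MIN_PAUSE))
    return False
```

At times of high database load this loop may spend most of its time sleeping; as noted above, that is the intended behaviour.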

Note that the caching layer (Varnish or Squid) may also generate error messages with a 503 status code, due to timeout of an upstream server. Clients should treat these errors differently, because they may occur consistently when you try to perform a long-running expensive operation. Repeating the operation on timeout would use excessive server resources and may leave your client in an infinite loop. You can distinguish between cache-layer errors and MediaWiki lag conditions using any of the following:


 * The X-Database-Lag header is specific to slave lag errors in MediaWiki
 * Varnish errors do not include a Retry-After header
 * The X-Squid-Error header should be present in Squid errors
 * The response body in slave lag errors will match the regex /Waiting for [^ ]*: [0-9.-]+ seconds lagged/
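These checks can be combined into a small Python classifier; the regex is an assumption based on MediaWiki's standard lag message, and `classify_503` is a hypothetical helper name.

```python
import re

# Pattern for the slave-lag error body (an assumption based on
# MediaWiki's standard "Waiting for $host: $lag seconds lagged" message).
LAG_BODY = re.compile(r"Waiting for [^ ]*: [0-9.-]+ seconds? lagged")

def classify_503(headers, body):
    """Distinguish a MediaWiki slave-lag error from a cache-layer timeout."""
    if "X-Database-Lag" in headers or LAG_BODY.search(body):
        return "db-lag"          # safe to retry after a pause
    if "X-Squid-Error" in headers or "Retry-After" not in headers:
        return "cache-timeout"   # likely an expensive request; do not retry blindly
    return "unknown"
```

A client would pause and retry on "db-lag", but back off entirely (or abort) on "cache-timeout".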

For testing purposes, you may intentionally make the software refuse a request by passing a negative value, such as in the following URL: [/w/api.php?action=query&titles=MediaWiki&format=json&maxlag=-1 /w/api.php?action=query&titles=MediaWiki&format=json&maxlag=-1].

The maxlag parameter is checked in Wiki.php, and also applies to the action API.