Manual:Maxlag parameter

If you are running MediaWiki on a replicated database cluster (as Wikimedia does), high edit rates may cause the slave servers to lag. One way to mitigate slave lag is to have all bots and maintenance tasks automatically stop whenever lag rises above a certain value. In server-side scripts this is done with wfWaitForSlaves(). MediaWiki 1.10 introduced the maxlag parameter, which allows the same thing to be done in client-side scripts.

The maxlag parameter can be passed to index.php through a URL parameter, POST data, or a cookie. It is an integer number of seconds. For example, an edit request with maxlag=1 appended succeeds only if the lag is at most 1 second, while one with maxlag=-1 always fails and reports the actual lag without editing.
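As a minimal sketch of the URL-parameter form, the following builds an index.php request with maxlag attached. The base URL and page title are hypothetical examples, not a real wiki:

```python
from urllib.parse import urlencode

def build_edit_url(base="https://example.org/w/index.php", maxlag=5, **params):
    """Append a maxlag value (integer seconds) to an index.php request URL."""
    query = {"maxlag": maxlag, **params}
    return base + "?" + urlencode(query)

url = build_edit_url(title="Sandbox", action="edit", maxlag=1)
# → https://example.org/w/index.php?maxlag=1&title=Sandbox&action=edit
```

The same key/value pair can equally be sent as POST data; only the transport differs.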

If the specified lag is exceeded at the time of the request, a 503 status code is returned (or 200 for API requests), with a response body of the following format:

 Waiting for $host: $lag seconds lagged

The following HTTP headers are set:


 * Content-Type: text/plain
 * Retry-After: a recommended minimum number of seconds that the client should wait before retrying
 * X-Database-Lag: the number of seconds of lag of the most lagged slave
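A hedged sketch of reading these advisory headers on the client side; `headers` stands in for any mapping of HTTP response headers, and the fallback of 5 seconds when Retry-After is absent is an assumption, not documented behaviour:

```python
def parse_lag_headers(headers):
    """Return (retry_after, lag) in seconds, or None if not a lag error."""
    lowered = {k.lower(): v for k, v in headers.items()}
    if "x-database-lag" not in lowered:
        return None  # not a slave lag error
    # Retry-After is a recommended minimum wait; assume 5 s if missing
    retry_after = int(lowered.get("retry-after", "5"))
    lag = float(lowered["x-database-lag"])
    return (retry_after, lag)
```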

Recommended usage for Wikimedia wikis is as follows:


 * Use maxlag=5 (5 seconds). This is an appropriate non-aggressive value, used by most of our server-side scripts and set as the default in Pywikipediabot. Higher values mean more aggressive behaviour; lower values are nicer.
 * If you get a lag error, pause your script for at least 5 seconds before trying again. Be careful not to go into a busy loop.
 * It's possible that with this value you may get a low duty cycle at times of high database load. That's OK; just let your script wait for off-peak hours. We give humans priority at times of high load because we don't want to waste their time by rejecting their edits.
 * Unusually high or persistent lag should be reported to #wikimedia-tech on irc.freenode.net.
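The retry behaviour recommended above can be sketched as follows. `do_edit` is a hypothetical callable returning a (status, headers) pair from one edit attempt; the retry cap is an assumption added to guarantee termination:

```python
import time

def edit_with_maxlag(do_edit, max_retries=10, min_wait=5):
    """Retry an edit on lag errors, pausing at least min_wait seconds."""
    for attempt in range(max_retries):
        status, headers = do_edit()
        if "X-Database-Lag" not in headers:
            return status  # success, or a non-lag error to handle elsewhere
        # Honour Retry-After, but never wait less than min_wait seconds
        # and never busy-loop.
        wait = max(min_wait, int(headers.get("Retry-After", min_wait)))
        time.sleep(wait)
    raise RuntimeError("gave up: database lag persisted across retries")
```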

Note that Squid may also generate error messages with a 503 status code, due to timeout of an upstream server. Clients should treat these errors differently, because they may occur consistently when you try to perform a long-running expensive operation: repeating the operation on timeout would use excessive server resources and may leave your client in an infinite loop. You can distinguish between Squid errors and MediaWiki lag conditions using any of the following:


 * The X-Database-Lag header is distinctive to slave lag errors in MediaWiki
 * Squid errors do not set Retry-After
 * The X-Squid-Error header should be present in Squid errors
 * The response body in slave lag errors will match the regex /Waiting for [^ ]*: [0-9.-]+ seconds? lagged/
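The checks above can be combined into a small classifier. This is a sketch under the assumption that a 503 has already been received; the function name and return values are illustrative, not part of MediaWiki:

```python
import re

# Pattern the body of a MediaWiki slave lag error matches
LAG_BODY = re.compile(r"Waiting for [^ ]*: [0-9.-]+ seconds? lagged")

def classify_503(headers, body):
    """Return 'lag', 'squid', or 'unknown' for a 503 response."""
    if "X-Database-Lag" in headers or LAG_BODY.search(body):
        return "lag"    # MediaWiki lag condition: pause, then retry
    if "X-Squid-Error" in headers or "Retry-After" not in headers:
        return "squid"  # upstream timeout: do NOT blindly retry
    return "unknown"
```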

For testing purposes, you may intentionally make the software refuse a request by passing a negative value, for example maxlag=-1.

The maxlag parameter is checked in index.php, and also applies to the API.