Manual:Maxlag parameter/es

If you are running MediaWiki on a replicated database cluster (like Wikimedia is), then high edit rates may cause the slave servers to lag. One way to mitigate slave lag is to have all bots and maintenance tasks automatically stop whenever lag goes above a certain value. In MediaWiki 1.10, the maxlag parameter was introduced to allow the same to be done in client-side scripts. In 1.27, it was updated to only work for requests to api.php.

The maxlag parameter can be passed to api.php through a URL parameter or POST data. It is an integer number of seconds. For example, [/w/api.php?action=query&titles=MediaWiki&format=json&maxlag=1 this link] shows metadata about the page "MediaWiki" unless the lag is greater than 1 second, while [/w/api.php?action=query&titles=MediaWiki&format=json&maxlag=-1 this one] (with -1 at the end) shows you the actual lag without metadata.
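
As an illustration, a request URL like the ones above can be assembled in Python. This is only a sketch: the endpoint URL is a placeholder for your wiki, and build_api_url is a hypothetical helper, not part of any MediaWiki client library.

```python
from urllib.parse import urlencode

def build_api_url(base, maxlag=5, **params):
    """Append maxlag and other query parameters to an api.php URL.

    maxlag defaults to 5, the value recommended below for Wikimedia wikis.
    """
    query = dict(params, format="json", maxlag=maxlag)
    return base + "?" + urlencode(query)

# Hypothetical wiki endpoint; substitute your own.
url = build_api_url("https://example.org/w/api.php",
                    action="query", titles="MediaWiki")
```

The same parameters can of course be sent as POST data instead of a query string.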

If the specified lag is exceeded at the time of the request, a 503 status code is returned (or 200 during API requests, see T33156), with a response body describing the lag.

The following HTTP headers are set:

 * Retry-After: a recommended minimum number of seconds that the client should wait before retrying
 * X-Database-Lag: The number of seconds of lag of the most lagged slave
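
Client code can read both headers when it receives a lag error. A minimal sketch, assuming the response headers are exposed as a dict; parse_lag_headers is a hypothetical helper:

```python
def parse_lag_headers(headers):
    """Return (retry_after_seconds, db_lag_seconds) from response headers.

    Either value is None when the corresponding header is absent.
    """
    retry = headers.get("Retry-After")
    lag = headers.get("X-Database-Lag")
    return (
        int(retry) if retry is not None else None,
        float(lag) if lag is not None else None,
    )
```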

The recommended usage for Wikimedia wikis is as follows:

 * Use maxlag=5 (5 seconds). This is an appropriate non-aggressive value, set as the default in Pywikibot. Higher values mean more aggressive behaviour; lower values are nicer.
 * If you get a lag error, pause your script for at least 5 seconds before trying again. Be careful not to go into a busy loop.
 * It's possible that with this value, you may get a low duty cycle at times of high database load. That's OK, just let it wait for off-peak. We give humans priority at times of high load because we don't want to waste their time by rejecting their edits.
 * Unusually high or persistent lag should be reported on irc.freenode.net.
 * Interactive tasks (where a user is waiting for the result) may omit the maxlag parameter. Noninteractive tasks should always use it. See also API:Etiquette.
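
The wait-and-retry advice above can be sketched as follows. make_request and is_lag_error are placeholders for your own HTTP layer, and the linear backoff is one reasonable choice rather than anything prescribed by MediaWiki:

```python
import time

def call_with_maxlag(make_request, is_lag_error,
                     retries=5, pause=5.0, sleep=time.sleep):
    """Call make_request(); on a lag error, pause and retry.

    Waits at least `pause` seconds (5 by default, per the guidance above)
    and backs off a little more on each attempt, so the client never
    busy-loops against a lagged cluster.
    """
    for attempt in range(retries):
        response = make_request()
        if not is_lag_error(response):
            return response
        sleep(pause * (attempt + 1))  # 5 s, 10 s, 15 s, ...
    raise RuntimeError("replication lag persisted after retries")
```

Injecting `sleep` keeps the sketch testable; a real script would simply use the default time.sleep.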

Note that the caching layer (Varnish or squid) may also generate error messages with a 503 status code, due to timeout of an upstream server. Clients should treat these errors differently, because they may occur consistently when you try to perform a long-running expensive operation. Repeating the operation on timeout would use excessive server resources and may leave your client in an infinite loop. You can distinguish between cache-layer errors and MediaWiki lag conditions using any of the following:

 * X-Database-Lag header is distinctive to slave lag errors in MediaWiki
 * No Retry-After in Varnish errors
 * X-Squid-Error header should be present in squid errors
 * The response body in slave lag errors matches a distinctive pattern describing the lag
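
The checks above can be combined into a single classification function. This is a sketch assuming the status code and headers are available as an int and a dict; it only encodes the signals listed on this page:

```python
def is_replication_lag_error(status, headers):
    """Best-effort check that an error response came from MediaWiki slave lag
    rather than from the caching layer (Varnish or squid)."""
    if "X-Database-Lag" in headers:
        return True          # distinctive to slave lag errors in MediaWiki
    if "X-Squid-Error" in headers:
        return False         # present in squid errors
    if "Retry-After" not in headers:
        return False         # Varnish errors carry no Retry-After
    return status == 503
```

A client that gets False here should treat the error as a possible timeout of an expensive operation and avoid blindly retrying.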

For testing purposes, you may intentionally make the software refuse a request by passing a negative value, such as in the following URL: [/w/api.php?action=query&titles=MediaWiki&format=json&maxlag=-1 /w/api.php?action=query&titles=MediaWiki&format=json&maxlag=-1].

The maxlag parameter is checked during request processing, and also applies to the action API.