Manual:Job queue

In MediaWiki 1.6, a job queue was introduced to perform long-running tasks asynchronously. The job queue is designed to hold many short tasks that are processed in batches.

Job execution on page requests

By default, one job is taken from the job queue and executed on each page request. This behavior is controlled by the $wgJobRunRate configuration variable. Setting this variable to 1 will run a job on each request. Setting it to a number between 0 and 1 will execute a job on average once every 1 / $wgJobRunRate requests. Setting it to 0 disables job execution during page requests entirely; in that case you must run runJobs.php manually or on a schedule.
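
For example, in LocalSettings.php (the values below are only illustrative; pick the one line you need):

$wgJobRunRate = 1;    // run one job on every page request (the default)
$wgJobRunRate = 0.01; // run a job on roughly one request in a hundred
$wgJobRunRate = 0;    // never run jobs on page requests; run runJobs.php instead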

MediaWiki version: 1.23

When enabled, jobs are executed by opening a socket and making an internal HTTP request to an unlisted special page, Special:RunJobs (implemented in SpecialRunJobs.php).

The configuration variable $wgRunJobsAsync has been added to force synchronous execution of jobs, for scenarios where making an internal HTTP request for job execution is not wanted.

When running jobs asynchronously, MediaWiki opens an internal HTTP connection to handle job execution and returns the contents of the page to the client immediately, without waiting for the job to complete. Otherwise, the job is executed in the same process and the client has to wait until the job is completed. In that case, a fatal error during job execution propagates to the client and aborts the page load.

Note that even if $wgRunJobsAsync is set to true, if PHP can't open a socket to make the internal HTTP request, it falls back to synchronous job execution.
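
For example, to force synchronous execution, you can add the following to LocalSettings.php:

$wgRunJobsAsync = false; // run jobs in the same process as the page request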

History

Jobs were originally run at the end of the page request, in the same process. This could cause problems if a PHP fatal error occurred during job execution, aborting the load of the page.

Changes introduced in MediaWiki 1.22

In MediaWiki 1.22, job queue execution on each page request was changed (Gerrit change 59797): instead of executing the job inside the same PHP process that is rendering the page, a new PHP CLI process is spawned to execute runJobs.php in the background. This only works if $wgPhpCli is set to an actual path and safe mode is off; otherwise, the old method is used.
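
For the background method to be used, $wgPhpCli must point to a PHP binary; the path below is only an example and will vary between systems:

$wgPhpCli = '/usr/bin/php'; // example path to the PHP CLI binary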

This new execution method could cause some problems:

  • If $wgPhpCli is set to an incompatible version of PHP (e.g. an outdated version), jobs may fail to run (fixed in 1.23).
  • If PHP open_basedir restrictions are in effect and $wgPhpCli is disallowed, jobs fail to run (bug 60208, fixed in 1.23).
  • Performance: even if the job queue is empty, the new PHP process is started anyway (bug 60210, fixed in 1.23).
  • Sometimes spawning the PHP process causes the server, or only the CLI process, to hang because the stdout and stderr descriptors are not properly redirected (bug 58719, fixed in 1.22).
  • It does not work for shared code (wiki farms), because it does not pass the additional parameters runJobs.php needs to identify the wiki running the job (bug 60698, fixed in 1.23).
  • Normal shell limits like $wgMaxShellMemory, $wgMaxShellTime, and $wgMaxShellFileSize are enforced on the runJobs.php process running in the background.

There is no way to revert to the old on-request job queue handling, apart from workarounds such as setting $wgPhpCli to false, which may cause other problems (bug 61387). On-request execution can be disabled completely by setting $wgJobRunRate = 0;, but then jobs no longer run on page requests at all and you must run runJobs.php periodically to process pending jobs.

Changes introduced in MediaWiki 1.23

In MediaWiki 1.23, the 1.22 execution method was abandoned; instead, jobs are triggered by MediaWiki making an HTTP connection to itself.

It was first designed as an API entry point (Gerrit change 113038) but was later changed to an unlisted special page, Special:RunJobs (Gerrit change 118336).

While this solves several of the bugs introduced in 1.22, it still requires loading a lot of PHP classes in memory in a new process to execute a job, and it makes a new HTTP request that the server must handle.

Performance issue

If the performance burden of this is too great, you can reduce $wgJobRunRate by putting something like this in your LocalSettings.php:

$wgJobRunRate = 0.01;

There is also a way to empty the job queue manually, for example after changing a template that is present on many pages: run the runJobs.php maintenance script. For example:

/path-to-my-wiki/maintenance$ php ./runJobs.php
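
If the queue is very large, you may want to bound a single run. runJobs.php accepts options for this; for example, to process at most 1000 jobs (the limit here is an arbitrary value):

/path-to-my-wiki/maintenance$ php ./runJobs.php --maxjobs 1000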

MediaWiki also allows you to set $wgJobRunRate to 0 and run jobs completely in the background using some sort of scheduler. For instance, to use cron to run the jobs every day at midnight, you would add the following line to your crontab file:

0 0 * * * /usr/bin/php /var/www/wiki/maintenance/runJobs.php > /var/log/runJobs.log 2>&1

Job examples

Updating links tables when a template changes

When a template changes, MediaWiki adds a job to the job queue for each article transcluding that template. Each job is a command to read an article, expand any templates, and update the links table accordingly. Null edits are therefore no longer necessary, although big operations may take a while to complete; spreading the work out this way also eases the strain on the server.

HTML cache invalidation

A wider class of operations can result in invalidation of the HTML cache for a large number of pages:

  • Changing an image (all the thumbnails have to be re-rendered, and their sizes recalculated)
  • Deleting a page (all the links to it from other pages need to change from blue to red)
  • Creating or undeleting a page (like above, but from red to blue)
  • Changing a template (all the pages that transclude the template need updating)

Except for template changes, these operations do not invalidate the links tables, but they do invalidate the HTML cache of all pages linking to that page, or using that image. Invalidating the cache of a page is a short operation; it only requires updating a single database field and sending a multicast packet to clear the caches. But if there are more than about 1000 to do, it takes a long time. By default, jobs are added when more than 500 pages need to be invalidated, one job per 500 operations (see Manual:$wgUpdateRowsPerJob).
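
The batch size can be tuned via $wgUpdateRowsPerJob in LocalSettings.php; for example (an illustrative value; the default mentioned above is 500):

$wgUpdateRowsPerJob = 250;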

Typical values

During a period of low load, the job queue might be zero. At Wikimedia, the job queue is, in practice, almost never zero. In off-peak hours, it might be a few hundred to a thousand. During a busy day, it might be a few million, but it can quickly fluctuate by 10% or more.[1]

Special:Statistics

Up to MediaWiki 1.16, the job queue length was shown on Special:Statistics. It was removed in 1.17 (rev:75272) and can now be queried through the API (API:Meta#siteinfo / si):
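
For example, a query like the following returns the site statistics, including the number of queued jobs (adjust the path to your wiki's api.php):

api.php?action=query&meta=siteinfo&siprop=statistics&format=json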


The number of jobs returned in the API result may be slightly inaccurate when using MySQL, which estimates the number of jobs in the database. This number can fluctuate based on the number of jobs that have recently been added or deleted. For other databases that do not support fast result-size estimation, the actual number of jobs is given.


