In MediaWiki 1.6, a job queue was introduced to perform long-running tasks asynchronously. The job queue is designed to hold many short tasks using batch processing.
It is recommended that you instead schedule jobs to run completely in the background, via the command line. By default, jobs are run at the end of a web request. Disable this default behaviour by setting $wgJobRunRate = 0;.
Run runJobs.php as the same user the web server runs as, to ensure that filesystem permissions are correctly accounted for if jobs touch uploaded files.
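For example, if your web server runs as www-data (the user name here is only illustrative; adjust it and the wiki path for your setup):
sudo -u www-data php /var/www/wiki/maintenance/runJobs.php --maxjobs 100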
You could use cron to run the jobs every hour. Add the following to your crontab file:
0 * * * * /usr/bin/php /var/www/wiki/maintenance/runJobs.php --maxtime=3600 > /var/log/runJobs.log 2>&1
Using cron makes it easy to get started, but can make email notifications and cascading template updates feel slow (they may have to wait up to an hour). Consider using one of the approaches below to set up a continuous job runner instead.
If you have shell access and the ability to create init scripts, you can create a simple service to run jobs as they become available, and also throttle them to prevent the job runner from monopolizing the CPU resources of the server:
Create a bash script, for example at /usr/local/bin/mwjobrunner:
#!/bin/bash
# Put the MediaWiki installation path on the line below
MW_INSTALL_PATH=/home/www/www.mywikisite.example/mediawiki
RUNJOBS=$MW_INSTALL_PATH/maintenance/runJobs.php
echo Starting job service...
# Wait a minute after the server starts up to give other processes time to get started
sleep 60
echo Started.
while true; do
	# Job types that need to be run ASAP no matter how many of them are in the queue
	# Those jobs should be very "cheap" to run
	php $RUNJOBS --type="enotifNotify"
	# Everything else, limit the number of jobs on each batch
	# The --wait parameter will pause the execution here until new jobs are added,
	# to avoid running the loop without anything to do
	php $RUNJOBS --wait --maxjobs=20
	# Wait some seconds to let the CPU do other things, like handling web requests, etc
	echo Waiting for 10 seconds...
	sleep 10
done
Depending on how fast the server is and the load it handles, you can adapt the number of jobs to run in each cycle and the number of seconds to wait between cycles.
Make the script executable (for example, chmod 755 /usr/local/bin/mwjobrunner).
If using systemd, create a new service unit by creating the file /etc/systemd/system/mw-jobqueue.service. Set its User parameter to the user that runs PHP on your web server:
[Unit]
Description=MediaWiki Job runner

[Service]
ExecStart=/usr/local/bin/mwjobrunner
Nice=10
ProtectSystem=full
User=php-fpm
OOMScoreAdjust=200
StandardOutput=journal

[Install]
WantedBy=multi-user.target
Enable it and start it with these commands:
sudo systemctl enable mw-jobqueue
sudo systemctl start mw-jobqueue
sudo systemctl status mw-jobqueue
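To check that the runner stays up and to follow its output, you can tail its journal (using the unit name from the commands above):
journalctl -u mw-jobqueue -f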
Job execution on page requests
By default, at the end of each web request, one job is taken from the job queue and executed.
This behavior is controlled by the $wgJobRunRate configuration variable.
Setting this variable to 1 will run a job on each request.
Setting this variable to 0 will disable the execution of jobs during web requests completely, so that you can instead run runJobs.php manually or periodically from the command line.
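For example, in LocalSettings.php:
$wgJobRunRate = 0;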
If the performance burden of running jobs on every web request is too great but you are unable to run jobs from the command line, you can reduce $wgJobRunRate to a number between 1 and 0. This means a job will execute on average every 1 / $wgJobRunRate requests.
$wgJobRunRate = 0.01;
There is also a way to empty the job queue manually, for example after changing a template that's present on many pages. Simply run the maintenance/runJobs.php maintenance script:
/path-to-my-wiki/maintenance$ php ./runJobs.php
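The script also accepts options to bound how much work a single invocation does; for example (the job type and limits here are only illustrative):
/path-to-my-wiki/maintenance$ php ./runJobs.php --type refreshLinks --maxjobs 500 --maxtime 300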
The configuration variable $wgRunJobsAsync has been added to force the execution of jobs synchronously (by setting it to false), in scenarios where making an internal HTTP request for job execution is not wanted.
When running jobs asynchronously, MediaWiki will open an internal HTTP connection for handling the execution of jobs, and will return the contents of the page to the client immediately, without waiting for the job to complete. Otherwise, the job will be executed in the same process and the client will have to wait until the job is completed. When the job does not run asynchronously, a fatal error during job execution will propagate to the client and abort the page load.
Note that even if $wgRunJobsAsync is set to true, PHP will fall back to synchronous job execution if it can't open a socket to make the internal HTTP request. However, there are a variety of situations where this internal request may fail without falling back to synchronous execution, and jobs won't be run. Starting with MediaWiki 1.28.1 and 1.27.2, $wgRunJobsAsync defaults to false.
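If you want to force synchronous execution explicitly rather than rely on the defaults of your version, set this in LocalSettings.php:
$wgRunJobsAsync = false;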
The deferred updates mechanism was introduced in MediaWiki 1.23 and received major changes during MediaWiki 1.27 and 1.28. It allows some work to be executed at the end of the request, after all the content has been sent to the browser, instead of queuing it as a job that might only be executed hours later. The main goal of this alternative mechanism is to speed up MediaWiki requests while still executing some work as soon as possible at the end of the request.
Some updates can be both deferrable updates and jobs, if specified as such (see the sketch below).
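A minimal sketch of deferring work from extension code, assuming MediaWiki's core DeferredUpdates class (the POSTSEND stage constant exists since 1.27; the deferred function itself is hypothetical):
DeferredUpdates::addCallableUpdate( function () {
	// Runs at the end of the current request, after the response has been
	// sent to the browser, instead of waiting in the job queue.
	recalculateExpensiveStatistics(); // hypothetical expensive task
}, DeferredUpdates::POSTSEND );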
Changes in MediaWiki 1.22
In MediaWiki 1.22, the job queue execution on each page request was changed (Gerrit change 59797) so that, instead of executing the job inside the same PHP process that's rendering the page, a new PHP CLI command is spawned to execute runJobs.php in the background. This only works if $wgPhpCli is set to an actual path and safe mode is off; otherwise, the old method is used.
This new execution method could cause some problems:
- If $wgPhpCli is set to an incompatible version of PHP (e.g. an outdated version), jobs may fail to run (fixed in 1.23).
- Jobs may fail to run if open_basedir restrictions are in effect and $wgPhpCli is disallowed (task T62208, fixed in 1.23).
- Performance: even if the job queue is empty, the new PHP process is started anyway (task T62210, fixed in 1.23).
- Sometimes spawning the PHP process causes the server, or just the CLI process, to hang due to stdout and stderr descriptors not being properly redirected (task T60719, fixed in 1.22)
- It does not work for shared code (wiki farms), because it doesn't pass additional required parameters to runJobs.php to identify the wiki that's running the job (task T62698, fixed in 1.23)
- Normal shell limits like $wgMaxShellMemory, $wgMaxShellTime and $wgMaxShellFileSize are enforced on the runJobs.php process that's being executed in the background.
There's no way to revert to the old on-request job queue handling, besides setting $wgPhpCli to false, for example, which may cause other problems (task T63387).
It can be disabled completely by setting $wgJobRunRate = 0;, but jobs will then no longer run on page requests, and you must explicitly run runJobs.php to periodically run pending jobs.
Changes in MediaWiki 1.23
In MediaWiki 1.23, the 1.22 execution method is abandoned, and jobs are triggered by MediaWiki making an HTTP connection to itself.
While this solves various bugs introduced in 1.22, it still requires loading many PHP classes in memory in a new process to execute a job, and it also makes a new HTTP request that the server must handle.
Changes in MediaWiki 1.27
In MediaWiki 1.25 and MediaWiki 1.26, use of $wgRunJobsAsync would sometimes cause jobs not to get run if the wiki had a custom $wgServerName configuration. This was fixed in MediaWiki 1.27 (task T107290).
Changes in MediaWiki 1.28
Between MediaWiki 1.23 and MediaWiki 1.27, use of $wgRunJobsAsync would cause jobs not to get run if MediaWiki requests were for a server name or protocol that did not match the currently configured one (e.g. when supporting both HTTP and HTTPS, or when MediaWiki is behind a reverse proxy that redirects to HTTPS). This was fixed in MediaWiki 1.28 (task T68485).
Changes in MediaWiki 1.29
In MediaWiki 1.27.0 to 1.27.3 and 1.28.0 to 1.28.2, when $wgJobRunRate is set to a value greater than 0, an error like the one below may appear in error logs, or on the page:
PHP Notice: JobQueueGroup::__destruct: 1 buffered job(s) never inserted
As a result of this error, certain updates may fail in some cases, like category members not being updated on category pages, or recent changes displaying edits of deleted pages - even if you manually run runJobs.php to clear the job queue. It has been reported as a bug (task T100085) and was solved in 1.27.4 and 1.28.3.
When a template changes, MediaWiki adds a job to the job queue for each article transcluding that template. Each job is a command to read an article, expand any templates, and update the link table accordingly. Previously, the host articles would remain outdated until either their parser cache expires or until a user edits the article.
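You can watch these jobs accumulate and drain with the showJobs.php maintenance script that ships with MediaWiki; the --group option breaks the total down by job type:
php maintenance/showJobs.php --group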
HTML cache invalidation
A wider class of operations can result in invalidation of the HTML cache for a large number of pages:
- Changing an image (all the thumbnails have to be re-rendered, and their sizes recalculated)
- Deleting a page (all the links to it from other pages need to change from blue to red)
- Creating or undeleting a page (like above, but from red to blue)
- Changing a template (all the pages that transclude the template need updating)
Except for template changes, these operations do not invalidate the links tables, but they do invalidate the HTML cache of all pages linking to that page or using that image. Invalidating the cache of a page is a short operation; it only requires updating a single database field and sending a multicast packet to clear the caches. But if there are more than about 1000 to do, it takes a long time. By default, one job is added per 300 operations (see $wgUpdateRowsPerJob), so editing a template used on 60,000 pages, for example, queues about 200 jobs.
Note, however, that even if purging the cache of a page is a short operation, reparsing a complex page that is not in the cache may be expensive, especially if a highly used template is edited, causing lots of pages to be purged in a short period of time, while your wiki has lots of concurrent visitors loading a wide spread of pages.
This can be mitigated by reducing the number of pages purged in a short period of time: reduce $wgUpdateRowsPerJob to a small number (20, for example) and also set $wgJobBackoffThrottling for htmlCacheUpdate to a low number (5, for example).
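In LocalSettings.php this would look like the following (the values are only illustrative starting points):
$wgUpdateRowsPerJob = 20;
$wgJobBackoffThrottling['htmlCacheUpdate'] = 5;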
Audio and video transcoding
When using TimedMediaHandler to process local uploads of audio and video files, the job queue is used to run the potentially very slow creation of derivative transcodes at various resolutions/formats.
These are not suitable for running on web requests -- you will need a background runner.
It's recommended to set up separate runners for the webVideoTranscode and webVideoTranscodePrioritized job types if possible. These two queues process different subsets of files -- the first for high-resolution HD videos, and the second for lower-resolution videos and audio files, which process more quickly.
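For example, two separate background runners could be started like this (paths are illustrative; each command could live in its own service unit like the one shown earlier):
php /var/www/wiki/maintenance/runJobs.php --type webVideoTranscodePrioritized --maxjobs 10
php /var/www/wiki/maintenance/runJobs.php --type webVideoTranscode --maxjobs 1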
During a period of low load, the job queue might be empty. At Wikimedia, the job queue is, in practice, almost never empty. In off-peak hours, it might hold a few hundred to a thousand jobs. During a busy day, it might hold a few million, but it can quickly fluctuate by 10% or more.
The number of jobs returned in the API result may be slightly inaccurate when using MySQL, which estimates the number of jobs in the database. This number can fluctuate based on the number of jobs that have recently been added or deleted. For other databases that do not support fast result-size estimation, the actual number of jobs is given.
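The queue size is exposed through the siteinfo statistics of the action API, for example:
api.php?action=query&meta=siteinfo&siprop=statistics&format=json
The jobs field of the response contains the (possibly estimated) number of queued jobs.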