Manual:Job queue/For developers

Jobs are non-urgent tasks. For a general introduction and for the management of jobs, see Manual:Job queue.

Differences from deferred updates
Deferred updates (also called deferrable updates) are functions executed at the end of a MediaWiki request and at the end of the execution of each job, if possible after the current Web request has been closed. They are useful to postpone time-consuming tasks in order to speed up the main MediaWiki request.

Jobs will be executed at a later time, possibly hours or days after the request, whereas deferrable updates will be executed at the end of the request. Hence deferrable updates should be used for urgent tasks, and jobs for non-urgent ones.
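The contrast above can be sketched in code. This is a minimal illustration, not a definitive implementation: the job class name and its parameters are assumptions for the example, and DeferredUpdates::addCallableUpdate() is the usual way to register a simple deferrable update.

```php
// Deferrable update: runs at the end of the current request (urgent work).
DeferredUpdates::addCallableUpdate( function () {
	// e.g. refresh a cache entry touched by this request
} );

// Job: runs later, whenever a job runner picks it up (non-urgent work).
// The class and parameters here are illustrative assumptions.
$job = new SynchroniseThreadArticleDataJob( $title, [ 'table' => 'thread' ] );
JobQueueGroup::singleton()->push( $job );
```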

Registering a job
To use the job queue to run your non-urgent tasks, you need to do these things:

Create a Job subclass
You need to create a class that, given parameters and a Title, will perform your non-urgent task.
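A minimal sketch of such a subclass, assuming a job type named synchroniseThreadArticleData (the class name and parameters are illustrative):

```php
class SynchroniseThreadArticleDataJob extends Job {
	public function __construct( Title $title, array $params ) {
		// 'synchroniseThreadArticleData' is the job queue type;
		// it must match the key registered in $wgJobClasses.
		parent::__construct( 'synchroniseThreadArticleData', $title, $params );
	}

	/**
	 * Perform the non-urgent task, using $this->title and $this->params.
	 * @return bool Success
	 */
	public function run() {
		// ... actual work goes here ...
		return true;
	}
}
```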

Add your Job class to the global list
Add the Job class to the global $wgJobClasses array. In extensions, this is usually done in the main extension file. Make sure the key name is unique.

If your extension uses an extension.json descriptor, you can use its JobClasses section:
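For instance, with the example job class above (names assumed for illustration):

```json
{
	"JobClasses": {
		"synchroniseThreadArticleData": "SynchroniseThreadArticleDataJob"
	}
}
```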

How to invoke a job
Jobs are pushed with JobQueueGroup::singleton()->push(). There is another function to push jobs, JobQueueGroup::singleton()->lazyPush(), which will be executed at the very end of the request, hence after jobs pushed with push().
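A short sketch of both calls, reusing the assumed example job class from above:

```php
// Illustrative job instance; class and parameters are assumptions.
$job = new SynchroniseThreadArticleDataJob( $title, [ 'table' => 'thread' ] );

// Push immediately into the queue:
JobQueueGroup::singleton()->push( $job );

// Or buffer it in memory and push it at the very end of the request:
JobQueueGroup::singleton()->lazyPush( $job );
```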

Job queue type
A job queue type is the command name you give to the parent::__construct method of your job class; e.g., using the example above, that would be synchroniseThreadArticleData.

getQueueSizes
JobQueueGroup::singleton()->getQueueSizes() will return an array of all job queue types and their sizes.

getSize
While getQueueSizes() is handy for analysing the entire job queue, for performance reasons it is best to use JobQueueGroup::singleton()->get( <job type> )->getSize() when analysing a specific job type, which will only return the job queue size of that specific job type.
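Both calls can be sketched as follows; the job type name is the illustrative example used on this page:

```php
// Sizes of all queues, as an array keyed by job queue type:
$sizes = JobQueueGroup::singleton()->getQueueSizes();

// Size of a single queue; cheaper when only one type is of interest:
$size = JobQueueGroup::singleton()
	->get( 'synchroniseThreadArticleData' )
	->getSize();
```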

Pushing jobs
The primary function is JobQueueGroup::singleton()->push(), which selects the job queue corresponding to the job type. Depending on the job queue implementation, the job will be pushed either through a Redis connection (Redis) or as a deferrable update (database).

The lazy push function keeps the jobs in memory. At the end of the current execution (end of the MediaWiki request and/or end of the current job execution), the jobs kept in memory are pushed as the last deferrable update (of type AutoCommitUpdate). As a deferrable update, they are pushed at the end of the current execution, and as an AutoCommitUpdate the jobs are pushed in a single database transaction.

In CLI, note that deferrable updates (whether from push() with the JobQueueDB implementation, or from lazyPush()) are executed directly if no database transaction is pending.

When some jobs are passed to lazyPush() but never actually pushed (and hence lost), the destructor of JobQueueGroup logs a warning in the debug log:

 PHP Notice: JobQueueGroup::__destruct: 1 buffered job(s) never inserted

See phabricator:T100085 for an example of such a warning. This happened before the MediaWiki 1.29 release for Web-executed jobs: when a job internally lazy-pushes another job and the former job is executed during the shutdown phase of a MediaWiki request, the latter job was never pushed (because the lazy-push deferrable update had already run). The fix for this specific bug was to always push lazily-pushed jobs after the execution of each single job.

Execution of jobs
Jobs are executed through two methods, depending on the parameter $wgJobRunRate (zero, or greater than zero). If $wgJobRunRate > 0, MediaWiki executes some jobs at the end of Web requests; if $wgJobRunRate = 0, nothing happens at the end of Web requests. In all cases (but particularly important in the latter case), jobs can be executed in CLI with the runJobs.php maintenance script.
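For example, assuming a standard MediaWiki installation layout (paths and the job type name are illustrative):

```shell
# In LocalSettings.php, disable job execution during Web requests:
#   $wgJobRunRate = 0;

# Run all pending jobs from the command line:
php maintenance/runJobs.php

# Run at most 100 jobs of one specific type:
php maintenance/runJobs.php --type synchroniseThreadArticleData --maxjobs 100
```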

The jobs are run by the JobRunner class. Each job is given its own database transaction.

At the end of the job execution, deferrable updates are executed. Since MediaWiki 1.28.3/1.29, lazily-pushed jobs are pushed through a deferrable update in order to use a dedicated database transaction.