Manual:Job queue/For developers

Jobs are non-urgent tasks. For a general introduction and management of job queues, see Manual:Job queue.

Differences with deferred updates
Deferred updates (also called deferrable updates) are functions executed at the end of a MediaWiki web request (or at the end of the execution of a job). For web requests, if supported by the web server, this work happens after the HTTP response has been closed (see register_postsend_function and fastcgi_finish_request). Deferred updates are a useful way to postpone time-consuming tasks in order to speed up the main MediaWiki response. See the DeferredUpdates class for details, as well as Database transactions.

Deferrable updates will be executed at the end of the current request. They are held in memory only for the duration of the web request.

Jobs will be executed at a later time, possibly several hours after the original web request. The job queue storage backend is configurable, but defaults to the job table in the wiki's main database.

Deferrable updates should be used for urgent things, and jobs for non-urgent things.

Some deferrable updates, those implementing the EnqueueableDataUpdate interface, can be transformed into jobs. Any EnqueueableDataUpdate added during the execution of another job is initially stored as a deferrable update, to be executed immediately after the current job finishes. However, if the job runner accumulates more than 100 deferred updates, any EnqueueableDataUpdate updates are converted to jobs and queued for later.

Use jobs if you need to save data in the context of a GET request
Instead of a DeferredUpdate, use a job for writing data in the context of a GET request. Using a DeferredUpdate will succeed, but the TransactionProfiler will report a violation: "Expectation (masterConns <= 0) by MediaWiki::main not met (actual: 1):".
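A minimal sketch of enqueueing a job from a GET context instead of deferring a write (the job class and its parameters are hypothetical; on MediaWiki 1.37+ the JobQueueGroup is obtained from MediaWikiServices rather than via the singleton):

```php
// In e.g. an API GET handler: don't write to the primary database
// directly or via a DeferredUpdate; enqueue a job instead.
// SynchroniseThreadArticleDataJob and its 'limit' parameter are
// illustrative, not part of MediaWiki core.
$job = new SynchroniseThreadArticleDataJob(
	$this->getTitle(),
	[ 'limit' => 100 ]
);
JobQueueGroup::singleton()->lazyPush( $job );
```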

Registering a job
To use the job queue to run your non-urgent tasks, you need to do the following:

Create a Job subclass
You need to create a subclass of Job that, given parameters and a Title, will perform your deferred work.
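A sketch of such a subclass, assuming a hypothetical task that synchronises per-thread article data (the class name and parameters are illustrative; the command name synchroniseThreadArticleData is the job queue type referred to later on this page):

```php
class SynchroniseThreadArticleDataJob extends Job {
	public function __construct( Title $title, array $params ) {
		// 'synchroniseThreadArticleData' is the job queue type (command
		// name); it must match the key registered in $wgJobClasses.
		parent::__construct( 'synchroniseThreadArticleData', $title, $params );
	}

	/**
	 * Perform the actual work.
	 * @return bool success
	 */
	public function run() {
		// Hypothetical: do the non-urgent work using $this->title
		// and $this->params here.
		$limit = $this->params['limit'] ?? 100;
		// ...
		return true;
	}
}
```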

Add your Job class to the global list
Add the Job class to the global $wgJobClasses array. In extensions, this is done in the extension.json file, and in core it's done in DefaultSettings.php. The key must be unique and match the value passed in the job's constructor, and the value is the class name.
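For an extension, the registration in extension.json might look like this (the job type and class name are illustrative):

```json
{
	"JobClasses": {
		"synchroniseThreadArticleData": "SynchroniseThreadArticleDataJob"
	}
}
```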

How to queue a job
Jobs are queued with JobQueueGroup::push(). There is another function to push jobs, JobQueueGroup::lazyPush(), whose jobs are pushed at the very end of the request, hence after jobs pushed with push().
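A sketch of both calls, using the hypothetical job class from above (JobQueueGroup::singleton() is the classic entry point; on MediaWiki 1.37+ the group is obtained from MediaWikiServices instead):

```php
$title = Title::newFromText( 'Some page' );
// Hypothetical job class and parameters, for illustration only.
$job = new SynchroniseThreadArticleDataJob( $title, [ 'limit' => 100 ] );

// Queue the job now (for the database backend, this itself happens
// via a deferrable update):
JobQueueGroup::singleton()->push( $job );

// Or keep it in memory and push it at the very end of the request:
JobQueueGroup::singleton()->lazyPush( $job );
```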

Job queue type
A job queue type is the command name you give to the parent::__construct method of your job class; e.g., using the example above, that would be synchroniseThreadArticleData.

getQueueSizes
JobQueueGroup::getQueueSizes() will return an array of all job queue types and their sizes.

getSize
While getQueueSizes() is handy for analysing the entire job queue, for performance reasons it's best to use getSize() when analysing a specific job type, since it only returns the job queue size of that specific job type.
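The two calls side by side (the refreshLinks job type is used here as an example of an existing queue):

```php
$group = JobQueueGroup::singleton();

// Every queue type mapped to its size,
// e.g. [ 'refreshLinks' => 12, 'htmlCacheUpdate' => 3, ... ]
$sizes = $group->getQueueSizes();

// Cheaper when only one job type matters:
$size = $group->get( 'refreshLinks' )->getSize();
```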

Pushing jobs
The primary function is JobQueueGroup::push(). It selects the job queue corresponding to the job type and, depending on the job queue implementation (database or Redis), the job is pushed either through a Redis connection (Redis case) or as a deferrable update (database case).

The lazy push function, JobQueueGroup::lazyPush(), keeps the jobs in memory. At the end of the current execution (end of the MediaWiki request or end of the current job execution), the jobs kept in memory are pushed as the last deferrable update. As a deferrable update, the jobs are pushed at the end of the current execution, and they are pushed in a single database transaction. See JobQueueGroup::lazyPush() for details.

In CLI mode, note that deferrable updates (whether from the database push or from lazy push) are executed immediately if no database transaction is pending. See DeferredUpdates for details.

When some jobs are lazy-pushed but never actually pushed (and hence lost), usually because an unhandled exception was thrown, the destructor of JobQueueGroup logs a warning in the debug log:

PHP Notice: JobQueueGroup::__destruct: 1 buffered job(s) never inserted

Such a warning could be seen before the MediaWiki 1.29 release for web-executed jobs: when a job internally lazy-pushes another job and the former job is executed in the shutdown part of a MediaWiki request, the latter job was not pushed (because the lazily-pushed jobs had already been flushed). The fix for this specific bug was to always push lazily-pushed jobs after the execution of each job in the job runner.

Execution of jobs
Jobs are ordinarily executed at the end of a web request, at the rate of $wgJobRunRate jobs per request. If $wgJobRunRate is 0, no jobs are run at the end of a web request. The default value of $wgJobRunRate is 1.
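The two common settings, as they would appear in LocalSettings.php:

```php
// LocalSettings.php

// Run one job per web request (the default):
$wgJobRunRate = 1;

// Alternatively, run no jobs on web requests and rely entirely on
// the runJobs.php maintenance script (e.g. triggered from cron):
// $wgJobRunRate = 0;
```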

All enqueued jobs can be executed at any time by running the runJobs.php maintenance script. This is particularly important when $wgJobRunRate is 0.
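Typical invocations, from the wiki's installation directory (the job type shown is the illustrative one used earlier on this page):

```shell
# Run all pending jobs
php maintenance/runJobs.php

# Run at most 100 jobs of a single type
php maintenance/runJobs.php --type synchroniseThreadArticleData --maxjobs 100
```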

The jobs are run by the JobRunner class. Each job is given its own database transaction.

At the end of the job execution, deferrable updates are executed. Since MediaWiki 1.28.3/1.29, lazily-pushed jobs are pushed through a deferrable update in order to use a dedicated database transaction.