Manual:Job queue/zh

The job queue was introduced in MediaWiki 1.6 to perform long-running tasks asynchronously. The job queue is designed to hold many short tasks and process them in batches.



Setup
It is recommended that you schedule jobs to run entirely in the background via the command line. By default, jobs are instead run at the end of web requests. Disable this default behavior by setting $wgJobRunRate to 0.
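For example, in LocalSettings.php:

    $wgJobRunRate = 0;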

Cron
You can use cron to run the jobs every hour. Add the following to your crontab file:
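For example (the PHP and wiki paths are examples; adjust them to your installation):

    # Run pending jobs at the start of every hour
    0 * * * * /usr/bin/php /var/www/wiki/maintenance/runJobs.php --maxjobs 1000 > /var/log/runJobs.log 2>&1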

Using cron makes it easy to get started, but can make email notifications and cascading templates feel slow (jobs may wait up to an hour). Consider one of the following approaches to set up a continuous job runner instead.



Continuous service
If you have shell access and can create init scripts, you can create a simple service that runs jobs as they become available, throttling them so that the job runner does not monopolize the server's CPU resources:

Create a bash script, for example:
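A minimal sketch (the installation path, job types, batch sizes and delays are examples to adapt):

    #!/bin/bash
    # Set this to your MediaWiki installation path (example value)
    MW_INSTALL_PATH="/var/www/wiki"
    RUN_JOBS="$MW_INSTALL_PATH/maintenance/runJobs.php --maxtime=3600"
    echo Starting job service...
    # Wait a moment after server startup so other services can come up first
    sleep 60
    echo Started.
    while true; do
        # Job types that need to run as soon as possible, no matter how many of
        # them are queued; these jobs should be very "cheap" to run
        php $RUN_JOBS --type="enotifNotify"
        # Everything else, limiting the number of jobs in each batch.
        # The --wait parameter pauses execution here until new jobs are added,
        # to avoid spinning through the loop with nothing to do.
        php $RUN_JOBS --wait --maxjobs=20
        # Wait some seconds to let the CPU handle other things, like web requests
        echo Waiting for 10 seconds...
        sleep 10
    done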

Depending on how fast the server is and the load it handles, you can adapt the number of jobs to run and the number of seconds to wait in each cycle.

Make the script executable.
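For example, assuming the script was saved as /opt/mediawiki/mwjobrunner.sh:

    chmod 755 /opt/mediawiki/mwjobrunner.sh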

If using systemd, create a new service unit, for example by creating the file /etc/systemd/system/mw-jobqueue.service. Change the User= parameter to the user that runs PHP on your web server:
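A sketch of such a unit (the script path and user name are examples):

    [Unit]
    Description=MediaWiki job runner

    [Service]
    ExecStart=/opt/mediawiki/mwjobrunner.sh
    Nice=10
    ProtectSystem=full
    # Change this to the user that runs PHP on your web server (example value)
    User=php-fpm
    OOMScoreAdjust=200
    StandardOutput=journal
    StandardError=journal

    [Install]
    WantedBy=multi-user.target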

Enable it and start it with these commands:
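Assuming the unit file above was named mw-jobqueue.service:

    sudo systemctl enable mw-jobqueue
    sudo systemctl start mw-jobqueue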

Job execution on page requests
By default, at the end of each web request, one job is taken from the job queue and executed. This behavior is controlled by the $wgJobRunRate configuration variable. Setting this variable to 1 will run a job on each request. Setting it to 0 will disable the execution of jobs during web requests completely, so that you can instead run runJobs.php manually or periodically from the command line.

When enabled, jobs will be executed by opening a socket and making an internal HTTP request to an unlisted special page: Special:RunJobs. See the #Asynchronous section below.



Performance issues
If the performance burden of running jobs on every web request is too great but you cannot run jobs from the command line, you can reduce $wgJobRunRate to a number between 0 and 1. This means a job will execute on average every 1/$wgJobRunRate requests.
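For example:

    # Run one job on average every 100 web requests
    $wgJobRunRate = 0.01;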



Manual usage
There is also a way to empty the job queue manually, for example after changing a template that is present on many pages. Simply run the maintenance/runJobs.php maintenance script. For example:
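From the wiki installation directory:

    php maintenance/runJobs.php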

Asynchronous
The $wgRunJobsAsync configuration variable has been added to force the execution of jobs synchronously, in scenarios where making an internal HTTP request for job execution is not wanted.
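For example, to disable asynchronous execution in LocalSettings.php:

    $wgRunJobsAsync = false;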

When running jobs asynchronously, MediaWiki will open an internal HTTP connection to handle the execution of jobs, and will return the contents of the page to the client immediately, without waiting for the job to complete. Otherwise, the job will be executed in the same process and the client will have to wait until the job is completed. When the job does not run asynchronously, any fatal error that occurs during job execution will propagate to the client, aborting the load of the page.

Note that even if $wgRunJobsAsync is set to true, if PHP can't open a socket to make the internal HTTP request, it will fall back to the synchronous job execution. However, there are a variety of situations where this internal request may fail, and jobs won't be run, without falling back to the synchronous job execution. Starting with MediaWiki 1.28.1 and 1.27.2, $wgRunJobsAsync now defaults to false.



Deferred updates
The deferred updates mechanism allows the execution of code to be scheduled for the end of the request, after all content has been sent to the browser. This is similar to queuing a job, except that it runs immediately instead of up to several minutes or hours in the future.

DeferredUpdates was introduced in MediaWiki 1.23 and received major changes during MediaWiki 1.27 and 1.28. The goal of this mechanism is to speed up web responses by doing less work, as well as to prioritise some work that would previously have been a job, so that it runs as soon as possible after the end of the response.

A deferrable update can implement EnqueueableDataUpdate in order to be queueable as a Job as well. This is used by RefreshSecondaryDataUpdate in core, for example, which means that if the update fails for any reason, MediaWiki will fall back to queuing it as a job and try again later, so as to fulfil the contract in question.



Changes introduced in MediaWiki 1.22
In MediaWiki 1.22, the job queue execution on each page request was changed so that, instead of executing the job inside the same PHP process that is rendering the page, a new PHP CLI command is spawned to execute runJobs.php in the background. This only works if $wgPhpCli is set to an actual path or safe mode is off; otherwise, the old method is used.

This new execution method can cause some problems:


 * If $wgPhpCli is set to an incompatible version of PHP (e.g. an outdated version), jobs may fail to run (fixed in 1.23).
 * PHP open_basedir restrictions are in effect, and $wgPhpCli is disallowed (fixed in 1.23).
 * Performance: even if the job queue is empty, the new PHP process is started anyway (fixed in 1.23).
 * Sometimes the spawned PHP process causes the server, or just the CLI process, to hang because the stdout and stderr descriptors are not properly redirected (fixed in 1.22).
 * It does not work for shared code (wiki farms), because it does not pass the additional parameters required by runJobs.php to identify the wiki that is running the job (fixed in 1.23).
 * Normal shell limits like $wgMaxShellMemory, $wgMaxShellTime, and $wgMaxShellFileSize are enforced on the runJobs.php process being executed in the background.

There is no way to revert to the old on-request job queue handling, besides setting $wgPhpCli to false, for example, which may cause other problems. Job execution can be disabled completely by setting $wgJobRunRate = 0, but jobs will then no longer run on page requests, and you must explicitly run runJobs.php to periodically run pending jobs.



Changes introduced in MediaWiki 1.23
In MediaWiki 1.23, the 1.22 execution method is abandoned, and jobs are triggered by MediaWiki making an HTTP connection to itself.

It was first designed as an API entry point but later changed to be the unlisted special page Special:RunJobs.

While it solves various bugs introduced in 1.22, it still requires loading a lot of PHP classes in memory on a new process to execute a job, and also makes a new HTTP request that the server must handle.



Changes introduced in MediaWiki 1.27
In MediaWiki 1.25 and MediaWiki 1.26, use of $wgRunJobsAsync would sometimes cause jobs not to get run if the wiki has a custom $wgScriptPath configuration. This was fixed in MediaWiki 1.27.



Changes introduced in MediaWiki 1.28
Between MediaWiki 1.23 and MediaWiki 1.27, use of $wgRunJobsAsync would cause jobs not to get run if MediaWiki requests are for a server name or protocol that does not match the currently configured one (e.g. when supporting both HTTP and HTTPS, or when MediaWiki is behind a reverse proxy that redirects to HTTPS). This was fixed in MediaWiki 1.28.



Changes introduced in MediaWiki 1.29
In MediaWiki 1.27.0 to 1.27.3 and 1.28.0 to 1.28.2, when $wgJobRunRate is set to a value greater than 0, an error like the one below may appear in error logs, or on the page itself:

PHP Notice: JobQueueGroup::__destruct: 1 buffered job(s) never inserted

As a result of this error, certain updates may fail in some cases, like category members not being updated on category pages, or recent changes displaying edits of deleted pages - even if you manually run runJobs.php to clear the job queue. This has been reported as a bug and was solved in 1.27.4 and 1.28.3.



Job examples


Updating links tables when a template changes
When a template changes, MediaWiki adds a job to the job queue for each article transcluding that template. Each job is a command to read an article, expand any templates, and update the link table accordingly. Previously, the host articles would remain outdated until either their parser cache expires or until a user edits the article.



HTML cache invalidation
A wider class of operations can result in invalidation of the HTML cache for a large number of pages:


 * Changing an image (all the thumbnails have to be re-rendered, and their sizes recalculated)
 * Deleting a page (all the links to it from other pages need to change from blue to red)
 * Creating or undeleting a page (like above, but from red to blue)
 * Changing a template (all the pages that transclude the template need updating)

Except for template changes, these operations do not invalidate the links tables, but they do invalidate the HTML cache of all pages linking to that page, or using that image. Invalidating the cache of a page is a short operation; it only requires updating a single database field and sending a multicast packet to clear the caches. But if there are more than about 1000 to do, it takes a long time. By default, one job is added per 300 operations (see $wgUpdateRowsPerJob).

Note, however, that even if purging the cache of a page is a short operation, reparsing a complex page that is not in the cache may be expensive, especially if a highly used template is edited, causing lots of pages to be purged in a short period of time while your wiki has lots of concurrent visitors loading a wide spread of pages. This can be mitigated by reducing the number of pages purged in a short period of time: reduce $wgUpdateRowsPerJob to a small number (20, for example) and also set $wgJobBackoffThrottling for htmlCacheUpdate to a low number (5, for example).
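For example, in LocalSettings.php (both values are illustrative):

    $wgUpdateRowsPerJob = 20;
    $wgJobBackoffThrottling['htmlCacheUpdate'] = 5;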



Audio and video transcoding
When using TimedMediaHandler to process local uploads of audio and video files, the job queue is used to run the potentially very slow creation of derivative transcodes at various resolutions and formats.

These are not suitable for running on web requests -- you will need a background runner.

It's recommended to set up separate runners for the webVideoTranscode and webVideoTranscodePrioritized job types if possible, as sketched below. These two queues process different subsets of files -- the first for high-resolution HD videos, and the second for lower-resolution videos and audio files, which process more quickly.
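For example, a dedicated runner for the prioritized queue could invoke (the batch size is an example; the job type names come from TimedMediaHandler):

    php maintenance/runJobs.php --type webVideoTranscodePrioritized --maxjobs 10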



Typical values
During periods of low load, the job queue may be zero. At Wikimedia, the job queue is, in practice, almost never zero. In off-peak hours, it may be a few hundred to a thousand. During a busy day, it may be a few million, but it can quickly fluctuate by 10% or more.

Special:Statistics
Until MediaWiki 1.16, the job queue value was shown on Special:Statistics. Starting with 1.17 (r75272), however, it was removed from there, and the value can now be retrieved from the API:
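    api.php?action=query&meta=siteinfo&siprop=statistics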

The number of jobs returned in the API result may be slightly inaccurate when using MySQL, which estimates the number of jobs in the database. This number can fluctuate based on the number of jobs that have recently been added or deleted. For other databases that do not support fast result-size estimation, the actual number of jobs is given.



Code stewardship