Manual:Job queue/zh

In MediaWiki 1.6, the job queue was introduced to perform long-running tasks asynchronously. The job queue is designed to hold many short tasks, processed in batches.



Setup
It is recommended that you schedule job runs entirely in the background, via the command line. By default, jobs run at the end of web requests. Disable this default behaviour by setting $wgJobRunRate to 0.
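For example, in LocalSettings.php:

 $wgJobRunRate = 0;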

Cron
You can use cron to run the jobs every hour. Add something like the following to your crontab file (the PHP binary and MediaWiki install paths are illustrative; adjust them for your server):
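 0 * * * * /usr/bin/php /var/www/wiki/maintenance/runJobs.php --maxjobs 1000 > /var/log/runJobs.log 2>&1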

Using cron makes it easy to get started, but it can make e-mail notifications and cascading templates feel slow (waits of up to an hour). Consider one of the following approaches to set up a continuous job runner instead.



Continuous service
If you have shell access and can create init scripts, you can create a simple service that runs jobs as they become available, throttling them so that the job runner does not monopolize the server's CPU:

Create a bash script, for example at /usr/local/bin/mwjobrunner.sh (the name and path are illustrative; the systemd unit below refers to this path):
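A sketch of such a runner; the install path, job types, batch size, and sleep interval are all illustrative:

 #!/bin/bash
 # Continuous MediaWiki job runner (sketch).
 # Adjust MW_INSTALL_PATH to your wiki's installation directory.
 MW_INSTALL_PATH="/var/www/wiki"
 RUN_JOBS="$MW_INSTALL_PATH/maintenance/runJobs.php --maxjobs 1000"

 echo "Starting job service..."
 # Wait a minute after the server starts up, to give other processes time to come up.
 sleep 60
 echo "Started."
 while true; do
     # Job types that should run as soon as possible, no matter how many are queued;
     # these should be very "cheap" to run.
     php $RUN_JOBS --type="enotifNotify"
     # Everything else, up to --maxjobs jobs per batch.
     php $RUN_JOBS
     # Sleep between cycles so the CPU can do other things, like handling web requests.
     echo "Waiting for 10 seconds..."
     sleep 10
 done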

Depending on your server's speed and the load it handles, you can adjust the number of jobs run each cycle (the --maxjobs value) and the number of seconds waited each cycle (the sleep).

Make the script executable (e.g. chmod 755 /usr/local/bin/mwjobrunner.sh).

If you use systemd, create a new service unit by creating the file /etc/systemd/system/mw-jobqueue.service (the name is illustrative). Change the User parameter to the user that runs PHP on your web server.
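A sketch of the unit file, assuming the runner script above was saved at /usr/local/bin/mwjobrunner.sh:

 [Unit]
 Description=MediaWiki job runner

 [Service]
 ExecStart=/usr/local/bin/mwjobrunner.sh
 Nice=10
 ProtectSystem=full
 # Change this to the user PHP runs as on your web server (e.g. www-data on Debian/Ubuntu).
 User=www-data
 OOMScoreAdjust=200
 StandardOutput=journal

 [Install]
 WantedBy=multi-user.target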

Enable it and start it with the following commands:
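 # Enable at boot and start now (unit name taken from the example above).
 sudo systemctl enable mw-jobqueue.service
 sudo systemctl start mw-jobqueue.service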



Job execution on page requests
By default, at the end of each web request, one job is taken from the job queue and executed. This behaviour is controlled by the $wgJobRunRate configuration variable. Setting this variable to 1 will run one job on each request. Setting it to 0 will disable job execution during web requests entirely, so that you can instead run runJobs.php manually or periodically from the command line.

When enabled, jobs are executed by opening a socket and making an internal HTTP request to the unlisted special page Special:RunJobs. See the #Asynchronous section.



Performance issues
If the performance burden of running jobs on every web request is too great, but you are unable to run jobs from the command line, you can reduce $wgJobRunRate to a number between 1 and 0. This means a job will execute on average once every 1 / $wgJobRunRate requests.
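For example, to run on average one job per 100 requests, set in LocalSettings.php:

 $wgJobRunRate = 0.01;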



Manual usage
There is also a way to empty the job queue manually, for example after changing a template that is present on many pages. Simply run the runJobs.php maintenance script. For example:
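 # Run from the wiki's installation directory.
 php maintenance/runJobs.php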

Asynchronous
The configuration variable $wgRunJobsAsync has been added to force the synchronous execution of jobs, for cases where making an internal HTTP request for job execution is not wanted.
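To force synchronous execution explicitly, in LocalSettings.php:

 $wgRunJobsAsync = false;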

When running jobs asynchronously, MediaWiki opens an internal HTTP connection to handle job execution and returns the page content to the client immediately, without waiting for the job to complete. Otherwise, the job is executed in the same process and the client has to wait until the job is finished. When jobs do not run asynchronously, a fatal error during job execution will propagate to the client, aborting the page load.

Note that even if $wgRunJobsAsync is set to true, MediaWiki will fall back to synchronous job execution if PHP cannot open a socket to make the internal HTTP request. However, there are various situations in which this internal request can fail without the jobs being run and without falling back to synchronous execution. Starting with MediaWiki 1.28.1 and 1.27.2, $wgRunJobsAsync defaults to false.



Deferred updates
The deferred updates mechanism allows the execution of code to be scheduled for the end of the request, after all content has been sent to the browser. This is similar to queuing a job, except that it runs immediately rather than up to several minutes or hours in the future.
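For illustration, a minimal sketch of scheduling deferred work from extension code using the core DeferredUpdates class (the log channel and message are invented for the example):

 DeferredUpdates::addCallableUpdate( function () {
     // This runs at the end of the request, after the response has been sent.
     wfDebugLog( 'myextension', 'deferred work executed' );
 } );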

DeferredUpdates was introduced in MediaWiki 1.23 and underwent major changes in MediaWiki 1.27 and 1.28. The goal of the mechanism is to speed up web responses by doing less work within them, and to prioritize some work that previously belonged to jobs so that it runs as soon as possible after the response has finished.

A deferrable update can implement EnqueueableDataUpdate in order to be queueable as a job as well. This is used by RefreshSecondaryDataUpdate in core, for example, which means that if the update fails for any reason, MediaWiki will fall back to queuing it as a job and try again later, so as to fulfil the contract in question.



Changes introduced in MediaWiki 1.22
In MediaWiki 1.22, job queue execution on each page request was changed so that, instead of executing the job inside the same PHP process that renders the page, a new PHP CLI command is spawned to execute the job in the background. This only works if $wgPhpCli is set to an actual path or safe mode is off; otherwise, the old method is used.

This new execution method can cause some problems:


 * If $wgPhpCli is set to an incompatible version of PHP (e.g. an outdated one), jobs may fail to run (fixed in 1.23).
 * PHP open_basedir restrictions are in effect and proc_open is disallowed (fixed in 1.23).
 * Performance: even if the job queue is empty, the new PHP process is started anyway (fixed in 1.23).
 * Sometimes the spawned PHP process causes the server, or just the CLI process, to hang due to the stdout and stderr descriptors not being properly redirected (fixed in 1.22).
 * It does not work for shared code (wiki farms), because it doesn't pass the additional parameters required by runJobs.php to identify the wiki that is running the job (fixed in 1.23).
 * Normal shell limits like $wgMaxShellMemory, $wgMaxShellTime and $wgMaxShellFileSize are enforced on the runJobs.php process that is executed in the background.

There is no way to revert to the old on-request job queue handling, besides setting $wgPhpCli to false, for example, which may cause other problems. It can be disabled completely by setting $wgJobRunRate = 0, but jobs will then no longer run on page requests at all, and you must explicitly run runJobs.php to periodically run pending jobs.



Changes introduced in MediaWiki 1.23
In MediaWiki 1.23, the 1.22 execution method was abandoned, and jobs are instead triggered by MediaWiki making an HTTP connection to itself.

It was first designed as an API entry point but later changed to be the unlisted special page Special:RunJobs.

While it solves various bugs introduced in 1.22, it still requires loading a lot of PHP classes in memory on a new process to execute a job, and also makes a new HTTP request that the server must handle.



Changes introduced in MediaWiki 1.27
In MediaWiki 1.25 and MediaWiki 1.26, use of $wgRunJobsAsync would sometimes cause jobs not to run if the wiki had a custom server name configuration. This was fixed in MediaWiki 1.27.



Changes introduced in MediaWiki 1.28
Between MediaWiki 1.23 and MediaWiki 1.27, use of $wgRunJobsAsync would cause jobs not to run if MediaWiki requests were for a server name or protocol that did not match the currently configured one (e.g. when supporting both HTTP and HTTPS, or when MediaWiki is behind a reverse proxy that redirects to HTTPS). This was fixed in MediaWiki 1.28.



Changes introduced in MediaWiki 1.29
In MediaWiki 1.27.0 to 1.27.3 and 1.28.0 to 1.28.2, when $wgJobRunRate is set to a value greater than 0, an error like the one below may appear in the error logs, or on the page itself:

PHP Notice: JobQueueGroup::__destruct: 1 buffered job(s) never inserted

As a result of this error, certain updates may fail in some cases, like category members not being updated on category pages, or recent changes displaying edits of deleted pages, even if you manually run runJobs.php to clear the job queue. It has been reported as a bug and was solved in 1.27.4 and 1.28.3.



Job examples


Updating links tables when a template changes
When a template changes, MediaWiki adds a job to the job queue for each article transcluding that template. Each job is a command to read an article, expand any templates, and update the links table accordingly. Previously, the host articles would remain outdated until either their parser cache expired or a user edited the article.



HTML cache invalidation
A wider class of operations can result in invalidation of the HTML cache for a large number of pages:


 * Changing an image (all the thumbnails have to be re-rendered, and their sizes recalculated)
 * Deleting a page (all the links to it from other pages need to change from blue to red)
 * Creating or undeleting a page (like above, but from red to blue)
 * Changing a template (all the pages that transclude the template need updating)

Except for template changes, these operations do not invalidate the links tables, but they do invalidate the HTML cache of all pages linking to that page, or using that image. Invalidating the cache of a page is a short operation; it only requires updating a single database field and sending a multicast packet to clear the caches. But if there are more than about 1000 to do, it takes a long time. By default, one job is added per 300 operations (see $wgUpdateRowsPerJob).

Note, however, that even if purging the cache of a page is a short operation, reparsing a complex page that is not in the cache may be expensive, especially if a highly used template is edited, causing lots of pages to be purged in a short period of time while your wiki has lots of concurrent visitors loading a wide spread of pages. This can be mitigated by reducing the number of pages purged in a short period of time: reduce $wgUpdateRowsPerJob to a small number (20, for example) and also set $wgJobBackoffThrottling for htmlCacheUpdate to a low number (5, for example), as shown below.
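In LocalSettings.php, that combination would look like this (the values are the illustrative ones from above):

 $wgUpdateRowsPerJob = 20;
 $wgJobBackoffThrottling['htmlCacheUpdate'] = 5;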



Audio and video transcoding
When using TimedMediaHandler to process local uploads of audio and video files, the job queue is used to run the potentially very slow creation of derivative transcodes at various resolutions and formats.

These are not suitable for running on web requests -- you will need a background runner.

It's recommended to set up separate runners for the webVideoTranscode and webVideoTranscodePrioritized job types if possible. These two queues process different subsets of files -- the first for high-resolution HD videos, and the second for lower-resolution videos and audio files, which process more quickly.
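A sketch of two dedicated runner invocations, which could each be wrapped in a service like the one shown earlier (the batch sizes are illustrative):

 php maintenance/runJobs.php --type webVideoTranscodePrioritized --maxjobs 10
 php maintenance/runJobs.php --type webVideoTranscode --maxjobs 2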



Typical values
During periods of low load, the job queue may be zero. At Wikimedia, the job queue is in practice almost never zero. In off-peak hours, it may be a few hundred to a thousand. During a busy day, it may be a few million, and it can fluctuate by 10% or more within a short time.

Special:Statistics
Until MediaWiki 1.16, the job queue value was shown on Special:Statistics. However, this was removed in 1.17 (75272), and the value can now be seen via the API:

 api.php?action=query&meta=siteinfo&siprop=statistics

The number of jobs returned in the API result may be slightly inaccurate when using MySQL, which estimates the number of jobs in the database. This number can fluctuate based on the number of jobs that have recently been added or deleted. For other databases that do not support fast result-size estimation, the actual number of jobs is given.
