Manual:Job queue


In MediaWiki 1.6, a job queue was introduced to perform long-running tasks asynchronously. The goal of the job queue is to allow many tasks to be performed using batch processing.

Set up

You can also set $wgJobRunRate to 0 and use a cron job to empty the job queue instead. This is done by running maintenance/runJobs.php.
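In LocalSettings.php this is a one-line setting; 0 disables job execution during web requests entirely:

$wgJobRunRate = 0;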

For example, you could use cron to run the jobs every day at midnight by entering the following in your crontab file:

0 0 * * * /usr/bin/php /var/www/wiki/maintenance/runJobs.php > /var/log/runJobs.log 2>&1

Warning: Running jobs once a day like that could be very problematic for consistency and responsiveness -- you should instead run jobs as soon as possible after they are queued, using a wrapper script that waits for jobs to be queued and runs the appropriate job runners.

@FIXME: detail the actual wrappers Wikimedia uses in production to run jobs.

Note: you should run runJobs.php as the same user the web server runs as, to ensure that filesystem permissions are handled correctly if jobs touch uploaded files.
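For example, on a setup where the web server runs as www-data (the user name and paths are illustrative; adjust them to your installation), the queue could be emptied with:

sudo -u www-data php /var/www/wiki/maintenance/runJobs.php --maxjobs=1000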

Simple service to run jobs

If you have shell access and the ability to create init scripts, you can create a simple service to run jobs as they become available, and also throttle them to prevent the job runner from monopolizing the CPU resources of the server:

Create a bash script, for example at /usr/local/bin/mwjobrunner:

#!/bin/bash
# Put the MediaWiki installation path on the line below
IP=/home/www/www.mywikisite.com/mediawiki
RJ=$IP/maintenance/runJobs.php
echo Starting job service...
# Wait a minute after the server starts up to give other processes time to get started
sleep 60
echo Started.
while true; do
	# Job types that need to be run ASAP no matter how many of them are in the queue
	# Those jobs should be very "cheap" to run
	php $RJ --type="enotifNotify"
	php $RJ --type="htmlCacheUpdate" --maxjobs=50
	# Everything else, limit the number of jobs on each batch
	# The --wait parameter will pause the execution here until new jobs are added,
	# to avoid running the loop without anything to do
	php $RJ --wait --maxjobs=10
	# Wait some seconds to let the CPU do other things, like handling web requests, etc
	echo Waiting for 10 seconds...
	sleep 10
done

Depending on how fast the server is and the load it handles, you can adapt the number of jobs to run on each cycle and the number of seconds to wait on each cycle.

Make the script executable (chmod 755).
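For example (assuming the path used above):

chmod 755 /usr/local/bin/mwjobrunner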

If using systemd, create a new service unit by creating the file /etc/systemd/system/mw-jobqueue.service. Change the User parameter to the user that runs PHP on your web server:

[Unit]
Description=MediaWiki Job runner

[Service]
ExecStart=/usr/local/bin/mwjobrunner
Nice=10
ProtectSystem=full
User=php-fpm
OOMScoreAdjust=200
StandardOutput=journal

[Install]
WantedBy=multi-user.target

Enable it and start it with these commands:

sudo systemctl enable mw-jobqueue
sudo systemctl start mw-jobqueue
sudo systemctl status mw-jobqueue


Job execution on page requests

By default, at the end of each web request, one job is taken from the job queue and executed. This behavior is controlled by the $wgJobRunRate configuration variable. Setting this variable to 1 will run a job on each request. Setting this variable to 0 will disable the execution of jobs during web requests completely, so that you can instead run runJobs.php manually or periodically from the command line.

MediaWiki version: 1.23

When enabled, jobs will be executed by opening a socket and making an internal HTTP request to an unlisted special page: Special:RunJobs. See also the asynchronous section.

Performance issue

If the performance burden of running jobs on every web request is too great but you are unable to run jobs from the command line, you can reduce $wgJobRunRate to a number between 1 and 0. This means a job will execute on average every 1 / $wgJobRunRate requests.

$wgJobRunRate = 0.01;

Manual usage

There is also a way to empty the job queue manually, for example after changing a template that's present on many pages. Simply run the maintenance/runJobs.php maintenance script. For example:

/path-to-my-wiki/maintenance$ php ./runJobs.php
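The options used earlier on this page also work here; for instance, to limit a manual run to 1000 jobs of a single type (the refreshLinks type name is only an illustration), you could run:

/path-to-my-wiki/maintenance$ php ./runJobs.php --type=refreshLinks --maxjobs=1000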

History

Asynchronous

The configuration variable $wgRunJobsAsync has been added to force the execution of jobs synchronously, in scenarios where making an internal HTTP request for job execution is not wanted.

When running jobs asynchronously, MediaWiki will open an internal HTTP connection to handle job execution and return the contents of the page to the client immediately, without waiting for the job to complete. Otherwise, the job will be executed in the same process and the client will have to wait until the job is completed. When the job does not run asynchronously, a fatal error during job execution will propagate to the client and abort the page load.

Note that even if $wgRunJobsAsync is set to true, if PHP can't open a socket to make the internal HTTP request, it will fall back to synchronous job execution. However, there are a variety of situations where this internal request may fail, and jobs won't be run, without falling back to synchronous job execution. Starting with MediaWiki 1.28.1 and 1.27.2, $wgRunJobsAsync now defaults to false.
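If you need to force synchronous execution explicitly, for example because the internal HTTP request cannot be made in your environment, the setting in LocalSettings.php would be:

$wgRunJobsAsync = false;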

Deferred updates

The deferred updates mechanism was introduced in MediaWiki 1.23 and received major changes in MediaWiki 1.27 and 1.28. It allows some work to be executed at the end of the request, after all the content has been sent to the browser, instead of queuing it in the job queue, where it might otherwise be executed hours later. The goal of this alternative mechanism is mainly to speed up the main MediaWiki requests, while still executing some work as soon as possible at the end of the request.

Some updates can be implemented both as deferrable updates and as jobs, if specified as such.
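As a rough sketch of how extension code can use the mechanism (the callback body is purely illustrative), a deferred update is queued with DeferredUpdates::addCallableUpdate() and runs at the end of the request instead of going through the job queue:

DeferredUpdates::addCallableUpdate( function () {
	// Runs after the output has been sent to the browser,
	// e.g. cheap bookkeeping that should not delay the page view.
	wfDebugLog( 'example', 'deferred update executed' );
} );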

Changes introduced in MediaWiki 1.22

In MediaWiki 1.22, the job queue execution on each page request was changed (Gerrit change 59797) so that, instead of executing the job inside the same PHP process that is rendering the page, a new PHP CLI command is spawned to execute runJobs.php in the background. This only works if $wgPhpCli is set to an actual path and safe mode is off; otherwise, the old method is used.

This new execution method could cause some problems:

  • If $wgPhpCli is set to an incompatible version of PHP (e.g. an outdated version), jobs may fail to run (fixed in 1.23).
  • PHP open_basedir restrictions are in effect, and $wgPhpCli is disallowed (task T62208, fixed in 1.23).
  • Performance: even if the job queue is empty, the new PHP process is started anyway (task T62210, fixed in 1.23).
  • Sometimes the spawned PHP process causes the server or only the CLI process to hang due to stdout and stderr descriptors not being properly redirected (task T60719, fixed in 1.22).
  • It does not work for shared code (wiki farms), because it doesn't pass the additional parameters required by runJobs.php to identify the wiki that is running the job (task T62698, fixed in 1.23).
  • Normal shell limits like $wgMaxShellMemory, $wgMaxShellTime and $wgMaxShellFileSize are enforced on the runJobs.php process being executed in the background.

There's no way to revert to the old on-request job queue handling, besides setting $wgPhpCli to false, for example, which may cause other problems (task T63387). The mechanism can be disabled completely by setting $wgJobRunRate = 0;, but jobs will then no longer run on page requests, and you must explicitly run runJobs.php periodically to process pending jobs.

Changes introduced in MediaWiki 1.23

In MediaWiki 1.23, the 1.22 execution method is abandoned, and jobs are triggered by MediaWiki making an HTTP connection to itself.

It was first designed as an API entry point (Gerrit change 113038) but later changed to be the unlisted special page Special:RunJobs (Gerrit change 118336).

While it solves various bugs introduced in 1.22, it still requires loading a lot of PHP classes in memory on a new process to execute a job, and also makes a new HTTP request that the server must handle.

Changes introduced in MediaWiki 1.27

In MediaWiki 1.25 and MediaWiki 1.26, use of $wgRunJobsAsync would sometimes cause jobs not to get run if the wiki has a custom $wgServerName configuration. This was fixed in MediaWiki 1.27 (task T107290).

Changes introduced in MediaWiki 1.28

Between MediaWiki 1.23 and MediaWiki 1.27, use of $wgRunJobsAsync would cause jobs not to get run if MediaWiki requests are for a server name or protocol that does not match the currently configured one (e.g. when supporting both HTTP and HTTPS, or when MediaWiki is behind a reverse proxy that redirects to HTTPS). This was fixed in MediaWiki 1.28 (task T68485).

Changes introduced in MediaWiki 1.29

In MediaWiki 1.27.0 to 1.27.3 and 1.28.0 to 1.28.2, when $wgJobRunRate is set to a value greater than 0, an error like the one below may appear in error logs, or on the page:

PHP Notice: JobQueueGroup::__destruct: 1 buffered job(s) never inserted

As a result of this error, certain updates may fail in some cases, such as category members not being updated on category pages, or recent changes displaying edits of deleted pages, even if you manually run runJobs.php to clear the job queue. It was reported as a bug (task T100085) and fixed in 1.27.4 and 1.28.3.

Job examples

Updating links tables when a template changes

Since MediaWiki 1.6, a job is added to the job queue for each article that transcludes a changed template. Each job is a command to read the article, expand the templates, and update the links table accordingly.

Null edits to force these updates are therefore no longer necessary, although the update may take a while, especially when there are many operations to perform.

HTML cache invalidation

A broad class of operations can require invalidating (i.e. expiring or updating) the HTML cache of a large number of pages, for example:

  • changing an image (all the thumbnails need to be re-rendered and their sizes recalculated)
  • deleting a page (all the links to it on other pages need to change from blue to red)
  • creating or undeleting a page (as above, but from red to blue)
  • changing the content of a template (all the pages that transclude the template need to be updated)

Except for template changes, these operations do not invalidate the links tables, but they do invalidate the HTML cache of all pages that link to the page or use the image. Invalidating the cache of a page is a short operation; it only requires updating a single database field and sending a multicast packet to clear the caches.

But if their number exceeds about 1000, it takes a long time. By default, jobs are added once the number of operations requiring invalidation exceeds 500, i.e. one job per 500 operations.

Audio and video transcoding

When using TimedMediaHandler to process local uploads of audio and video files, the job queue is used to run the potentially very slow creation of derivative transcodes at various resolutions/formats.

These are not suitable for running on web requests -- you will need a background runner.

It's recommended to set up separate runners for the webVideoTranscode and webVideoTranscodePrioritized job types if possible. These two queues process different subsets of files -- the first for high resolution HD videos, and the second for lower-resolution videos and audio files which process more quickly.
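Under the same assumptions as the wrapper script above (the installation path is illustrative), the two queues could be served by separate invocations, for example:

php /home/www/www.mywikisite.com/mediawiki/maintenance/runJobs.php --type=webVideoTranscodePrioritized --maxjobs=10
php /home/www/www.mywikisite.com/mediawiki/maintenance/runJobs.php --type=webVideoTranscode --maxjobs=1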

Typical values

During periods of low load, the job queue length may be zero. In practice, on Wikimedia projects, the job queue length is rarely zero. During off-peak hours, it may range from a few hundred to a thousand jobs. On a busy day, it may reach a few million, but the value usually fluctuates rapidly, by ±10% or more. [1]

Special:Statistics

Up to MediaWiki 1.16, the job queue length was shown on the Special:Statistics page; since 1.17 (rev:75272) it has been removed from there, but it can still be checked via API:Siteinfo.

The number of jobs returned in the API result may be slightly inaccurate when using MySQL, which estimates the number of jobs in the database. This number can fluctuate based on the number of jobs that have recently been added or deleted. For other databases that do not support fast result-size estimation, the actual number of jobs is given.
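For example, the current length can be read from the "jobs" field of a standard siteinfo statistics query (replace the host with your wiki's URL):

https://www.mywikisite.com/w/api.php?action=query&meta=siteinfo&siprop=statistics&format=json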

For developers

See also