Manual:Performance tuning


This page provides a quick overview of ways to improve the performance of MediaWiki. See also Manual:Cache.

For the impatient

Note: Most of these tweaks have been collected in the puppet/modules/role/manifests/simple_performant.pp and puppet/modules/role/manifests/simple_miser.pp files; if you install Puppet, you can apply them all at once with a single command.


Opcode caching

See Manual:Cache#PHP caching and PHP configuration#Opcode caching

Opcode caches store the compiled output of PHP scripts, greatly reducing the amount of time needed to run a script multiple times. OPcache is included in PHP 5.5.0 and later. Supported opcode caches for earlier PHP versions are APC, Turck MMCache, WinCache and XCache; see $wgMainCacheType.
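The opcode cache itself is configured in php.ini, but MediaWiki can also be told to store objects in the accelerator's shared memory. A minimal LocalSettings.php sketch (one possible setup, not the only one):

```php
# LocalSettings.php
# CACHE_ACCEL tells MediaWiki to use whichever PHP accelerator cache
# (APC/APCu/WinCache) is available for its object cache.
$wgMainCacheType = CACHE_ACCEL;
```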

Object caching

See Manual:Cache#Object caching

MediaWiki user interface text and other expensive objects can be stored in an object cache, as can login sessions and partially rendered pages.


APC was a combined opcode and object cache; it was deprecated with the release of PHP 5.5.0, which introduced OPcache as the built-in opcode cache. APCu is only the object-caching part of APC and can be used with PHP 5.5.0 and later. APCu is not completely stable[1]; until this is fixed, prefer memcached.


See Memcached

If you have enough available RAM, use memcached. It requires at least 80 MB of RAM: about 60 MB for code plus whatever you allocate for the cache. If you balance load across multiple web servers, use a dedicated memcached server (or cluster).
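A LocalSettings.php sketch for a memcached setup, assuming a memcached daemon listening on its default port on the local host (adjust the address list for a dedicated server or cluster):

```php
# LocalSettings.php
$wgMainCacheType    = CACHE_MEMCACHED;
$wgParserCacheType  = CACHE_MEMCACHED;   # reuse memcached for the parser cache
$wgMessageCacheType = CACHE_MEMCACHED;   # ...and for interface messages
$wgMemCachedServers = [ '127.0.0.1:11211' ];

# Share login sessions across web servers via memcached:
$wgSessionsInMemcached = true;
```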

Output caching

See Manual:Cache#Page caching

Complete page caching gives tremendous performance increases for anonymous users but does not improve performance for users who are logged in.

File cache

See Manual:File cache for instructions on enabling and configuring rendered page caching.

MediaWiki pages can be computationally expensive to render. MediaWiki has an optional file caching system that stores the output of rendered pages. For larger sites, using an external cache like Squid or Varnish is preferable to using the file cache.
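Enabling the file cache is a LocalSettings.php change; a minimal sketch (the cache directory path is an example and must be writable by the web server):

```php
# LocalSettings.php
$wgUseFileCache = true;
$wgFileCacheDirectory = "$IP/cache";  # $IP is the MediaWiki install path
$wgUseGzip = true;                    # also store gzipped copies of cached pages
```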

HTTP caching proxies and HTTP acceleration

See Manual:Squid caching and Manual:Varnish caching

Simply put, HTTP accelerators/caching proxies (such as Squid and Varnish) store copies of pages sent out by the web server. When a cached page is requested, the accelerator serves up the copy instead of passing the request on to the web server. This can tremendously reduce the load on the web server. When a page is updated, the copy is removed from the accelerator's cache.
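MediaWiki must know about the proxy so it can send purge requests when pages change and read the real client IP from forwarded headers. A sketch assuming the proxy runs on the same host as the web server:

```php
# LocalSettings.php
$wgUseSquid = true;                 # emit cache-friendly headers and purges
$wgSquidServers = [ '127.0.0.1' ];  # proxies trusted for purges and X-Forwarded-For
$wgSquidMaxage = 18000;             # max time (seconds) proxies may cache a page
```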

See also this article for instructions on using Apache's mod_cache_disk with MediaWiki.

Other web server tuning

  • If you use Apache as your web server, use PHP-FPM, not mod_php. PHP-FPM optimizes the re-use of PHP processes.
    • Switch Apache to the event MPM instead of the prefork MPM.
  • Adjust robots.txt to disallow bots from crawling history pages; this decreases general server load.
  • The HTTP/2 protocol can help, even with ResourceLoader.[2]

Configuration settings

Large sites running MediaWiki 1.6 or later should set $wgJobRunRate to a low number, say 0.01. See Manual:Job queue for more information.
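In LocalSettings.php this looks like the following; the cron line is one illustrative way to process the queue out-of-band:

```php
# LocalSettings.php
# Run at most one queued job per hundred page requests:
$wgJobRunRate = 0.01;

# Alternatively, disable in-request job running entirely and process the
# queue from cron with maintenance/runJobs.php, e.g.:
#   */5 * * * * php /path/to/wiki/maintenance/runJobs.php --maxjobs 100
# $wgJobRunRate = 0;
```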


MediaWiki uses Composer to resolve dependencies. By default, Composer searches the /vendor directory at runtime for classes that need autoloading; this lookup is cached only at the bytecode level. You can instead have Composer generate a static class map, which makes autoloading, and thus your wiki, faster.

Open a console in the directory containing composer.json (the MediaWiki root) and enter:

composer dump-autoload --optimize

You should repeat this procedure after every vendor library update.

PHP tuning


Although MediaWiki can work without the mbstring PHP extension, it is highly recommended for performance reasons (note: the mbstring.func_overload configuration option must be off).


This can have a great impact in some cases and probably never hurts.[2]


HipHop Virtual Machine (HHVM) is a JIT for PHP developed by and used in production at Facebook. HHVM is not a magic bullet, but it has favorable performance characteristics compared to Zend. HHVM support in MediaWiki is incomplete, and deploying it should not be attempted by the faint of heart (some brave attempts can be found at HipHop deployment).

Database configuration and setup


For a heavy concurrent write load, InnoDB is essential. In MediaWiki 1.24 and earlier, set $wgAntiLockFlags = ALF_NO_LINK_LOCK | ALF_NO_BLOCK_LOCK; to reduce lock contention, at the expense of introducing occasional inconsistencies. Use memcached, not the default MySQL-based object cache.

See below for some DB configuration tricks. You can also try running the mysql-tuning-primer script to get some quick statistics and suggestions.

Multiple servers

The database software and web server software will start to compete for RAM on busy MediaWiki installations hosted on a single server. If your wiki has consistently high traffic, a logical step once other performance optimizations have been made (and the cache serves most of the content) is to put the database and web server on separate servers (or, in some cases, on multiple separate servers, starting with a database replica).
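Read traffic can be spread across replicas with $wgDBservers. A sketch of a master/replica split; the hostnames are placeholders, and credentials are assumed to be the same on both servers:

```php
# LocalSettings.php — master/replica database split (hostnames are examples)
$wgDBservers = [
    [ 'host' => 'db-master.example',   'dbname' => $wgDBname,
      'user' => $wgDBuser, 'password' => $wgDBpassword,
      'type' => 'mysql',   'load' => 0 ],   # load 0: master takes writes only
    [ 'host' => 'db-replica1.example', 'dbname' => $wgDBname,
      'user' => $wgDBuser, 'password' => $wgDBpassword,
      'type' => 'mysql',   'load' => 1 ],   # replicas serve the read load
];
```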


Some tools can help quickly evaluate the effects of performance tuning.

References

  1. APCu GitHub issue 19: Hung apaches on pthread wrlocks
  2. Niklas Laxström, Performance is a feature, December 9, 2013.