Manual:Performance tuning

This page provides a quick overview of ways to improve the performance of MediaWiki.

Quick start
Short version: We recommend a bytecode cache for PHP, APCu as the local object cache, and Memcached as the main cache; this is what the Wikimedia Foundation uses for Wikipedia et al.
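As a sketch, the short version above might translate into LocalSettings.php as follows (the Memcached address is a placeholder; OPcache is enabled in php.ini, not here, and APCu is picked up automatically for the local-server cache):

```php
// LocalSettings.php — sketch of the recommended setup.
// Bytecode cache (OPcache) and APCu are installed at the PHP level;
// MediaWiki detects the local accelerator automatically.
$wgMainCacheType    = CACHE_MEMCACHED;
$wgMemCachedServers = [ '127.0.0.1:11211' ]; // placeholder address
```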

In some cases, over-caching at too many levels may degrade performance.

Quick start with Puppet
Most of the tweaks on this page have been collected in a puppet manifest. If you install Puppet, you can apply them to your server with a single command.

Bytecode caching

 * See PHP configuration

PHP works by compiling a PHP file into bytecode and then executing that bytecode. The process of compiling the file into bytecode takes some time. PHP accelerators work by storing the compiled bytecode and executing it directly, reducing the time spent compiling code.

OPcache is included in PHP 5.5.0 and later and is the recommended accelerator for MediaWiki. If it is unavailable, APC is a decent alternative. Other supported opcode caches are Turck MMCache, WinCache, and XCache.

Opcode caches store the compiled output of PHP scripts, greatly reducing the amount of time needed to run a script multiple times. MediaWiki does not need to be configured to do PHP caching and will "just work" if you install and enable any of them.
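While no MediaWiki-side configuration is needed, OPcache itself is configured in php.ini. As an illustration, a typical starting point might look like this (the values are examples to adapt, not tuned recommendations):

```ini
; php.ini — enable OPcache and give it room for MediaWiki's many PHP files.
opcache.enable=1
opcache.memory_consumption=128      ; MB of shared memory for compiled bytecode
opcache.interned_strings_buffer=8   ; MB reserved for interned strings
opcache.max_accelerated_files=10000 ; MediaWiki ships thousands of PHP files
opcache.revalidate_freq=60          ; re-check file timestamps at most once a minute
```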

mbstring
Although MediaWiki can work without the mbstring PHP extension, it is highly recommended for performance reasons (note: the mbstring.func_overload configuration option must be off).

HHVM
The HipHop Virtual Machine (HHVM) is a JIT compiler for PHP developed by, and used in production at, Facebook. HHVM is not a magic bullet, but it has favorable performance characteristics compared to the Zend engine. HHVM support in MediaWiki is incomplete, and should not be attempted by the faint-hearted.

Object caching
There are three primary object cache interfaces in MediaWiki. For more information about these, see Manual:Caching.

Local server
This interface is used for lightweight caching directly on the web server. This interface is expected to persist stored values across web requests.

The presence of a supported backend is automatically detected by MediaWiki; no configuration is necessary.

HHVM has built-in support for the APC cache methods. For PHP 5.5+, you can install APCu, XCache, or WinCache. Note that APCu has been known to be unstable on PHP 5.5+ in some cases.

Main cache
This interface is used as the main object cache for larger objects.

The main cache is disabled by default and needs to be configured manually. To enable it, set $wgMainCacheType in LocalSettings.php. There are preconfigured backends for Memcached, APC, and MySQL. You can configure additional backends via $wgObjectCaches (e.g. for Redis).
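For illustration, selecting one of the preconfigured backends is a one-line change in LocalSettings.php (use exactly one of these):

```php
// LocalSettings.php — pick one main cache backend.
$wgMainCacheType = CACHE_ACCEL;        // local PHP accelerator (APC/APCu)
// $wgMainCacheType = CACHE_MEMCACHED; // Memcached (also set $wgMemCachedServers)
// $wgMainCacheType = CACHE_DB;        // MySQL objectcache table
```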

Default: $wgMainCacheType = CACHE_NONE; (no main cache)

Local server
If you have APC installed, it is strongly recommended to use it by setting $wgMainCacheType = CACHE_ACCEL in LocalSettings.php. When using APC with limited RAM and no Memcached or other object cache, objects may be evicted too often due to the size of the parser output cache. Consider overriding $wgParserCacheType to CACHE_DB; this will move those keys to the database instead.
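Put together, that APC setup might look like this in LocalSettings.php:

```php
// LocalSettings.php — APC as main cache, parser output kept in the database
// so bulky rendered pages don't evict everything else from limited APC memory.
$wgMainCacheType   = CACHE_ACCEL;
$wgParserCacheType = CACHE_DB;
```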

If you can't use APC, consider installing Memcached (it will require at least 80 MB of RAM). While installing Memcached is considerably more complicated, it is very effective.

If neither APC nor Memcached is an option, you can fall back to storing the object cache in your MySQL database by using the CACHE_DB preset.
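A minimal LocalSettings.php sketch of that fallback:

```php
// LocalSettings.php — store the object cache in the MySQL objectcache table.
// Slower than APC or Memcached, but better than no main cache at all.
$wgMainCacheType = CACHE_DB;
```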

Multiple servers
If your MediaWiki site is served by multiple web servers, you should use a central Memcached server. Detailed instructions are on the Memcached page.
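Sketching the multi-server case: every web server points at the same central Memcached instance in its LocalSettings.php (the address below is a placeholder):

```php
// LocalSettings.php — identical on every web server.
$wgMainCacheType    = CACHE_MEMCACHED;
$wgMemCachedServers = [ '10.0.0.5:11211' ]; // the central Memcached server (placeholder)
```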

Interwiki cache
MediaWiki interwiki prefixes are stored in the interwiki database table. See Interwiki cache for how to cache these in a CDB or PHP file.

Page view caching
Page view caching increases performance tremendously for anonymous (not logged-in) users. It does not affect performance for logged-in users.

Caching proxy

Simply put, HTTP accelerators (or "caching proxies") store copies of pages sent out by the web server. When a cached page is requested a second time, the proxy serves up the copy instead of passing the request on to the web server. This can tremendously reduce the load on the MediaWiki web server. When a page is updated, the copy is removed from the accelerator's cache through a purge.

Use Varnish as a caching proxy, or leverage any built-in support your web server may have through a plug-in or configuration option.
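For the purges mentioned above to work, MediaWiki must know about the proxy. A minimal sketch, using the setting names from this era of MediaWiki (they apply to Varnish despite the "Squid" naming; the address is a placeholder):

```php
// LocalSettings.php — send HTTP PURGE requests to the caching proxy on page updates.
$wgUseSquid    = true;                // enables proxy purging (works for Varnish too)
$wgSquidServers = [ '127.0.0.1' ];    // address(es) of the caching proxy (placeholder)
```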

See also this article for instructions on using Apache's mod_cache_disk with MediaWiki.

File cache

In the absence of a caching proxy or HTTP accelerator, MediaWiki can optionally use the file system to store the output of rendered pages. For larger sites, using an external cache like Varnish is preferable to using the file cache.
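Enabling the file cache is a two-line change in LocalSettings.php (the directory below is an example; it must be writable by the web server):

```php
// LocalSettings.php — cache rendered pages for anonymous users on disk.
$wgUseFileCache       = true;
$wgFileCacheDirectory = "$IP/cache"; // example path; must be writable by the web server
```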

Web server

 * If you use Apache as your web server, use PHP-FPM, not mod_php. PHP-FPM optimizes the re-use of PHP processes.
 * Switch Apache to use the event MPM instead of the prefork MPM.
 * Adjust robots.txt to disallow bots from crawling history pages. This decreases general server load.
 * The HTTP/2 protocol can help, even with ResourceLoader.
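As an illustration of the robots.txt suggestion above, assuming the common setup where article views use short URLs (/wiki/...) while history and other action pages go through /index.php, a minimal robots.txt might be:

```
User-agent: *
Disallow: /index.php
```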

Configuration settings
Large sites running MediaWiki 1.6 or later should set $wgJobRunRate to a low number, say 0.01. See Manual:Job queue for more information.
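Concretely, in LocalSettings.php:

```php
// LocalSettings.php — run, on average, at most one queued job per 100 page requests.
$wgJobRunRate = 0.01;
```

Sites that lower this rate often run the job queue from cron via maintenance/runJobs.php instead, so jobs still get processed promptly.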

Composer
MediaWiki uses Composer to manage dependencies. By default, it searches all packages in the /vendor folder for classes that need autoloading. This happens at runtime and is cached only at the bytecode level. You can configure Composer to use a static list of autoload classes, which will make your wiki faster.

Open a console in the MediaWiki installation directory (where composer.json is located) and run: composer dump-autoload --optimize

Repeat this procedure after every update of the vendor libraries.

MySQL
For a heavy concurrent write load, InnoDB is essential. In MediaWiki 1.24 and earlier, set $wgAntiLockFlags to reduce lock contention, at the expense of introducing occasional inconsistencies. Use Memcached, not the default MySQL-based object cache.
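As an illustration of typical InnoDB tuning on the database side, a my.cnf fragment might look like this (the sizes are placeholders to adapt to your available RAM):

```ini
[mysqld]
default-storage-engine = InnoDB
innodb_buffer_pool_size = 512M     ; the single most impactful setting (placeholder size)
innodb_flush_log_at_trx_commit = 2 ; fewer fsyncs, at the cost of losing ~1s of writes on crash
```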

See below for some DB configuration tricks. You can also run the mysql-tuning-primer script to get some quick statistics and suggestions.

Multiple servers
The database software and web server software will start to compete for RAM on busy MediaWiki installations hosted on a single server. If your wiki has consistently high traffic, a logical step, once other performance optimizations have been made (and the cache serves most of the content), is to put the database and web server on separate servers (or, in some cases, on multiple separate servers, starting with a replica). Also:


 * check that the MySQL query cache is enabled and has enough memory;
 * give most of the memory to the InnoDB buffer pool (innodb_buffer_pool_size);
 * add CPU cores for MySQL if it is maxed out at peak times;
 * give memcached even more RAM for its in-memory cache.

Benchmarking
Some tools can help quickly evaluate the effects of performance tuning.


 * http://webpagetest.org offers "real life" testing, driven from your browser.
 * http://fannon.de/p/mediawiki-benchmark is a web-based benchmark tool that uses the API with real browser requests.
 * ab (ApacheBench) is a command-line tool that quickly produces useful statistics.
 * PageSpeed