Manual:Performance tuning

This page provides an overview of different ways to improve the performance of MediaWiki.

Context
MediaWiki is capable of scaling to meet the needs of large wiki farms such as those of the Wikimedia Foundation, WikiHow and FANDOM, and can take advantage of a wide range of methods, including multiple load-balanced database servers, Memcached object caching, Varnish caches (see Manual:Varnish caching) and multiple application servers. For most smaller installations, though, this is overkill, and simply enabling object caching and optimizing PHP performance should suffice.

Quick start
Short version: we recommend a bytecode cache for PHP, APCu as the local object cache, and Memcached as the main cache; this is what the Wikimedia Foundation uses for Wikipedia et al.

In some cases, over-caching at too many levels may degrade performance.

Quick start with Puppet
Most of the tweaks on this page have been collected in a Puppet manifest. If you install Puppet, you can apply them to your server with a single command.

Bytecode caching

 * See PHP configuration

PHP works by compiling a PHP file into bytecode and then executing that bytecode. Compiling a large application such as MediaWiki takes considerable time. PHP accelerators work by storing the compiled bytecode and executing it directly, reducing the time spent compiling code.

OPcache is included in PHP 5.5.0 and later and is the recommended accelerator for MediaWiki. Another supported opcode cache is WinCache.

Opcode caches store the compiled output of PHP scripts, greatly reducing the time needed to run a script multiple times. MediaWiki does not need to be configured for PHP bytecode caching and will "just work" once an opcode cache is installed and enabled.

Object caching
For more information about local server, main cache and other cache interfaces, see Manual:Caching.

Local server
This interface is used for lightweight caching directly on the web server. This interface is expected to persist stored values across web requests.

Presence of a supported backend is automatically detected by MediaWiki. No configuration necessary.

For PHP 7+, you should install APCu, XCache, or WinCache. HHVM has built-in support for APC cache methods. (On PHP 5, APCu was known to be unstable in some cases.)

To install APCu, use:
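A minimal sketch, assuming a Debian/Ubuntu server with PHP from the distribution packages (the package and service names differ on other distributions and PHP versions):

```shell
# Install the APCu extension (Debian/Ubuntu package name; may differ elsewhere)
sudo apt-get install php-apcu
# Reload PHP so the extension is picked up: restart your PHP-FPM service
# (name varies, e.g. php8.1-fpm) or Apache if using mod_php.
sudo systemctl restart apache2
```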

A script bundled with the APCu package can be used to inspect the status of the cache, and also to examine the contents of the user cache to verify that MediaWiki is correctly using it.

Main cache
This interface is used as the main object cache for larger objects.

The main cache is disabled by default and needs to be configured manually. To enable it, set $wgMainCacheType to a key in $wgObjectCaches. There are preconfigured interfaces for Memcached, APC, and MySQL. You can configure additional backends via $wgObjectCaches (e.g. for Redis).

Single web server
If you have APC installed, it is strongly recommended to use it by setting the following in LocalSettings.php. Once set, the user session store and parser output cache will also inherit this $wgMainCacheType setting.
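With MediaWiki's built-in presets, that setting is:

```php
// LocalSettings.php
$wgMainCacheType = CACHE_ACCEL; // use the local PHP accelerator (APC/APCu)
```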

When using APC with limited RAM (and no Memcached or other object cache configured), important objects might be evicted too often as the parser output cache builds up. Consider setting $wgParserCacheType to CACHE_DB, which will move those keys out to the database instead.

If users are unable to log in due to "session hijacking" errors, consider overriding $wgSessionCacheType. See task T147161 for more info.

If you can't use APC, consider installing Memcached (requires at least 80 MB of RAM). While installing Memcached is considerably more complicated, it is very effective.

If neither APC nor Memcached is an option, you can fall back to storing the object cache in your MySQL database. The following preset will do that:
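```php
// LocalSettings.php
$wgMainCacheType = CACHE_DB; // store the object cache in the objectcache database table
```

This requires no extra software, but is noticeably slower than an in-memory cache.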

Multiple web servers
If your MediaWiki site is served by multiple web servers, you should use a central Memcached server. Detailed instructions are on the Manual:Memcached page.

It is important that you do not use APC as the main cache for multiple web servers, as this cache is expected to be coordinated centrally for a single MediaWiki installation. Having each web server use APC as its own main cache will cause stale values, corruption or other unexpected side effects. Note that for values that are safe to store in an uncoordinated fashion (the "local-server" cache), MediaWiki automatically makes use of APC regardless of this configuration setting.
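A sketch of a central Memcached setup, assuming a single Memcached daemon on a placeholder address 10.0.0.5 that is reachable from every web server:

```php
// LocalSettings.php (identical on every web server)
$wgMainCacheType = CACHE_MEMCACHED;
$wgMemCachedServers = [ '10.0.0.5:11211' ]; // placeholder host; 11211 is the default Memcached port
```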

Interwiki cache
MediaWiki interwiki prefixes are stored in the interwiki database table. See Interwiki cache for how to cache these in a CDB or PHP file.

Localisation cache
By default, interface message translations are cached in the l10n_cache database table. Ensure $wgCacheDirectory in LocalSettings.php is set to a valid path to use local file caching instead. See Manual:$wgCacheDirectory for more details.
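For example (the path is an assumption; any directory writable by the web server will do):

```php
// LocalSettings.php
$wgCacheDirectory = "$IP/cache"; // store the localisation cache as local files here
```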

Page view caching
Page view caching increases performance tremendously for anonymous (not logged-in) users. It does not affect performance for logged-in users.

Caching proxy
A caching proxy (or "HTTP accelerator") stores a copy of web pages generated by your web server. When such a page is requested a second time, the proxy serves up its local copy instead of passing the request on to the real web server.

This massively improves the response times for page loads by end users, and also tremendously reduces the computational load on the MediaWiki web server. When a page is edited, MediaWiki can automatically purge the local copy from the cache proxy.

Examples of cache proxies:


 * Varnish Cache, this is currently (as of November 2018) used by Wikipedia. See also Manual:Varnish caching.
 * Squid, this was used by Wikipedia prior to 2012. See also Squid on Wikitech.
 * Apache's mod_cache_disk, see this article for instructions with MediaWiki.

File cache

 * See Manual:File cache for the main article about this.

In absence of a caching proxy or HTTP accelerator, MediaWiki can optionally use the file system to store the output of rendered pages. For larger sites, using an external cache like Varnish is preferable to using the file cache.
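A minimal sketch of enabling the file cache (the paths are examples):

```php
// LocalSettings.php
$wgUseFileCache = true;               // serve cached page HTML to anonymous users
$wgFileCacheDirectory = "$IP/cache";  // must be writable by the web server
$wgUseGzip = true;                    // also store gzip-compressed copies
```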

Web server

 * If you use Apache as your web server, use PHP-FPM, not mod_php; PHP-FPM optimizes the re-use of PHP processes.
 * Switch Apache to use the event MPM instead of the prefork MPM.
 * Adjust robots.txt to disallow bots from crawling history pages. This decreases general server load.
 * The HTTP/2 protocol can help, even with ResourceLoader.
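For the robots.txt tweak above, a common pattern, assuming the usual layout where readable pages are served from /wiki/ and index.php (including history and diff views) lives under /w/, is:

```
User-agent: *
Disallow: /w/
```

Adjust the paths to your own URL layout; if you do not use short URLs, history pages are reached via index.php directly.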

Configuration settings
Large sites running MediaWiki 1.6 or later should set $wgJobRunRate to a low number, say 0.01. See Manual:Job queue for more information.
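For example (the cron alternative assumes shell access to the server; the path is a placeholder):

```php
// LocalSettings.php: run at most one queued job per hundred web requests
$wgJobRunRate = 0.01;
// Alternatively, set it to 0 and run jobs from cron instead, e.g.:
//   */5 * * * * php /path/to/mediawiki/maintenance/runJobs.php --maxjobs 100
```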

Composer
MediaWiki uses Composer for organizing library dependencies. By default these are included from the vendor directory using a dynamic autoloader. This autoloader needs to search directories, which can be slow. It is recommended to generate a static autoloader with Composer, which will make your wiki respond faster.

Using a static autoloader is the default for all MediaWiki installations from the tarball download or from Git. If for some reason this is not the case, use the following to generate the static autoloader:

composer update -o --no-dev

Remember that this will need to be re-run after each MediaWiki update, as it includes a static copy of which libraries and classes exist in the software.

MySQL
For a heavy concurrent write load, InnoDB is essential. Use memcached, not the default MySQL-based object cache.

See below for some DB configuration tricks. You can also run the mysql-tuning-primer script to get some quick statistics and suggestions.

Multiple servers
On busy MediaWiki installations hosted on a single server, the database software and web server software will start to fight over RAM. If your wiki has consistent traffic, a logical step, once other performance optimizations have been made (and the cache serves most of the content), is to put the database and web server on separate servers (or, in some cases, multiple separate servers, starting with a database replica). Also:


 * check that MySQL has query cache enabled and enough memory;
 * give most memory to innodb_buffer_pool;
 * add cores for MySQL if maxed out at peak times;
 * give memcached even more RAM for in-memory cache.
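An illustrative my.cnf fragment for the MySQL points above. The values are examples only: size the buffer pool to your available RAM (often 50-70% on a dedicated database server), and note that the query cache was removed in MySQL 8.0:

```ini
[mysqld]
innodb_buffer_pool_size = 2G   # most important InnoDB setting
query_cache_type        = 1    # pre-MySQL-8.0 only
query_cache_size        = 64M  # pre-MySQL-8.0 only
```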

Benchmarking
Some tools can help quickly evaluate the effects of performance tuning.


 * http://webpagetest.org provides "real life" testing, driven from your browser.
 * ab is a command line tool which quickly produces some nice stats.
 * PageSpeed