This page provides a quick overview of ways to improve the performance of MediaWiki. See also Manual:Cache.
For the impatient
Note: Most of these tweaks have been collected in the puppet/modules/role/manifests/simple_performant.pp and puppet/modules/role/manifests/simple_miser.pp files; if you use puppet, you can apply them all at once with a single command.
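Assuming these manifests are MediaWiki-Vagrant roles (an assumption; adjust to however your puppet setup maps manifests to roles), that single command would look something like:

```
# Hypothetical invocation of MediaWiki-Vagrant's roles plugin
vagrant roles enable simple_performant simple_miser
```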
Cache

Opcode caches store the compiled output of PHP scripts, greatly reducing the time needed to run the same script repeatedly. OPcache is included in PHP 5.5.0 and later. Supported opcode caches for earlier PHP versions are APC, Turck MMCache, WinCache and XCache; see $wgMainCacheType.
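For PHP 5.5 and later, OPcache usually only needs to be enabled and given enough memory. A minimal php.ini sketch; the values are illustrative assumptions, not tuned recommendations:

```ini
; php.ini — enable the bundled opcode cache
opcache.enable=1
opcache.memory_consumption=128      ; MB of shared memory for compiled scripts (assumed value)
opcache.max_accelerated_files=10000 ; MediaWiki ships many PHP files (assumed value)
```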
The MediaWiki user interface text and other expensive objects can be cached by an object cache, as can logins and partially completed pages.
APC was a combined opcode and object cache; it was deprecated with the release of PHP 5.5.0, which introduced OPcache as the built-in opcode cache. APCu is only the object-caching part of APC and can be used with PHP 5.5.0 and later. APCu is not completely stable; until that is fixed, it is better to use memcached.
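If you nevertheless use APC/APCu as the object cache, it is selected through the local-accelerator backend. A minimal LocalSettings.php sketch:

```php
// LocalSettings.php — use the local PHP accelerator (APC/APCu) as the object cache.
$wgMainCacheType = CACHE_ACCEL;
```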
- See Memcached
If you have enough available RAM, you should use memcached. It needs at least 80 MB of RAM: about 60 MB for the code plus whatever you allocate for the cache itself. If you balance load across multiple web servers, you should use a dedicated memcached server (or cluster).
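A minimal LocalSettings.php sketch for a single memcached instance; the address is an assumption for a one-server setup:

```php
// LocalSettings.php — route the object cache to memcached.
$wgMainCacheType = CACHE_MEMCACHED;
$wgMemCachedServers = [ '127.0.0.1:11211' ]; // assumed local instance on the default port
```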
Complete page caching gives tremendous performance increases for anonymous users but does not improve performance for users who are logged in.
- See Manual:File cache for instructions on enabling and configuring rendered page caching
MediaWiki pages can be computationally expensive to render. MediaWiki has an optional file caching system that stores the output of rendered pages. For larger sites, using an external cache like Squid or Varnish is preferable to using the file cache.
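A minimal LocalSettings.php sketch for the file cache; the directory is an assumption and only needs to be writable by the web server:

```php
// LocalSettings.php — cache rendered pages for anonymous visitors on disk.
$wgUseFileCache = true;
$wgFileCacheDirectory = "$IP/cache"; // assumed location
$wgUseGzip = true; // optionally store and serve gzip-compressed copies
```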
HTTP caching proxies and HTTP acceleration
Simply put, HTTP accelerators/caching proxies (such as Squid and Varnish) store copies of pages sent out by the web server. When a cached page is requested, the accelerator serves up the stored copy instead of passing the request on to the web server. This can tremendously reduce the load on the web server. When a page is updated, its copy is purged from the accelerator's cache.
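MediaWiki needs to know about the proxy layer so it can send purge requests on edit. A minimal LocalSettings.php sketch; the address is an assumption:

```php
// LocalSettings.php — declare a caching proxy sitting in front of MediaWiki.
$wgUseSquid = true; // renamed to $wgUseCdn in MediaWiki 1.34
$wgSquidServers = [ '127.0.0.1' ]; // assumed address of the Squid/Varnish instance
```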
Other web server tuning
- If you use Apache as your web server, use PHP-FPM rather than mod_php; PHP-FPM optimizes the re-use of PHP processes.
- Switch Apache to the event MPM instead of the prefork MPM.
- Adjust robots.txt to disallow bots from crawling history pages; this decreases general server load (see the sketch after this list).
- The HTTP/2 protocol can help, even with ResourceLoader.
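A minimal robots.txt sketch, assuming short URLs are configured so that plain page views live under /wiki/ while history pages and other actions go through /index.php; adjust the paths to your own URL layout:

```
User-agent: *
Disallow: /index.php
```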
PHP tuning

MediaWiki uses Composer to resolve dependencies. By default, Composer searches the /vendor folder at run time for classes that need autoloading, and this lookup is cached only at the bytecode level. You can instead have Composer generate a static list of autoload classes, which will make your wiki faster.
Open a console in the /vendor directory and enter:
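A sketch of the Composer invocation that builds the optimized class map:

```
composer dump-autoload --optimize
```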
You will need to repeat this procedure after every update to a vendor library.
This can have a great impact in some cases, and probably never hurts.
HipHop Virtual Machine (HHVM) is a JIT compiler for PHP, developed by and used in production at Facebook. HHVM is not a magic bullet, but it has favorable performance characteristics compared to Zend. HHVM support in MediaWiki isn't complete and should not be attempted by the faint-hearted (some brave attempts can be found at HipHop deployment).
Database configuration and setup
For a heavy concurrent write load, InnoDB is essential. In MediaWiki 1.24 and earlier, set $wgAntiLockFlags = ALF_NO_LINK_LOCK | ALF_NO_BLOCK_LOCK; to reduce lock contention, at the expense of introducing occasional inconsistencies. Use memcached, not the default MySQL-based object cache.
See below for some DB configuration tricks. You can also try running the mysql-tuning-primer script to get some quick statistics and suggestions.
On busy MediaWiki installations hosted on a single server, the database software and the web server software will start to fight over RAM. If your wiki has consistent traffic, a logical step, once other performance optimizations have been made (and the cache serves most of the content), is to put the database and the web server on separate servers (or, in some cases, on multiple separate servers, starting with a slave). Also:
- check that MySQL has the query cache enabled and enough memory (see the sketch after this list);
- give most of the memory to the InnoDB buffer pool;
- add cores for MySQL if it is maxed out at peak times;
- give memcached even more RAM for its in-memory cache.
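A minimal my.cnf sketch covering the two memory settings above; the values are illustrative assumptions and should be sized to your hardware:

```ini
# my.cnf — illustrative values, not recommendations
[mysqld]
query_cache_type        = 1    # enable the query cache (removed in MySQL 8.0)
query_cache_size        = 64M  # assumed value
innodb_buffer_pool_size = 1G   # give InnoDB most of the available RAM (assumed value)
```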
Benchmarking

Some tools can help quickly evaluate the effects of performance tuning.
- http://webpagetest.org offers "real life" testing, driven from your browser.
- http://fannon.de/p/mediawiki-benchmark is a web-based benchmarker that uses the API but makes real browser requests.
- ab (Apache Bench) is a command-line tool which quickly produces some nice stats (see the example after this list).
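A sketch of an ab invocation; the URL is a placeholder for a page on your own wiki:

```
ab -n 100 -c 10 http://example.org/wiki/Main_Page
```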
See also

- Quick harmless solutions
- Manual:APC, Manual:Cache
- http://dammit.lt/2007/01/26/mediawiki-performance-tuning/ : APC and a few simple settings that boost performance
- More extensive changes, sacrificing some functionality
- User:Ilmari Karonen/Performance tuning, focusing on small wikis
- Use cases
- For developers: