Manual:Performance tuning

Several other pages cover caching and performance techniques:
 * Manual:File cache, saving rendered pages to disk
 * Memcached, a memory-based object cache
 * Manual:Squid caching, a separate caching server
 * User:Robchurch/Performance tuning, mostly about the PHP APC module
 * User:Aaron Schulz/How to make MediaWiki fast
 * http://dammit.lt/2007/01/26/mediawiki-performance-tuning/, also covers APC and a few simple settings that boost performance
 * Wikimedia servers, Wikimedia's caching and multiple server strategy for its sites
 * Caching, minification, domain-sharing and compression techniques used by WikiFur
 * Wikipedia: Site internals, configuration, code examples and management issues [PDF, 2007]

If you have the RAM, use Memcached. It needs at least 80MB: about 60MB for the code plus whatever you allocate for the cache itself. Memcached will cache user interface text, login sessions, and partially completed pages.
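
A minimal LocalSettings.php sketch, assuming a memcached daemon on its default port on the same machine:

 # LocalSettings.php: use memcached as the main object cache.
 # 127.0.0.1:11211 is memcached's default address and port; adjust
 # to wherever your daemon actually runs.
 $wgMainCacheType       = CACHE_MEMCACHED;
 $wgMemCachedServers    = array( '127.0.0.1:11211' );
 $wgSessionsInMemcached = true;   # keep login sessions in the cache too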

As your site gets busier, the database and the Apaches will start to fight over RAM, and your next step is likely to be splitting the database server and the Apache web server(s) onto separate machines. You'll probably want to turn off swap on the database server; the operating system often seems to allocate too much RAM to its own cache on database servers.
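
Pointing MediaWiki at the separate database machine is then a one-line change (the hostname here is hypothetical):

 # LocalSettings.php: the database now lives on its own machine.
 # 'db1.example.internal' is a hypothetical hostname.
 $wgDBserver = 'db1.example.internal';

Disabling swap itself happens at the operating-system level (for example, swapoff -a on Linux), not in MediaWiki.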

If you're still seeing mostly author traffic and few readers, adding more Apache machines is the next step. If you have many readers, adding a Squid server first is probably the better choice.

Busy sites should put a Squid cache server between the Apaches and the internet. For Wikimedia, the Squids handle 70-80% of all page views, including most views by users who are not logged in, and they become extremely important as traffic rises. Ideally you'll have enough RAM on the Squid(s) to hold the current version of every article, but in practice you're likely to use some disk storage as well. Squid can use up to 2GB of RAM directly, plus more for operating system caching; Wikimedia typically uses 3GB or 4GB per Squid.
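
MediaWiki needs to know about the Squid layer so that edits send purge requests to it. A sketch of the era-appropriate settings (the address is hypothetical):

 # LocalSettings.php: tell MediaWiki about the Squid layer.
 # 10.0.0.1 is a hypothetical Squid address; list all of yours.
 $wgUseSquid     = true;
 $wgSquidServers = array( '10.0.0.1' );
 $wgSquidMaxage  = 18000;   # maximum time (seconds) Squid may cache a page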

Wikimedia uses an approximate ratio of one 4GB Squid to five 512MB 3GHz P4 Apaches to one 4GB dual-Opteron database server with six fast disks in RAID 10. One or two of the Apaches carry extra RAM beyond the 512MB Apache needs, and that extra is given to Memcached, up to two or three gigabytes per machine. One such set should handle about 150 to 200 page views per second; for press links and slashdotting, the Squid should handle at least twice that.

For a heavy concurrent write load, InnoDB is essential. Set $wgAntiLockFlags = ALF_NO_LINK_LOCK | ALF_NO_BLOCK_LOCK to reduce lock contention, at the expense of introducing occasional inconsistencies. Use memcached rather than the default MySQL-based object cache.
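
On the MySQL side, a my.cnf fragment along these lines (values are illustrative; size the buffer pool to your RAM) makes InnoDB the default and relaxes log flushing:

 # my.cnf: illustrative values only
 [mysqld]
 default-storage-engine         = InnoDB
 innodb_buffer_pool_size        = 2G
 # Write the log at each commit but fsync only about once a second;
 # faster under concurrent writes, may lose the last second on a crash.
 innodb_flush_log_at_trx_commit = 2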

Large sites running MediaWiki 1.6 or later should set $wgJobRunRate to a low number, say 0.01. See Manual:Job queue for more information.
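
In LocalSettings.php that is simply:

 # Run, on average, one queued job per 100 page requests.
 $wgJobRunRate = 0.01;

Sites that want job processing off web requests entirely can set the rate to 0 and run php maintenance/runJobs.php from cron instead.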