Manual:Performance tuning/zh

This page gives an overview of various ways to improve MediaWiki performance.

Context
MediaWiki is capable of scaling to meet the needs of large wiki farms such as those of the Wikimedia Foundation, wikiHow and FANDOM, and can take advantage of a wide range of methods, including multiple load-balanced database servers, Memcached object caching, Varnish caches (see Manual:Varnish caching) and multiple application servers. For most smaller installations this is overkill, though, and simply enabling object caching and optimizing PHP performance should suffice.

Quick start
The short version: we recommend a bytecode cache for PHP, APCu as the local object cache, and Memcached as the main cache; this is what the Wikimedia Foundation uses for Wikipedia and its other wikis.

In some cases, too many layers of caching can actually degrade performance.

Quick start with Puppet
Most of the tweaks on this page have been collected in a Puppet manifest. If you install Puppet, you can apply them to your server with a single command.

Bytecode caching

 * See PHP configuration

PHP works by compiling a PHP file into bytecode and then executing that bytecode. Compiling a large application such as MediaWiki takes considerable time. PHP accelerators work by storing the compiled bytecode and executing it directly, reducing the time spent compiling code.

OPcache is included in PHP 5.5.0 and later and is the recommended accelerator for MediaWiki. The only other supported opcode cache is WinCache.

Opcode caches store the compiled output of PHP scripts, greatly reducing the time needed to run a script multiple times. MediaWiki does not need to be configured for PHP bytecode caching and will "just work" once an opcode cache is installed and enabled.
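As an illustrative sketch (not an official recommendation), a typical php.ini tuning for OPcache on a MediaWiki server might look like the following; the exact values depend on your available memory and deployment workflow:

```ini
; Enable OPcache and give it enough memory to hold all of MediaWiki's
; compiled files. Values below are illustrative starting points.
opcache.enable=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=10000
; Re-check files on disk at most once per minute.
opcache.revalidate_freq=60
```

Lowering `opcache.revalidate_freq` makes code changes visible sooner at the cost of more stat() calls per request.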

Object caching
For more information about the local-server cache, the main cache and other cache interfaces, see Manual:Caching.

Local server
This interface is used for lightweight caching directly on the web server. This interface is expected to persist stored values across web requests.

The presence of a supported backend is automatically detected by MediaWiki; no MediaWiki configuration is necessary.

For PHP 7+, you should install APCu or WinCache. (On PHP 5, APCu was known to be unstable in some cases.)

To install APCu, use:
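On a Debian or Ubuntu system, for example, this typically works (package names vary by distribution and PHP version; on other systems `pecl install apcu` is the usual route):

```shell
# Debian/Ubuntu: install the APCu extension from the distribution packages.
sudo apt-get install php-apcu

# Restart PHP so the extension is loaded (service name depends on your setup),
# e.g. for PHP-FPM:
sudo systemctl restart php-fpm
```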

A status script bundled with the APCu package can be used to inspect the state of the cache, and also to examine the contents of the user cache to verify that MediaWiki is using it correctly.

Main cache
This interface is used as the main object cache for larger objects.

The main cache is disabled by default and needs to be configured manually. To enable it, set $wgMainCacheType to a key in $wgObjectCaches. There are preconfigured interfaces for Memcached, APC and MySQL. You can configure additional backends via $wgObjectCaches (e.g. for Redis).
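For example, a minimal LocalSettings.php sketch selecting one of the preconfigured backends (here Memcached; the server address is an illustrative assumption):

```php
<?php
// In LocalSettings.php: use Memcached as the main object cache.
// CACHE_MEMCACHED is one of MediaWiki's preconfigured backend keys.
$wgMainCacheType = CACHE_MEMCACHED;
$wgMemCachedServers = [ '127.0.0.1:11211' ]; // your Memcached host(s)
```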

Single web server
If you have APC/APCu installed, it is strongly recommended to use it by setting the following in LocalSettings.php:
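A minimal sketch of that setting:

```php
<?php
// In LocalSettings.php: use the local PHP accelerator (APC/APCu)
// as the main object cache on a single-server wiki.
$wgMainCacheType = CACHE_ACCEL;
```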

Once set, user session storage and the parser output cache will also inherit this main cache type.

When using APC with limited RAM (and no Memcached or other object cache configured), important objects might be evicted too often as the parser output cache builds up. Consider setting $wgParserCacheType to CACHE_DB, which will move those keys to the database instead.
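A sketch of that override, assuming the standard $wgParserCacheType setting:

```php
<?php
// Keep the main cache in APC/APCu, but store parser output in the
// database so it does not evict other keys from limited APC memory.
$wgMainCacheType   = CACHE_ACCEL;
$wgParserCacheType = CACHE_DB;
```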

If using CACHE_ACCEL and users are unable to log in due to "session hijacking" errors, consider overriding $wgSessionCacheType to CACHE_DB. See task T147161 for more info.
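A sketch of that workaround:

```php
<?php
// Work around "session hijacking" errors seen with CACHE_ACCEL
// (see task T147161) by storing sessions in the database instead.
$wgSessionCacheType = CACHE_DB;
```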

If you can't use APC, consider installing Memcached (requires at least 80 MB of RAM). While installing Memcached is considerably more complicated, it is very effective.

If you choose neither APC nor Memcached, you can fall back to storing the object cache in your database. The following preset will do that:
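A minimal sketch of that fallback:

```php
<?php
// Fall back to the database-backed object cache (the objectcache table).
// Slower than APC or Memcached, but requires no extra software.
$wgMainCacheType = CACHE_DB;
```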

Multiple web servers
If your MediaWiki site is served by multiple web servers, you should use a central Memcached server. Detailed instructions are on the Memcached page.

It is important that you do not use APC as the main cache for multiple web servers, as this cache is expected to be coordinated centrally for a single MediaWiki installation. Having each web server use APC as its own main cache will cause stale values, corruption or other unexpected side-effects. Note that for values that are safe to store in an uncoordinated fashion (the "local-server cache"), MediaWiki automatically makes use of APC regardless of this configuration setting.
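As a sketch, a multi-server setup would point every web server's LocalSettings.php at the same central Memcached instance (the host address below is an assumption for illustration):

```php
<?php
// All web servers share a single, centrally coordinated Memcached
// instance, so cached values stay consistent across the farm.
$wgMainCacheType = CACHE_MEMCACHED;
$wgMemCachedServers = [ '10.0.0.5:11211' ]; // example central Memcached host
```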

Interwiki cache
MediaWiki interwiki prefixes are stored in the interwiki database table. See Interwiki cache for how to cache these in a CDB or PHP file.

Localisation cache
By default, interface message translations are cached in the l10n_cache database table. Ensure $wgCacheDirectory in LocalSettings.php is set to a valid path to use local file-based caching instead. See the localisation cache documentation for more details.
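A minimal sketch of that setting:

```php
<?php
// Store the localisation cache as local files instead of in the database.
// $IP is the MediaWiki installation path; the directory must exist and
// be writable by the web server user.
$wgCacheDirectory = "$IP/cache";
```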

Page view caching
Page view caching increases performance tremendously for anonymous (not logged-in) users. It does not affect performance for logged-in users.

Caching proxies
A caching proxy (or "HTTP accelerator") stores a copy of the web pages generated by your web server. When such a page is requested a second time, the proxy serves its local copy instead of passing the request on to the real web server.

This massively improves response times for page loads by end users, and also tremendously reduces the computational load on the MediaWiki web server. When a page is edited, MediaWiki can automatically purge the local copy from the caching proxy.

Examples of caching proxies:

 * Varnish Cache, currently (as of November 2018) used by Wikipedia. See also Manual:Varnish caching.
 * Squid, which Wikipedia used until 2012. See also Squid on Wikitech.
 * Apache's mod_cache_disk; see this article for instructions with MediaWiki.

File cache

 * See Manual:File cache for the main article on this.

In the absence of a caching proxy or HTTP accelerator, MediaWiki can optionally use the file system to store the output of rendered pages. For larger sites, using an external cache like Varnish is preferable to using the file cache.
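If you do want to try the file cache, a minimal sketch looks like this (the directory path is an example and must be writable by the web server):

```php
<?php
// Cache fully rendered pages for anonymous users on the local file system.
$wgUseFileCache = true;
$wgFileCacheDirectory = "$IP/cache/html"; // example path
```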

Web server

 * If you use Apache as your web server, use PHP-FPM, not mod_php. PHP-FPM optimizes the re-use of PHP processes.
 * Switch Apache to the event MPM instead of the prefork MPM.
 * Adjust robots.txt to disallow bots from crawling history pages. This decreases general server load.
 * The HTTP/2 protocol can help, even with ResourceLoader.
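For instance, a robots.txt sketch that keeps well-behaved crawlers out of history pages and other index.php-driven views, assuming the common short-URL layout where articles live under /wiki/ and the script path is /w/:

```
# Let crawlers index article pages under /wiki/, but keep them away
# from index.php views such as history, diffs and edit forms.
User-agent: *
Disallow: /w/index.php
```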

Configuration settings
Large sites running MediaWiki 1.6 or later should set $wgJobRunRate to a low number, say 0.01. See Manual:Job queue for more information.
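A sketch of that setting; jobs not run during web requests should then be processed out of band (for example by running the runJobs.php maintenance script from cron):

```php
<?php
// Run a job on roughly 1 in 100 web requests rather than on every request,
// shifting job-queue work away from page views.
$wgJobRunRate = 0.01;
```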

Composer
MediaWiki uses Composer for organizing library dependencies. By default these are included from the vendor/ directory using a dynamic autoloader. This autoloader needs to search directories, which can be slow. It is recommended to generate a static autoloader with Composer, which will make your wiki respond faster.

Using a static autoloader is the default for all MediaWiki installations from the tarball download or from Git. If for some reason this is not the case, use the following to generate the static autoloader:

composer update -o --no-dev

Remember that this will need to be re-run after each MediaWiki update, as it generates a static list of the libraries and classes that exist in the software.

MySQL
For a heavy concurrent write load, InnoDB is essential. Use memcached, not the default MySQL-based object cache.

See below for some DB configuration tricks. You can also try running the mysql-tuning-primer script to get some quick statistics and suggestions.

Multiple servers
The database software and the web server software will start to compete for RAM on busy MediaWiki installations hosted on a single server. If your wiki has consistent traffic, a logical step, once other performance optimizations have been made (and the cache serves most of the content), is to put the database and the web server on separate servers (or, in some cases, on multiple separate servers, starting with a replica). Also:


 * check that MySQL has the query cache enabled and enough memory;
 * give most memory to the InnoDB buffer pool (innodb_buffer_pool_size);
 * add cores for MySQL if it is maxed out at peak times;
 * give Memcached even more RAM for its in-memory cache.
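As a rough sketch, the MySQL-side suggestions above might translate into my.cnf settings like these; the values are illustrative and must be sized to your server's RAM (and note that the query cache only exists in MySQL 5.x, having been removed in MySQL 8.0):

```ini
[mysqld]
# Give InnoDB most of the available memory on a dedicated database server.
innodb_buffer_pool_size = 4G

# Query cache (MySQL 5.x only; removed in MySQL 8.0).
query_cache_type = 1
query_cache_size = 64M
```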

Benchmarking
Some tools can help you quickly evaluate the effects of performance tuning.


 * http://webpagetest.org offers "real life" testing, driven from your browser.
 * ab (ApacheBench) is a command-line tool that quickly produces some useful statistics.
 * PageSpeed

See also

 * http://dammit.lt/2007/01/26/mediawiki-performance-tuning/ : APC and a few simple settings that boost performance


 * More extensive changes, sacrificing some functionality
 * User:Aaron Schulz/How to make MediaWiki fast
 * Comprehensive MediaWiki performance optimisation (mostly redundant with this page and the links above)
 * User:Ilmari Karonen/Performance tuning, focusing on small wikis
 * Use cases:
   * Wikipedia: Site internals, configuration, code examples and management issues [PDF, 2007]
   * Caching, minification, domain-sharing and compression techniques used by WikiFur
 * For developers:
   * Logging and
   * North's Performance chapter