Wikimedia Performance Team/Backend performance/zh

These are the performance guidelines for MediaWiki backend development, aimed at code to be deployed to Wikimedia Foundation wikis.

What to do (summary)

 * Be prepared to be surprised by the performance of your code; predictions are often bad.
 * Measure performance rigorously (in both your development environment and production) and know where the time is being spent.
 * When you discover latency, take responsibility and make it a priority; you have the best idea of the usage patterns and of what to test.
 * Performance problems often correlate with other symptoms of poor engineering; think about root causes.
 * MediaWiki is complex and can interact in unexpected ways. Your code may expose performance problems elsewhere, and you need to identify them.
 * An uncached but valuable expensive operation should take at most 5 seconds; 2 seconds is better.
 * If that is not enough, consider using the job queue to perform the task on backend servers.

General performance principles
MediaWiki application development:


 * Lazy-load modules that do not affect the initial rendering of the page, especially "above the fold" (the top part of the page that is initially visible on the user's screen). In other words, load as little JavaScript as possible up front, and load further components on demand. See loading modules for more information.
 * Users should have a smooth experience; different components should render progressively. Preserve the positioning of elements (e.g. avoid pushing content around as the page reflows).

Wikimedia infrastructure:
 * Your code runs in a shared environment. Consequently, long-running SQL queries cannot run as part of a web request. Instead, have them run on dedicated servers (use the JobQueue), and watch out for deadlock and lock-wait-timeout problems.


 * The tables you create will be shared by other code. Every database query must be able to use one of the indexes (including write queries!). EXPLAIN your queries, and create new indexes where required.


 * Choose the right persistence layer for your needs: the Redis job queue, the MariaDB database, or Swift file storage. Cache only if your code can always cope efficiently with the cached data vanishing; otherwise, persist the data.
 * Wikimedia uses and depends heavily on many different caching layers, so your code needs to work in that environment! (But it must also work correctly if everything misses the cache.)
 * The cache hit ratio should be as high as possible; watch out if you are introducing new cookies, shared resources, bundled requests or calls, or other changes that vary requests and reduce the cache hit ratio.

Measure
Measure how fast your code runs, so that you can base decisions on facts rather than superstition or gut feeling. Use these principles together with the architecture guidelines and the security guidelines. Both performance (your code runs (relatively) fast) and scalability (your code does not become much slower on larger wikis or when instantiated many times concurrently) matter; measure both.

Percentiles
Always consider the high percentile values rather than the median.

It is common for performance data on the web to contain two different "signals": one from users hitting the application with a warm cache and one from users hitting it with a cold cache. Calculating averages on a dataset containing both signals is meaningless. As a quick sanity check of your data, make sure you have at least 10,000 data points and calculate the 50th and 90th percentile statistics. Those two numbers may differ greatly, which can indicate a performance problem you can fix. For example, if network round trips are fairly slow and you fetch many resources, you will see a big difference between users who arrive at your site with cached resources (thereby avoiding all those slow round trips) and users who arrive without them. If you have enough data, it is even better to calculate the 1st, 50th, 90th, and 99th percentiles. A good rule of thumb is that for statistical significance you need 10,000 data points to calculate the 90th percentile, 100,000 for the 99th percentile, and 1 million for the 99.9th percentile.

This rule of thumb oversimplifies things a bit, but it works well for performance analysis. (Some literature on this.)
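As a quick illustration of why averages and medians mislead on bimodal data, here is a runnable Python sketch; the 120 ms warm-cache and 900 ms cold-cache figures, and the 70/30 split, are made up for the example:

```python
import random

def percentile(samples, p):
    """Return the p-th percentile (0-100) via nearest-rank on sorted data."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Simulated bimodal load times: warm-cache hits around 120 ms,
# cold-cache hits (30% of traffic) around 900 ms.
random.seed(42)
samples = [random.gauss(120, 20) if random.random() < 0.7 else random.gauss(900, 150)
           for _ in range(10_000)]

p50 = percentile(samples, 50)
p90 = percentile(samples, 90)
# The two signals pull these far apart: p50 sits in the warm cluster,
# p90 in the cold one, while a mean would land in between, describing nobody.
print(f"p50={p50:.0f} ms  p90={p90:.0f} ms")
```

A large p50/p90 gap like this one is the "two signals" pattern described above.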

Latency
Software should run at an acceptable speed regardless of network latency, but some operations can have surprising variations in network latency, such as looking up image files when Instant Commons is enabled. Remember that latency also depends on the user's connection. Wikimedia sites serve many people on mobile or dial-up connections that are slow and have high round-trip times. There are also fast connections with long RTTs, such as satellite modems, where a 2-second RTT is not uncommon.

Some ways to manage latency:
 * First, be aware of which code paths are meant to always be fast (database, memcache) and which may be slow (fetching file info or the spam blacklist, possibly cross-wiki, over the internet).
 * When creating a code path that may be intermittently slow, document that fact.
 * Be careful not to stack up requests -- for example, an external search engine may respond slowly under bad conditions even though it is usually fast. A bottleneck can tie up all the web servers.
 * Consider breaking an operation into smaller, separable pieces.
 * Alternatively, consider running operations in parallel -- this can be tricky, since MediaWiki currently has no good primitives for performing multiple HTTP reads at once.
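The parallel option can be sketched outside MediaWiki. The Python sketch below overlaps several slow reads so that wall time approaches one round trip rather than their sum; the `fetch` stub and the example URLs are hypothetical stand-ins for real HTTP reads:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    """Stand-in for a slow network read (hypothetical; real code would do HTTP)."""
    time.sleep(0.2)          # pretend each round trip takes 200 ms
    return f"body of {url}"

urls = [f"https://example.org/resource/{i}" for i in range(5)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=5) as pool:
    bodies = list(pool.map(fetch, urls))   # results come back in input order
elapsed = time.monotonic() - start

# Five 200 ms reads overlap, so wall time is close to one round trip, not five.
print(f"fetched {len(bodies)} resources in {elapsed:.2f}s")
```

Run sequentially, the same five reads would take about a second; overlapped, they cost roughly one RTT.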

(Of course, latency partly depends on the user's connection. Wikimedia sites serve many people on mobile or dial-up connections. The target is reasonable for round-trip times around 300 ms. If someone is on satellite with a 2000 ms RTT, they can expect everything to be slow, but that is a small fraction of users.)

At worst, an expensive but valuable request should take no more than 5 seconds of server time to compute when it misses the cache or cannot be cached; 2 seconds is better.


 * Example: saving a new edit to a page
 * Example: displaying a video thumbnail

How often does my code run?
It is important to consider how often the site or the browser will execute your code. The main cases:
 * Frequently. This is obviously the most critical.
 * When viewing a page (when the HTML is actually sent) -- that is, still very frequently, unless the user receives a  response code or something similar. Nearly every time an anonymous (logged-out) reader views a Wikipedia page, they receive pre-rendered HTML. If you add new code that runs every time anyone views a page, be careful.
 * When rendering page content. MediaWiki (as configured on Wikimedia sites) generally only needs to render page content (server-side) after an edit or a cache miss, so rendering happens far less often than page views. For this reason, more expensive operations are acceptable here. Rendering usually does not happen while a user is waiting -- unless the user has just edited the page, which leads to...
 * When saving an edit. This is the rarest code path and the one where the most latency is acceptable. Users tend to accept a longer wait after performing an action that "feels heavy", such as editing a page. (But Wikimedia wants to encourage more people to edit and upload, so this standard may change.)

Also watch out for code paths that run after a request fails. For example, beware of "tight retry loops", which can drive hundreds of servers into an error loop. Where possible, after a failure you should instead reschedule, or cache the error for a short time before trying again. (Caching errors incorrectly is also dangerous.)
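A minimal sketch of that advice: exponential backoff with jitter, plus a short-lived "negative cache" of failures so other callers do not pile on. The function names and the 30-second TTL are illustrative, not MediaWiki's API:

```python
import random
import time

_error_cache = {}        # key -> expiry time; short-lived negative cache of failures
ERROR_TTL = 30           # seconds to back off after a final failure (illustrative)

def call_with_backoff(key, operation, attempts=3):
    """Retry `operation` with exponential backoff and jitter. On final failure,
    cache the error briefly so concurrent requests fail fast instead of
    hammering the broken backend. A sketch, not MediaWiki's actual code."""
    if _error_cache.get(key, 0) > time.monotonic():
        raise RuntimeError(f"{key} recently failed; backing off")
    delay = 0.1
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                _error_cache[key] = time.monotonic() + ERROR_TTL
                raise
            time.sleep(delay + random.uniform(0, delay))  # jitter avoids lockstep retries
            delay *= 2
```

Callers that arrive during the error-cache window get an immediate failure rather than a slow timeout, which is what keeps a slow backend from tying up every web server.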

You are not alone
Talk with the Performance Team about your general performance targets before you start designing your system architecture. For example, a user-facing application might have a latency budget of around 200 ms, while a database might need 20 ms or less, especially when further accesses are decided based on the results of previous queries. You don't want to optimize prematurely, but you do need to understand whether your goals are physically possible.

You may not need to design your own backend; consider using an existing service, or asking someone to design an interface for you. Consider modularity. Performance is hard; don't try to reinvent the wheel.

Using shared resources and execution environments
Be aware that your code runs in an environment with shared resources (such as databases, queues, and cache servers). Consequently, long-running queries (say, more than 5 seconds) should run on dedicated servers; for example, the regeneration of complex special-page listings uses the "vslow" database servers. Watch out for query patterns prone to deadlocks and lock-wait timeouts: long-running transactions, inefficient  clauses in write or locking queries, and insert queries that come after "gap-locking" queries in the same transaction. When assessing whether queries will take "a long time" or cause contention, profile them; the numbers are always relative to the overall performance of the server, and to how often the query runs.

The primary execution context is responding to a single web request; another context is CLI mode (e.g. maintenance scripts). Note that various extensions can add extra queries and updates via hooks. To minimize the risk of timeouts, deadlocks, and half-finished updates caused by interactions between core and extensions, strive to make the RDBMS and object-store writes in the main transaction round fast and simple. For updates that take significant time or are complex, use DeferredUpdates or the JobQueue where possible, to better isolate different modules from one another. When a data item changes, use simple cache purges to trigger recomputation; this avoids slowdowns (and also avoids race conditions and multi-datacenter replication problems).

Rate limiting
If your product exposes new user actions that modify the database outside of the standard page-creation/page-editing mechanisms, first consider whether that is appropriate and scalable. Your maintenance overhead and operational risk will be much lower if you adopt "everything is a wiki page". See Dan McKinley's "Choose Boring Technology".

If you do have to expose new "write" actions, make sure to apply a rate limit.

For example:


 * UrlShortener exposes an API for creating new short URLs, which needs a rate limit. This is usually driven by . See T133109.
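The idea behind such a limit can be sketched as a token bucket: each action spends a token, and tokens refill at a fixed rate. This is illustrative only; MediaWiki's actual limiter works differently, and the 10-per-120-seconds numbers are made up:

```python
import time

class TokenBucket:
    """Allow at most `rate` actions per `per` seconds (per user or per action).
    A sketch of the rate-limiting idea, not MediaWiki's implementation."""
    def __init__(self, rate, per):
        self.capacity = rate
        self.tokens = float(rate)
        self.fill_rate = rate / per          # tokens regained per second
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. at most 10 short-URL creations per 120 seconds (hypothetical numbers)
bucket = TokenBucket(rate=10, per=120)
granted = sum(bucket.allow() for _ in range(50))
print(granted)  # roughly the first 10 of 50 rapid attempts succeed
```

A burst up to the bucket capacity is allowed, after which callers are refused until tokens trickle back.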

For expensive computations that are not writes, such as power-user features that may expose slow or expensive computations, consider implementing PoolCounter-based throttling to limit overall server load.

For example:


 * Special:Contributions exposes a database read query that can be slow. This is rate-limited with PoolCounter. See T234450 and T160985.
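A toy version of the PoolCounter idea, capping how many callers may run one expensive computation at once while excess callers fail fast instead of piling up, might look like this (a sketch of the concept, not the real PoolCounter protocol):

```python
import threading
import time

class PoolCounterLike:
    """Cap concurrent executions of an expensive computation; callers over
    the cap degrade gracefully instead of queueing up behind it."""
    def __init__(self, max_workers):
        self._slots = threading.BoundedSemaphore(max_workers)

    def run(self, compute):
        if not self._slots.acquire(blocking=False):
            return None              # over capacity: fail fast
        try:
            return compute()
        finally:
            self._slots.release()

pool = PoolCounterLike(max_workers=2)
results = []
barrier = threading.Barrier(5)       # make all 5 callers arrive at once

def caller():
    barrier.wait()
    r = pool.run(lambda: (time.sleep(0.3), "report")[1])  # a slow "report" query
    results.append(r)

threads = [threading.Thread(target=caller) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results, key=str))  # two slots run the query; three callers are refused
```

The point is that server load stays bounded no matter how many users request the slow page at once; the refused callers can show a "try again later" message.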

Long-running queries
Long-running read queries should run on dedicated servers, as Wikimedia does for statistical analysis. MySQL uses snapshots for  queries, and the snapshot persists until  if  is used. Snapshots implement REPEATABLE-READ, ensuring that, within the transaction, the client sees the database as it existed at a single point in time (the time of the first ). Keeping a transaction open for more than (ideally) a few seconds on a production server is a bad idea. As long as a REPEATABLE-READ transaction is open (and has made at least one query), MySQL must keep the old versions of rows that were later deleted or changed in the indexes, because the long-running transaction is supposed to see them in any relevant SELECT queries. These rows clutter up the indexes of hot tables that have nothing to do with the long-running query. Some research databases use long-running queries. Special pages can use the "vslow" query group to map to a dedicated database.

Locks
Wikimedia's MySQL/MariaDB servers use InnoDB, which supports repeatable-read transactions. Gap locks are part of "next-key locks", which are how InnoDB implements the REPEATABLE READ transaction isolation level. At Wikimedia, repeatable-read transaction isolation is on by default (unless the code is running in command-line (CLI) mode, as maintenance scripts do), so all the SQL queries you make during a request are automatically bundled into a single transaction. For more information, see the transaction isolation article on Wikipedia, and look up repeatable read (snapshot isolation) to understand why it is preferable to avoid phantom reads and other phenomena.
 * Any time you do a write/delete/update query that updates something, it will hold gap locks on it, unless it operates via a unique index. Even if you are not in REPEATABLE READ, and even if you are doing a single , if it returns multiple records, it maintains internal consistency. Therefore, for things like  or  or , perform your operation on a unique index (such as the primary key). In situations where you would otherwise cause gap locks and want to switch to operating on the primary key, you first need to do a  to find the IDs to operate on; this cannot be a  because that has the same locking problem. This means you may have to deal with race conditions, so you may want to use  instead.

Here is a common mistake that causes improper locking: for example, look at the table  (line 208 of tables.sql), a three-column table following the "entity-value-attribute" pattern.
 * 1) Column 1: the object/entity (here, a UserID).
 * 2) Column 2: the name of a property of that object.
 * 3) Column 3: the value associated with that property of that object.

That is, you have a bunch of key-value pairs per entity, all in one table. (This table schema is an antipattern.) In this situation, it is tempting to build the workflow for a user-preference change so that it deletes all the records for that UserID and then reinserts the new ones. But that would cause a lot of database contention. Instead, change the queries so that you delete only by primary key. First  it, and then, when you insert the new values, you can use  (which ignores the insert if the row already exists). This is more efficient. Alternatively, you can use a JSON blob, but that is hard to use in JOINs or WHERE clauses on individual items. See "On MySQL locks" for some explanation of gap locks.
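The select-then-delete-by-primary-key shape can be shown with a runnable sketch. SQLite is used here only because it is easy to run; it has no gap locks, so this illustrates the query pattern, not the locking behavior, and the `up_id` surrogate key is hypothetical (added for the example):

```python
import sqlite3

# Hypothetical EAV-style preferences table, as in the example above.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE user_properties (
    up_id INTEGER PRIMARY KEY,   -- surrogate key we can delete by (illustrative)
    up_user INTEGER NOT NULL,
    up_property TEXT NOT NULL,
    up_value TEXT)""")
db.executemany(
    "INSERT INTO user_properties (up_user, up_property, up_value) VALUES (?,?,?)",
    [(7, "skin", "vector"), (7, "lang", "fr"), (8, "skin", "monobook")])

# Instead of DELETE ... WHERE up_user = 7 (a range on a secondary key, which
# in InnoDB takes gap locks), first SELECT the primary keys...
ids = [row[0] for row in
       db.execute("SELECT up_id FROM user_properties WHERE up_user = ?", (7,))]
# ...then delete exactly those rows by primary key.
db.executemany("DELETE FROM user_properties WHERE up_id = ?", [(i,) for i in ids])
db.commit()

remaining = db.execute("SELECT COUNT(*) FROM user_properties").fetchone()[0]
print(remaining)  # only user 8's row is left
```

As the text notes, splitting the read from the write opens a race-condition window, which is why the locking read variant exists for cases that need it.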

Transactions
In general, every web request and every database operation should happen within a transaction. However, be careful when mixing a database transaction with operations on something else, such as another database transaction or access to an external service like Swift. Pay special attention to lock order. Every time you update, delete, or insert anything, ask yourself:
 * What tables will you be locking?
 * Are there other callers?
 * What are you doing after making the query, before committing?

Avoid excessive contention. Avoid locking things in an unnecessary order, especially when you are doing something slow and committing at the end. For example, if you have a counter column that you increment whenever something happens, don't increment it in a hook and then spend 10 seconds parsing a page before the commit.

Do not use  (if someone updates a record within a transaction but has not committed it, another request can still see it) or  (every SELECT behaves like a locking read in share mode -- locking every row you select until you commit the transaction -- leading to lock-wait timeouts and deadlocks).

Examples
Good example: When message blobs (JSON collections of several translations of specific messages) change, it can lead to updates of database rows, and the update attempts can happen concurrently. In a previous version of the code, the code locked a row in order to write to it and avoid overwrites, but this could lead to contention. In contrast, in the current codebase, the  method performs repeated update attempts until it determines (by checking timestamps) that there will be no conflict. See lines 212-214 for an explanation and lines 208-234 for the outer do-while loop that processes  until it is empty.

Bad example: The former structure of the ArticleFeedbackv5 extension. Code included:

Bad practices here include the multiple counter rows with id = '0' updated every time feedback is given on any page, and the use of DELETE + INSERT IGNORE to update a single row. Both result in locks that prevent more than one feedback submission saving at a time (due to the use of transactions, these locks persist beyond the time needed by the individual statements). See minutes 11-13 of Asher Feldman's performance talk & page 17 of his slides for more explanation.

Indexing
The tables you create will be shared by other code. Every database query must be able to use one of the indexes (including write queries!).

Unless you are dealing with a tiny table, you need indexes for writes (just as for reads). Watch out for deadlocks and lock-wait timeouts. Try to do updates and deletes by primary key rather than by some secondary key. Try to avoid  queries on rows that do not exist. Make sure join conditions are indexed.

You cannot index blobs, but you can index blob prefixes (substrings consisting of the first several characters of the blob).

Compound keys -- namespace-title pairs are all over the database. You need to query by namespace first, then title, in that order!

Use  and  queries to find out which indexes a particular query touches. If the EXTRA column says "Using temporary table" or "Using filesort", that is often bad! If "possible_keys" is NULL, that is often bad (though small sorts and temporary tables are tolerable). An "obvious" index may not actually get used because of poor "selectivity". See the performance profiling guide for Wikimedia code, and for more details see Roan Kattouw's 2010 talk on security, scalability and performance for extension developers, Roan's 2012 MySQL optimization tutorial (slides), and Tim Starling's 2013 performance talk.

Indexes are not a silver bullet; more is not always better. Once an index grows large enough that it no longer fits in RAM, it slows down dramatically. Also, indexes can make reads faster but writes slower.
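A runnable illustration of checking index use, including the compound-key column-order point above. SQLite's EXPLAIN QUERY PLAN stands in for MySQL's EXPLAIN here; the output formats differ, but the lesson is the same:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE page (page_id INTEGER PRIMARY KEY,"
           " page_namespace INTEGER, page_title TEXT)")
db.execute("CREATE INDEX name_title ON page (page_namespace, page_title)")

# Querying in index-column order (namespace first, then title) lets the
# composite index be used.
good = db.execute("EXPLAIN QUERY PLAN SELECT page_id FROM page"
                  " WHERE page_namespace = 0 AND page_title = 'Foo'").fetchall()[0][3]

# Filtering on title alone skips the index's leading column, so the engine
# falls back to scanning the whole table.
bad = db.execute("EXPLAIN QUERY PLAN SELECT page_id FROM page"
                 " WHERE page_title = 'Foo'").fetchall()[0][3]

print(good)   # plan mentions the name_title index
print(bad)    # plan is a full-table scan
```

On a 20-million-row table, the difference between those two plans is the difference described in the bad example above.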

Good example: See the  and   tables. One of them also offers a reverse index, which gives you a cheap alternative to SORT BY.

Bad example: See this changeset (a fix). As the note states, "needs to be id/type, not type/id, according to the definition of the relevant index in :  ". Rather than using the index that was built on the id-and-type combination, the previous code (that this is fixing) attempted to specify an index that was "type-and-id", which did not exist. Thus, MariaDB did not use the index, and thus instead tried to order the table without using the index, which caused the database to try to sort 20 million rows with no index.

Persistence layer
Choose the right persistence layer for your needs: job queue (like Redis), database (like MariaDB), or file store (like Swift). In some cases, a cache can be used instead of a persistence layer.

Wikimedia sites make use of local services including Redis, MariaDB, Swift, and memcached. (Also things like Parsoid that plug in for specific things like VisualEditor.) They are expected to reside on a low-latency network. They are local services, as opposed to remote services like Varnish.

People often put things into databases that ought to be in a cache or a queue. Here's when to use which:
 * 1) MySQL/MariaDB database - longterm storage of structured data and blobs.
 * 2) Swift file store - longterm storage for binary files that may be large. See Media storage for details.
 * 3) Redis jobqueue - you add a job to be performed, the job is done, and then the job is gone. You don't want to lose the jobs before they are run. But you are ok with there being a delay.
 * (in the future maybe MediaWiki should support having a high-latency and a low-latency queue.)

A cache, such as memcached, is storage for things that persist between requests, and that you don't need to keep - you're fine with losing any one thing. Use memcached to store objects if the database could recreate them but it would be computationally expensive to do so, so you don't want to recreate them too often. You can imagine a spectrum between caches and stores, varying on how long developers expect objects to live in the service before getting evicted; see the Caching layers section for more.
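The cache-versus-store contract described above can be sketched as a get-or-compute helper: correctness never depends on a hit, only speed does. This is a sketch of the pattern, not MediaWiki's object-cache API:

```python
import time

cache = {}  # stands in for memcached: may be wiped at any moment

def get_or_compute(key, compute, ttl=300):
    """Return a cached value, recomputing on miss. The caller must tolerate
    the cache vanishing entirely: a miss costs time, never correctness."""
    entry = cache.get(key)
    if entry is not None and entry[1] > time.monotonic():
        return entry[0]
    value = compute()               # expensive but reproducible from the database
    cache[key] = (value, time.monotonic() + ttl)
    return value

calls = 0
def expensive():
    global calls
    calls += 1
    return sum(range(1_000_000))    # placeholder for an expensive recomputation

a = get_or_compute("big-sum", expensive)
b = get_or_compute("big-sum", expensive)   # served from cache, no recompute
cache.clear()                              # simulate eviction or a restart
c = get_or_compute("big-sum", expensive)   # recomputed, same answer
print(calls)  # 2
```

Note the TTL is a policy knob, not a guarantee: eviction can happen earlier, and the code above stays correct when it does.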

Permanent names: In general, store resources under names that won't change. In MediaWiki, files are stored under their "pretty names", which was probably a mistake - if you click Move, it ought to be fast (renaming title), but other versions of the file also have to be renamed. And Swift is distributed, so you can't just change the metadata on one volume of one system.

Object size: Memcached sometimes gets abused by putting big objects in there, or where it would be cheaper to recalculate than to retrieve. So don't put things in memcached that are TOO trivial - that causes an extra network fetch for very little gain. A very simple lookup, like "is a page watched by current user", does not go in the cache, because it's indexed well so it's a fast database lookup.

When to use the job queue: If the thing to be done is fast (~5 milliseconds) or needs to happen synchronously, then do it synchronously. Otherwise, put it in the job queue. You do not want an HTTP request that a user is waiting on to take more than a few seconds. Examples using the job queue:


 * Updating link table on pages modified by a template change
 * Transcoding a video that has been uploaded

HTMLCacheUpdate is synchronous if there are very few backlinks. Developers also moved large file uploads to an asynchronous workflow because users started experiencing timeouts.
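The enqueue-now, run-later shape can be sketched with an in-process queue. A real deployment would use a durable queue such as the Redis-backed JobQueue (losing jobs before they run is not acceptable there); the job names below are illustrative:

```python
import queue
import threading

jobs = queue.Queue()   # stand-in for a durable job queue

def enqueue(job_type, params):
    """The web request only enqueues and returns; the slow work happens later."""
    jobs.put((job_type, params))

def worker(done):
    """A job runner: pulls jobs off the queue and executes them."""
    while True:
        job_type, params = jobs.get()
        if job_type == "stop":       # sentinel to shut the demo runner down
            break
        # e.g. refresh link tables, transcode a video... here we just record it
        done.append((job_type, params))
        jobs.task_done()

done = []
runner = threading.Thread(target=worker, args=(done,))
runner.start()
enqueue("refreshLinks", {"page_id": 42})
enqueue("transcode", {"file": "Example.webm"})
jobs.put(("stop", None))
runner.join()
print(len(done))  # both jobs ran, after the "request" already returned
```

The user-facing request finishes as soon as `enqueue` returns; the multi-second work happens on the runner's time, not the user's.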

In some cases it may be valuable to create separate classes of job queues -- for instance video transcoding done by Extension:TimedMediaHandler is stored in the job queue, but a dedicated runner is used to keep the very long jobs from flooding other servers. Currently this requires some manual intervention to accomplish (see TMH as an example).

Extensions that use the job queue include RenameUser, TranslationNotification, Translate, GWToolset, and MassMessage.

Additional examples:
 * large uploads. UploadWizard has API core modules, and core jobs take care of taking chunks of the file, reassembling them, and turning them into a file the user can view. The user starts defining the description, metadata, etc., and the data is sent one chunk at a time.
 * purging all the pages that use a template from Varnish & bumping the  column in the database, which tells parser cache it's invalid and needs to be regenerated
 * refreshing links: when a page links to many pages, or it has categories, it's better to refresh links or update categories after saving, then propagate the change. (For instance, adding a category to a template or removing it, which means every page that uses that template needs to be linked to the category -- likewise with files, externals, etc.)

How slow or contentious is the thing you are causing? Maybe your code can't do it on the same web request the user initiated. You do not want an HTTP request that a user is waiting on to take more than a few seconds.

Example: You create a new kind of notification. Good idea: put the actual notification action (emailing people) or adding the flags (user id n now has a new notification!) into the jobqueue. Bad idea: putting it into a database transaction that has to commit and finish before the user gets a response.

Good example: The Beta features extension lets a user opt in for a "Beta feature" and displays, to the user, how many users have opted in to each of the currently available Beta features. The preferences themselves are stored in  table. However, directly counting the number of opted-in users every time that count is displayed would not have acceptable performance. Thus, MediaWiki stores those counts in the database in the  table, but they are also stored in memcached. It's important to immediately update the user's own preference and be able to display the updated preference on page reload, but it's not important to immediately report to the user the increase or decrease in the count, and this information doesn't get reported in Special:Statistics.

Therefore, BetaFeatures updates those user counts every half hour or so, and no more. Specifically, the extension creates a job that does a SELECT query. Running this query takes a long time - upwards of 5 minutes! So it's done once, and then on the next user request, the result gets cached in memcached for the page https://en.wikipedia.org/wiki/Special:Preferences#mw-prefsection-betafeatures. (They won't get updated at all if no one tries to fetch them, but that is unlikely.) If a researcher needs a realtime count, they can directly query the database outside of MediaWiki application flow.

Code: UpdateBetaFeatureUserCountsJob.php and BetaFeaturesHooks.php.

Bad example: add one?

Multiple datacenters
See Database transactions

Once CDN requests reach (non-proxy) origin servers, the responding service (such as Apache/MediaWiki, Thumbor, or HyperSwitch) must limit its own read operations from persistence layers to only involve the local datacenter. The same applies to write operations to caching layers, except for allowing asynchronous purging broadcasts or asynchronous replication of caches that are profoundly expensive to regenerate from scratch (e.g. parser cache in MySQL). Write operations to source data persistence layers (MySQL, Swift, Cassandra) are more complex, but generally should only happen on HTTP POST or PUT requests from end-users and should be synchronous in the local datacenter, with asynchronous replication to remote datacenters. Updates to search index persistence layers (Elastic, BlazeGraph) can use either this approach, the Job queue, or Change propagation. The enqueue operations to the job/propagation systems are themselves synchronous in the local datacenter (with asynchronous replication to the remote ones).

HTTP POST/PUT requests to MediaWiki will be routed to the master datacenter and the MediaWiki job queue workers only run there as well (e.g. where the logic of  executes). An independent non-MediaWiki API service might be able to run write APIs correctly in multiple datacenters at once if it has very limited semantics and has no relational integrity dependencies on other source data persistence layers. For example, if the service simply takes end-user input and stores blobs keyed under new UUIDs, there is no way that writes can conflict. If updates or deletions are later added as a feature, then Last-Write-Wins might be considered a "correct" approach to handling write conflicts between datacenters (e.g. if only one user has permission to change any given blob then all conflicts are self-inflicted). If write conflicts are not manageable, then such API requests should be routed to the master datacenter.

Work involved during cache misses
Wikimedia uses and depends heavily on many different caching layers, so your code needs to work in that environment! (But it also must work if everything misses cache.)

Cache-on-save: Wikimedia sites use a preemptive cache-repopulation strategy: if your code will create or modify a large object when the user hits "save" or "submit", then along with saving the modified object in the database/filestore, populate the right cache with it (or schedule a job in the job queue to do so). This will give users faster results than if those large things were regenerated dynamically when someone hit the cache. Localization (i18n) messages, SpamBlacklist data, and parsed text (upon save) are all aggressively cached. (See "Caching layers" for more.)

At the moment, this strategy does not work well for memcached for Wikimedia's multi-datacenter use case. A workaround when using WANObjectCache is to use  as normal, but with "lockTSE" set and with a "check" key passed in. The key can be "bumped" via  to perform invalidations instead of using. This avoids cache stampedes on purge for hot keys, which is usually the main goal.

If something is VERY expensive to recompute, then use a cache that is somewhat closer to a store. For instance, you might use the backend (secondary) Varnishes, which are often called a cache, but are really closer to a store, because objects tend to persist longer there (on disk).

Cache misses are normal: Avoid writing code that, on cache miss, is ridiculously slow. (For instance, it's not okay to  and assume that a memcache between the database and the user will make it all right; cache misses and timeouts eat a lot of resources. Caches are not magic.) The cluster has a limit of 180 seconds per script (see the limit in Puppet); if your code is so slow that a function exceeds the max execution time, it will be killed.

Write your queries such that an uncached computation will take a reasonable amount of time. To figure out what is reasonable for your circumstance, ask the Site performance and architecture team.

If you can't make it fast, see if you can do it in the background. For example, see some of the statistics special pages that run expensive queries. These can then be run on a dedicated time on large installations. But again, this requires manual setup work -- only do this if you have to.

Watch out for cached HTML: HTML output may sit around for a long time and still needs to be supported by the CSS and JS. Problems where old JS/CSS hang around are in some ways more obvious, so it's easier to find them early in testing, but stale HTML can be insidious!

Good example: See the TwnMainPage extension. It offloads the recalculation of statistics (site stats and user stats) to the job queue, adding jobs to the queue before the cache expires. In case of cache miss, it does not show anything; see CachedStat.php. It also sets a limit of 1 second for calculating message group stats; see SpecialTwnMainPage.php.

Bad example: a change "disabled varnish cache, where previously it was set to cache in varnish for 10 seconds. Given the amount of hits that page gets, even a 10 second cache is probably helpful."

Caching layers
The cache hit ratio should be as high as possible; watch out if you introduce new cookies, shared resources, bundled requests or calls, or other changes that vary requests and reduce the cache hit ratio.

The caching layers you need to care about are:
 * 1) Browser caches
 * 2) The native browser cache
 * 3) LocalStorage. See meta:Research:Module storage performance for statistics demonstrating that storing ResourceLoader modules in LocalStorage speeds up page load times and leads users to browse more.
 * 4) Front-end Varnishes
 * The Varnish caches store entire HTTP responses, including thumbnails of images, frequently-requested pages, ResourceLoader modules, and similar items that can be retrieved by URL. The front-end Varnishes keep these in memory. A weighted-random load balancer (LVS) distributes web requests to the front-end Varnishes.
 * Because Wikimedia distributes its front-end Varnishes geographically (in caching centers in Amsterdam and San Francisco, as well as the Texas and Virginia data centers) to reduce latency to users, some engineers call those front-end Varnishes "edge caches", and sometimes a CDN (content delivery network). See MediaWiki at WMF for some details.
 * 5) Back-end Varnishes
 * If a front-end Varnish does not have a response cached, it passes the request to a back-end Varnish via hash-based load balancing (on the hash of the URI). The back-end Varnishes hold more responses, storing them on disk. Every URL is on at most one back-end Varnish.
 * 6) The object cache (implemented in WMF production with memcached, though other implementations include Redis, APC, etc.)
 * The object cache is a generic service used for many things, e.g., the user object cache. It is a generic service that many services can stash things in. You can also use the service as a layer in a larger caching strategy, which is what the parser cache does in the Wikimedia setup: one layer of the parser cache lives in the object cache.
 * In general, do not disable the parser cache. See: using the parser cache.
 * 7) The database's buffer pool and query cache (not directly controllable).

How do you choose which cache(s) to use, and how do you avoid putting inappropriate objects into a cache? See "Choosing the right cache: a guide for MediaWiki developers".

Figure out how to appropriately invalidate content from caches, via purges, by directly pushing updated data into the cache, or by otherwise bumping a timestamp or version identifier. Your application's needs will determine your cache-purging strategy.
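The "bump a version identifier" strategy can be sketched like this: derived cache keys embed a per-entity version, so a single bump implicitly invalidates every entry built from that entity. This is illustrative only, not MediaWiki's caching interface:

```python
cache = {}   # stands in for the object cache

def _version(entity):
    """A 'check' value: one integer per entity, mixed into derived keys."""
    return cache.setdefault(f"check:{entity}", 1)

def bump(entity):
    """Invalidate everything derived from this entity with one cheap write."""
    cache[f"check:{entity}"] = _version(entity) + 1

def cache_set(entity, key, value):
    cache[(entity, _version(entity), key)] = value

def cache_get(entity, key):
    # A bumped version changes the composite key, so old entries simply miss.
    return cache.get((entity, _version(entity), key))

cache_set("page:42", "html", "<p>old</p>")
before_bump = cache_get("page:42", "html")
bump("page:42")                       # the page was edited
after_bump = cache_get("page:42", "html")
print(before_bump, after_bump)        # the old entry now misses
```

Compared with enumerating and purging every derived key, one version bump is O(1) and cannot miss a key you forgot about; the stale entries age out on their own.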

Since Varnish serves content per URL, URLs ought to be deterministic -- that is, they should not serve different content from the same URL. Different content belongs at different URLs. This should be true for anonymous users; for logged-in users, Wikimedia's configuration includes additional details involving cookies and the caching layers.

Good example: (from the mw.cookie change) of not poisoning the cache with request-specific data (when cache is not split on that variable). Background:  will use MediaWiki's cookie settings, so client-side developers don't think about this. These are passed via the ResourceLoader startup module. Issue: However, it doesn't use Manual:$wgCookieSecure (instead, this is documented not to be supported), since the default value (' ') varies by the request protocol, and the startup module does not vary by protocol. Thus, the first hit could poison the module with data that will be inapplicable for other requests.

Bad examples:
 * GettingStarted error: Don't use Token in your cookie name. In this case, the cookie name hit a regular expression that Varnish uses to know what to cache and not cache. See the code, an initial revert, another early fix, another revert commit, the Varnish layer workaround, the followup fix, the GettingStarted fix part 1 and part 2, and the regex fix.
 * WikidataClient was fetching a large object from memcached just to decide which project group it was on, when it would have been more efficient to simply recompute it by putting the very few values needed into a global variable. (See the changeset that fixed the bug.)
 * Template parse on every page view is a bad thing, as it obviates the advantage of the parser cache (the cache that caches parsed wikitext).

Multiple data centers
WMF runs multiple data centers ("eqiad", "codfw", etc.). The plan is to move to a master/slave data center configuration (see the RFC), where users read pages from caches at the data center nearest to them, while all update activity flows to the master data center. Most MediaWiki code does not need to be directly aware of this, but it does affect how developers write code; see the RFC's design implications.
 * TODO: bring guidelines from the RFC to here and other pages.

Cookies
With cookies, besides the cache-related concerns (see "Caching layers" above), there is also the issue that cookies bloat the payload of every request; that is, they cause more data to be sent back and forth, often unnecessarily. While the impact of bloated header payloads on page performance is less direct than that of a reduced Varnish cache hit ratio, it is no less measurable or important. Consider using localStorage or sessionStorage as an alternative to cookies. Client-side storage works well in non-IE browsers, and in IE from IE8 onward.

See also Google's advice on minimizing request overhead.

Technical documents

 * WMF usage of Graphite
 * MediaWiki & Wikimedia use cases for Redis
 * API:Etiquette
 * Job class reference
 * Manual:Job queue (and Manual:Job queue/For developers)
 * Manual:How to debug
 * Manual:Profiling
 * Performance profiling for Wikimedia code

Talks

 * "Why your extension will not be enabled on Wikimedia wikis in its current state and what you can do about it", Roan Kattouw, Wikimania, July 2010
 * Notes from Tim Starling's security and performance talk, WMF training session, July 2011
 * MediaWiki MySQL optimization tutorial (slides), Roan Kattouw, Berlin Hackathon, June 2012
 * "MediaWiki Performance Profiling" (video) (slides), Asher Feldman, WMF Tech Days, September 2012
 * "MediaWiki Performance Techniques", Tim Starling, Amsterdam Hackathon, May 2013
 * "Let's talk about web performance" (video), Peter Hedenskog, WMF tech talk, August 2015

Posts and discussions

 * "Measuring Site Performance at the Wikimedia Foundation", Asher Feldman, March 2012
 * "How the Technical Operations team stops problems in their tracks", Sumana Harihareswara, February 2013
 * Requests for comment/Performance standards for new features, December 2013
 * Notes from performance discussion, Architecture Summit 2014, January 2014

General web performance

 * "Scalable Web Architecture and Distributed Systems" (book chapter), Kate Matsudaira, May 2012
 * "80% of end-user response time is spent on the frontend", Marcus Ljungblad, April 2014