Performance guidelines

These performance guidelines are for developers of code that's intended to run on Wikimedia sites, including core MediaWiki, extensions, user scripts, and gadgets. Much of the following is also applicable to extension code that is not intended for Wikimedia sites.

This guide interrelates with the Architecture guidelines and Security for developers/Architecture.

This page talks about how to make your code fast given how the site works right now. It does not get into the details of how the overall architecture of MediaWiki can be improved to increase performance; for that, see phab:T96903.

What to do (summary)

  • Be prepared to be surprised by the performance of your code; predictions tend to be bad.
  • Be scrupulous about measuring performance (in your development environment AND in production) and know where time is being spent.
  • When latency is identified, take responsibility for it and make it a priority; you have the best idea of usage patterns & what to test.
  • Performance is often related to other symptoms of bad engineering; think about the root cause.
    • MediaWiki is complex and can interact in unexpected ways. Your code can expose performance issues elsewhere that you will need to identify.
  • Expensive but valuable actions that miss the cache should take, at most, 5 seconds; 2 seconds is better.
    • If that isn't enough, consider using the job queue to perform a task on background servers.

General performance principles

MediaWiki application development:

  • Deliver CSS and JavaScript fast (bundled, minified, and avoiding duplication) while retaining the benefits of caching. The ResourceLoader does this.
  • Defer loading modules that don't affect the initial rendering of the page, particularly "above the fold" (the top portion of the page that's initially visible on the user's screen). So load as little JavaScript as possible with position 'top'; instead load more components asynchronously or with the default ResourceLoader behavior of loading at the bottom of the page. See loading modules for more information.
  • Users should have a smooth experience; different components should render progressively. Preserve the positioning of elements (e.g. avoid pushing down content in a reflow).

Wikimedia infrastructure:

  • Your code is running in a shared environment. Thus, long-running SQL queries can't run as part of the web request; instead, run them on a dedicated server (use the JobQueue), and watch out for deadlocks and lock-wait timeouts.
  • The tables you create will be shared by other code. Every database query must be able to use one of the indexes (including write queries!). EXPLAIN your queries and create new indices where required.
  • Choose the right persistence layer for your needs: Redis job queue, MariaDB database, or Swift file store. Only cache if your code can always performantly handle the cached data disappearing; otherwise, persist the data.
  • Wikimedia uses and depends heavily on many different caching layers, so your code needs to work in that environment! (But it also must work if everything misses cache.)
  • The cache hit ratio should be as high as possible; watch out if you're introducing new cookies, shared resources, bundled requests or calls, or other changes that will vary requests and reduce cache hit ratio.

How to think about performance

Measure

Measure how fast your code works, so you can make decisions based on facts instead of superstition or feeling. Use these principles together with the Architecture guidelines and Security guidelines. Both performance (your code runs (relatively) fast) and scalability (your code doesn't get much slower on larger wikis and when instantiated many times concurrently) are important; measure both.

Percentiles

Always consider high-percentile values rather than the median.

Performance data on the web commonly contains two different "signals": one from users hitting the application with a warm cache and another from users hitting it with a cold cache. Calculating averages on a dataset that mixes these two signals is pointless. For a quick check, make sure you have at least 10,000 data points and calculate the 50th and 90th percentiles. If those numbers differ greatly, that is an indication of performance issues you can fix. For example, if network round trips are slow and your page fetches many resources, you will see a huge difference between users who arrive with those resources already cached (thus avoiding all the slow round trips) and users who don't. Even better, if you have sufficient data, calculate the 1st, 50th, 90th and 99th percentiles. A good rule of thumb is that for statistical significance you need 10,000 data points to calculate a 90th percentile, 100,000 for a 99th percentile, and 1 million for a 99.9th.

This rule of thumb oversimplifies matters a bit, but works well for performance analysis. (Some literature about this)
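
As an illustration of the kind of check described above, here is a minimal sketch in plain PHP (the function name and the $loadTimesMs sample data are illustrative, not part of MediaWiki):

 // Nearest-rank percentile over raw timing samples (e.g. page load times in ms).
 function computePercentile( array $samples, $percentile ) {
 	sort( $samples );
 	$index = (int)ceil( ( $percentile / 100 ) * count( $samples ) ) - 1;
 	return $samples[max( 0, $index )];
 }

 $loadTimesMs = array( /* at least ~10,000 data points for a stable 90th percentile */ );
 if ( count( $loadTimesMs ) >= 10000 ) {
 	$p50 = computePercentile( $loadTimesMs, 50 );
 	$p90 = computePercentile( $loadTimesMs, 90 );
 	// A large gap between $p50 and $p90 often separates warm-cache users from
 	// cold-cache users and points at a fixable performance problem.
 }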

Latency

The software should run at an acceptable speed regardless of network latency, but some operations may have surprisingly variable network latency, such as looking up image files when Instant Commons is enabled. Remember that latency also depends on the user's connection. Wikimedia sites serve many people on mobile or dialup connections, which are both slow and have a high round-trip time. There are also fast connections with a long RTT, for example satellite links, where a 2-second RTT is not unusual.

There are several ways to manage latency:

  • first, be aware of which code paths are meant to always be fast (database, memcache) and which may be slow (fetching File info or spam blacklists that might be cross-wiki and go over the internet)
  • when creating a code path that may be intermittently slow, document this fact.
  • be careful not to pile on requests -- for instance, an external search engine that is normally fast might be slow to return under poor conditions, and the resulting bottleneck can tie up all of the web servers
  • consider breaking operations into smaller pieces which can be separated
  • alternately, consider running operations in parallel -- this can be tricky though, as MediaWiki currently does not have good primitives for doing multiple HTTP fetches at once

(This goal is reasonable for round-trip times up to about 300 milliseconds. Someone on a satellite connection with a 2000 ms RTT can expect everything to be slow, but that's a small minority of users.)

In the worst case a request that is expensive but valuable and misses or cannot be cached should take at most 5 seconds of server compute time. Strive for two seconds.

  • example: saving a new edit to a page
  • example: rendering a video thumbnail

How often will my code run?

It's important to think about how often the site or the browser will have to execute your code. Here are the main cases:

  • Always. This is obviously the most critical.
  • On page views (when actually sending HTML) -- that is, nearly always, unless the user gets a 304 Not Modified code or some such. Nearly every time an anonymous (not logged in) reader reads a Wikipedia page, they will get canned, pre-rendered HTML sent to them. If you add new code that runs every time anyone views a page, watch out.
  • When rendering page content. MediaWiki (as configured on Wikimedia sites) usually has to render page content (on the server side) only after an edit or after a cache miss, so renders are far less frequent than page views. For that reason, more expensive operations are acceptable. Rendering is typically not done while a user is waiting -- unless the user just edited the page, which leads to...
  • When saving an edit. This is the rarest code path, and the one on which the largest delays are acceptable. Users tend to accept a longer wait after performing an action that "feels heavy", like editing a page. (But Wikimedia wants to encourage more people to edit and upload, so this norm may change.)

Also watch out for failure code paths: for instance, a 'tight retry loop' could cause hundreds of servers to get stuck in an error cycle. If possible, after a failure, reschedule and/or cache the error for a short time before trying again. (Incorrectly cached errors are also dangerous.)
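
For instance, here is a hedged sketch of caching an error for a short time using MediaWiki's main object cache (wfGetMainCache() and wfMemcKey() are core functions; the service client and key names are hypothetical):

 // Avoid a tight retry loop: remember a recent failure briefly so that
 // hundreds of web servers don't all hammer a failing backend at once.
 $cache = wfGetMainCache(); // a BagOStuff instance (memcached in production)
 $errorKey = wfMemcKey( 'myext', 'service-down' ); // hypothetical key

 if ( $cache->get( $errorKey ) ) {
 	return false; // a recent attempt already failed; skip the call for now
 }
 $result = MyExtServiceClient::fetch(); // hypothetical external call
 if ( $result === false ) {
 	// Cache the error for 30 seconds only, so a transient outage is not
 	// remembered for long (incorrectly cached errors are dangerous too).
 	$cache->set( $errorKey, 1, 30 );
 }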

You're not alone

Work with your product managers/development manager/Performance Team to understand general performance targets before you start architecting your system. For example, a user-facing application might have an acceptable latency of 200 ms while a database might have something like 20 ms or less, especially if further access is decided based on the results of previous queries. You don't want to prematurely optimize, but you need to understand if your targets are physically possible.

You might not need to design your own backend; consider using an existing service, or having someone design an interface for you. Consider modularization. Performance is hard; do not try to reinvent the wheel.

ResourceLoader

ResourceLoader is the delivery system for the optimized loading and managing of modules. Learn how to develop code with ResourceLoader and its features.

Modules requested over HTTP are cached by timestamp. If your module is made up of wiki pages or plain files, the default ResourceLoaderModule classes take care of measuring the invalidation timestamp for you. When implementing a custom module, you are responsible for measuring the timestamp and computing freshness yourself.
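
A hedged sketch of what that can look like (the class and the data file are hypothetical, and the timestamp-based API shown here has varied between MediaWiki versions):

 // A custom module whose content is generated from a data file must report
 // the data file's timestamp itself so ResourceLoader can invalidate the
 // HTTP-cached response when the data changes.
 class MyExtDataModule extends ResourceLoaderModule {
 	protected $dataFile = '/path/to/data/config.json'; // hypothetical

 	public function getScript( ResourceLoaderContext $context ) {
 		$data = FormatJson::decode( file_get_contents( $this->dataFile ), true );
 		return 'mw.config.set( ' . FormatJson::encode( array( 'myExtConfig' => $data ) ) . ' );';
 	}

 	public function getModifiedTime( ResourceLoaderContext $context ) {
 		// Freshness tracks the underlying data, not the request.
 		return filemtime( $this->dataFile );
 	}
 }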

Examples:

  • "ResourceLoaderModule hash" uses the hash of data inside the key, or the hash of request context. In a previous version, it used a generic key and then whenever a request came in for a different language than the previous request, it invalidated the cache. This cached the module to infinitely invalidate its own cache causing it essentially to not be cached at all. TODO: find code (is this what you wanted?)

Deferring loading

Defer loading modules that don't affect the initial rendering of the page (above the fold). Load as little JavaScript as needed from the top loading queue; load more components asynchronously or from the bottom queue.

Downloading and executing resources (styles, MediaWiki messages, and scripts) can slow down a user's experience. Per "Developing with ResourceLoader", when possible, load modules that the user will not immediately need via the bottom queue, rather than the top queue.
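
On the PHP side, this is largely a matter of how modules are registered; here is a hedged sketch (module and file names hypothetical, using the 'position' property as it existed when this page was written):

 // Keep only the tiny bootstrap in the top queue; everything else defaults
 // to the bottom queue and can be loaded lazily with mw.loader.using().
 $wgResourceModules['ext.myFeature.bootstrap'] = array(
 	'scripts' => 'resources/ext.myFeature.bootstrap.js',
 	'position' => 'top', // only because it must run before first paint
 	'localBasePath' => __DIR__,
 	'remoteExtPath' => 'MyFeature',
 );
 $wgResourceModules['ext.myFeature.ui'] = array(
 	'scripts' => 'resources/ext.myFeature.ui.js',
 	'dependencies' => array( 'mediawiki.api' ),
 	// No 'position' set: defaults to the bottom queue, so it never blocks
 	// above-the-fold rendering.
 	'localBasePath' => __DIR__,
 	'remoteExtPath' => 'MyFeature',
 );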

Chrom(e|ium)'s developer tools are a good way to introspect the order in which your code is loading resources. For further advice on tuning front-end performance, Ilya Grigorik's book "High Performance Browser Networking" is excellent and available to read for free.

Don't load anything synchronously that will freeze the user interface.

Good example: https://github.com/wikimedia/mediawiki-extensions-Wikibase/blob/master/client/resources/wikibase.client.linkitem.init.js performs lazy loading.

MultimediaViewer follows the same init-module pattern: mmv.bootstrap.autostart.js (https://phabricator.wikimedia.org/diffusion/EMMV/browse/master/resources/mmv/mmv.bootstrap.autostart.js;832cbf3f030fede74f76f4f26e2137813cdf2edf) is a small bootstrapper that lazily loads everything else.

Preserving positioning

Users should have a smooth experience; different components should render progressively. Preserve positioning of elements (e.g. avoid pushing content in a reflow).

Don't have your code discover that an element is needed and only then cause it to appear. Instead, reserve a place for the element ahead of time, or display the option greyed out until you know whether it's active.

It's good to explicitly state width/height on <img> elements, or to reserve space for elements that will be rendered by JavaScript if you're sure they will be there.
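
For server-rendered output this can be as simple as emitting the dimensions with the element. A hedged sketch using core's Html helper, assuming $out is the relevant OutputPage (the class name, URL variable, and sizes are illustrative):

 // Explicit dimensions let the browser reserve the space before the image
 // (or a JS-rendered replacement) arrives, so nothing jumps around.
 $out->addHTML( Html::rawElement( 'div',
 	array( 'class' => 'myext-preview', 'style' => 'min-height: 140px;' ),
 	Html::element( 'img', array(
 		'src' => $thumbUrl,
 		'width' => 220,
 		'height' => 140,
 		'alt' => '',
 	) )
 ) );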

Good examples:

  • VisualEditor used to have its "Edit" tab appear last, effectively pushing the "Edit source" one step to the left. The browser initially rendered "Edit source" and then moved it to a different place, causing performance and user experience problems. When a user clicked on "Edit source", she often ended up unintentionally clicking the (new) "Edit" tab instead, since it had moved under her cursor. See the changeset that fixed the problem.
  • The Media Viewer within Extension:MultimediaViewer - see mmv.bootstrap.js - first displays a large black background, then renders the content from top to bottom, absolutely positioned. Items therefore render smoothly and do not jump around or cause reflows.

Bad example:

  • Fundraising and other site info banners on Wikipedia may cause a "jump" of content when the banner is inserted.

Shared environment

Your code is running in a shared environment. Thus, long-running queries should run on a dedicated server, and you should watch out for deadlocks and lock-wait timeouts. When assessing whether your queries will take "a long time" or cause contention, profile them; what counts as slow is always relative to the general performance of the server and to how often the query will run.

Long-running queries

Long-running queries that do reads should be run on a dedicated server, as Wikimedia does with analytics. MySQL uses snapshots for SELECT queries, and the snapshotting persists until COMMIT if BEGIN was used. Snapshots implement REPEATABLE READ by making sure that, within the transaction, the client sees the database as it existed at a single point in time (the time of the first SELECT). Keeping one transaction open for more than a few seconds is a bad idea on production servers: as long as a REPEATABLE READ transaction that has done at least one query stays open, MySQL has to keep the old versions of rows that have since been deleted or changed, because the long-running transaction must still see them in any relevant SELECT queries. These old rows can clutter up the indexes of hot tables that have nothing to do with the long-running query. There are research databases - use those. Special pages can use the "vslow" query group to be mapped to dedicated databases.

Locking

Wikimedia's MySQL/MariaDB servers use InnoDB, which supports repeatable read transactions. Gap locking is part of "next-key locking", which is how InnoDB implements the REPEATABLE READ transaction isolation level. At Wikimedia, repeatable read transaction isolation is on by default (unless the code is running in command-line interface (CLI) mode, as with maintenance scripts), so all the SQL SELECT queries you do within one request automatically get bundled into a transaction. For more information, see the Wikipedia articles on repeatable read and snapshot isolation, to understand why it's best to avoid phantom reads and other phenomena.

Any time you do a write/delete/update query that modifies rows, it will take gap locks unless the rows are identified by a unique index. Even if you are not in repeatable read, and even if you are doing a single SELECT, the result will be internally consistent if, for example, it returns multiple rows. Thus: do your operations, e.g. DELETE or UPDATE or REPLACE, by a unique index, such as a primary key. The situations where you are currently causing gap locks and want to switch to operating on a primary key are ones where you want to do a SELECT first to find the IDs to operate on; this can't be SELECT FOR UPDATE, since that has the same locking problems. This means you might have to deal with race conditions, so you may want to use INSERT IGNORE instead of INSERT.

Here's a common mistake that causes inappropriate locking: take a look at, for instance, the table user_properties (line 208 of tables.sql), in which you have a three-column table that follows the "Entity-value-attribute" pattern.

  1. Column 1: the object/entity (here, UserID)
  2. Column 2: the name of a property for that object
  3. Column 3: the value associated with that property for the object

That is, you have a bunch of key-values for each entity, all in one table. (This table schema is kind of an antipattern, but at least this is a reasonably specific table that just holds user preferences.) In this situation, it's tempting to create a workflow for user preference changes that deletes all the rows for that user ID and then reinserts new ones. But this causes a lot of contention on the database. Instead, change the query so that you only delete by the primary key. SELECT it first, and then, when you INSERT new values, you can use INSERT IGNORE (which ignores the insert if the row already exists). This is more efficient, as shown in the sketch below. Alternatively, you can use a JSON blob, but this is hard to use in JOINs or WHERE clauses on single entries. See "On MySQL locks" for some explanation of gap locks.
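
A hedged sketch of that workflow using MediaWiki's database wrapper ($userId and $newOptions stand in for the caller's data; the shape loosely follows how core handles user_properties, but details are simplified):

 $dbw = wfGetDB( DB_MASTER );

 // 1. Find out which of the properties being saved already have rows.
 $res = $dbw->select( 'user_properties', array( 'up_property' ),
 	array( 'up_user' => $userId ), __METHOD__ );
 $existing = array();
 foreach ( $res as $row ) {
 	$existing[] = $row->up_property;
 }

 // 2. Delete only those rows, by the unique (up_user, up_property) key,
 //    so InnoDB takes row locks rather than gap locks.
 $toDelete = array_intersect( $existing, array_keys( $newOptions ) );
 if ( $toDelete ) {
 	$dbw->delete( 'user_properties',
 		array( 'up_user' => $userId, 'up_property' => $toDelete ), __METHOD__ );
 }

 // 3. Re-insert; IGNORE tolerates the race where another request already
 //    inserted the same row.
 $rows = array();
 foreach ( $newOptions as $name => $value ) {
 	$rows[] = array( 'up_user' => $userId, 'up_property' => $name, 'up_value' => $value );
 }
 $dbw->insert( 'user_properties', $rows, __METHOD__, array( 'IGNORE' ) );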

Transactions

Every web request and every database operation, in general, should occur within a transaction. However, be careful when mixing a database transaction with an operation on something else, such as another database transaction or accessing an external service like Swift. Be particularly careful with locking order. Every time you update or delete or insert anything, ask:

  • what are you locking?
  • are there other callers?
  • what happens between making the query and issuing the commit?

Avoid excessive contention. Avoid taking locks earlier than necessary, especially when you're doing something slow and only committing at the end. For instance, if you have a counter column that you increment every time something happens, don't increment it in a hook just before parsing a page for 10 seconds (see the sketch below).
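
One hedged way to do that with MediaWiki's database wrapper (the counter table and column names are hypothetical; onTransactionIdle is a core Database method):

 // Do the slow work first; bump the hot counter row only once the current
 // transaction round is committing, so its row lock is held very briefly.
 $dbw = wfGetDB( DB_MASTER );
 $dbw->onTransactionIdle( function () use ( $dbw ) {
 	$dbw->update(
 		'myext_counters',                     // hypothetical table
 		array( 'ctr_value = ctr_value + 1' ), // literal SET fragment
 		array( 'ctr_name' => 'page_parses' ),
 		__METHOD__
 	);
 } );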

Do not use READ UNCOMMITTED (if someone updates a row in a transaction and has not committed it yet, another request can still see the change) or SERIALIZABLE (every plain SELECT behaves like SELECT ... LOCK IN SHARE MODE, locking every row you select until you commit the transaction, which leads to lock-wait timeouts and deadlocks).

Examples

Good example: includes/MessageBlobStore.php. When message blobs (JSON collections of several translations of specific messages) change, this can lead to updates of database rows, and the update attempts can happen concurrently. In a previous version, the code locked a row in order to write to it and avoid overwrites, but this could lead to contention. In contrast, in the current codebase, the updateMessage() method repeatedly attempts the update until it determines (by checking timestamps) that there will be no conflict. See lines 212-214 for an explanation and lines 208-234 for the outer do-while loop that processes $updates until it is empty.

Bad example: The former structure of the ArticleFeedbackv5 extension. Code included:

 INSERT /* DatabaseBase::insert Asher Feldman */ INTO `aft_article_feedback`
   (af_page_id, af_revision_id, af_created, af_user_id, af_user_ip, af_user_anon_token,
    af_form_id, af_experiment, af_link_id, af_has_comment)
   VALUES ('534366','506813755','20120813223135','14719981',NULL,'','6','M5_6','0','1');

 INSERT /* ApiArticleFeedbackv5::saveUserRatings Asher Feldman */ INTO `aft_article_answer`
   (aa_field_id, aa_response_rating, aa_response_text, aa_response_boolean,
    aa_response_option_id, aa_feedback_id, aat_id)
   VALUES ('16',NULL,NULL,'1',NULL,'253294',NULL),
          ('17',NULL,'Well sourced article! (this is a test comment) ',NULL,NULL,'253294',NULL);

 UPDATE /* ApiArticleFeedbackv5::saveUserRatings Asher Feldman */ `aft_article_feedback`
   SET af_cta_id = '2' WHERE af_id = '253294';

Bad practices here include the multiple counter rows with id = '0' updated every time feedback is given on any page, and the use of DELETE + INSERT IGNORE to update a single row. Both result in locks that prevent more than one feedback submission from saving at a time (due to the use of transactions, these locks persist beyond the time needed by the individual statements). See minutes 11-13 of Asher Feldman's performance talk & page 17 of his slides for more explanation.

Indexing

The tables you create will be shared by other code. Every database query must be able to use one of the indexes (including write queries!).

Unless you're dealing with a tiny table, you need to index writes (just as you do reads). Watch out for deadlocks and for lock-wait timeouts. Try to do updates and deletes by primary key, rather than by some secondary key. Try to avoid UPDATE/DELETE queries on rows that do not exist. Make sure join conditions are cleanly indexed.

You cannot index blobs, but you can index blob prefixes (the substring comprising the first several characters of the blob).

Compound keys: namespace-title pairs are all over the database. Your query needs to filter on namespace first, then title, to match the index order (see the sketch below).
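
A hedged sketch of a query that stays on the page table's (page_namespace, page_title) index (the prefix and limit are arbitrary):

 // Filter on page_namespace (the leftmost column of the name_title index)
 // before matching a page_title prefix; the ORDER BY can then use the index too.
 $dbr = wfGetDB( DB_SLAVE );
 $res = $dbr->select(
 	'page',
 	array( 'page_id', 'page_title' ),
 	array(
 		'page_namespace' => NS_MAIN,
 		'page_title' . $dbr->buildLike( 'Foo', $dbr->anyString() ),
 	),
 	__METHOD__,
 	array( 'ORDER BY' => 'page_title', 'LIMIT' => 50 )
 );
 // Before shipping, EXPLAIN the generated SQL: an index should be listed
 // under "key", and "Using filesort" should not appear in EXTRA.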

Use EXPLAIN and MySQL DESCRIBE to find out which indexes a specific query uses. If the EXTRA column says "Using temporary table" or "Using filesort", that's often bad! If "possible_keys" is NULL, that's often bad (small sorts and temporary tables are tolerable, though). An "obvious" index may not actually be used due to poor "selectivity". See the Performance profiling for Wikimedia code guide, and for more details, see Roan Kattouw's 2010 talk on security, scalability and performance for extension developers, Roan's MySQL optimization tutorial from 2012 (slides), and Tim Starling's 2013 performance talk.

Indexing is not a silver bullet; more isn't always better. Once an index gets big enough that it doesn't fit into RAM anymore, it slows down dramatically. Additionally, an index can make reads faster, but writes slower.

Good example: See the ipblocks and page_props tables. One of them also offers a reverse index, which gives you a cheap alternative to sorting.

Bad example: See this changeset (a fix). As the note states, "needs to be id/type, not type/id, according to the definition of the relevant index in wikibase.sql: wb_entity_per_page (epp_entity_id, epp_entity_type)". Rather than using the index that was built on the id-and-type combination, the previous code (which this change fixes) specified a "type-and-id" index that did not exist. MariaDB therefore could not use the index and instead tried to sort 20 million rows without one.

Persistence layer

Choose the right persistence layer for your needs: job queue (like Redis), database (like MariaDB), or file store (like Swift). In some cases, a cache can be used instead of a persistence layer.

Wikimedia sites make use of local services including Redis, MariaDB, Swift, and memcached (as well as services like Parsoid that plug in for specific uses like VisualEditor). They are expected to reside on a low-latency network. They are local services, as opposed to remote services like Varnish.

People often put things into databases that ought to be in a cache or a queue. Here's when to use which:

  1. MySQL/MariaDB database - longterm storage of structured data and blobs.
  2. Swift file store - longterm storage for binary files that may be large. See wikitech:Media storage for details.
  3. Redis job queue - you add a job to be performed, the job is done, and then the job is gone. You don't want to lose the jobs before they are run, but you are OK with there being a delay. (In the future, MediaWiki should perhaps support both a high-latency and a low-latency queue.)

A cache, such as memcached, is storage for things that persist between requests, and that you don't need to keep - you're fine with losing any one thing. Use memcached to store objects if the database could recreate them but it would be computationally expensive to do so, so you don't want to recreate them too often. You can imagine a spectrum between caches and stores, varying on how long developers expect objects to live in the service before getting evicted; see the Caching layers section for more.

Permanent names: In general, store resources under names that won't change. In MediaWiki, files are stored under their "pretty names", which was probably a mistake - if you click Move, it ought to be fast (renaming title), but other versions of the file also have to be renamed. And Swift is distributed, so you can't just change the metadata on one volume of one system.

Object size: Memcached sometimes gets abused: people put big objects into it, or objects that would be cheaper to recalculate than to retrieve. Don't put things in memcached that are TOO trivial to compute, either - that just causes an extra network fetch for very little gain. A very simple lookup, like "is this page watched by the current user", does not go in the cache: it's well indexed, so it's a fast database lookup.

When to use the job queue: If the thing to be done is fast (~5 milliseconds) or needs to happen synchronously, then do it synchronously. Otherwise, put it in the job queue. You do not want an HTTP request that a user is waiting on to take more than a few seconds. Examples using the job queue:

  • Updating link table on pages modified by a template change
  • Transcoding a video that has been uploaded
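
A hedged sketch of the pattern (the job name, class, and parameters are hypothetical; Job, $wgJobClasses, and JobQueueGroup are core):

 // Registration, e.g. in the extension's setup file:
 $wgJobClasses['myExtRebuildData'] = 'MyExtRebuildDataJob';

 class MyExtRebuildDataJob extends Job {
 	public function __construct( Title $title, array $params ) {
 		parent::__construct( 'myExtRebuildData', $title, $params );
 	}

 	public function run() {
 		// Runs later on a job runner, not during the user's web request.
 		// ... do the expensive recomputation for $this->params['pageId'] ...
 		return true;
 	}
 }

 // In the web request: enqueue and return to the user quickly.
 JobQueueGroup::singleton()->push(
 	new MyExtRebuildDataJob( $title, array( 'pageId' => $pageId ) )
 );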

HTMLCacheUpdate is synchronous if there are very few backlinks. Developers also moved large file uploads to an asynchronous workflow because users started experiencing timeouts.

In some cases it may be valuable to create separate classes of job queues -- for instance video transcoding done by Extension:TimedMediaHandler is stored in the job queue, but a dedicated runner is used to keep the very long jobs from flooding other servers. Currently this requires some manual intervention to accomplish (see TMH as an example).

Extensions that use the job queue include RenameUser, TranslationNotification, Translate, GWToolset, and MassMessage.

Additional examples:

  • large uploads. UploadWizard uses core API modules and core jobs to take the chunks of a file, reassemble them, and turn them into a file the user can view. The user starts defining the description, metadata, etc., while the data is sent one chunk at a time.
  • purging all the pages that use a template from Varnish & bumping the page_touched column in the database, which tells the parser cache that the page is invalid and needs to be regenerated
  • refreshing links: when a page links to many pages, or it has categories, it's better to refresh links or update categories after saving, then propagate the change. (For instance, adding a category to a template or removing it, which means every page that uses that template needs to be linked to the category -- likewise with files, externals, etc.)

How slow or contentious is the thing you are causing? Maybe your code can't do it on the same web request the user initiated. You do not want an HTTP request that a user is waiting on to take more than a few seconds.

Example: You create a new kind of notification. Good idea: put the actual notification action (emailing people) or adding the flags (user id n now has a new notification!) into the jobqueue. Bad idea: putting it into a database transaction that has to commit and finish before the user gets a response.

Good example: The BetaFeatures extension lets a user opt in to a "Beta feature" and displays, to the user, how many users have opted in to each of the currently available Beta features. The preferences themselves are stored in the user_properties table. However, directly counting the number of opted-in users every time that count is displayed would not perform acceptably. Thus, MediaWiki stores those counts in the database in the betafeatures_user_counts table, and they are also stored in memcached. It's important to immediately update the user's own preference and be able to display the updated preference on page reload, but it's not important to immediately report to the user the increase or decrease in the count, and this information doesn't get reported in Special:Statistics.

Therefore, BetaFeatures updates those user counts every half hour or so, and no more. Specifically, the extension creates a job that does a SELECT query. Running this query takes a long time - upwards of 5 minutes! So it's done once, and then on the next user request, the result gets cached in memcached for the page https://en.wikipedia.org/wiki/Special:Preferences#mw-prefsection-betafeatures . (They won't get updated at all if no one tries to fetch them, but that is unlikely.) If a researcher needs a realtime count, they can directly query the database outside of MediaWiki application flow.

Code: UpdateBetaFeatureUserCountsJob.php and BetaFeaturesHooks.php.

Work involved during cache misses

Wikimedia uses and depends heavily on many different caching layers, so your code needs to work in that environment! (But it also must work if everything misses cache.)

Cache-on-save: Wikimedia sites use a preemptive cache-repopulation strategy: if your code will create or modify a large object when the user hits "save" or "submit", then along with saving the modified object in the database/filestore, populate the right cache with it (or schedule a job in the job queue to do so). This will give users faster results than if those large things were regenerated dynamically when someone hit the cache. Localization (i18n) messages, SpamBlacklist data, and parsed text (upon save) are all aggressively cached. (See "Caching layers" for more.)

At the moment, this strategy does not work well for memcached for Wikimedia's multi-datacenter use case. A workaround when using WANObjectCache is to use getWithSetCallback as normal, but with "lockTSE" set and with a "check" key passed in. The key can be "bumped" via touchCheckKey to perform invalidations instead of using delete. This avoids cache stampedes on purge for hot keys, which is usually the main goal.
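
A hedged sketch of that pattern (key names and the value-building callback are hypothetical, and the exact getWithSetCallback signature has shifted between MediaWiki versions):

 // Regenerate a hot value without a stampede: 'lockTSE' lets one request
 // recompute while others briefly reuse the stale value, and invalidation
 // happens by bumping the check key rather than deleting the value.
 $cache = ObjectCache::getMainWANInstance();
 $key = wfMemcKey( 'myext', 'summary', $pageId );
 $checkKey = wfMemcKey( 'myext', 'summary-check', $pageId );

 $summary = $cache->getWithSetCallback(
 	$key,
 	3600, // TTL in seconds
 	function () use ( $pageId ) {
 		return MyExtSummary::buildFromDatabase( $pageId ); // hypothetical, expensive
 	},
 	array(
 		'lockTSE' => 30,
 		'checkKeys' => array( $checkKey ),
 	)
 );

 // On save/update, "bump" the check key instead of deleting the value:
 $cache->touchCheckKey( $checkKey );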

If something is VERY expensive to recompute, then use a cache that is somewhat closer to a store. For instance, you might use the backend (secondary) Varnishes, which are often called a cache, but are really closer to a store, because objects tend to persist longer there (on disk).

Cache misses are normal: Avoid writing code that is ridiculously slow on a cache miss. (For instance, it's not okay to run a COUNT(*) and assume that a memcached layer between the database and the user will make it all right; cache misses and timeouts eat a lot of resources. Caches are not magic.) The cluster has a limit of 180 seconds per script (see the limit in Puppet); if your code is so slow that a function exceeds the max execution time, it will be killed.

Write your queries such that an uncached computation will take a reasonable amount of time. To figure out what is reasonable for your circumstance, ask the Site performance and architecture team.

If you can't make it fast, see if you can do it in the background. For example, some statistics special pages run expensive queries; on large installations these are run periodically on a schedule rather than on demand. But again, this requires manual setup work -- only do this if you have to.

Watch out for cached HTML: HTML output may sit around for a long time and still needs to be supported by the CSS and JS. Problems where old JS/CSS hang around are in some ways more obvious, so it's easier to find them early in testing, but stale HTML can be insidious!

Good example: See the TwnMainPage extension. It offloads the recalculation of statistics (site stats and user stats) to the job queue, adding jobs to the queue before the cache expires. In case of cache miss, it does not show anything; see CachedStat.php. It also sets a limit of 1 second for calculating message group stats; see SpecialTwnMainPage.php.

Bad example: a change "disabled varnish cache, where previously it was set to cache in varnish for 10 seconds. Given the amount of hits that page gets, even a 10 second cache is probably helpful."

Caching layers

The cache hit ratio should be as high as possible; watch out if you're introducing new cookies, shared resources, bundled requests or calls, or other changes that will vary requests and reduce cache hit ratio.

Caching layers that you need to care about:

  1. Browser caches
    1. native browser cache
    2. LocalStorage. See meta:Research:Module storage performance#Results to see the statistical proof that storing ResourceLoader storage in LocalStorage speeds page load times and causes users to browse more.
  2. Front-end Varnishes
    The Varnish caches cache entire HTTP responses, including thumbnails of images, frequently-requested pages, ResourceLoader modules, and similar items that can be retrieved by URL. The front-end Varnishes keep these in memory. A weighted-random load balancer (LVS) distributes web requests to the front-end Varnishes.
    Because Wikimedia distributes its front-end Varnishes geographically (in the Amsterdam & San Francisco caching centers as well as the Texas and Virginia data centers) to reduce latency to users, some engineers refer to those front-end Varnishes as "edge caching" and sometimes as a CDN (content delivery network).
  3. Back-end Varnishes
    If a frontend Varnish doesn't have a response cached, it passes the request to the back-end Varnishes via hash-based load balancing (on the hash of the URI). The backend Varnishes hold more responses, storing them on disk. Every URL is on at most one backend Varnish.
  4. object cache (implemented in memcached in WMF production, but other implementations include Redis, APC, etc.)
    The object cache is a generic service used for many things (e.g. the user object cache) that many services can stash things in. You can also use it as a layer in a larger caching strategy, which is what the parser cache does in Wikimedia's setup: one layer of the parser cache lives in the object cache.
    Generally, don't disable the parser cache. See: How to use the parser cache.
  5. database's buffer pool and query cache (not directly controllable)

How do you choose which cache(s) to use, and how to watch out for putting inappropriate objects into a cache? See "Picking the right cache: a guide for MediaWiki developers".

Figure out how to appropriately invalidate content from caching by purging, directly updating (pushing data into cache), or otherwise bumping timestamps or versionIDs. Your application needs will determine your Cache purging strategy.

Since the Varnishes serve content per URL, URLs ought to be deterministic -- that is, they should not serve different content from the same URL. Different content belongs at a different URL. This should be true for anonymous users; for logged-in users, Wikimedia's configuration contains additional wrinkles involving cookies and the caching layers.

Good example (from the mw.cookie change): don't poison the cache with request-specific data when the cache is not split on that variable. Background: mw.cookie uses MediaWiki's cookie settings, so client-side developers don't have to think about this. These settings are passed via the ResourceLoader startup module. Issue: mw.cookie doesn't use Manual:$wgCookieSecure (this is documented as unsupported), because that setting's default value ('detect') varies with the request protocol, while the startup module does not vary by protocol. Thus, the first hit could poison the module with data that would be inapplicable to other requests.

Multiple data centers

WMF runs multiple data centers ("eqiad", "codfw", etc.). The plan is to move to a master/slave data center configuration (see the RFC), where users read pages from caches at the closest data center, while all update activity flows to the master data center. Most MediaWiki code need not be directly aware of this, but it does have implications for how developers write code; see the RFC's design implications.


Cookies

For cookies, besides the caching concerns (see "Caching layers", above), there is also the issue that cookies bloat the payload of every request: they result in more data sent back and forth, often unnecessarily. While the effect of bloated header payloads on page performance is less immediate than the impact of blowing up Varnish cache ratios, it is no less measurable or important. Consider using localStorage or sessionStorage as an alternative to cookies. Client-side storage works well in non-IE browsers, and in IE from IE8 onward.

See also Google's advice on minimizing request overhead.
