Database optimization

Database optimization is an important consideration in MediaWiki coding, particularly when the code is intended for deployment on Wikimedia projects.

Typical read queries should query replicas
Calling code should request a replica database connection for ordinary reads. By using replicas when possible, site scalability is improved, and wikis spanning multiple datacenters remain efficient: most requests can be served entirely from local replica databases, rather than sometimes reaching the (remote) primary database from a read-only datacenter. If possible, primary-database queries should only happen on POST requests, with exceptional cases using the DeferredUpdates system. Doing otherwise can result in slow cross-datacenter queries. Additionally, the primary database is a single point of failure (SPOF). Reads on replicas scale easily ("buy more replicas"), while reads on the primary do not scale horizontally (only vertically, and there is a limit to that). If you do not need the primary, avoid reading from it.

Write queries and read-for-write queries
Write queries must go to the primary database, as all replicas are read-only; MediaWiki targets primary/replica scale-out setups.

The main case for this is when a SELECT query is used (at least partly) to determine subsequent write queries. Reading from replicas here is not advisable, as temporary replication lag could be amplified into permanent data anomalies. Be aware that additional query flags such as LOCK IN SHARE MODE or FOR UPDATE, or explicit locking, may also be needed to avoid race conditions. Always be aware of stale reads caused by REPEATABLE-READ isolation, flushing transactions as needed.
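As a minimal sketch of the read-for-write pattern, here is a counter increment where the SELECT determines the next UPDATE. The table and column names are hypothetical, and this is generic Python/SQLite rather than MediaWiki's database layer: MySQL would use SELECT ... FOR UPDATE, which SQLite approximates by taking a write lock up front with BEGIN IMMEDIATE.

```python
import sqlite3

# Read-for-write inside one transaction that locks before reading, so a
# concurrent request cannot read the same stale value in between.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE page_counter (page_id INTEGER PRIMARY KEY, views INTEGER)")
conn.execute("INSERT INTO page_counter VALUES (1, 0)")

def increment_views(conn, page_id):
    conn.execute("BEGIN IMMEDIATE")  # take the write lock before the read
    (views,) = conn.execute(
        "SELECT views FROM page_counter WHERE page_id = ?", (page_id,)
    ).fetchone()
    conn.execute(
        "UPDATE page_counter SET views = ? WHERE page_id = ?",
        (views + 1, page_id),
    )
    conn.execute("COMMIT")

increment_views(conn, 1)
increment_views(conn, 1)
views = conn.execute("SELECT views FROM page_counter WHERE page_id = 1").fetchone()[0]
print(views)  # 2
```

Doing the same read against a lagged replica could resurrect an old count and turn temporary lag into a permanently wrong value.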

In some cases, reading from replicas may be used for read-for-write queries anyway, typically when both:
 * (a) there is a large performance advantage
 * (b) only minimal changes or tolerable anomalies are likely to result from replication lag

For example, when parsing pages, the exact state of each template and data source may matter less than reflecting the primary change, namely the newly edited source text of the page. The user expects to see *their* change reflected while typically being oblivious to changes by other users to templates and assets used by the page they just edited.

Relation to GET/POST method
It is good practice to decide at a high level whether a request will need database write access or not. Ideally, once the entry point (API module, SpecialPage class, Action class, etc.) is known, it should be clear early on whether DB writes are allowed, without having to execute the entire entry point. DB writes should not be allowed for GET and HEAD requests. Exceptional cases can use the DeferredUpdates system.

The main methods for specifying which endpoints require database writes are:

 * ApiBase::isWriteMode() for API modules
 * SpecialPage::doesWrites() for special pages
 * Action::doesWrites() for actions

GET requests to entry points should be efficiently executable in any datacenter, while POST requests are assumed to be routed to the primary datacenter (where the primary databases reside) to avoid high-latency cross-datacenter queries.

General

 * In some cases, denormalize for performance: duplicate information from other tables, or use summary tables, counter tables, cache tables, etc.
 * Think about how the DB will run your query. Add indexes where needed. Batch queries (when it makes sense).
 * Aggregation (COUNT, SUM, GROUP BY, etc.) has no limit on the rows scanned, so use it with care; an ORDER BY on an expression forces a filesort, which is bad. MAX/MIN of an indexed field over the entire table is okay (it can be answered from the index).
 * Extensions generally must not add fields to core tables. Instead, they should make their own table(s) and JOIN against them when necessary.

Profiling
When in doubt, don't make assumptions; profile.

Select only needed columns
Unless all columns are needed, avoid 'SELECT * FROM ...' as this consumes resources transferring extraneous data. Select only the needed columns.

WHERE
Avoid full table scans, except in edge cases where a scan actually improves performance because almost all rows would be selected anyway.

Indices
Weigh the advantage of an index in making queries more efficient against its storage cost and the overhead of maintaining it on every insert, update, and delete. Generally, you shouldn't run unindexed queries. An unindexed WHERE clause is usually acceptable when the condition is rarely false (nearly all rows match anyway); an unindexed ORDER BY (filesort) is never okay.
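A quick way to check whether a query is indexed is the database's query-plan output. The sketch below uses SQLite's EXPLAIN QUERY PLAN from Python (MySQL's EXPLAIN plays the same role); the cut-down revision table and rev_page_ts index are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE revision (rev_id INTEGER PRIMARY KEY, rev_page INTEGER, rev_timestamp TEXT)"
)
conn.execute("CREATE INDEX rev_page_ts ON revision (rev_page, rev_timestamp)")

def plan(sql):
    # The fourth column of each EXPLAIN QUERY PLAN row describes the step.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Indexed WHERE + ORDER BY: satisfied entirely by the index, no sort step.
indexed = plan("SELECT rev_id FROM revision WHERE rev_page = 1 ORDER BY rev_timestamp")
# Unindexed ORDER BY over the whole table: a scan plus a sort (a "filesort").
filesort = plan("SELECT rev_id FROM revision ORDER BY rev_timestamp")
print(indexed)
print(filesort)
```

The second plan includes a "USE TEMP B-TREE FOR ORDER BY" step, which is SQLite's equivalent of the filesort the text warns against.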

LIMIT
Use LIMIT to limit the number of rows returned. When possible, also limit the number of rows scanned. Remember: LIMIT 10 OFFSET 5000 scans 5010 rows. If you can, use WHERE foo_id >= 12345 instead.
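A minimal sketch of that pitfall, using a hypothetical foo table in SQLite: both queries return the same page of results, but the OFFSET form scans and discards 5000 rows first, while the WHERE form seeks straight to the first wanted row via the primary key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (foo_id INTEGER PRIMARY KEY, foo_text TEXT)")
conn.executemany(
    "INSERT INTO foo VALUES (?, ?)", ((i, f"row {i}") for i in range(1, 10001))
)

# Scans 5010 rows, returns 10:
slow = conn.execute(
    "SELECT foo_id FROM foo ORDER BY foo_id LIMIT 10 OFFSET 5000"
).fetchall()
# Seeks to foo_id 5001 via the primary key, scans only 10 rows:
fast = conn.execute(
    "SELECT foo_id FROM foo WHERE foo_id > 5000 ORDER BY foo_id LIMIT 10"
).fetchall()
print(slow == fast)  # True: same page, far fewer rows touched
```

This "keyset pagination" style also stays stable when rows are inserted or deleted between page fetches, unlike a shifting OFFSET.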

JOIN
Use JOINs rather than looping SELECT queries.

There is a rather non-obvious exception to this: if you join many tables, the query planner can have trouble figuring out the correct join order, and this can lead to outages. Also, where the alternative to one complex query is two simple ones (e.g. first looking up IDs in one table, then translating them to human-readable data with a second query against another table), avoid the join.
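A sketch of replacing a looped SELECT (the classic "N+1 queries" pattern) with a single JOIN, using hypothetical cut-down page and revision tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE page (page_id INTEGER PRIMARY KEY, page_title TEXT);
CREATE TABLE revision (rev_id INTEGER PRIMARY KEY, rev_page INTEGER, rev_comment TEXT);
INSERT INTO page VALUES (1, 'Main_Page'), (2, 'Sandbox');
INSERT INTO revision VALUES (10, 1, 'init'), (11, 2, 'test');
""")

# Bad: one extra query per revision to fetch its page title.
titles_slow = []
for (rev_page,) in conn.execute("SELECT rev_page FROM revision ORDER BY rev_id"):
    (title,) = conn.execute(
        "SELECT page_title FROM page WHERE page_id = ?", (rev_page,)
    ).fetchone()
    titles_slow.append(title)

# Good: one JOIN does the same work in a single round trip.
titles_fast = [
    t for (t,) in conn.execute(
        "SELECT page_title FROM revision JOIN page ON rev_page = page_id ORDER BY rev_id"
    )
]
print(titles_slow == titles_fast)  # True
```

With N revisions, the loop issues N+1 queries (and N network round trips on a real setup) where the JOIN issues one.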

Calculations
Do calculations in such a way that indexes can still be used; for example, keep the indexed column bare on one side of the comparison rather than wrapping it in an expression or function.
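As a sketch (hypothetical img table, SQLite standing in for MySQL): "img_size + 100 >= 1100" wraps the indexed column in an arithmetic expression, forcing a full scan, while the equivalent "img_size >= 1000" leaves the column bare so the planner can seek in the index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE img (img_id INTEGER PRIMARY KEY, img_size INTEGER)")
conn.execute("CREATE INDEX img_size_idx ON img (img_size)")

def plan(sql):
    # The fourth column of each EXPLAIN QUERY PLAN row describes the step.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Calculation on the column: the index cannot be used, full scan.
scan = plan("SELECT img_id FROM img WHERE img_size + 100 >= 1100")
# Same condition rewritten with the column bare: index seek.
seek = plan("SELECT img_id FROM img WHERE img_size >= 1000")
print(scan)
print(seek)
```

Moving the arithmetic to the constant side of the comparison is the general trick: compute once in the application, not once per row in the database.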

Multiple row operations
Use multiple row operations to avoid looped INSERT and UPDATE queries.
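A minimal sketch of batching inserts, with a hypothetical link table (in Python's DB-API this is executemany; in SQL terms, a single multi-row INSERT ... VALUES (...), (...), ... statement):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE link (l_from INTEGER, l_to INTEGER)")

rows = [(1, t) for t in range(2, 102)]

# Bad: 100 separate statements (and, on a real server, 100 round trips):
#   for row in rows:
#       conn.execute("INSERT INTO link VALUES (?, ?)", row)

# Good: one batched multi-row operation.
conn.executemany("INSERT INTO link VALUES (?, ?)", rows)
n = conn.execute("SELECT COUNT(*) FROM link").fetchone()[0]
print(n)  # 100
```

The same idea applies to updates and deletes: collect the affected keys first, then issue one statement with a list condition rather than looping.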

Caching tips
Expensive queries that are likely to be frequently requested are good candidates for caching. If there is a convenient and cheap database value that changes whenever the computed value might change, such as page_touched, it is often useful to include that cheap value in the cache key; this avoids the need to explicitly purge cache keys. Also consider keying on something that makes the cache value immutable, such as rev_sha1 or img_sha1. Both approaches handle replication lag gracefully.

In general, the best interface for caching SQL queries is WANObjectCache. Cache-aside logic is made easy by its getWithSetCallback method. This method also makes it easy to avoid long-TTL anomalies due to replication lag, by calling Database::getCacheSetOptions() at the top of the callback. Explicit purges can use the delete() or touchCheckKey() methods of WANObjectCache.

Cache-aside caching example:
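The original PHP example is not reproduced here; the following is a generic Python sketch of the cache-aside pattern with all names hypothetical (a dict stands in for memcached, expensive_render for the slow query). It also shows the tip above: the cheap page_touched value is part of the key, so when the page changes the stale entry simply stops being requested, and no explicit purge is needed.

```python
import time

cache = {}   # stands in for memcached: key -> (value, expiry timestamp)
misses = 0

def expensive_render(page_id):
    # Stands in for the expensive database query / parse step.
    global misses
    misses += 1
    return f"<html>page {page_id}</html>"

def get_rendered_page(page_id, page_touched, ttl=3600):
    # page_touched is a cheap DB value that changes whenever the page does.
    key = f"rendered-page:{page_id}:{page_touched}"
    hit = cache.get(key)
    if hit is not None and hit[1] > time.time():
        return hit[0]                            # cache hit
    value = expensive_render(page_id)            # miss: compute...
    cache[key] = (value, time.time() + ttl)      # ...and fill the cache
    return value

get_rendered_page(7, "20240101000000")   # miss: renders and caches
get_rendered_page(7, "20240101000000")   # hit: served from the cache
get_rendered_page(7, "20240102000000")   # page was touched: new key, re-render
print(misses)  # 2
```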

Cache purging example:
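Again in place of the original PHP, a generic Python sketch with hypothetical names (WANObjectCache offers delete() and touchCheckKey() for the purge step in MediaWiki itself): the write path deletes the affected key immediately after the authoritative write, so the next read repopulates the cache with fresh data.

```python
cache = {}   # stands in for memcached

def get_page_text(db, page_id):
    key = f"page-text:{page_id}"
    if key not in cache:
        cache[key] = db[page_id]      # cache-aside fill on miss
    return cache[key]

def save_page_text(db, page_id, text):
    db[page_id] = text                          # the authoritative write...
    cache.pop(f"page-text:{page_id}", None)     # ...then the explicit purge

db = {1: "old text"}                  # stands in for the primary database
print(get_page_text(db, 1))           # "old text" (fills the cache)
save_page_text(db, 1, "new text")
print(get_page_text(db, 1))           # "new text" (purge forced a fresh read)
```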

Useful cache types
Choose your cache based on access pattern, size of the values, how often they change, and so on. The available cache types are described below.

Memcached
This doesn't remove latency from your request, but it reduces load on the databases. It is shared between all appservers, so it should be used for common queries; however, if a query is too common (for example, one issued on every request), it can cause issues for memcached. Note that memcached storage is plentiful and there is no hard limit on the size of the content you can put there (just be sensible; don't send 1 GB of data). It is not shared between datacenters (although purge requests are replicated to both DCs), so don't treat it as canonical data storage.

APCu
This cache is shared between the processes in one appserver (around several hundred of them). It reduces latency and load, but it is not shared between servers, so invalidation won't work. It is also limited in capacity, so avoid putting too much data there. A good use case is configuration data (if it is stored in the database). You can also store data that changes often or would add up to a lot, using a short TTL, to keep "hot" information cached.

MainObjectStash
This should be used rarely. It is replicated to both datacenters, so it can be treated as short-term data storage (e.g. user sessions), but still avoid it if the data can go in the WAN cache instead.

Code cache
This cache, usable within a single request, is useful when you want to come back to the same data later in that request. You can use singletons or static variables to store the data, but keep in mind that it is gone by the next request (unlike, say, Python uwsgi workers, PHP does not keep state between requests).
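A sketch of such an in-request code cache: memoizing a lookup for the lifetime of the process. In MediaWiki this would be a static or member variable in PHP; here Python's functools.lru_cache stands in, and the names are hypothetical. The call counter shows the underlying fetch runs only once.

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def get_site_name(wiki_id):
    global calls
    calls += 1                      # stands in for a database fetch
    return {"enwiki": "Wikipedia"}.get(wiki_id)

get_site_name("enwiki")
get_site_name("enwiki")             # second call is served from the memo
print(calls)  # 1
```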

Communication
If you have questions on this topic that are not answered by the documentation, you may wish to ask in #wikimedia-data-persistence and get the attention of the friendly DBAs. If possible, be prepared to explain which queries are involved. Roan Kattouw, another MySQL optimization expert, answers questions on the wikitech-l mailing list and is sometimes available in #mediawiki.