Database transactions

MediaWiki uses database transactions to maintain the consistency of the database and thereby also to improve its performance.

Some general information about database transactions can be found here:


 * On Wikipedia, see Database transaction and ACID
 * For MySQL, see the transaction statements and the InnoDB transaction model



Transaction scope
First, we should distinguish between two types of methods:


 * Those with outer transaction scope: methods that are structurally guaranteed to have no callers in the chain that perform transactional operations (other than wrapping a transaction round around the method on its behalf). Such methods are said to "own" the transaction round. Places that have this scope are the execute() method of Maintenance scripts, the run() method of Job classes, and the doUpdate() method of DeferrableUpdate classes. When these methods run, no callers further down the call stack will have any transaction activity other than possibly defining the start/end boundary. An outer-scope caller that is also structurally guaranteed to start without a declared transaction round is said to have defining transaction scope. This means that methods with outer transaction scope are free to start and end transactions (given some caveats described below). Callers down the stack do not have outer scope and are expected to respect this fact.
 * Those with unclear/inner transaction scope: these are methods that are not clearly guaranteed to have outer transaction scope. Such methods do not own the transaction round, if one exists. This is most of the methods in MediaWiki core and extensions. Various methods fall under this category, such as those of model/abstraction classes, utility classes, DAO objects, hook handlers, business/control logic classes, and so on. These methods are not free to start/end transactions and must only use transaction semantics that support nesting. If they need to do some updates after commit, then they must register a post-commit callback method.

If a transaction round is started via LBFactory::beginMasterChanges() then it is called an explicit transaction round. Otherwise, if DBO_TRX wraps any query activity in a transaction round, as is typically the case during web requests, then it is called an implicit transaction round. Such rounds are ownerless and are committed by MediaWiki on shutdown via LBFactory::commitMasterChanges(). Callers can start explicit rounds midway through implicit rounds, in which case any pending database writes will be committed when the explicit round commits.
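As a rough sketch, an explicit round started and committed from a method with outer transaction scope might look like this (the page update is just a placeholder):

$lbFactory = \MediaWiki\MediaWikiServices::getInstance()->getDBLoadBalancerFactory();
$lbFactory->beginMasterChanges( __METHOD__ ); // start an explicit transaction round
$dbw = wfGetDB( DB_MASTER );
$dbw->update(
    'page',
    [ 'page_touched' => $dbw->timestamp() ],
    [ 'page_id' => $pageId ],
    __METHOD__
);
$lbFactory->commitMasterChanges( __METHOD__ ); // commits all pending writes in the round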

Basic transaction usage
MediaWiki uses transactions in a few ways:


 * 1) Using "traditional" begin()/commit() pairs to protect critical sections and be certain they are committed. Nested transactions are not supported. This should only ever be used from callers that have outer transaction scope and that affect only one database (accounting for any possible hook handlers too). Valid contexts include callbacks to onTransactionIdle() or deferred updates where only one DB is updated and no hooks are fired. Always match each begin() with a commit(). A sketch follows this list.
 * 2) Using startAtomic()/endAtomic() pairs to protect critical sections without knowing when they will commit. Nested sections are fully supported. These can be used anywhere, but must be properly nested (e.g. do not open a section and then fail to close it before a "return" statement). In maintenance scripts, a commit happens when the last open atomic section is closed. If the DBO_TRX flag is set, however, the atomic sections join the main DBO_TRX transaction round. Inside onTransactionIdle() or deferred update callbacks, DBO_TRX is turned off for the specified database, meaning that endAtomic() will commit once no sections remain open in those callbacks.
 * 3) Using implicit pivoted transaction rounds if DBO_TRX is enabled (the default for web requests, but not for maintenance mode or unit tests). The first write on each database connection without a transaction triggers BEGIN. A COMMIT happens at the end of the request for all database connections with writes pending. If multiple databases that have DBO_TRX set were written to, then they all perform their commit step in rapid succession at the end of the request. This maximizes cross-DB transaction atomicity. Note that optimistic concurrency control (REPEATABLE-READ, or SERIALIZABLE in PostgreSQL) might undermine this somewhat, since a SERIALIZATION FAILURE can occur on a proper subset of the commits even if all the writes appeared to succeed. In any case, DBO_TRX reduces the number of commits, which can help site performance, and means that all writes in the request are typically either committed or rolled back together.
 * 4) Using explicit pivoted transaction rounds via LBFactory::beginMasterChanges() and LBFactory::commitMasterChanges(). These rounds are effective in both web and CLI modes and have the same semantics as their implicit counterparts except for the following aspects:
   * Calling the commit method from a method that did not start the round will throw an error.
   * They commit any empty transactions on master databases when completed, clearing any REPEATABLE-READ snapshots. This assures that callers relying on snapshots in single-DB setups will still have fresh ones on their connections. It also assures that the registered transaction listener sees all databases in an idle transaction state, allowing it to run deferred updates.

If at any point an exception is thrown and not caught by anything else, MWExceptionHandler will catch it and roll back all database connections with transactions. This is very useful when combined with DBO_TRX.
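The sketch promised under case 1: a maintenance script (outer scope) protecting a critical section on a single database. The table and variables are placeholders:

$dbw = wfGetDB( DB_MASTER );
$dbw->begin( __METHOD__ );
$dbw->delete( 'mytable', [ 'mt_page' => $pageId ], __METHOD__ ); // 'mytable' is hypothetical
$dbw->insert( 'mytable', $newRows, __METHOD__ );
$dbw->commit( __METHOD__ ); // every begin() matched with a commit()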

Transaction misuse errors
Various misuses of transactions will cause exceptions or warnings, for example:


 * Nesting begin() calls will throw an exception.
 * Calling commit() on another method's transaction, started with begin(), will throw an exception.
 * Calling begin() or commit() when an atomic section is active will throw an exception.
 * The use of beginMasterChanges() and commitMasterChanges() has analogous limitations to the above.
 * Calling commit() when no transaction is open will raise a warning.
 * startAtomic() and endAtomic() expect a section name (usually __METHOD__) as an argument, and its value must match at each level of atomic section nesting. If it does not match, an exception is thrown.
 * Calling begin() or commit() when DBO_TRX is set may log a warning and no-op.
 * Calling rollback() when DBO_TRX is set will throw an error, triggering rollback of all DBs.
 * Closing a database connection while writes are still pending in a transaction will result in an exception.
 * Catching DBError exceptions (such as DBQueryError) without rolling back can result in an exception.
 * Trying to use begin() or startAtomic() in a DeferrableUpdate that is set to use the transaction support that class provides may cause exceptions. By doing so, outer scope is forfeited so that multiple such updates can be part of a single transaction round.

Appropriate contexts for write queries
Aside from legacy code, database write transactions (including auto-commit mode queries) in MediaWiki should only happen during execution of:
 * HTTP POST requests to SpecialPages where, in the PHP class, doesWrites() returns true
 * HTTP POST requests to Action pages where, in the PHP class, requiresWrite() returns true
 * HTTP POST requests to API modules where, in the PHP class, isWriteMode() returns true
 * Jobs in JobRunner (which uses site-internal HTTP POST requests)
 * Maintenance scripts run from the command line

For writes in the context of HTTP GET requests, use the job queue.

Writes that do not have to happen before the HTTP response is sent to the client can be deferred via DeferredUpdates::addUpdate() with an appropriate DeferrableUpdate subclass (usually AutoCommitUpdate or AtomicSectionUpdate) or via DeferredUpdates::addCallableUpdate() with a callback. The job queue should be used for such updates when they are too slow or too resource intensive to run in ordinary request threads on non-dedicated servers.

Specifying an atomic group of writes
When a set of queries is intimately related, forming one unit of database writes, use an atomic section. For example:
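(A minimal sketch; $dbw is assumed to be a master connection and the row variables are placeholders.)

$dbw->startAtomic( __METHOD__ );
$dbw->insert( 'revision', $revisionRow, __METHOD__ );
$dbw->update(
    'page',
    [ 'page_latest' => $revisionId ],
    [ 'page_id' => $pageId ],
    __METHOD__
);
$dbw->endAtomic( __METHOD__ ); // both writes commit (or fail) together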

Another style of doing this is to use doAtomicSection(), which is useful if there are many return statements.
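A sketch of the same idea with doAtomicSection(), which closes the section even on early returns:

$updated = $dbw->doAtomicSection( __METHOD__, function ( IDatabase $dbw, $fname ) use ( $pageId ) {
    $row = $dbw->selectRow( 'page', [ 'page_latest' ], [ 'page_id' => $pageId ], $fname );
    if ( !$row ) {
        return false; // early return is safe; the section is ended for us
    }
    $dbw->update( 'page', [ 'page_touched' => $dbw->timestamp() ], [ 'page_id' => $pageId ], $fname );
    return true;
} );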

Situations
Suppose you have some code that applies some database updates. After the method finishes you may want to:


 * a) Apply some highly contentious database updates near the end of the transaction so they don't hold locks too long
 * b) Apply further database updates that happen to be slow, not time-critical, and don't need 100% atomicity (e.g. they can be refreshed)

Methods
In some cases, code may want to know that data is committed before continuing to the next steps. One way to do this is to put the next steps in a callback passed to onTransactionIdle(), DeferredUpdates::addCallableUpdate(), or DeferredUpdates::addUpdate(). The latter two are deferred updates, which differ somewhat between Maintenance mode and web/job request mode:


 * In web requests and jobs (including jobs in CLI mode), deferred updates run after the main transaction round commits. Each update is wrapped in its own transaction round, though AutoCommitUpdate disables DBO_TRX on the specified database handle, committing each query on the fly. If deferred updates enqueue other deferred updates, the extra transaction rounds are simply added.
 * In Maintenance scripts, deferred updates run after any transaction on the local (e.g. "current wiki") database commits (or immediately if there is no open transaction). Deferred updates cannot simply be deferred automatically until no transactions are active, as that might lead to out-of-memory errors for long-running scripts where some (possibly "foreign wiki") database always has an active transaction (this would otherwise be ideal). This is why deferred updates are oddly tied only to the local database master. Regardless, since Maintenance::execute() has outer transaction scope and DBO_TRX is off for maintenance scripts, it does not usually make sense to call DeferredUpdates::addCallableUpdate() directly from the execute() method, since the code could just run immediately.

Any method with outer transaction scope has the option of flushing all active transactions on all databases via the LBFactory singleton. This assures that all pending updates are committed before the next lines of code are executed. Also, if such a method wants to start a transaction round, it can call beginMasterChanges() on the singleton, do the updates, and then call commitMasterChanges() on the singleton. This will set DBO_TRX on current and new DB handles during the round, causing implicit transactions to be started, even in CLI mode. The DBO_TRX flag is reverted to its prior state after the round ends. Note that, similar to begin()/commit(), transaction rounds cannot be nested.

Note that some databases, like those handling external storage (ExternalStore), usually have DBO_TRX disabled. This means that they remain in auto-commit mode even during transaction rounds.

Examples
For the cases above, here are some techniques for handling them:

 Case A:
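A sketch for case (a), using IDatabase::onTransactionPreCommitOrIdle(): the contended write runs just before COMMIT (or immediately in auto-commit mode), so its locks are held only briefly:

$dbw = wfGetDB( DB_MASTER );
$dbw->onTransactionPreCommitOrIdle( function () use ( $dbw ) {
    // Highly contended counter row; touched as late as possible
    $dbw->update( 'site_stats', [ 'ss_total_edits = ss_total_edits + 1' ], [], __METHOD__ );
}, __METHOD__ );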

 Case B:
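A sketch for case (b), assuming the AutoCommitUpdate deferrable update class: the slow, refreshable writes run post-commit with DBO_TRX disabled on the handle:

DeferredUpdates::addUpdate( new AutoCommitUpdate(
    wfGetDB( DB_MASTER ),
    __METHOD__,
    function ( IDatabase $dbw, $fname ) use ( $pageIds ) {
        // Slow update that can be regenerated later if it fails
        $dbw->update(
            'page',
            [ 'page_links_updated' => $dbw->timestamp() ],
            [ 'page_id' => $pageIds ],
            $fname
        );
    }
) );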

Situations
Write queries (e.g. create, update, delete operations) that affect many rows or have poor index usage take a long time to complete. Worse, replica databases often use serial replication, applying master transactions one at a time. This means that a 10-second UPDATE query will block for that long on each replica database (sometimes longer, since replica DBs also have to handle read traffic while replicating the master's writes). This creates lag, where other updates on the master do not become visible to other users for a while. It also slows down users making edits, since MediaWiki tries to wait for replicas to catch up.

The main cases where this can happen are:


 * a) Job classes that do expensive updates
 * b) Maintenance scripts that do mass updates to large portions of tables

Another situation that sometimes comes up is when an update triggers a job, and that job must do some heavy queries in order to recompute something. It may not be a good idea to do the queries on the master DB, but the replica DBs might be lagged and not reflect the change that triggered the job. This leads to the following case:


 * c) Jobs needing to wait for a single replica DB to catch up so they can query it to determine the updates

A similar scenario can happen with external services. Suppose a service needs to do expensive API queries in order to reflect changes in the wiki's database. So this leaves another case:


 * d) Callers that do updates before notifying an external service that it needs to query the DBs to update itself

Methods
Expensive updates that create lag should be moved to a Job class, and the job's run() method should batch the updates, waiting for replicas to catch up between batches (see the sketches under Examples below).

Examples
 Case A / B:
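A sketch of a batched Job::run(), assuming waitForReplication() on the LBFactory (the table and job parameters are placeholders); the same loop works in a maintenance script:

public function run() {
    $dbw = wfGetDB( DB_MASTER );
    $lbFactory = \MediaWiki\MediaWikiServices::getInstance()->getDBLoadBalancerFactory();
    foreach ( array_chunk( $this->params['pageIds'], 100 ) as $batch ) {
        $dbw->update(
            'page',
            [ 'page_touched' => $dbw->timestamp() ],
            [ 'page_id' => $batch ],
            __METHOD__
        );
        $lbFactory->waitForReplication(); // let replicas catch up between batches
    }
    return true;
}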

 Case C:
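A sketch for case (c): the job waits for the replicas to catch up once, then does its heavy reads on a replica instead of the master:

public function run() {
    $lbFactory = \MediaWiki\MediaWikiServices::getInstance()->getDBLoadBalancerFactory();
    $lbFactory->waitForReplication(); // make the triggering change visible on replicas
    $dbr = wfGetDB( DB_REPLICA );
    $res = $dbr->select( 'pagelinks', '*', [ 'pl_from' => $this->params['pageId'] ], __METHOD__ );
    // ...recompute from $res and write back to the master in batches as above...
    return true;
}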

 Case D:
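A sketch for case (d), assuming the legacy Http::post() helper and a hypothetical service URL; the notification fires only once the data is committed:

$dbw->onTransactionIdle( function () use ( $pageId ) {
    Http::post( "https://indexer.example.org/update?page={$pageId}" ); // hypothetical endpoint
}, __METHOD__ );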

Situations
Sometimes changes to the primary data set demand updates to secondary data stores (which lack BEGIN...COMMIT), for example:


 * a) Enqueue a job that will query some of the affected rows, making the end-user wait on its insertion
 * b) Enqueue a job that will query some of the affected rows, inserting it after the MediaWiki response to the end-user is flushed
 * c) Send a request to a service that will query some of the affected rows, making the end-user wait on the service request
 * d) Send a request to a service that will query some of the affected rows, doing it after the MediaWiki response to the end-user is flushed
 * e) Purge CDN proxy cache for URLs that have content based on the affected rows
 * f) Purge the object cache entry for a changed row
 * g) Storing a non-derivable text/semi-structured blob to another store
 * h) Storing a non-derivable file to another store
 * i) Account creation hook handler creating an LDAP entry that must accompany the new user
 * j) Updating the database and sending an e-mail to a user's inbox as part of a request by the user

Methods
In general, derivable (i.e. regenerable) updates to external stores should use some sort of DeferrableUpdate class or onTransactionIdle() callback so that they are applied post-commit. In cases where the external data is immutable, it can be referenced by autoincrement ID, UUID, or a hash of the externally stored content; storing the data pre-commit is best in such cases. Updates that fall into neither category should happen right before commit (e.g. in an onTransactionPreCommitOrIdle() callback), batch all the updates to the external store into one transaction if possible, and throw an error if the update fails (which will trigger RDBMS rollback); this reduces the window in which things can go wrong and result in inconsistent data.

Examples
 Case A:
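A sketch for case (a), assuming a hypothetical 'recountBacklinks' job type: the push happens post-commit but still within the request, so the user waits on the queue insertion:

$dbw->onTransactionIdle( function () use ( $title ) {
    JobQueueGroup::singleton()->push( new JobSpecification(
        'recountBacklinks', // hypothetical job type
        [ 'pageId' => $title->getArticleID() ],
        [],
        $title
    ) );
}, __METHOD__ );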

Case B:
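A sketch for case (b): lazyPush() enqueues via a post-send deferred update, so the insertion happens after the response is flushed:

JobQueueGroup::singleton()->lazyPush( new JobSpecification(
    'recountBacklinks', // hypothetical job type
    [ 'pageId' => $title->getArticleID() ],
    [],
    $title
) );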

 Case C:
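A sketch for case (c), again assuming the legacy Http::post() helper and a hypothetical endpoint; a PRESEND deferred update runs post-commit but before the response is flushed, so the user waits on the request:

DeferredUpdates::addCallableUpdate( function () use ( $pageId ) {
    Http::post( "https://indexer.example.org/update?page={$pageId}" ); // hypothetical endpoint
}, DeferredUpdates::PRESEND );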

 Case D:
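A sketch for case (d); the default POSTSEND stage runs after the response has been flushed to the client:

DeferredUpdates::addCallableUpdate( function () use ( $pageId ) {
    Http::post( "https://indexer.example.org/update?page={$pageId}" ); // hypothetical endpoint
}, DeferredUpdates::POSTSEND );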

 Case E:
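A sketch for case (e), assuming the CdnCacheUpdate class (called SquidUpdate in older releases); the purge is deferred to the pre-send stage so it happens after commit:

DeferredUpdates::addUpdate(
    new CdnCacheUpdate( [ $title->getInternalURL() ] ),
    DeferredUpdates::PRESEND
);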

 Case F:
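A sketch for case (f), assuming WANObjectCache; deleting the key right before commit tombstones it, so lagged replicas cannot repopulate it with stale data:

$cache = ObjectCache::getMainWANInstance();
$key = $cache->makeKey( 'widget-row', $rowId ); // hypothetical cache key
$dbw->onTransactionPreCommitOrIdle( function () use ( $cache, $key ) {
    $cache->delete( $key );
}, __METHOD__ );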

 Case G:
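A sketch for case (g), assuming ExternalStore and a hypothetical 'myblobs' table: the immutable blob is stored pre-commit and referenced by the returned URL, so a rollback merely orphans it:

$url = ExternalStore::insertToDefault( $blobText ); // store the blob first
$dbw->insert(
    'myblobs', // hypothetical table
    [ 'mb_url' => $url, 'mb_sha1' => sha1( $blobText ) ],
    __METHOD__
);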

 Case H:
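A sketch for case (h), assuming a FileBackend instance and hypothetical storage paths: the file is stored pre-commit under a content-hash name, and a failure aborts before any DB write:

$sha1 = sha1_file( $tmpPath );
$status = $backend->quickCreate( [
    'dst' => "mwstore://local-backend/mycontainer/{$sha1}", // hypothetical path
    'content' => file_get_contents( $tmpPath ),
] );
if ( !$status->isOK() ) {
    throw new RuntimeException( 'File store failed; aborting before the DB write' );
}
$dbw->insert( 'myfiles', [ 'mf_sha1' => $sha1 ], __METHOD__ ); // hypothetical table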

 Case I:
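A sketch for case (i), with a hypothetical hook handler and LDAP helper: the LDAP write happens while the user row is still uncommitted, so throwing rolls the row back and avoids an orphan account:

public static function onLocalUserCreated( User $user, $autocreated ) {
    // createLdapEntry() is a hypothetical helper wrapping ldap_add()
    if ( !self::createLdapEntry( $user->getName() ) ) {
        throw new RuntimeException( 'LDAP entry creation failed' );
    }
}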

 Case J:
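A sketch for case (j): the mail is sent from a deferred update, so a rolled-back request never emails the user:

DeferredUpdates::addCallableUpdate( function () use ( $user, $subject, $body ) {
    $user->sendMail( $subject, $body ); // runs only after the DB commit
} );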

Use of transaction rollback
The use of rollback() should be strongly avoided, since it discards everything that previously executed code did within the transaction. Outer callers might still consider the operation successful and attempt additional updates and/or show a bogus success page. Any REPEATABLE-READ snapshot is renewed, causing lazy-loading objects to possibly not find what they were looking for, or to get unexpectedly newer data than the rest of what was loaded. Rolling back only one database is particularly bad, since other databases might have related changes and it is easy to forget to roll those back too.

Instead, simply throwing an exception is enough to trigger rollback of all databases, thanks to MWExceptionHandler. There are two special cases to this rule:


 * Exceptions of the type ErrorPageError (used for human-friendly GUI errors) do not trigger rollback on web requests if thrown before the output is flushed. This lets actions, special pages, and pre-send deferred updates show proper error messages, while auditing and anti-abuse tools can still log updates to the database.
 * Callers might catch DBError exceptions without either re-throwing them or throwing their own version of the error. Doing so is extremely bad practice and can cause all sorts of problems, from partial commits to simply spewing further errors. Only catch DB errors in order to do some cleanup before re-throwing an error, or if the database in question is used exclusively by the code catching the errors.

This is how rollback is normally used: as a fail-safe that aborts everything, returns to the initial state, and errors out. However, if directly calling rollback() is truly needed, always use rollbackMasterChanges() on the LBFactory singleton to make sure all databases are reverted to the initial state of any transaction round.

Debug logging
Several channels (log groups) are used to log DB related errors and warnings:


 * DBQuery
 * DBConnection
 * DBPerformance
 * DBReplication
 * exception

At Wikimedia, these logs can be found by querying logstash.wikimedia.org using +channel:<CHANNEL NAME>.

Old discussions
This is the result of some conversation on the wikitech-l mailing list and subsequent discussion on Bugzilla. Some relevant discussions are:


 * Nested database transactions
 * Can we kill DBO_TRX? It seems evil!
 * Transaction warning : WikiPage::doDeleteArticleReal
 * Transaction warning : WikiPage::doEdit (User::loadFromDatabase) (TranslateMetadata::get)

In one mail, Tim Starling explained the reasoning behind the DBO_TRX system. Here is a redacted version of his explanation:

DBO_TRX provides the following benefits:

 * It provides improved consistency of write operations for code which is not transaction-aware, for example rollback-on-error.
 * It provides a snapshot for consistent reads, which improves application correctness when concurrent writes are occurring.

DBO_TRX was introduced when we switched over to InnoDB, along with the introduction of Database::begin and Database::commit. [...]

Initially, I set up a scheme where transactions were "nested", in the sense that begin incremented the transaction level and commit decremented it. When it was decremented to zero, an actual COMMIT was issued. So you would have a call sequence like:

 * begin -- sends BEGIN
 * begin -- does nothing
 * commit -- does nothing
 * commit -- sends COMMIT

This scheme soon proved to be inappropriate, since it turned out that the most important thing for performance and correctness is for an application to be able to commit the current transaction after some particular query has completed. Database::immediateCommit was introduced to support this use case -- its function was to immediately reduce the transaction level to zero and commit the underlying transaction.

When it became obvious that every Database::commit call should really be Database::immediateCommit, I changed the semantics, effectively renaming Database::immediateCommit to Database::commit. I removed the idea of nested transactions in favour of a model of cooperative transaction length management:

 * Database::begin became effectively a no-op for web requests and was sometimes omitted for brevity.
 * Database::commit should be called after completion of a sequence of write operations where atomicity is desired, or at the earliest opportunity when contended locks are held. [...]

When transactions are too long, you hit performance problems due to lock contention. When transactions are too short, you hit consistency problems when requests fail. The scheme I introduced favours performance over consistency. It resolves conflicts between callers and callees by using the shortest transaction time. I think it was an appropriate choice for Wikipedia, both then and now, and I think it is probably appropriate for many other medium to high traffic wikis.

Savepoints were not available at the time the scheme was introduced. But they are a refinement of the abandoned transaction nesting scheme, not a refinement of the current scheme, which is optimised for reducing lock contention. In terms of performance, perhaps it would be feasible to use short transactions with an explicit begin, with savepoints for nesting. But then you would lose the consistency benefits of DBO_TRX that I mentioned at the start of this post.

-- Tim Starling