Database transactions

MediaWiki uses database transactions to ensure consistency in the database, but also to improve performance.

Some general information about database transactions can be found here:

 * On Wikipedia, see Database transaction and ACID
 * For MySQL, see the transaction statement and the InnoDB transaction model

Transaction scope
First, we should distinguish two types of methods:
 * Those with outer transaction scope: methods that are structurally guaranteed to have no callers up the chain that perform transaction operations. Places that have this scope are the execute method of Maintenance scripts, the run method of Job classes, and the doUpdate method of DeferrableUpdate classes. When these methods run, no callers further up the call stack will have any transaction activity. This means that methods with outer transaction scope are free to start and end transactions (given some caveats described below). Callers down the stack do not have outer scope and are expected to respect this fact.
 * Those with unclear/inner transaction scope: methods that are not clearly guaranteed to have outer transaction scope. This covers most of the methods in MediaWiki core and extensions. Various methods fall under this category, such as those of model/abstraction classes, utility classes, DAO objects, hook handlers, business/control logic classes, and so on. These methods are not free to start/end transactions and must only use transaction semantics that support nesting. If they need to do some updates after commit, then they must register a post-commit callback method (see the sketch after this list).
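For illustration, here is a minimal sketch of an inner-scope method that defers work until after commit. The class, table, and log channel are hypothetical; startAtomic, endAtomic, and onTransactionIdle are the APIs described on this page:

    class PageCounterStore { // hypothetical model class (inner transaction scope)
        public function increment( $pageId ) {
            $dbw = wfGetDB( DB_MASTER );
            // Safe anywhere: atomic sections support nesting.
            $dbw->startAtomic( __METHOD__ );
            $dbw->update(
                'page_counter', // hypothetical table
                array( 'pc_hits = pc_hits + 1' ),
                array( 'pc_page' => $pageId ),
                __METHOD__
            );
            $dbw->endAtomic( __METHOD__ );
            // Not safe here: $dbw->commit(). Register post-commit work instead.
            $dbw->onTransactionIdle( function () use ( $pageId ) {
                // Runs after the surrounding transaction (round) commits.
                wfDebugLog( 'example', "Counter update for page $pageId committed." );
            } );
        }
    }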

Basic transaction usage
MediaWiki is using transactions in a few ways:
 * 1) Using "traditional" begin/commit pairs to protect critical sections and be certain they are committed. Nested transactions are not supported. This should only ever be used by callers that have outer transaction scope and that affect only one database (accounting for any possible hook handlers too). Valid places also include callbacks passed to onTransactionIdle or AutoCommitUpdate, as long as only one DB is updated and no hooks are fired. Always match each begin with a commit.
 * 2) Using startAtomic/endAtomic pairs to protect critical sections without knowing when they will commit. Nested sections are fully supported. These can be used anywhere, but must be properly nested (e.g. do not open a section and then fail to close it before a "return" statement). In maintenance scripts, where DBO_TRX is off, a commit happens once the outermost atomic section is closed. If DBO_TRX is set, however, the atomic sections join the main DBO_TRX transaction round. Inside AutoCommitUpdate or onTransactionIdle callbacks, DBO_TRX is turned off, meaning the outermost endAtomic in those callbacks will trigger a commit. (See the sketch after this list.)
 * 3) Using a request-wide transaction round when DBO_TRX is enabled (this is the default for web requests, but not for maintenance mode or unit tests). The first write on each database connection without a transaction triggers BEGIN. A COMMIT happens at the end of the request for all database connections with writes pending. If multiple databases with DBO_TRX set were written to, they all perform their commit step in rapid succession at the end of the request. This maximizes cross-DB transaction atomicity. Note that optimistic concurrency control (e.g. REPEATABLE READ or SERIALIZABLE in PostgreSQL) might undermine this somewhat, since a serialization failure can occur on a proper subset of the commits even if all the writes appeared to succeed. In any case, DBO_TRX reduces the number of commits, which can help site performance (by reducing fsync calls), and means that all writes in the request are typically either committed or rolled back together.
 * 4) If at any point an exception is thrown and not caught by anything else, MWExceptionHandler will catch it and roll back all database connections with transactions. This is very useful when combined with DBO_TRX.
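To make the contrast between styles (1) and (2) concrete, here is a sketch with a hypothetical maintenance script (outer scope) calling a helper that must stay nesting-safe:

    class FixWidgetsScript extends Maintenance { // hypothetical script: outer scope
        public function execute() {
            $dbw = wfGetDB( DB_MASTER );
            // Allowed: outer transaction scope, one database, no hooks fired,
            // and DBO_TRX is off in maintenance mode.
            $dbw->begin( __METHOD__ );
            $this->updateWidget( $dbw, 42 );
            $dbw->commit( __METHOD__ ); // always matched with the begin above
        }

        private function updateWidget( $dbw, $id ) {
            // Inner scope: only nesting-safe semantics are allowed here.
            $dbw->startAtomic( __METHOD__ );
            $dbw->update( 'widget', array( 'w_fixed' => 1 ), // hypothetical table
                array( 'w_id' => $id ), __METHOD__ );
            $dbw->endAtomic( __METHOD__ ); // no commit: an outer transaction is open
        }
    }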

Transaction misuse errors
Various misuses of transactions will cause exceptions or warnings, for example:
 * Nesting a begin inside another begin will throw an exception.
 * Calling begin or commit when an atomic section is active will throw an exception.
 * Calling commit when no transaction is open will raise a warning.
 * startAtomic and endAtomic expect a section name as argument (conventionally __METHOD__), and its value must match at each level of atomic section nesting. If it does not match, then an exception is thrown.
 * Calling begin or commit when DBO_TRX is set may log a warning and no-op.
 * Calling getScopedLockAndFlush while writes are still pending in a transaction will result in an exception.

Specifying writes that must be transactional
When a set of queries is intimately related, forming one unit of database writes, one should use an atomic section; see the first sketch below. Another style of doing this is to use doAtomicSection, which is useful if there are many return statements; see the second sketch.
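Here is a minimal sketch of both styles; the table and column names are hypothetical:

    function renameThing( $dbw, $oldName, $newName ) {
        // Style 1: explicit startAtomic/endAtomic pair around related writes.
        $dbw->startAtomic( __METHOD__ );
        $dbw->update( 'thing', array( 'thing_name' => $newName ),
            array( 'thing_name' => $oldName ), __METHOD__ );
        $dbw->insert( 'thing_rename_log', // hypothetical audit table
            array( 'trl_old' => $oldName, 'trl_new' => $newName ), __METHOD__ );
        $dbw->endAtomic( __METHOD__ );
    }

    function renameThingAlt( $dbw, $oldName, $newName ) {
        // Style 2: doAtomicSection; early returns cannot leave a section open.
        return $dbw->doAtomicSection( __METHOD__,
            function ( $dbw ) use ( $oldName, $newName ) {
                if ( $oldName === $newName ) {
                    return false; // safe: the section is closed for us
                }
                $dbw->update( 'thing', array( 'thing_name' => $newName ),
                    array( 'thing_name' => $oldName ), __METHOD__ );
                return true;
            }
        );
    }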

Situations
Suppose you have some code that applies some database updates. After the method finishes, you may want to:
 * a) Apply some highly contentious database updates near the end of the transaction so they don't hold locks too long
 * b) Apply further database updates that happen to be slow, non-timely, and don't need 100% atomicity (e.g. they can be refreshed)

Methods
In some cases, code may want to know that data is committed before continuing to the next steps. One way to do this is to put the next steps in a callback passed to onTransactionIdle, AtomicSectionUpdate, or AutoCommitUpdate. The latter two are DeferredUpdates, which differ somewhat between Maintenance mode and web/job request mode:
 * In web requests and jobs (including jobs in CLI mode), deferred updates run after the main transaction round commits. Each update is wrapped in its own transaction round, though AutoCommitUpdate disables DBO_TRX on the specified database handle, committing each query on the fly. If deferred updates enqueue other deferred updates, the extra transaction rounds are simply added.
 * In Maintenance scripts, deferred updates run after any transaction on the local (e.g. "current wiki") database commits (or immediately if there is no open transaction). Deferred updates cannot simply be deferred automatically until no transactions are active, as that might lead to OOMs for long-running scripts where some (possibly "foreign wiki") database always has an active transaction (this would otherwise be ideal). This is why deferred updates are oddly tied only to the local database master. Regardless, since the execute method of a Maintenance script has outer transaction scope and DBO_TRX is off for it, it doesn't usually make sense to directly call DeferredUpdates::addUpdate from the execute method, since the code could just run immediately.

Alternatively, any method with outer transaction scope has the option of calling commitMasterChanges on the LBFactory singleton to flush all active transactions on all databases. This also assures that all pending updates are committed before the next lines of code are executed. (See the sketch below.)
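As a rough sketch of both options (the follow-up function is hypothetical):

    // Option 1 (works in any scope): run the next steps post-commit.
    $dbw = wfGetDB( DB_MASTER );
    $dbw->onTransactionIdle( function () {
        // At this point the earlier writes are durably committed.
        doFollowUpSteps(); // hypothetical follow-up work
    } );

    // Option 2 (outer transaction scope only): force everything to commit now.
    wfGetLBFactory()->commitMasterChanges( __METHOD__ );
    doFollowUpSteps(); // hypothetical; data is committed before this runs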

Examples
For the cases above, here are some techniques for handling them:

Case A:
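A sketch of one technique: defer the contentious write with onTransactionPreCommitOrIdle so it runs just before COMMIT and the row lock is held as briefly as possible (the table is hypothetical):

    $dbw = wfGetDB( DB_MASTER );
    $dbw->onTransactionPreCommitOrIdle( function () use ( $dbw ) {
        // Runs right before COMMIT (or immediately if no transaction is open),
        // so this contended row lock is held for the shortest possible time.
        $dbw->update( 'site_stats_hot', // hypothetical hot-spot table
            array( 'ss_total_views = ss_total_views + 1' ),
            array( 'ss_row_id' => 1 ),
            __METHOD__
        );
    } );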

Case B:
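A sketch for slow, non-timely writes that do not need strict atomicity; AutoCommitUpdate runs post-commit with DBO_TRX disabled on the handle, so each query commits on the fly (table and variable names are hypothetical):

    DeferredUpdates::addUpdate( new AutoCommitUpdate(
        wfGetDB( DB_MASTER ),
        __METHOD__,
        function ( $dbw, $fname ) use ( $pageId ) {
            // Runs after the main transaction round commits; if this work is
            // lost, it can be regenerated later (e.g. by a refresh script).
            $dbw->update( 'page_derived_data', // hypothetical derived-data table
                array( 'pdd_stale' => 0 ),
                array( 'pdd_page' => $pageId ),
                $fname
            );
        }
    ) );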

Situations
Write queries (e.g. CUD operations) that affect many rows or have poor index usage take a long time to complete. Worse, replica databases often use serial replication, so they apply master transactions one at a time. This means that a 10 second UPDATE query will block for that long on each replica database (sometimes more, since replica DBs have to handle read traffic while replicating the master's writes). This creates lag, where updates on the master are not visible to other users for a while. It also slows down users making edits, due to ChronologyProtector trying to wait for replicas to catch up.

The main cases where this is needed are:
 * a) Job classes that do expensive updates
 * b) Maintenance scripts that do mass updates to large portions of tables

Methods
Expensive updates that create lag need to be moved to a Job class, and the job's run method should batch the updates, waiting for replicas to catch up between each batch. Maintenance scripts doing mass updates should batch their writes the same way. (See the sketch below.)

Examples
Case A / B:
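A sketch of a batched loop usable from a job's run method or a maintenance script's execute method; the table and batch size are hypothetical, and on older MediaWiki wfWaitForSlaves() plays the role of waitForReplication:

    $dbw = wfGetDB( DB_MASTER );
    $lbFactory = wfGetLBFactory();
    $batchSize = 500; // hypothetical batch size

    do {
        // Pick the next batch of rows needing the expensive update.
        $ids = $dbw->selectFieldValues( 'big_table', 'bt_id',
            array( 'bt_needs_fixup' => 1 ), __METHOD__,
            array( 'LIMIT' => $batchSize )
        );
        if ( $ids ) {
            $dbw->update( 'big_table',
                array( 'bt_needs_fixup' => 0 ),
                array( 'bt_id' => $ids ),
                __METHOD__
            );
            // Commit this batch and wait for replicas to catch up,
            // keeping replication lag bounded.
            $lbFactory->commitMasterChanges( __METHOD__ );
            $lbFactory->waitForReplication();
        }
    } while ( count( $ids ) >= $batchSize );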

Situations
Sometimes changes to the primary data-set demand updates to secondary data stores (that lack BEGIN...COMMIT), for example:
 * a) Enqueue a job that will query some of the affected rows, making the end-user wait on its insertion
 * b) Enqueue a job that will query some of the affected rows, inserting it after the MediaWiki response to the end-user is flushed
 * c) Send a request to a service that will query some of the affected rows, making the end-user wait on the service request
 * d) Send a request to a service that will query some of the affected rows, doing it after the MediaWiki response to the end-user is flushed
 * e) Purge the CDN proxy cache for URLs whose content is based on the affected rows
 * f) Purge the WANObjectCache entry for a changed row
 * g) Store a non-derivable text/semi-structured blob in another store
 * h) Store a non-derivable file in another store
 * i) Create an LDAP entry from an account creation hook handler, where the entry must accompany the new user
 * j) Send an e-mail to a user's inbox

Methods
In general, derivable updates (e.g. data that can be regenerated) to external stores should use some sort of DeferrableUpdate class or onTransactionIdle so that they are applied post-commit. In cases where the external data is immutable, it can be referenced by autoincrement ID, UUID, or a hash of the externally stored contents; storing the data pre-commit is best in such cases. Updates that do not fall into either category should use onTransactionPreCommitOrIdle, batch all the updates to the external store into one transaction if possible, and throw an error if the update fails (which will trigger RDBMS rollback); this reduces the window in which things could go wrong and result in inconsistent data.

Examples
Case A:
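A sketch: push the job only after commit, so it cannot run before the rows it will query are visible; the job class is hypothetical:

    $dbw = wfGetDB( DB_MASTER );
    $dbw->onTransactionIdle( function () use ( $title ) {
        // Synchronous push: the end-user waits on the queue insertion.
        JobQueueGroup::singleton()->push(
            new RecountLinksJob( $title, array() ) // hypothetical job class
        );
    } );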

Case B:
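A sketch: JobQueueGroup::lazyPush enqueues the job via a post-send deferred update, after the response to the end-user is flushed (job class hypothetical):

    // The push happens after the main transaction round commits and the
    // response is sent, so the user never waits on the queue.
    JobQueueGroup::singleton()->lazyPush(
        new RecountLinksJob( $title, array() ) // hypothetical job class
    );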

Case C:
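A sketch using a post-commit callback with a blocking HTTP call; the service URL is hypothetical, and Http::post is the era-appropriate HTTP helper:

    $dbw = wfGetDB( DB_MASTER );
    $dbw->onTransactionIdle( function () {
        // The end-user waits on this request, but the rows are already
        // committed, so the service sees consistent data when it queries back.
        Http::post( 'https://search.example.org/reindex', // hypothetical service
            array( 'timeout' => 5 )
        );
    } );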

Case D:
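A sketch deferring the service call until after the response is flushed (POSTSEND is the default stage for callable updates; the URL is hypothetical):

    DeferredUpdates::addCallableUpdate( function () {
        // Runs after the user has received their response.
        Http::post( 'https://search.example.org/reindex', // hypothetical service
            array( 'timeout' => 25 )
        );
    } );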

Case E:
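A sketch using CdnCacheUpdate as a deferred update (on older MediaWiki this class was named SquidUpdate):

    // getCdnUrls() yields the canonical URL plus variants (action=history, etc.)
    $urls = $title->getCdnUrls();
    DeferredUpdates::addUpdate( new CdnCacheUpdate( $urls ) );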

Case F:
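A sketch purging a WANObjectCache key just before commit; delete() applies a hold-off period that prevents stale values from being re-cached while replicas catch up (the key is hypothetical):

    $cache = ObjectCache::getMainWANInstance();
    $key = $cache->makeKey( 'thing-row', $rowId ); // hypothetical cache key
    $dbw->onTransactionPreCommitOrIdle( function () use ( $cache, $key ) {
        $cache->delete( $key );
    } );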

Case G:
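A sketch following the store-pre-commit advice above: write the immutable blob first, then reference it from the row being committed (ExternalStore is real; the table is hypothetical):

    // If the transaction later rolls back, we merely leak an unreferenced
    // blob rather than committing a row that points to missing data.
    $url = ExternalStore::insertToDefault( $blobText );
    $dbw->insert( 'blob_index', // hypothetical table
        array( 'bi_hash' => sha1( $blobText ), 'bi_url' => $url ),
        __METHOD__
    );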

Case H:
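The same pattern for files, sketched against a FileBackend instance ($backend); the container path is hypothetical, and the file is stored pre-commit under a content-addressed name:

    $name = sha1_file( $tempPath ) . '.dat'; // content-addressed file name
    $status = $backend->quickStore( array(
        'src' => $tempPath,
        'dst' => "mwstore://local-backend/some-container/$name" // hypothetical path
    ) );
    if ( !$status->isOK() ) {
        throw new MWException( 'File store failed; aborting before commit.' );
    }
    $dbw->insert( 'file_index', // hypothetical table
        array( 'fi_name' => $name ), __METHOD__ );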

Case I:
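A sketch for the LDAP case using onTransactionPreCommitOrIdle, per the Methods section above; a failure throws, rolling back the pending user row (connection details hypothetical):

    $dbw->onTransactionPreCommitOrIdle( function () use ( $ldapConn, $userDn, $entry ) {
        // Runs just before COMMIT, minimizing the window between the LDAP
        // write and the RDBMS commit.
        if ( !ldap_add( $ldapConn, $userDn, $entry ) ) {
            // Throwing aborts the request and rolls back the new user row.
            throw new MWException( 'LDAP entry creation failed' );
        }
    } );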

Case J:
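A sketch sending the mail from a deferred update, so a slow mail server does not delay the response and no mail goes out if the transaction rolls back first (addresses hypothetical):

    DeferredUpdates::addCallableUpdate( function () use ( $user ) {
        UserMailer::send(
            new MailAddress( $user->getEmail(), $user->getName() ),
            new MailAddress( 'noreply@example.org', 'Example Wiki' ), // hypothetical sender
            'Welcome',                    // subject
            'Your account was created.'   // body
        );
    } );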

Use of transaction rollback
The use of rollback should be strongly avoided, since it undoes everything that previously executed code did before the rollback. It is particularly bad since other databases might have related changes, and it is easy to forget to roll those back too. Instead, simply throwing an uncaught exception is enough to trigger rollback of all databases. This is how rollback is normally used: as a fail-safe that aborts everything, returns to the initial state, and errors out. However, if directly calling rollback is truly needed, always use rollbackMasterChanges on the LBFactory singleton to make sure all databases are reverted to the initial state of any transaction round.

Debug logging
Several channels (log groups) are used to log DB related errors and warnings:
 * wfLogDBError
 * DBPerformance
 * DBReplication
 * exception

At Wikimedia, these logs can be found by querying logstash.wikimedia.org using a +channel: filter with one of the above channel names.

Old discussions
This page is the result of conversations on the wikitech-l mailing list and subsequent discussion on Bugzilla. Some relevant discussions are:

 * Nested database transactions
 * Can we kill DBO_TRX? It seems evil!
 * Transaction warning: WikiPage::doDeleteArticleReal
 * Transaction warning: WikiPage::doEdit (User::loadFromDatabase) (TranslateMetadata::get)

In one mail, Tim Starling explained the reasoning behind the DBO_TRX system. Here is a redacted version of his explanation:

DBO_TRX provides the following benefits:
 * It provides improved consistency of write operations for code which is not transaction-aware, for example rollback-on-error.
 * It provides a snapshot for consistent reads, which improves application correctness when concurrent writes are occurring.

DBO_TRX was introduced when we switched over to InnoDB, along with the introduction of Database::begin and Database::commit. [...]

Initially, I set up a scheme where transactions were "nested", in the sense that begin incremented the transaction level and commit decremented it. When it was decremented to zero, an actual COMMIT was issued. So you would have a call sequence like:
 * begin -- sends BEGIN
 * begin -- does nothing
 * commit -- does nothing
 * commit -- sends COMMIT

This scheme soon proved to be inappropriate, since it turned out that the most important thing for performance and correctness is for an application to be able to commit the current transaction after some particular query has completed. Database::immediateCommit was introduced to support this use case -- its function was to immediately reduce the transaction level to zero and commit the underlying transaction.

When it became obvious that every Database::commit call should really be Database::immediateCommit, I changed the semantics, effectively renaming Database::immediateCommit to Database::commit. I removed the idea of nested transactions in favour of a model of cooperative transaction length management:
 * Database::begin became effectively a no-op for web requests and was sometimes omitted for brevity.
 * Database::commit should be called after completion of a sequence of write operations where atomicity is desired, or at the earliest opportunity when contended locks are held.
[...]

When transactions are too long, you hit performance problems due to lock contention. When transactions are too short, you hit consistency problems when requests fail. The scheme I introduced favours performance over consistency. It resolves conflicts between callers and callees by using the shortest transaction time. I think it was an appropriate choice for Wikipedia, both then and now, and I think it is probably appropriate for many other medium to high traffic wikis.

Savepoints were not available at the time the scheme was introduced. But they are a refinement of the abandoned transaction nesting scheme, not a refinement of the current scheme, which is optimised for reducing lock contention. In terms of performance, perhaps it would be feasible to use short transactions with an explicit begin, with savepoints for nesting. But then you would lose the consistency benefits of DBO_TRX that I mentioned at the start of this post.

-- Tim Starling