Manual:External storage

External storage is an abstraction for storing the wiki's content (i.e. what would normally go into the text table) outside the normal database, possibly with some kind of compression applied. Some extensions (such as StructuredDiscussions) can use external storage directly for storing other kinds of data.

The contents of external storage are addressed with a URL in the form <protocol>://<location> (for example, DB://cluster1/12345), with the protocol determining what type of storage should be used. Pre-1.32 these URLs were stored in the old_text field of the text table, with old_flags set to external. Since 1.32 they are stored in the content_address field of the content table.

Advantages
The text table is typically the biggest among all tables. On wikis with millions of edits, the text table can be several gigabytes in size.

Since the contents of the text table are not mutable (edits to pages create new revisions and new entries to the text table, but old entries can't be modified), storing the contents on a different database provides the following benefits:


 * Split storage needs: Instead of one big monolithic database, external storage can span several servers, which allows for easier migration and disk allocation.
 * Database performance: The database server for external storage has very low memory and CPU requirements, since it is just a store: it needs little caching and does not perform complex queries. This lets the main database server use all available memory for caching other tables that profit from it.
 * Backups: Backups of big databases take time. Backups of the external storage database can be done incrementally, since old entries aren't mutable. When an external storage database has grown sufficiently, a new database can be created for new external storage, and the old database can be made read-only and removed from routine backups (it still needs to be accessible to MediaWiki, though).

Code
The main class for interacting with external storage is ExternalStoreAccess. You can use insert() or (more typically) insertToDefault() to store a piece of data and receive the URL at which it was stored; that URL can be used with fetchFromURL() to retrieve the data.
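As a rough sketch (assuming MediaWiki 1.34+, where ExternalStoreAccess is available as a service; the stored string and the returned address are illustrative):

```php
use MediaWiki\MediaWikiServices;

$esAccess = MediaWikiServices::getInstance()->getExternalStoreAccess();

// Store a piece of data in one of the writable stores configured in
// $wgDefaultExternalStore; returns an address such as "DB://cluster1/12345".
$url = $esAccess->insertToDefault( 'some serialized text' );

// Retrieve the data later using that address.
$data = $esAccess->fetchFromURL( $url );
```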

Internally, ExternalStoreAccess interacts with the ExternalStoreMedium subclass corresponding to the protocol. ExternalStoreDB, which is the commonly used one, differs from the others in that it provides special handling when the stored data is a serialized HistoryBlob subclass; such objects can be retrieved with a URL that additionally names the wanted item, in which case the store will unserialize the object and get the appropriate item (by calling getItem() on it).

In practice, you should avoid using ExternalStore directly most of the time, and use BlobStore (or an even higher-level abstraction such as RevisionStore) instead.
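A hedged sketch of the higher-level route (service accessors as of MediaWiki 1.34+; the blob address is illustrative):

```php
use MediaWiki\MediaWikiServices;

// BlobStore resolves blob addresses (whether they point at the text table
// or at external storage) and handles compression flags transparently.
$blobStore = MediaWikiServices::getInstance()->getBlobStore();

// $blobAddress would come from the content table, e.g. "tt:12345".
$text = $blobStore->getBlob( $blobAddress );
```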

Configuration
An example LocalSettings.php setup:
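A sketch of such a setup follows. The variable names ($wgExternalStores, $wgExternalServers, $wgDefaultExternalStore) are the real configuration settings; the hostnames, credentials, and the cluster name cluster1 are placeholders to adjust for your environment.

```php
$wgExternalStores = [ 'DB' ];

$wgExternalServers = [
	'cluster1' => [
		// The first node is the primary; all writes go through it.
		[ 'host' => 'db-cluster1-primary.example.com', 'dbname' => 'cluster1',
			'user' => 'wikiuser', 'password' => 'secret', 'load' => 1 ],
		// Zero or more replica nodes may follow.
		[ 'host' => 'db-cluster1-replica1.example.com', 'dbname' => 'cluster1',
			'user' => 'wikiuser', 'password' => 'secret', 'load' => 1 ],
		[ 'host' => 'db-cluster1-replica2.example.com', 'dbname' => 'cluster1',
			'user' => 'wikiuser', 'password' => 'secret', 'load' => 1 ],
	],
];

$wgDefaultExternalStore = [ 'DB://cluster1' ];
```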


 * The $wgExternalStores line states that a DB external store can be used. (The 'DB' part is not an arbitrary name that can be adjusted; it has to be 'DB'.) It corresponds to the ExternalStoreDB subclass used, and to the protocol of the blob address.
 * The $wgExternalServers line states all the usable clusters with all usable nodes of a cluster. The top-level array's keys denote a cluster's name (the above example defines only one cluster, named cluster1). The values of those keys are again arrays, holding the specifications of the individual nodes. The first node is considered the primary; all writes to the database are performed through this primary node. Zero or more replica nodes may follow (in the above example, there are two replica nodes). Each node may have its own host, dbname, user, password, and load, as shown in the example. The load parameter specifies how much of the load should pass through this node.
 * The $wgDefaultExternalStore line holds those external stores that may be used for storage of new text. If you omit this line, the external store will be read-only and new text will go into the default database (i.e. the same database holding page, revision, and image data; not the cluster).

For a multi-primary (formerly called multi-master) wiki farm setup (like Wikimedia's), consider using LBFactoryMulti instead.

Database setup
For the above configuration example, you would have to:


 * 1) Create the database cluster1 on the primary host.
 * 2) Run the maintenance/storage/blobs.sql SQL script on the database cluster1 on the primary host. Do not run it against your default database (i.e. the database holding page, revision, and image data): the required tables must be created in cluster1, not there. If you are not sure how to run an SQL script against a specific database, please consult your database documentation.
 * 3) Set up replication (consult your database's documentation on how to set up replication) towards cluster1 on the first replica host, and
 * 4) Set up replication towards cluster1 on the second replica host.
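With MySQL/MariaDB, the first two steps above might look like the following sketch (hostname and admin user are placeholders; the replication steps depend entirely on your database setup and are omitted):

```shell
# 1) Create the cluster database on the primary host.
mysql -h db-cluster1-primary.example.com -u root -p \
    -e "CREATE DATABASE cluster1;"

# 2) Create the required tables in it from MediaWiki's SQL script.
mysql -h db-cluster1-primary.example.com -u root -p cluster1 \
    < maintenance/storage/blobs.sql
```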

Maintenance scripts
There are several maintenance scripts for moving content to the external store:


 * moveToExternal.php - move old revisions to external storage
 * compressOld.php - compress old revisions and potentially move them to external storage
 * recompressTracked.php - move revisions (or other data) from one external storage to another and recompress them in the process
 * - when used with --force and has been configured with.
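For example, moving existing revisions into the DB store configured above might be invoked as follows (the cluster name is a placeholder, and the exact arguments and options vary by MediaWiki version; check the script's --help output first):

```shell
php maintenance/storage/moveToExternal.php DB cluster1
```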