Manual:Database access/nl

This article provides an overview of database access and general database issues in MediaWiki.

When programming in MediaWiki, you will normally access the database only through MediaWiki's functions.



Database layout
For information about the MediaWiki database layout, such as a description of the tables and their contents, see Manual:Database layout. This was previously documented in maintenance/tables.sql, but since MediaWiki 1.35 it is being moved, bit by bit, to maintenance/tables.json as part of the Abstract Schema initiative. That means that tables.json is converted into maintenance/tables-generated.sql by a maintenance script, which makes it easier to generate schema files supporting different database engines.



Logging in to MySQL


Using sql.php
MediaWiki provides a maintenance script to access the database. From the maintenance directory, run:
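<syntaxhighlight lang="bash">
php sql.php
</syntaxhighlight>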

You can then run database queries. Alternatively, you can specify a file name; MediaWiki will execute that file, substituting any MediaWiki-specific variables it contains. For more information, see Manual:Sql.php.

This works for all database backends. However, this command-line prompt does not offer all the features of the one shipped with your database.



Using the mysql command line
LocalSettings.php contains your MySQL user name and password, for example:
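<syntaxhighlight lang="php">
$wgDBname = "wikidb";      // example values
$wgDBuser = "wikiuser";
$wgDBpassword = "secret";
</syntaxhighlight>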

Logged in via SSH, you can use these credentials to log in to MySQL:
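<syntaxhighlight lang="bash">
mysql -u wikiuser -p --database=wikidb
</syntaxhighlight>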

Replace wikiuser and wikidb with the values from your LocalSettings.php ($wgDBuser and $wgDBname). You will then be asked for your password ($wgDBpassword), after which you get the mysql> prompt.



Database abstraction layer
MediaWiki uses the Rdbms library as its database abstraction layer. Developers should not call low-level database functions, such as mysql_query, directly.

Each connection is represented by an IDatabase object, on which queries can be performed. Connections can be acquired by calling getConnection() on an ILoadBalancer instance, preferably dependency-injected, or obtained from MediaWikiServices. The wfGetDB() function is being phased out and should not be used in new code.
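For example, to get a replica connection for reading (a sketch; in service classes the load balancer should be injected rather than fetched from the service locator):

<syntaxhighlight lang="php">
use MediaWiki\MediaWikiServices;

$lb = MediaWikiServices::getInstance()->getDBLoadBalancer();
$dbr = $lb->getConnection( DB_REPLICA );
</syntaxhighlight>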

getConnection() is usually called with a single parameter: DB_REPLICA for read queries, or DB_PRIMARY for write queries (and for write-informing read queries). The difference between primary and replica is important in a multi-database environment, such as Wikimedia's. See the Wrapper functions section below for how to interact with IDatabase objects.

Read example:

""

Write example:

""

By convention, we use $dbr for read (replica) connections and $dbw for write (primary) connections.

SelectQueryBuilder
The SelectQueryBuilder class is the preferred way to formulate read queries in new code. In older code, you might find select() and related methods of the Database class used directly. The query builder provides a modern "fluent" interface, where methods are chained until the fetch method is invoked, without intermediary variable assignments. For example:
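<syntaxhighlight lang="php">
// A sketch: list categories that contain at least one page.
// (The query itself is illustrative; the builder methods are real.)
$res = $dbr->newSelectQueryBuilder()
	->select( [ 'cat_title', 'cat_pages' ] )
	->from( 'category' )
	->where( 'cat_pages > 0' )
	->orderBy( 'cat_title' )
	->caller( __METHOD__ )
	->fetchResultSet();
</syntaxhighlight>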

This example corresponds to the following SQL:
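<syntaxhighlight lang="sql">
SELECT cat_title, cat_pages
FROM category
WHERE cat_pages > 0
ORDER BY cat_title;
</syntaxhighlight>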

A JOIN is of course also possible:
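<syntaxhighlight lang="php">
// A sketch: find users watching the main page who have e-mail
// notifications for watched pages enabled. (Conditions are illustrative.)
$res = $dbr->newSelectQueryBuilder()
	->select( 'wl_user' )
	->from( 'watchlist' )
	->join( 'user_properties', null, 'wl_user = up_user' )
	->where( [
		'wl_namespace' => 0,
		'wl_title' => 'Main_page',
		'up_property' => 'enotifwatchlistpages',
	] )
	->caller( __METHOD__ )
	->fetchResultSet();
</syntaxhighlight>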

This example corresponds to the query:
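<syntaxhighlight lang="sql">
SELECT wl_user
FROM watchlist
JOIN user_properties ON wl_user = up_user
WHERE wl_namespace = 0
AND wl_title = 'Main_page'
AND up_property = 'enotifwatchlistpages';
</syntaxhighlight>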

You can read each row of the result with a foreach loop. Each row is represented as an object. For example:
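<syntaxhighlight lang="php">
// Using the result set of the first builder example above
foreach ( $res as $row ) {
	print $row->cat_title . ' has ' . $row->cat_pages . " pages\n";
}
</syntaxhighlight>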

There are also convenience functions for fetching a single row, a single field from multiple rows, or a single field from a single row:
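<syntaxhighlight lang="php">
// A sketch; the tables are core tables, the conditions illustrative.

// A single row
$row = $dbr->newSelectQueryBuilder()
	->select( [ 'cat_title', 'cat_pages' ] )
	->from( 'category' )
	->where( [ 'cat_title' => 'Trees' ] )
	->caller( __METHOD__ )
	->fetchRow();

// A single field from multiple rows
$ids = $dbr->newSelectQueryBuilder()
	->select( 'page_id' )
	->from( 'page' )
	->where( [ 'page_namespace' => 0 ] )
	->caller( __METHOD__ )
	->fetchFieldValues();

// A single field from a single row
$id = $dbr->newSelectQueryBuilder()
	->select( 'page_id' )
	->from( 'page' )
	->where( [ 'page_namespace' => 0, 'page_title' => 'Main_Page' ] )
	->caller( __METHOD__ )
	->fetchField();
</syntaxhighlight>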

In these examples, $row is a row object as in the foreach example above, $ids is an array of page IDs, and $id is a single page ID.



Wrapper functions
A query() function is provided for raw SQL, but the wrapper functions such as select() and insert() are usually more convenient. They take care of table prefixes and error handling for you. If you really need to write your own SQL, read the documentation for tableName() and addQuotes(). You will need both of them. If you misuse addQuotes(), serious security problems (SQL injection) can arise on your wiki.

Another important reason to use the high-level methods rather than constructing your own queries is to make sure your code also works on other database types. Currently MySQL/MariaDB is supported best. SQLite is also well supported, although it is slower than MySQL or MariaDB. PostgreSQL is supported too, but it is somewhat less stable than the MySQL backend.

The available wrapper functions are listed below. For a detailed description of their parameters, refer to the Database class documentation. In particular, see the documentation of Database::select for an explanation of the $table, $vars, $conds, $fname, $options, and $join_conds parameters that are used by many of the other wrapper functions.

Wrapper function: select
The select function provides the MediaWiki interface for a SELECT statement. The components of the SELECT statement are coded as parameters of the select function. An example is
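<syntaxhighlight lang="php">
// A sketch using the core category table; the conditions are illustrative.
$res = $dbr->select(
	'category',                        // $table
	[ 'cat_title', 'cat_pages' ],      // $vars (columns)
	'cat_pages > 0',                   // $conds
	__METHOD__,                        // $fname: the caller, for profiling
	[ 'ORDER BY' => 'cat_title ASC' ]  // $options
);
</syntaxhighlight>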

This example corresponds to the query
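<syntaxhighlight lang="sql">
SELECT cat_title, cat_pages
FROM category
WHERE cat_pages > 0
ORDER BY cat_title ASC;
</syntaxhighlight>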

JOINs are also possible; for example:
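<syntaxhighlight lang="php">
// A sketch; equivalent to the query builder JOIN example above.
$res = $dbr->select(
	[ 'watchlist', 'user_properties' ],  // $table
	'wl_user',                           // $vars
	[                                    // $conds
		'wl_namespace' => 0,
		'wl_title' => 'Main_page',
		'up_property' => 'enotifwatchlistpages',
	],
	__METHOD__,                          // $fname
	[],                                  // $options
	[                                    // $join_conds
		'user_properties' => [ 'JOIN', 'wl_user = up_user' ],
	]
);
</syntaxhighlight>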

This example corresponds to the query
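<syntaxhighlight lang="sql">
SELECT wl_user
FROM watchlist
JOIN user_properties ON wl_user = up_user
WHERE wl_namespace = 0
AND wl_title = 'Main_page'
AND up_property = 'enotifwatchlistpages';
</syntaxhighlight>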

The Database class documentation provides an example of how to use table aliases in queries.

Arguments are either single values (such as 'category' and 'cat_pages > 0') or arrays, if more than one value is passed for an argument position (such as [ 'cat_pages > 0', $myNextCond ]). If you pass strings to the third or fifth argument, you must manually use Database::addQuotes on your values as you construct the string, as the wrapper will not do this for you. The values for table names (1st argument) and field names (2nd argument) must not be user-controlled. The array construction for $conds is somewhat limited; it can only do equality and IN relationships (i.e. WHERE key = 'value' or WHERE key IN ( ... )).

You can access individual rows of the result using a foreach loop. Once you have a row object, you can use the -> operator to access a specific field. A full example might be:
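<syntaxhighlight lang="php">
// A sketch, continuing the category example above
$output = '';
$dbr = $lb->getConnection( DB_REPLICA );
$res = $dbr->select(
	'category',
	[ 'cat_title', 'cat_pages' ],
	'cat_pages > 0',
	__METHOD__,
	[ 'ORDER BY' => 'cat_title ASC' ]
);
foreach ( $res as $row ) {
	$output .= $row->cat_title . ' (' . $row->cat_pages . ")\n";
}
</syntaxhighlight>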

This puts an alphabetical list of categories, with the number of entries each one has, into the variable $output. If you are outputting HTML, be sure to escape values from the database with htmlspecialchars().

Convenience functions
For compatibility with PostgreSQL, insert IDs are obtained using nextSequenceValue and insertId. The parameter for nextSequenceValue can be obtained from the CREATE SEQUENCE statement in maintenance/postgres/tables.sql and always follows the format x_y_seq, with x being the table name (e.g. page) and y being the primary key (e.g. page_id), e.g. page_page_id_seq. For example:
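<syntaxhighlight lang="php">
// A sketch: nextSequenceValue() returns null on MySQL, where the
// auto-increment value is assigned by the database itself.
$id = $dbw->nextSequenceValue( 'page_page_id_seq' );
$dbw->insert(
	'page',
	[ 'page_id' => $id /* , ... other fields ... */ ],
	__METHOD__
);
$id = $dbw->insertId();
</syntaxhighlight>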

For some other useful functions, see Manual:Database.php#Functions.



Basic query optimization
MediaWiki developers who need to write DB queries should have an understanding of databases and of the performance issues they bring with them. Patches containing unacceptably slow features will not be accepted. Unindexed queries are generally not welcome in MediaWiki, except in special pages derived from QueryPage. It's a common pitfall for new developers to submit code containing SQL queries which examine huge numbers of rows. Remember that COUNT(*) is O(N); counting rows in a table is like counting beans in a bucket.

Backward compatibility
Often, due to design changes to the DB, different DB accesses are necessary to ensure backward compatibility. This can be handled, for example, with the appropriate global variables:
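<syntaxhighlight lang="php">
// A minimal sketch; the version threshold and branches are illustrative.
global $wgVersion; // deprecated since 1.35 in favour of the MW_VERSION constant

if ( version_compare( $wgVersion, '1.35', '>=' ) ) {
	// Query the current schema
} else {
	// Fall back to the old schema
}
</syntaxhighlight>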

Replication
Large installations of MediaWiki, such as Wikipedia, use a large set of replica MySQL servers replicating writes made to a primary MySQL server. It is important to understand the issues associated with this setup if you want to write code destined for Wikipedia.

It's often the case that the best algorithm to use for a given task depends on whether or not replication is in use. Due to our unabashed Wikipedia-centrism, we often just use the replication-friendly version, but if you like, you can use ILoadBalancer::getServerCount() to check whether replication is in use.
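For example (a sketch, assuming $lb is an ILoadBalancer instance as above):

<syntaxhighlight lang="php">
if ( $lb->getServerCount() > 1 ) {
	// Replication is in use
}
</syntaxhighlight>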

Lag
Lag primarily occurs when large write queries are sent to the primary server. Writes on the primary server are executed in parallel, but they are executed serially when replicated to the replicas. The primary server writes the query to the binlog when the transaction is committed. The replicas poll the binlog and start executing the query as soon as it appears. They can service reads while they are performing a write query, but will not read anything more from the binlog and thus will perform no more writes. This means that if the write query runs for a long time, the replicas will lag behind the primary server for the time it takes for the write query to complete.

Lag can be exacerbated by high read load. MediaWiki's load balancer will stop sending reads to a replica when it is lagged by more than 30 seconds. If the load ratios are set incorrectly, or if there is too much load generally, this may lead to a replica permanently hovering around 30 seconds lag.

If all replicas are lagged by more than 30 seconds, MediaWiki will stop writing to the database. All edits and other write operations will be refused, with an error returned to the user. This gives the replicas a chance to catch up. Before we had this mechanism, the replicas would regularly lag by several minutes, making review of recent edits difficult.

In addition to this, MediaWiki attempts to ensure that the user sees events occurring on the wiki in chronological order. A few seconds of lag can be tolerated, as long as the user sees a consistent picture from subsequent requests. This is done by saving the primary binlog position in the session, and then at the start of each request, waiting for the replica to catch up to that position before doing any reads from it. If this wait times out, reads are allowed anyway, but the request is considered to be in "lagged replica mode". Lagged replica mode can be checked by calling ILoadBalancer::getLaggedReplicaMode(). The only practical consequence at present is a warning displayed in the page footer.

Shell users can check replication lag with the getLagTimes.php maintenance script; other users can check using the API.

Databases often have their own monitoring systems in place as well; see for instance MariaDB (Wikimedia) and wikitech:Help:Toolforge/Database (Wikimedia Cloud VPS).

Lag avoidance
To avoid excessive lag, queries that write large numbers of rows should be split up, generally to write one row at a time. Multi-row INSERT ... SELECT queries are the worst offenders and should be avoided altogether. Instead, do the SELECT first and then the INSERT.

Even small writes can cause lag if they are done at a very high speed and replication is unable to keep up. This most commonly happens in maintenance scripts. To prevent it, you should call waitForReplication() on the load balancer factory after every few hundred writes. Most scripts make the exact number configurable:
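<syntaxhighlight lang="php">
// A sketch of a batched write loop in a Maintenance subclass; the table
// name is hypothetical, and getBatchSize() reflects the --batch-size option.
$lbFactory = MediaWikiServices::getInstance()->getDBLoadBalancerFactory();
$count = 0;
foreach ( $rowsToWrite as $row ) {
	$dbw->insert( 'mytable', $row, __METHOD__ );
	if ( ++$count % $this->getBatchSize() == 0 ) {
		$lbFactory->waitForReplication(); // let the replicas catch up
	}
}
</syntaxhighlight>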

Working with lag
Despite our best efforts, it's not practical to guarantee a low-lag environment. Replication lag will usually be less than one second, but may occasionally be up to 30 seconds. For scalability, it's very important to keep the load on the primary server low, so simply sending all your queries to the primary server is not the answer. When you have a genuine need for up-to-date data, the following approach is advised:


 * 1) Do a quick query to the primary server for a sequence number or timestamp
 * 2) Run the full query on the replica and check if it matches the data you got from the primary server
 * 3) If it doesn't, run the full query on the primary server
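A sketch of this pattern; the revision-table queries are purely illustrative:

<syntaxhighlight lang="php">
// 1) Quick query to the primary for the most recent ID
$maxId = $dbw->selectField( 'revision', 'MAX(rev_id)', [], __METHOD__ );

// 2) Run the full query on the replica and compare
$row = $dbr->selectRow( 'revision', [ 'rev_id', 'rev_timestamp' ],
	[ 'rev_id' => $maxId ], __METHOD__ );

// 3) If the replica has not caught up yet, run it on the primary
if ( !$row ) {
	$row = $dbw->selectRow( 'revision', [ 'rev_id', 'rev_timestamp' ],
		[ 'rev_id' => $maxId ], __METHOD__ );
}
</syntaxhighlight>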

To avoid swamping the primary server every time the replicas lag, use of this approach should be kept to a minimum. In most cases you should just read from the replica and let the user deal with the delay.

Lock contention
Due to the high write rate on Wikipedia (and some other wikis), MediaWiki developers need to be very careful to structure their writes to avoid long-lasting locks. By default, MediaWiki opens a transaction at the first query and commits it before the output is sent. Locks will be held from the time the query is done until the commit. So you can reduce lock time by doing as much processing as possible before you do your write queries. Update operations which do not require database access can be delayed until after the commit by adding an update object to the DeferredUpdates queue.
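For example, a deferred update can be queued like this (a minimal sketch):

<syntaxhighlight lang="php">
DeferredUpdates::addCallableUpdate( static function () {
	// Runs after the main transaction round has committed
} );
</syntaxhighlight>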

Often this approach is not good enough, and it becomes necessary to enclose small groups of queries in their own transaction. Use the following syntax:
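<syntaxhighlight lang="php">
$dbw->startAtomic( __METHOD__ );
// ... a small group of write queries ...
$dbw->endAtomic( __METHOD__ );
</syntaxhighlight>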

Use of locking reads (e.g. the FOR UPDATE clause) is not advised. They are poorly implemented in InnoDB and will cause regular deadlock errors. It's also surprisingly easy to cripple the wiki with lock contention.

Instead of locking reads, combine your existence checks into your write queries, by using an appropriate condition in the WHERE clause of an UPDATE, or by using unique indexes in combination with INSERT IGNORE. Then use the affected row count to see if the query succeeded.
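For example (a sketch; the table and its unique index on mt_key are hypothetical):

<syntaxhighlight lang="php">
$dbw->insert(
	'mytable',
	[ 'mt_key' => $key, 'mt_value' => $value ],
	__METHOD__,
	[ 'IGNORE' ] // silently skip the insert if mt_key already exists
);
if ( $dbw->affectedRows() > 0 ) {
	// The row did not exist before: the insert succeeded
}
</syntaxhighlight>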

Database schema
Don't forget about indexes when designing databases; things may work smoothly on your test wiki with a dozen pages, but will bring a real wiki to a halt. See the section on query optimization above for details.

For naming conventions, see Manual:Coding conventions/Database.

SQLite compatibility
When writing MySQL table definitions or upgrade patches, it is important to remember that SQLite shares MySQL's schema, but that works only if definitions are written in a specific way:


 * Primary keys must be declared within the main table declaration, while normal keys should be added separately with CREATE INDEX:
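<syntaxhighlight lang="sql">
-- A sketch using MediaWiki's /*_*/ table-prefix and /*i*/ index-name
-- markers; the table and field names are illustrative.
CREATE TABLE /*_*/dummy_table (
	dt_id INT UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
	dt_name VARCHAR(255) NOT NULL
) /*$wgDBTableOptions*/;

CREATE INDEX /*i*/dt_name ON /*_*/dummy_table (dt_name);
</syntaxhighlight>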

However, primary keys spanning more than one field should be included in the main table definition:
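<syntaxhighlight lang="sql">
-- Illustrative multi-column primary key
CREATE TABLE /*_*/dummy_link (
	dl_from INT UNSIGNED NOT NULL,
	dl_to INT UNSIGNED NOT NULL,
	PRIMARY KEY (dl_from, dl_to)
) /*$wgDBTableOptions*/;
</syntaxhighlight>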


 * Don't add more than one column per statement:
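<syntaxhighlight lang="sql">
-- Wrong: SQLite cannot add two columns in one statement
ALTER TABLE /*_*/dummy_table ADD COLUMN dt_size INT, ADD COLUMN dt_flags INT;

-- Right: one ALTER TABLE statement per added column
ALTER TABLE /*_*/dummy_table ADD COLUMN dt_size INT;
ALTER TABLE /*_*/dummy_table ADD COLUMN dt_flags INT;
</syntaxhighlight>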


 * Set explicit defaults when adding NOT NULL columns:
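<syntaxhighlight lang="sql">
-- Unlike MySQL, SQLite does not invent values for existing rows,
-- so an explicit default is required (field name illustrative)
ALTER TABLE /*_*/dummy_table ADD COLUMN dt_count INT NOT NULL DEFAULT 0;
</syntaxhighlight>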

You can run basic compatibility checks with:


 * php sqlite.php --check-syntax tables-generated.sql - MediaWiki 1.36+
 * php sqlite.php --check-syntax tables.sql - MediaWiki 1.35 and earlier

Or, if you need to test an update patch, both:


 * php sqlite.php --check-syntax yourpatch.sql tables-generated.sql - MediaWiki 1.36+
 * php sqlite.php --check-syntax yourpatch.sql tables.sql - MediaWiki 1.35 and earlier
 * Since DB patches update the tables.sql file as well, for this check you should pass in the pre-commit version of tables.sql (the file with the full DB definition). Otherwise, you can get an error if you e.g. drop an index, since the index no longer exists in tables.sql after your patch removed it.

The above assumes you're in $IP/maintenance/; otherwise, pass the full path of the file. For extension patches, use the extension's equivalent of these files.



See also

 * LoadExtensionSchemaUpdates &mdash; If an extension requires changes to the database when MediaWiki is updated, that can be done with this hook. Users can then update their wiki by running update.php.