Manual:Importing XML dumps

This page describes methods to import XML dumps.

MediaWiki uses an abstract XML-based format for content dumps. This is what Special:Export generates, and it is also the format used for the XML dumps of Wikipedia and other Wikimedia sites, as described in Data_dumps. The format is explained in some detail in meta:Help:Export.

There are several methods for importing such XML dumps:

Using Special:Import
Special:Import can be used by wiki users with the import permission (by default, users in the sysop group) to import a small number of pages (about 100 should be safe). Trying to import large dumps this way may result in timeouts or connection failures. See meta:Help:Import for a detailed description.

Using importDump.php
This is the recommended method for most uses.

importDump.php is a command line script located in the maintenance directory of your MediaWiki installation. If you have shell access, you can call it like this:

php importDump.php <dumpfile>

where <dumpfile> is the name of the XML dump file. If the file is compressed and has a .gz or .bz2 file extension, it is decompressed on the fly automatically.

Running importDump.php can take quite a long time. For a dump of a large Wikipedia with millions of pages, it may take days, even on a fast server. Also note that the information in meta:Help:Import about merging histories etc. applies here as well.

After running this, you may want to run rebuildrecentchanges.php in order to update the content of your Special:Recentchanges page.
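The steps above can be sketched as a small shell wrapper. The dump filename and the layout of the MediaWiki installation are assumptions here, not part of this page; the commented lines show what you would actually run on a real installation.

```shell
#!/bin/sh
# Sketch of an importDump.php run. The dump filename and the MediaWiki
# directory layout are assumptions.
DUMP="${1:-dump.xml.bz2}"

# importDump.php decompresses .gz/.bz2 dumps itself; this case statement
# just shows the equivalent manual choice of a decompressor by extension:
case "$DUMP" in
  *.bz2) CAT=bzcat ;;   # bzip2-compressed dump
  *.gz)  CAT=zcat  ;;   # gzip-compressed dump
  *)     CAT=cat   ;;   # plain XML
esac
echo "would run: $CAT $DUMP | php maintenance/importDump.php"

# On a real installation (from the MediaWiki root directory):
# php maintenance/importDump.php "$DUMP"
# php maintenance/rebuildrecentchanges.php
```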

Using mwdumper
mwdumper is a Java application that can be used to read, write and convert MediaWiki XML dumps. It can generate an SQL dump from the XML file (for later use with mysql or phpMyAdmin) as well as import into the database directly. It is a lot faster than importDump.php; however, it only imports the revisions (page contents) and does not update the internal link tables accordingly. That means category pages and many special pages will show incomplete or incorrect information. To fix this, you have to run rebuildall.php, which takes quite a long time because it has to parse all pages. Overall, this approach may even be slower than using importDump.php directly.
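A typical mwdumper pipeline looks like the sketch below. The jar location, database name and MySQL credentials are assumptions; the database must already contain the MediaWiki tables before loading the SQL.

```shell
# Sketch of the mwdumper pipeline; jar location, database name and MySQL
# credentials are assumptions.
DUMP="dump.xml.bz2"
DB="wikidb"

# Generate SQL from the XML dump and pipe it straight into MySQL:
# java -jar mwdumper.jar --format=sql:1.5 "$DUMP" | mysql -u wikiuser -p "$DB"

# Or write the SQL to a file for later use with mysql or phpMyAdmin:
# java -jar mwdumper.jar --format=sql:1.5 "$DUMP" > dump.sql

# Afterwards, rebuild the link tables (slow: it reparses every page):
# php maintenance/rebuildall.php
echo "SQL would be loaded into database: $DB"
```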

Using xml2sql
Xml2sql is a standalone tool that converts a MediaWiki XML file into an SQL dump for use with mysql or phpMyAdmin. Just like mwdumper (see above), importing this way is fast, but it does not update secondary data such as the link tables, so you need to run rebuildall.php, which eats up that advantage.
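The overall workflow is similar to the mwdumper pipeline and is sketched below. The option names and output file names differ between xml2sql versions, so treat every flag, file name and database name here as an assumption and check the tool's own help output on your system.

```shell
# Sketch only: xml2sql invocation details vary between versions, so the
# commands below are assumptions to be verified locally.
DUMP="dump.xml"
DB="wikidb"

# Convert the XML dump into SQL (or tab-delimited files, depending on version):
# xml2sql "$DUMP"

# Load the result with mysql or phpMyAdmin, for example:
# mysql -u wikiuser -p "$DB" < dump.sql

# As with mwdumper, rebuild the secondary tables afterwards:
# php maintenance/rebuildall.php
echo "conversion sketch: $DUMP -> $DB"
```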

xml2sql is not an official tool and is not maintained by MediaWiki developers. It may become outdated and incompatible with the latest version of MediaWiki!