Manual:MWDumper

MWDumper is a quick little tool for extracting sets of pages from a MediaWiki dump file.

To import current XML export dumps, you should build MWDumper from source; be aware that the build depends on a number of scattered libraries dating from around 2005. A mostly up-to-date build is available at https://integration.mediawiki.org/ci/job/MWDumper-package/org.wikimedia$mwdumper/.

Third-party builds that start in GUI mode by default do not need most of the parameters below; just launch the jar (for example with java -jar mwdumper.jar). However, such builds may not contain the latest bug fixes. There are also third-party builds without the GUI default.

Current WMF builds are produced by Jenkins; check the copy at download.wikimedia.org first, and for the absolute latest version check the Jenkins build.

It can read MediaWiki XML export dumps (version 0.3, minus uploads), perform optional filtering, and write the result back out as XML or as SQL statements for direct insertion into a database using the 1.4 or 1.5 schema.

It is still very much under construction.

While this can be used to import XML dumps into a MediaWiki database, it may not always be the best choice for this task. See Manual:Importing XML dumps for an overview.

Usage
Before using mwdumper, your page, text, and revision tables must be empty.
In SQL: DELETE FROM page; DELETE FROM text; DELETE FROM revision;
In the maintenance directory: php rebuildall.php
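
For example, assuming a database named wikidb and a root account (adjust the user and database name to your setup), the tables can be emptied from the shell with:

mysql -u root -p wikidb -e 'DELETE FROM page; DELETE FROM text; DELETE FROM revision;'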

Sample command line for a direct database import, piping the generated SQL to MySQL:
java -jar mwdumper.jar --format=sql:1.5 pages_full.xml.bz2 | mysql -u <username> -p <databasename>
Replace <username> and <databasename> with your MySQL user and the target wiki database in the above sample command; MySQL will prompt for the password.

If you want to use the output of mwdumper in a JDBC URL, you should set characterEncoding=UTF8 in the query string.
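
For example, a direct-connection output parameter might look like the following (the host, database name, and credentials here are placeholders):

--output=mysql://127.0.0.1/wikidb?user=wikiuser&password=secret&characterEncoding=UTF8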

Also make sure that your MediaWiki tables use CHARACTER SET=binary. Otherwise, you may get duplicate-key errors because MySQL fails to distinguish certain characters.
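
If your tables were created with a different character set, a rough sketch of the conversion (assuming a database named wikidb and no table prefix; take a backup first):

mysql -u root -p wikidb <<'SQL'
ALTER TABLE page CONVERT TO CHARACTER SET binary;
ALTER TABLE revision CONVERT TO CHARACTER SET binary;
ALTER TABLE text CONVERT TO CHARACTER SET binary;
SQL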

Complex filtering
You can also do complex filtering to produce multiple output files:

java -jar mwdumper.jar \
  --output=bzip2:pages_public.xml.bz2 \
  --format=xml \
  --filter=notalk \
  --filter=namespace:\!NS_USER \
  --filter=latest \
  --output=bzip2:pages_current.xml.bz2 \
  --format=xml \
  --filter=latest \
  --output=gzip:pages_full_1.5.sql.gz \
  --format=sql:1.5 \
  --output=gzip:pages_full_1.4.sql.gz \
  --format=sql:1.4 \
  pages_full.xml.gz

A bare parameter will be interpreted as a file to read XML input from; if "-" or none is given, input will be read from stdin. Input files with ".gz" or ".bz2" extensions will be decompressed as gzip and bzip2 streams, respectively.
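
For example, you can decompress outside of mwdumper, read the XML from stdin with "-", and keep only the latest revision of each page (the output file name is just an illustration):

bzcat pages_full.xml.bz2 | java -jar mwdumper.jar --format=xml --filter=latest - > pages_current.xml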

Internal decompression of 7-zip .7z files is not yet supported; you can pipe such files through p7zip's 7za:

7za e -so pages_full.xml.7z | java -jar mwdumper.jar --format=sql:1.5 | mysql -u <username> -p <databasename>
 * The JRE does not allow you to mix the -jar and -classpath arguments (hence the different command structure).
 * The --output argument must come before the --format argument.
 * The ampersand in the MySQL URI must be escaped on Unix-like systems.
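
Putting these notes together, a direct-connection invocation on a Unix-like system might look roughly like this (paths, database name, and credentials are placeholders; the Connector/J jar version will differ on your machine):

java -classpath mwdumper.jar:mysql-connector-java-3.1.12-bin.jar \
  org.mediawiki.dumper.Dumper \
  "--output=mysql://127.0.0.1/wikidb?user=wikiuser&password=secret&characterEncoding=UTF8" \
  --format=sql:1.5 \
  pages_full.xml.bz2

Quoting the URI protects the ampersands from the shell, and --output is given before --format, as required.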

Example of using mwdumper with a direct connection to MySQL on Windows XP
If you have problems with the example above, the following procedure works better on Windows XP.

1. Create a batch file with the following text.

set class=mwdumper.jar;mysql-connector-java-3.1.12/mysql-connector-java-3.1.12-bin.jar
set data="C:\Documents and Settings\All Users.WINDOWS\Documents\en.wiki\enwiki-20060207-pages-articles.xml.bz2"
java -client -classpath %class% org.mediawiki.dumper.Dumper "--output=mysql://127.0.0.1/wikidb?user=<username>&password=<password>" "--format=sql:1.5" %data%

pause

2. Download mysql-connector-java-3.1.12-bin.jar and mwdumper.jar.

3. Run the batch file.

Note


 * 1) It still reports a problem with the import files, "duplicate key"...
 * 2) The class path separator is a ; (semi-colon) in this example; different from the example above.

The "duplicate key" error may result from the page, revision and text tables in the database not being empty, or from character encoding problems. See A note on character encoding.

Troubleshooting
If strange XML errors are encountered under Java 1.4, try 1.5:
 * http://java.sun.com/j2se/1.5.0/download.jsp
 * http://www.apple.com/downloads/macosx/apple/java2se50release1.html

If mwdumper gives a java.lang.IllegalArgumentException: Invalid contributor exception, see bug 18328.

If it gives a java.lang.OutOfMemoryError: Java heap space exception, run it with a larger heap size using the -Xms and -Xmx options (the first sets the starting size, the second the maximum size) (bug 21937).
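
A sketch with assumed heap sizes (pick values that fit your machine and the dump you are importing):

java -Xms128m -Xmx1000m -jar mwdumper.jar --format=sql:1.5 pages_full.xml.bz2 | mysql -u <username> -p <databasename>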

Performance Tips
To speed up importing into a database, you might try:


 * Temporarily remove all indexes and auto_increment fields from the following tables: page, revision and text. This gives a tremendous speed bump, because MySQL will otherwise be updating these indexes after each insert. Don't forget to recreate the indexes afterwards.
 * Java's -server option may significantly increase performance on some versions of Sun's JVM for large files. (Not all installations will have this available.)
 * Increase MySQL's innodb_log_file_size. The default is as little as 5 MB, but you can improve performance dramatically by increasing it to reduce the number of disk writes. (See the my-huge.cnf sample config.)
 * If you don't need it, disable the binary log (log-bin option) during the import. On a standalone machine this is just wasteful, writing a second copy of every query that you'll never use. (One way to skip it per-session is sketched below.)
 * Various other wacky tips in the MySQL reference manual.
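
If you cannot change the server configuration, one alternative (assuming the importing user has the SUPER privilege) is to turn off binary logging just for the import session by prepending SET sql_log_bin = 0; to the SQL stream, roughly like this:

( echo 'SET sql_log_bin = 0;'; java -jar mwdumper.jar --format=sql:1.5 pages_full.xml.bz2 ) | mysql -u <username> -p <databasename>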

Reporting bugs
Bugs can be reported to the mwdumper product in the MediaWiki Bugzilla.

Todo

 * Add some more junit tests
 * Include table initialization in SQL output
 * Allow use of table prefixes in SQL output
 * Ensure that titles and other bits are validated correctly.
 * Test XML input for robustness
 * Provide filter to strip ID numbers
 * <siteinfo> is technically optional; live without it and use default namespaces
 * GUI frontend(s)
 * Port to Python? ;)

If you have to load a huge wiki this might help
Below is a set of instructions that makes loading a large wiki less error-prone and perhaps a bit faster. It is not a script but rather a set of commands you can copy into bash (ideally running in a screen session). You'll have to babysit the process and customize it for your needs.


 * 1) Dump SQL to disk in even-sized chunks.  This takes about 80 GB of disk space and 3 hours for enwiki.
 * 2) Set up the database to receive the chunks.  This takes a few seconds.
 * 3) Import the chunks.  This takes a few days for enwiki.
 * 4) Rebuild the database.  This takes another day for enwiki.
 * 5) Run standard post-import cleanup.  I haven't finished this step successfully yet, but I think some of it can be skipped.

export DUMP_PREFIX=/public/datasets/public/enwiki/20130604/enwiki-20130604
export DIR_ROOT=/data/project/dump
export DIR=${DIR_ROOT}/enwiki
export EXPORT_PROCESSES=4
export IMPORT_PROCESSES=4
export DB=enwiki2
export EXPORT_FILE_SIZE=5
export EXPORT_FILE_SUFFIX_LENGTH=8
export LOG=~/log

bash -c 'sleep 1 && echo y' | mysqladmin drop ${DB} -u root
sudo rm -rf ${DIR}
rm -rf ${LOG}

sudo mkdir -p ${DIR}
sudo chown -R ${USER} ${DIR_ROOT}
mkdir -p ${LOG}

sudo apt-get install openjdk-7-jdk libicu-dev -y  # jdk for mwdumper and libicu-dev for uconv
ls -1S ${DUMP_PREFIX}-pages-meta-current*.xml-p* | xargs -I{} -P${EXPORT_PROCESSES} -t bash -c '
  mkdir -p ${DIR}/$(basename {})
  cd ${DIR}/$(basename {})
  bunzip2 -c {} |
    uconv -f UTF-8 -t ascii --callback escape-xml-dec -v 2> ${LOG}/$(basename {}).uconv |
    java -jar ~/mwdumper-1.16.jar --format=sql:1.5 2> ${LOG}/$(basename {}).mwdumper |
    grep INSERT |
    split -l ${EXPORT_FILE_SIZE} -a ${EXPORT_FILE_SUFFIX_LENGTH} 2> ${LOG}/$(basename {}).split
'
 * 1) Dump SQL to disk in even sized chunks.
 * 2) Sort by size descending to keep as many threads as possible hopping.
 * 3) uconv cleans up UTF-8 errors in the source files.
 * 4) grep keeps only the INSERT statements, stripping the BEGIN and COMMIT statements that mwdumper emits; transactions are re-added around each chunk during the import step below.

mysqladmin create ${DB} --default-character-set=utf8 -u root
mysql -u root ${DB} < /srv/mediawiki/maintenance/tables.sql
mysql -u root ${DB} <<HERE
ALTER TABLE page
  CHANGE page_id page_id INTEGER UNSIGNED,
  DROP INDEX name_title,
  DROP INDEX page_random,
  DROP INDEX page_len,
  DROP INDEX page_redirect_namespace_len;
ALTER TABLE revision
  CHANGE rev_id rev_id INTEGER UNSIGNED,
  DROP INDEX rev_page_id,
  DROP INDEX rev_timestamp,
  DROP INDEX page_timestamp,
  DROP INDEX user_timestamp,
  DROP INDEX usertext_timestamp,
  DROP INDEX page_user_timestamp;
ALTER TABLE text
  CHANGE old_id old_id INTEGER UNSIGNED;
HERE
 * 1) Set up the database to receive the chunks.

echo 'BEGIN;' > ${DIR_ROOT}/BEGIN
echo 'COMMIT;' > ${DIR_ROOT}/COMMIT
find ${DIR} -type f | sort -R | xargs -I{} -P${IMPORT_PROCESSES} -t bash -c '
  cat ${DIR_ROOT}/BEGIN {} ${DIR_ROOT}/COMMIT | mysql -u root ${DB} && rm {}
'
 * 1) Import the chunks.
 * 2) Each chunk is wrapped in a transaction, and if the import succeeds the chunk is removed from disk.
 * 3) This means you can safely Ctrl-C the process at any time and rerun this block; it will pick up where it left off.
 * 4) The worst case is a chunk that was imported but not deleted, in which case you'll see MySQL duplicate key errors on the rerun; clean up the database or remove the offending file by hand.

mysql -u root ${DB} <<HERE
CREATE TABLE bad_page AS
  SELECT page_namespace, page_title FROM page
  GROUP BY page_namespace, page_title HAVING COUNT(*) > 1;
UPDATE page, bad_page
  SET page.page_title = CONCAT(page.page_title, page.page_id)
  WHERE page.page_namespace = bad_page.page_namespace AND page.page_title = bad_page.page_title;
DROP TABLE bad_page;
ALTER TABLE page
  CHANGE page_id page_id INTEGER UNSIGNED AUTO_INCREMENT,
  ADD UNIQUE INDEX name_title (page_namespace,page_title),
  ADD INDEX page_random (page_random),
  ADD INDEX page_len (page_len),
  ADD INDEX page_redirect_namespace_len (page_is_redirect, page_namespace, page_len);
ALTER TABLE revision
  CHANGE rev_id rev_id INTEGER UNSIGNED AUTO_INCREMENT,
  ADD UNIQUE INDEX rev_page_id (rev_page, rev_id),
  ADD INDEX rev_timestamp (rev_timestamp),
  ADD INDEX page_timestamp (rev_page,rev_timestamp),
  ADD INDEX user_timestamp (rev_user,rev_timestamp),
  ADD INDEX usertext_timestamp (rev_user_text,rev_timestamp),
  ADD INDEX page_user_timestamp (rev_page,rev_user,rev_timestamp);
ALTER TABLE text
  CHANGE old_id old_id INTEGER UNSIGNED AUTO_INCREMENT;
HERE
 * 1) Rebuild the DB

cd /srv/mediawiki
php maintenance/update.php
 * 1) Run standard post import cleanup

Change history (abbreviated)

 * 2005-10-25: Switched SqlWriter.sqlEscape back to less memory-hungry StringBuffer
 * 2005-10-24: Fixed SQL output in non-UTF-8 locales
 * 2005-10-21: Applied more speedup patches from Folke
 * 2005-10-11: SQL direct connection, GUI work begins
 * 2005-10-10: Applied speedup patches from Folke Behrens
 * 2005-10-05: Use bulk inserts in SQL mode
 * 2005-09-29: Converted from C# to Java
 * 2005-08-27: Initial extraction code