Manual talk:MWDumper

Process Error
When I tried to process an XML file exported by the Wikipedia export page, i.e. import it into a MySQL database, I got this error: "XML document structures must start and end within the same entity". Has anyone come across this before? How did you eventually solve the problem? Or is it a bug at the moment? Thank you all in advance. I look forward to someone discussing it.

Table Creation
I am interested in using Wikipedia for research and do not need the web front end, so I cannot use the browser-based setup that MediaWiki uses. Is there either a list of the CREATE TABLE statements necessary to make this database, or a non-browser version of the MediaWiki setup?
 * Just install MediaWiki and run SHOW CREATE TABLE `table_name` in your SQL client.

GFDL
From http://mail.wikipedia.org/pipermail/wikitech-l/2006-February/033975.html:


 * I hereby declare it GFDL and RTFM-compatible. :) -- brion vibber


 * So this article, which started as the README file from MWDumper, is allowed on the wiki. This might be good, as I tend to read wikis more than I read READMEs! --Kernigh 04:53, 12 February 2006 (UTC)


MWDumper error
Running WinXP, XAMPP, JRE 1.5.0_08, MySQL JDBC 3.1.13

http://f.foto.radikal.ru/0610/4d1d041f3fd7.png --89.178.61.174 22:09, 9 October 2006 (UTC)

MWDumper Issues
Using MWDumper, how would I convert a Wikipedia/Wikibooks XML dump to an SQL file?

ANSWER
java -jar mwdumper.jar --format=sql:1.5 x.xml > y.sql, where x.xml is the name of your input file and y.sql is the name of your output file.

Problems with MWDumper
When I run: java -jar mwdumper.jar -–format=sql:1.5 enwiki-latest-pages-articles.xml.bz2 | c:\wamp\mysql\bin\mysql -u wikiuser -p wikidb

I get:

Exception in thread "main" java.io.FileNotFoundException: -ûformat=sql:1.5 (The system cannot find the file specified) at java.io.FileInputStream.open(Native Method) at java.io.FileInputStream. (Unknown Source) at java.io.FileInputStream. (Unknown Source) at org.mediawiki.dumper.Tools.openInputFile(Unknown Source) at org.mediawiki.dumper.Dumper.main(Unknown Source)

Please help!

SOLUTION:
For the above problem, here is the fix:

java -jar mwdumper.jar --format=sql:1.5 enwiki-latest-pages-articles.xml.bz2 | c:\wamp\mysql\bin\mysql -u wikiuser -p wikidb

Notice "-ûformat=sql:1.5" in the error message? The problem is that one of the hyphens in "--format" is actually an en dash ("–"), caused by copy&paste. Just delete the dashes and retype them by hand, so that "-–" becomes "--" in front of format=sql:1.5.
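A quick way to check whether a pasted command contains the bad character is to grep for the en dash's UTF-8 byte sequence. This is just a sketch; cmd.txt is a hypothetical scratch file holding the pasted command:

```shell
# Write a sample pasted command to a scratch file. The \xe2\x80\x93 bytes are
# the UTF-8 encoding of the en dash (U+2013) that copy&paste sneaks in.
printf 'java -jar mwdumper.jar -\xe2\x80\x93format=sql:1.5 x.xml\n' > cmd.txt

# grep for those bytes; a match means a typographic dash needs retyping as --
if grep -q $'\xe2\x80\x93' cmd.txt; then
  echo "typographic dash found: retype it as two ASCII hyphens (--)"
fi
```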

P.S. For a really fast dump (60 min vs 24 hrs), decompress the enwiki-latest-pages-articles.xml.bz2 file so that it becomes enwiki-latest-pages-articles.xml, then use the command: java -jar mwdumper.jar --format=sql:1.5 enwiki-latest-pages-articles.xml | c:\wamp\mysql\bin\mysql -u wikiuser -p wikidb

Page Limitations?
I'm attempting to import a Wikipedia database dump comprising about 4,800,000 pages on a Windows XP system. I'm using the following command: java -jar mwdumper.jar --format=sql:1.5 enwiki-20070402-pages-articles.xml | mysql -u root -p wikidb

Everything appears to go smoothly, the progress indicator goes up to the expected 4 million and something, but only 432,000 pages are actually imported into the MySQL database. Why is this? Any assistance is greatly appreciated. Uiop 02:31, 15 April 2007 (UTC)


 * MySQL experienced some error, and the error message scrolled off your screen. To aid in debugging, either save the output from mysql's stderr stream, or run mwdumper to a file first, etc. --brion 21:15, 20 April 2007 (UTC)

PROBLEM SOLVED
Mate, I had the same problem with it stopping at 432,000 pages. I'm assuming you're using WAMP here.

The problem is with the log files. If you go to C:\wamp\mysql\data (or whatever your equivalent directory is) you'll see two files, ib_logfile0 and ib_logfile1. You'll notice they are both 10 MB. They need to be much bigger. This is how you fix it.

To start off, you'll need to delete the dump you've been doing so far. Left click on the WAMP icon in the taskbar, choose MySQL, then MySQL Console. It will ask you for a password, which is blank by default so just press enter. Now type the following commands:

use wikidb; delete from page; delete from revision; delete from text; quit

OK. Now, left click on the WAMP icon in the taskbar and choose Config Files and then 'my.ini'. Find the line innodb_log_file_size, and set this to 512M (was 10M in mine). Scroll to the bottom, and add the following line:

set-variable=max_allowed_packet=32M

Left-click on the WAMP icon in the taskbar and select MySQL->Stop Service. Open C:\wamp\mysql\data (or whatever your equivalent directory is) and delete ib_logfile0 and ib_logfile1. Left-click on the WAMP icon in the taskbar again, and select MySQL->Start / Resume Service.

Now go ahead and run mwdumper.

Happy dumping!
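For reference, the two my.ini settings changed in the steps above, collected in one place (512M and 32M are the values used in this thread; tune them to your dump size):

```ini
; bigger InnoDB log files so large imports don't fail partway (default was 10M)
innodb_log_file_size=512M
; allow larger INSERT packets from mwdumper (old-style my.ini syntax, as above)
set-variable=max_allowed_packet=32M
```

Remember that MySQL must be stopped and the old ib_logfile0/ib_logfile1 deleted before the new log file size takes effect, as described above.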

Same problem as above: how to disable innodb_log_file or make it greater than 2048M?
I am having the same problem as above: with innodb_log_file_size set to 500MB, about 400k pages are created. With innodb_log_file_size set to 2000MB, I get 1.1 million pages created. I would like to import Enwiki's 5 million pages, so I need a much larger innodb_log_file_size. However, MySQL crashes on startup if I set this to a value larger than 2047MB. According to http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html, the combined size of the log files is capped at 4GB. Does anyone know why this log file is written to so much by MWDumper, and how we can reduce the output to that file?

Can dump be resumed?
I am using mwdumper to import enwiki-20070402-pages-articles.xml. I got up to 1,144,000 pages and, instead of showing how many pages per second it was importing, it said (-/sec). At 1,115,000 it said the same thing.

After that, the import sped up dramatically: It says it's processing around (20/sec) but in fact it seems to be doing about 2000/sec, because those figures are flying past!

I'm afraid that after 30 hours of importing, I may have lost it. I don't want to start again, especially if that means the possibility of the same happening again. Debugging by trial and error could take the rest of my life!

Is there any way that I can resume the dump from 1,144,000 pages if need be?

(Hopefully this dump DOES work anyway. Maybe I need to increase innodb_log_file_size to 1024G or perhaps 2048G.)

ANSWERS: 1) I suggest

a) java -jar mwdumper.jar --format=sql:1.5 enwiki-latest-pages-articles.xml > a.sql

b) remove the statements from a.sql for whatever was already inserted into the tables.

c) mysql -u <user> -p <database> < a.sql

2) Previous fix:
 * Download the source. Edit src/org/mediawiki/dumper/ProgressFilter.java and change the following functions, replacing [REVISION_NUMBER] with the appropriate number.
 * Rebuild and execute. Disclaimer: Use at your own risk.

Importing to a database with table prefix for wiki
I want to import an "XML export" from MediaWiki to a local wiki, but it tries to import into non-prefixed tables, while my tables have a prefix. How can I solve this problem? Is there a way to import the XML into prefixed tables (like fa_page, fa_text, fa_revisions) with this software? It's a shame if it doesn't have this feature.--Soroush 16:40, 5 September 2007 (UTC)

Yes. Open a text editor and paste:

#!/usr/bin/perl
while(<>) { s/INTO /INTO yourprefixhere_/g; print; }

Save it as prefixer.pl. Run MWDumper with the --output=file:temp.sql option (instead of --output=mysql:...). Execute perl prefixer.pl < temp.sql > fill.sql. Then run mysql -u wikiuser -p yourpasswordhere, type use wikidb and then source fill.sql --Derbeth talk 21:11, 31 October 2007 (UTC)
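If Perl isn't handy, the same rewrite can be sketched with sed. The fa_ prefix and the sample INSERT lines below are just illustrations; the real temp.sql comes from MWDumper:

```shell
# Create a tiny stand-in for mwdumper's --output=file:temp.sql output
printf 'INSERT INTO page (page_id) VALUES (1);\nINSERT INTO text (old_id) VALUES (1);\n' > temp.sql

# Prepend the table prefix to every "INTO <table>" occurrence
sed 's/INTO /INTO fa_/g' temp.sql > fill.sql

cat fill.sql
# INSERT INTO fa_page (page_id) VALUES (1);
# INSERT INTO fa_text (old_id) VALUES (1);
```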

Source Code
Is the source code to mwdumper available?
 * http://svn.wikimedia.org/viewvc/mediawiki/trunk/mwdumper/ --78.106.145.69 22:17, 23 October 2007 (UTC)
 * http://svn.wikimedia.org/svnroot/mediawiki/trunk/mwdumper/ --82.255.239.71 09:52, 16 March 2008 (UTC)
 * https://git.wikimedia.org/git/mediawiki/tools/mwdumper.git

Overwrite
How to overwrite articles which already exist? Werran 21:08, 10 April 2008 (UTC)

Size restrictions
Maybe you could add a feature to limit how many pages are processed or how big the resulting dump can be?

More recent compiled version
The latest compiled version of MWDumper in tools/ dates from 2006-Feb-01, while http://svn.wikimedia.org/svnroot/mediawiki/trunk/mwdumper/README shows changes to the code up to 2007-07-06. The *.jar version from 2006 doesn't work on recent Commons dumps, and I don't know how to compile the program under Windows. Could you please make a more recent compiled version available? -- JovanCormac 06:28, 3 September 2009 (UTC)

This one? http://downloads.dbpedia.org/mwdumper_invalid_contributor.zip


 * Can someone compile the latest rev for Windows? The version above replaces the contributor credentials with 127.0.0.1 ;<

Filtering does not seem to work
I want to import only a few pages from the whole English Wikipedia into my database. I suppose I should give the titles of the desired pages in a file (one per line) and use the "--filter=list:fileName" option. But when I tried this option, filtering seems to have no effect: the script starts to import pages, saying 4 pages, 1000 versions, 4 pages, 2000 versions and so on, importing pages not listed in the filter.

This is the command that I use:

java -jar mwdumper.jar --filter=exactlist:titles --filter=latest --filter=notalk --output=file:out.txt --format=xml datasets/enwiki-latest-pages-meta-history.xml

0.4 compatible version?
Given that dumps are now in version 0.4 format ("http://www.mediawiki.org/xml/export-0.4/ http://www.mediawiki.org/xml/export-0.4.xsd" version="0.4") and MWDumper's page says "It can read MediaWiki XML export dumps (version 0.3, minus uploads)," are there plans to support the 0.4 version? I didn't have success with it as it is, perhaps operator error, but I think not. Thanks

Encoding
Here: Manual:MWDumper This is mentioned: '' Make sure the database is expecting utf8-encoded text. If the database is expecting latin1 (which MySQL does by default), you'll get invalid characters in your tables if you use the output of mwdumper directly. One way to do this is to pass --default-character-set=utf8 to mysql in the above sample command.

Also make sure that your MediaWiki tables use CHARACTER SET=binary. Otherwise, you may get error messages like Duplicate entry in UNIQUE Key 'name_title' because MySQL fails to distinguish certain characters.''

How is it possible to use --default-character-set=utf8 and make sure the character set=binary at the same time?

If the character set is utf8, it is not binary... Can somebody explain how to force CHARACTER SET=binary while using --default-character-set=utf8? Is this possible?

Steps taken to restore Frysk Wikipedia

 * Make sure you select the option 'Experimental MySQL 4.1/5.0 binary' when selecting the type of database (in Mediawiki 1.11.2 on Ubuntu 8.04).
 * This is the batch-file (thanks for the tip in bug 14379):
 * especially the &characterEncoding=UTF-8 helps a lot
 * the mwdumper program was updated to allow continuing when a batch of 100 records fails because of duplicate keys. (Yes, they still happen.) Please contact gerke dot ephorus dot groups at gmail dot com to request an updated version. (Sorry, no github version available yet, maybe after my holiday ;-) )

SQL Output Going to the Wrong Place
I am trying to simply take an XML dump and convert it to SQL code, which I will then run on a MySQL server. The code I've been using to do so is below:

java -jar mwdumper.jar --format=sql:1.5 --output=file:stubmetahistory.sql --quiet enwiki-20100312-stub-meta-history.xml > out.txt

What I've found is that the file I would like to be an SQL file (stubmetahistory.sql) ends up being an exact XML copy of the original file (enwiki-20100312-stub-meta-history.xml), while what appears on the screen and is redirected to out.txt is the SQL I am looking for. Any thoughts on what I am doing wrong, or what I am missing here to get this correct? The problem, of course, with just using out.txt to load into my MySQL server is that there could be problems with the character encoding.

Thank you, CMU Researcher 20:37, 19 May 2010 (UTC)

Alternatives
For anyone unfamiliar with Java (such as myself), is there any other program we can use? 70.101.99.64 21:09, 21 July 2010 (UTC)


 * There are a bunch listed here: Manual:Importing XML dumps

Java GC
There seems to be a problem with the garbage collection in mwdumper. On trying to import the Wikipedia 20100130 English dump, containing 19,376,810 pages and 313,797,035 revisions, it aborts after 4,216,269 pages and 196,889,000 revs with this error:

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded at java.util.Arrays.copyOfRange(Arrays.java:3209) at java.lang.String. (String.java:215) at org.mediawiki.importer.XmlDumpReader.bufferContents(Unknown Source) at org.mediawiki.importer.XmlDumpReader.bufferContentsOrNull(Unknown Source) at org.mediawiki.importer.XmlDumpReader.readText(Unknown Source) at org.mediawiki.importer.XmlDumpReader.endElement(Unknown Source) at org.apache.xerces.parsers.AbstractSAXParser.endElement(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanEndElement(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl.parse(Unknown Source) at javax.xml.parsers.SAXParser.parse(SAXParser.java:198) at org.mediawiki.importer.XmlDumpReader.readDump(Unknown Source) at org.mediawiki.dumper.Dumper.main(Unknown Source)

The recommended suggestion (http://forums.sun.com/thread.jspa?threadID=5114529; http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html#par_gc.oom) of turning off this feature is NOT feasible, as the error is thrown when 98% of the total time is spent in GC and under 2% of the heap is recovered. I would appreciate any help or comments.

Some section on building from SVN checkout?
How do you build the JAR from an SVN checkout? I think we should include it on this page. I'm a Java JAR newbie and I couldn't get it to work.


 * In the root folder (the folder with build.xml), type "ant". It should put the new jar file in mwdumper/build/mwdumper.jar.

Editing Code to add tab delimited output
I've had success using mwdumper to dump Wikipedia data into MySQL, but I'd like to do some analysis using Hadoop (Hive or Pig). I will need the Wikipedia data (the revision, page, and text tables) in tab-delimited (or really any other delimited) form to load it into a cluster. How difficult would it be to make those modifications? Could you point out where in the code I should be looking? It would also be nice to be able to filter by table (e.g., have a separate txt output for each table).

Needed to DELETE before running MWDumper
I installed and configured a fresh MediaWiki 1.16.2 install, and found that before I could run MWDumper successfully I had to delete all rows from the page, text, and revision tables (USE wikidb; DELETE FROM page; DELETE FROM text; DELETE FROM revision;). If I didn't do this first I received the error: "ERROR 1062 (23000) at line xxx: Duplicate entry '1' for key 'PRIMARY'". Dcoetzee 10:32, 21 March 2011 (UTC)

FIX: Editing Code to add tab delimited output
I've updated mwdumper with a new dumper class that can be used to export a flat file (tab-delimited). The updated code is at https://github.com/bcollier/mwdumper.

error: 11 million pages in enwiki
I have installed mwdumper and just ran it against an uncompressed version of enwiki-20110901-pages-articles.xml.bz2.

I was expecting ~4 million articles but it processed ~11 million before stopping; however, at that point it was sending all the SQL output to null (oops). Now I have fixed the SQL problem and am using phpMyAdmin to watch mwdumper trundle along writing rows.

mwdumper is on 1,567,000 rows and phpMyAdmin is seeing this:

Table     Rows         Type    Collation  Size

page      ~1,564,895   InnoDB  binary     296.6 MiB
revision  ~1,441,088   InnoDB  binary     590.1 MiB
text      ~10,991,108  InnoDB  binary     10.9 GiB

Should it complete when mwdumper gets to 4 million or when it gets to 11 million?

'''I made an import of enwiki in May 2015. The result was about 17 million imported pages.'''

Error: Duplicate entry '1' for key 'PRIMARY'
If you get ERROR 1062 (23000) at line 35: Duplicate entry '1' for key 'PRIMARY' when restoring a database dump to a new MediaWiki install, it is because mwdumper expects the database to be empty, while the default MediaWiki install contains sample pages. The error can be solved by clearing the database. I did it with this: echo "TRUNCATE TABLE page; TRUNCATE TABLE revision; TRUNCATE TABLE text;" | mysql -u $mediawikiuser -p $mediawikidatabase --91.153.53.216 01:07, 22 October 2011 (UTC)

at line 1: Duplicate entry '0-' for key 'name_title' Bye
Hi, I wanted to import a backup into a MySQL db and got stuck with this error, but SQLDumpSplitter helped me find the exact line. I don't know why, but I get this error importing these two into the MySQL server. If anybody knows the reason, I would be happy to hear it.--Pouyana (talk) 22:00, 21 August 2012 (UTC)


 * I have this same issue when using the direct connection but not when piping through MySQL. It most likely has something to do with character encoding, but I'm not sure how to fix it. Dcoetzee (talk) 01:17, 2 April 2013 (UTC)

Under construction
It's been "very much under construction" since 11 February 2006. Is that still the case, or should it be considered stable yet? Leucosticte (talk) 21:16, 16 September 2012 (UTC)

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 2048
Hello

I get the following error when I try to import enwiki-20130708-pages-articles; it crashes around page 5,800,000: Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 2048 at org.apache.xerces.impl.io.UTF8Reader.read(Unknown Source) at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source) at org.apache.xerces.impl.XMLEntityScanner.scanContent(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanContent(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source) at javax.xml.parsers.SAXParser.parse(SAXParser.java:392) at javax.xml.parsers.SAXParser.parse(SAXParser.java:195) at org.mediawiki.importer.XmlDumpReader.readDump(XmlDumpReader.java:88) at org.mediawiki.dumper.Dumper.main(Dumper.java:142)

I have tested the build from June 26 without any luck.

How can I fix this?

Thanks

Also having the same problem. any help would be deeply appreciated. -- NutzTheRookie (talk) 11:31, 5 September 2013 (UTC)

I also find that: java -server -jar mwdumper-1.16.jar --format=sql:1.5 enwiki-20131202-pages-articles.xml.bz2 crashes soon after 4 510 000 pages with an identical stack trace.


Same problem

4,510,000 pages (5,294.888/sec), 4,510,000 revs (5,294.888/sec) Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 2048 at org.apache.xerces.impl.io.UTF8Reader.read(Unknown Source) at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source) at org.apache.xerces.impl.XMLEntityScanner.scanContent(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanContent(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source) at javax.xml.parsers.SAXParser.parse(SAXParser.java:392) at javax.xml.parsers.SAXParser.parse(SAXParser.java:195) at org.mediawiki.importer.XmlDumpReader.readDump(XmlDumpReader.java:88) at org.mediawiki.dumper.Dumper.main(Dumper.java:142)

This is a Xerces bug, documented at https://issues.apache.org/jira/browse/XERCESJ-1257

The workaround suggested is to use the JVM's UTF-8 reader instead of the Xerces UTF8Reader. I tried this suggested workaround, and it seemed to fix it for me. I made this change:

public void readDump() throws IOException {
    try {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        SAXParser parser = factory.newSAXParser();
        Reader reader = new InputStreamReader(input, "UTF-8");
        InputSource is = new InputSource(reader);
        is.setEncoding("UTF-8");
        parser.parse(is, this);
    } catch (ParserConfigurationException e) {
        throw (IOException)new IOException(e.getMessage()).initCause(e);
    } catch (SAXException e) {
        throw (IOException)new IOException(e.getMessage()).initCause(e);
    }
    writer.close();
}

This goes in src\org\mediawiki\importer\XmlDumpReader.java. Don't forget to add import java.io.*; and import org.xml.sax.InputSource; at the top of the file, to settle the imports.

Importing multiple XML files iteratively
Is there any setting in MWDumper that will allow the import of multiple XML exports iteratively? For example, if I have 100 pages with full history in separate XML files, is there any way to command MWDumper to import all files (full path data) from an external text file (a la wget -i)? Thanks. Wikipositivist (talk) 22:26, 19 November 2013 (UTC)

Any succes importing English wiki XML dump with mwdumper?
Was anyone able to import a recent English Wikipedia XML dump with mwdumper? I tried multiple dumps from the past few months and I'm getting various errors like https://bugzilla.wikimedia.org/show_bug.cgi?id=57236 or https://bugzilla.wikimedia.org/show_bug.cgi?id=24909. Could someone share the last dump that can be imported with mwdumper? Jogers (talk) 21:46, 18 April 2014 (UTC)

How to use MwDumper on my shared host to import Dumps into my wiki?
I'm basically a noob. Can anyone tell me, step by step, how I can import XML dumps into my wiki, which is hosted? I'm not understanding anything from the manual. Thanks.


 * You need to run it on your computer, and it will output a SQL file (you should redirect the output to a file). Then load the SQL file on the database server. Your shared host probably provides some sort of cpanel where you can access PhpMyAdmin to write SQL queries. From PhpMyAdmin there's an option to load an SQL file. --Ciencia Al Poder (talk) 18:57, 26 September 2015 (UTC)

Unable to import xml dump duplicate entry error
I am getting a duplicate entry error while importing the Wikipedia 2008 dump into MediaWiki 1.24.4. I am using one of the methods on the import XML wiki page: a direct Java connection to MySQL via mwdumper. After importing 182,000 pages it fails, saying there is a duplicate entry. 182,000 pages (155.23/sec), 182,000 revs (155.23/sec) Exception in thread "main" java.io.IOException: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '0-?' for key 'name_title' at org.mediawiki.importer.XmlDumpReader.readDump(XmlDumpReader.java:92) at org.mediawiki.dumper.Dumper.main(Dumper.java:142) Caused by: org.xml.sax.SAXException: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '0-?' for key 'name_title' at org.mediawiki.importer.XmlDumpReader.endElement(XmlDumpReader.java:229) at org.apache.xerces.parsers.AbstractSAXParser.endElement(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanEndElement(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source) at javax.xml.parsers.SAXParser.parse(SAXParser.java:392) at javax.xml.parsers.SAXParser.parse(SAXParser.java:195) at org.mediawiki.importer.XmlDumpReader.readDump(XmlDumpReader.java:88) ... 1 more

At the time my charset was utf8. On a troubleshooting page, someone with the same issue said it might be caused by the charset not being set to binary, so I changed it to binary and still got the same error. I am not sure how to proceed to fix this issue. Does anyone know why I am getting this error? --Ksures (talk) 20:38, 30 October 2015 (UTC)

SOLUTION:
When you first create the DB to store the dump, even before running mwdumper, you should specify the charset for that DB: CREATE DATABASE wikidb DEFAULT CHARACTER SET utf8;

java.lang.IllegalArgumentException: Invalid contributor
Doing this on a Debian Linux machine, I get this exception:

2,439 pages (0.595/sec), 55,000 revs (13.42/sec) 2,518 pages (0.614/sec), 56,000 revs (13.656/sec) 2,630 pages (0.641/sec), 57,000 revs (13.891/sec) 2,865 pages (0.698/sec), 58,000 revs (14.13/sec) Exception in thread "main" java.lang.IllegalArgumentException: Invalid contributor at org.mediawiki.importer.XmlDumpReader.closeContributor(Unknown Source) at org.mediawiki.importer.XmlDumpReader.endElement(Unknown Source) at org.apache.xerces.parsers.AbstractSAXParser.endElement(Unknown Source) at org.apache.xerces.parsers.AbstractXMLDocumentParser.emptyElement(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanStartElement(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source) at org.apache.xerces.parsers.XMLParser.parse(Unknown Source) at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source) at org.apache.xerces.jaxp.SAXParserImpl.parse(Unknown Source) at javax.xml.parsers.SAXParser.parse(SAXParser.java:195) at org.mediawiki.importer.XmlDumpReader.readDump(Unknown Source) at org.mediawiki.dumper.Dumper.main(Unknown Source)

Any ideas how to fix this?

Update from December 2015
Where can I download the latest version (jar file) from December 2015? https://dumps.wikimedia.org/tools/ is outdated. Thanks


 * There are no new dumps available AFAIK. But compilation should be straightforward. Install maven and then follow the steps in the "How to build MWDumper from source" section --Ciencia Al Poder (talk) 17:08, 10 February 2016 (UTC)

Cannot import recent dumps
Is mwdumper still considered a recommended way to import the Wikipedia database? I couldn't get it to work at all on English Wikipedia dumps since May 2015.

It always fails with either "MySQLIntegrityConstraintViolationException: Duplicate entry" or "SAXParseException: XML document structures must start and end within the same entity". I was able to get around the first exception by removing the uniqueness constraint on the page title field, but then it fails with the second one later, or with "SAXParseException: The element type "parentid" must be terminated by the matching end-tag".


 * The duplicate error may be because of edits happening while the dump is being generated, so the XML effectively has duplicate entries. That's probably a bug in the XML dump generation. About the mismatched tags, it would be good to see the XML portion where this happens, to check whether this is indeed the problem; or maybe the XML dump you downloaded is cut off somehow... If the XML is not well formed, that's a bug in the XML dump generation, something that should be reported on Phabricator --Ciencia Al Poder (talk) 11:37, 27 March 2016 (UTC)