Manual talk:Importing XML dumps

From MediaWiki.org

link tables?

(regarding mwdumper import) I want to avoid the expensive rebuildall.php script. Looking at download:enwiki/20080724/, I'm wondering - should we import ALL of the SQL dump files, or are there any that should be skipped? --JaGa 00:50, 23 August 2008 (UTC)

OK, I went through maintenance/tables.sql and compared what importDump.php populates with what mwdumper populates (only the page, revision, and text tables), so I'm thinking this is the list of SQL dumps I'll want after mwdumper finishes:
  • category
  • categorylinks
  • externallinks
  • imagelinks
  • pagelinks
  • redirect
  • templatelinks
Thoughts? --JaGa 07:04, 24 August 2008 (UTC)
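For anyone following along, the list above can be fed to mysql in one loop. This is only a sketch: the file names follow the enwiki-20080724 naming scheme, and "wikiuser"/"wikidb" are placeholder credentials.

```shell
# Sketch: load each remaining link-table dump with mysql once mwdumper
# has populated page/revision/text. File names follow the enwiki-20080724
# naming scheme; "wikiuser" and "wikidb" are placeholders.
for t in category categorylinks externallinks imagelinks pagelinks redirect templatelinks; do
  f="enwiki-20080724-$t.sql.gz"
  [ -f "$f" ] || { echo "skipping $t (no such file here)"; continue; }
  gzip -dc "$f" | mysql -u wikiuser -p wikidb
done
```

If these imports succeed, the expensive link-table rebuild via rebuildall.php should be unnecessary, since the tables arrive prebuilt.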

When I try to import using this command: C:\Program Files\xampp\htdocs\mediawiki-1.13.2\maintenance>"C:\Program Files\xampp\php\php.exe" importDump.php C:\Users\Matthew\Downloads\enwiki-20080524-pages-articles.xml.bz2

It fails with this error: XML import parse failure at line 1, col 1 (byte 0; "BZh91AY&SYö┌║O☺Ä"): Empty document

What do you think is wrong?
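The "BZh91AY&SY" bytes in the error are the bzip2 magic header: importDump.php is reading the compressed stream as if it were XML, because this version only decompresses .gz files on the fly. A sketch of the usual workaround is to decompress outside PHP and stream the plain XML into the script (demonstrated on a tiny stand-in file; substitute the enwiki dump from the question):

```shell
# importDump.php also reads XML from stdin, so decompress outside PHP:
#   bzip2 -dc enwiki-20080524-pages-articles.xml.bz2 | php importDump.php
# Demonstration of the streaming step with a tiny stand-in dump:
printf '<mediawiki><page /></mediawiki>' > dump.xml
bzip2 -f dump.xml                 # replaces dump.xml with dump.xml.bz2
bzip2 -dc dump.xml.bz2            # streams the plain XML back out
```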

table prefix

I have a set of wikis, each with a different table prefix. How do I tell importDump.php which wiki to use?

Set $wgDBprefix in LocalSettings.php —Emufarmers(T|C) 11:10, 25 February 2009 (UTC)

Importing multiple dumps into same database?

If we try to import multiple dumps into the same database, what happens?

Will it work this way?

For example, if there are two articles with the same title in both databases, what will happen?

Is it possible to import both of them into the same database and distinguish titles with prefixes?

Merging with an existing wiki

How do I merge the dumps with another wiki I've created without overwriting existing pages/articles?

.bz2 files decompressed automatically by importDump.php?

It seems only .gz files, not .bz2, are decompressed on the fly. --Apoc2400 22:40, 18 June 2009 (UTC)

Filed as bug 19289. —Emufarmers(T|C) 05:15, 19 June 2009 (UTC)

Add

    if ( preg_match( '/\.bz2$/', $filename ) ) {
        // let PHP's compress.bzip2 stream wrapper decompress transparently
        $filename = 'compress.bzip2://' . $filename;
    }

to the importFromFile function.

Having trouble with importing XML dumps into database

I have been trying to import one of the latest dumps, pages-articles.xml.bz2 from download:enwiki/20090604/. I don't want the front end and the other things that come with a MediaWiki installation, so I thought I would just create the database and load the dump into it. I tried using mwdumper, but it breaks with the error described in bugzilla:18328. I also tried using mwimport, which failed due to the same problem. Does anyone have suggestions for importing the dump successfully into the database?

Thanks Srini

Error Importing XML Files

A colleague exported the Wikipedia help contents and ran into an error when attempting to import them. One of the errors had to do with Template:Seealso. The XML that is produced has a <redirect /> tag which causes the Import.php module to error out. If I remove that line from the XML, it imports just fine. We are using 1.14.0. Any thoughts?

I am using 1.15 and get the following errors:

Warning: xml_parse() [function.xml-parse]: Unable to call handler in_() in /home/content/*/h/s/*hscentral/html/w/includes/Import.php on line 437

Warning: xml_parse() [function.xml-parse]: Unable to call handler out_() in /home/content/*/h/s/*hscentral/html/w/includes/Import.php on line 437

By analyzing which entries kill the script, I found that it is protected redirects: these errors come when a page has both a <redirect /> and a <restrictions></restrictions> line. Manually removing the restrictions line makes it work. I get these errors both from importDump.php and in my browser window on Special:Import when there is a protected redirect in the file. 76.244.158.243 02:55, 30 September 2009 (UTC)
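If replacing Import.php is not an option, the offending lines can also be stripped from the dump before importing. A workaround sketch with sed, assuming the self-closing <redirect /> form and single-line <restrictions> elements described above (back the dump up first; shown here on a small stand-in fragment):

```shell
# Stand-in fragment with the two elements the old Import.php chokes on:
printf '<page>\n  <redirect />\n  <restrictions>sysop</restrictions>\n  <title>T</title>\n</page>\n' > dump.xml
# Delete the <redirect /> and <restrictions> lines; .bak keeps a backup.
sed -i.bak -e '/<redirect \/>/d' -e '/<restrictions>/d' dump.xml
cat dump.xml
```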

Simply download the updated Import.php from here: http://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3/includes/Import.php?view=co and replace the original file in the /includes directory. Works fine!

The Import.php linked above doesn't work; tested under Ubuntu 12:

PHP Fatal error:  Call to undefined function wfRandomString() in /usr/share/mediawiki/includes/Import.php on line 832


xml2sql has the same problem:
xml2sql-0.5/xml2sql -mv commonswiki-latest-pages-articles.xml 
unexpected element <redirect>
xml2sql-0.5/xml2sql: parsing aborted at line 10785 pos 16.

212.55.212.99 12:22, 13 February 2010 (UTC)

Error message

The error message I get is "Import failed: Loss of session data. Please try again." Ikip 02:50, 27 December 2009 (UTC)

Fix: I got this error while trying to upload a 10 MB file. After cutting it down into 3.5 MB pieces, each individual file received a "The file is bigger than the allowed upload size." error message. 1.8 MB files worked, though. --bhandy 19:24, 16 March 2011 (UTC)

THANK YOU! This was driving me mad! LOL But your fix worked. ;) Zasurus 13:00, 5 September 2011 (UTC)

Another Fix: Put the following into your .htaccess file (adjust these figures according to the size of your dump file):

php_value upload_max_filesize 20M
php_value post_max_size 20M
php_value max_execution_time 200
php_value max_input_time 200

Another Fix: Set upload_max_filesize = 20M in php.ini

Does NOT allow importing of modified data on my installation

If I export a dump of the current version using dumpBackup.php --current, then make changes to that dumped file, then attempt to import the changed file back into the system using importDump.php, NONE of the changes come through, even after running rebuildall.php.

Running MW 1.15.1, SemanticMediaWiki 1.4.3.

Am I doing something wrong, or is there a serious bug that I need to report? --Fungiblename 14:09, 13 April 2010 (UTC)

And for the necro-bump.... yes, I was doing something wrong.

For anyone else who has run into this problem, you need to delete revision IDs from your XML page dumps if you want to re-import the XML after modifying it. Sorry for not posting this earlier, but this issue was addressed almost instantly as invalid in response to an admittedly invalid bug report that I filed on Bugzilla in 2010: This is exactly how it's supposed to work to keep you from overwriting revisions via XML imports. --Fungiblename 07:57, 21 September 2011 (UTC)
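To save the next person some editing: in dumpBackup.php output the revision id is the line immediately after each <revision> tag, so GNU sed can delete exactly that line while leaving the page and contributor <id> elements alone. A sketch, demonstrated on a stand-in fragment:

```shell
# Stand-in fragment: one page id and one revision id.
printf '<page>\n<id>1</id>\n<revision>\n<id>5</id>\n<text>x</text>\n</revision>\n</page>\n' > dump.xml
# GNU sed: on each <revision> line, read the next line (n) and delete it
# if it is an <id> element -- i.e. drop only the revision ids.
sed '/<revision>/{n;/<id>/d;}' dump.xml > dump-noids.xml
cat dump-noids.xml
```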

Error message: PHP Warning: Parameter 3 to parseForum

Two errors:

PHP Deprecated:  Comments starting with '#' are deprecated in /etc/php5/cli/conf.d/imagick.ini on line 1 in Unknown on line 0


PHP Warning:  Parameter 3 to parseForum() expected to be a reference, value given in /home/t/public_html/deadwiki.com/public/includes/parser/Parser.php on line 3243

100 (30.59 pages/sec 118.68 revs/sec)

Adamtheclown 05:11, 30 November 2010 (UTC)

XML that does NOT come from a wiki dump

Can this feature be used on an xml file that was not created as, or by, a wiki dump? I am looking for a way to import a lot of text documents at once, that can be wikified later. Your advice, wisdom, insight, etc, greatly appreciated.

No. XML is a structure, not a format, so the MediaWiki XML importer only accepts MediaWiki XML dumps or similarly formatted XML.

Altered Display Titles

Hi, I am using MediaWiki 1.18.2 and I have been experimenting with the XML Import process.

I have noticed that the title actually displayed on each page altered by the Import process appears as "Special:Import" until the page is edited and saved again. I assume this is supposed to indicate that the page was edited by the Import process, but it can be very confusing to less knowledgeable users, and it also means the page RDF links produced by the Semantic MediaWiki process are incorrectly rendered.

I have noticed a similar cosmetic change of the display name after other forms of mass edits, such as the MassEditRegex extension, so I assume this is probably a core MediaWiki process, but I have not been able to find any information about this issue.

I would love to be able to turn this feature off, or perhaps at least be able to hide it for certain groups of users, any help would be greatly appreciated.

Thanks Jpadfield (talk) 11:41, 5 April 2012 (UTC)

No Page error

When I try to import a template XML file from Wikipedia, I receive an error message that says "No page available to import." Any ideas why it won't find the XML file, and what workarounds are there? 12.232.253.2 15:56, 26 April 2012 (UTC)

First thing to check is that the XML file actually has any <page> nodes within. --98.210.170.91 18:49, 5 May 2012 (UTC)
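A quick way to make that check from the command line (dump.xml is a placeholder name; a count of 0 explains the "No page available to import" message):

```shell
# Stand-in export with no <page> nodes, as in the report above:
printf '<mediawiki>\n  <siteinfo />\n</mediawiki>\n' > dump.xml
# Count the pages; 0 means there is nothing for Special:Import to do.
grep -c '<page>' dump.xml || echo "no pages to import"
```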

Manual error -- importImages.php?

Why are there examples referencing importImages.php on the manual page for importDump? Looks like a cut-and-paste error to me.

Can't open file error

For some reason I suddenly keep getting this error in my MW 1.18. I did not have this problem in the past; I import XML files almost every month. I tried importing a small file of a single page I exported from the same wiki, but the problem persists. Obviously the files have no permission problems.

Any idea what could be the cause of this? I'm using PHP 5.2.17, on IIS windows 2008 r2. Thank you Osishkin (talk) 23:17, 19 August 2012 (UTC)

XML Imported pages don't seem to show up in search?

I imported several hundred pages through an XML import (via Special:Import), and none of these pages appear in the search or auto-suggest when I start typing them into the search box. I tried checking whether this was somehow a queued job (it wasn't), as well as creating new pages afterwards to check whether there was some kind of lag before new pages appear.

It seems like imported pages somehow don't get recognized as part of Mediawiki's search or auto-suggest. I specifically created/imported these pages intending to use them as simplified pages that people could see come up in search or auto-suggest, yet it seems that somehow they are not indexed?

Any help would be greatly appreciated.

I found an answer. After manually importing pages, they aren't necessarily added to the search index. I believe you have to run two scripts: updateSearchIndex.php and rebuildtextindex.php.
However, you need to specify a few parameters for updateSearchIndex.php, such as a starting date. The command below worked for me:
"php maintenance/updateSearchIndex.php -s 20081020224040"
I was only interested in getting the page titles to come up as searchable in the auto-suggest, so I think updateSearchIndex.php did the trick for me. The date used is an arbitrary date before my import; if you have an older wiki you may need to adjust it.

Importing pages with question marks in the title

It would seem that when one imports pages with question marks in the title, and then navigates to those pages, one gets: "The requested page title was invalid, empty, or an incorrectly linked inter-language or inter-wiki title. It may contain one or more characters which cannot be used in titles." See also the comment at w:Template talk:?, "If you get the Bad Title page with the following text it means that you tried to enter the url 'yoursite.com/wiki/Template:?' instead of searching for Template:? then clicking on 'create page'". As an example, see http://rationalwikiwikiwiki.org/wiki/RationalWikiWiki:What_is_going_on_at_RationalWiki%3F , which resulted from importing http://rationalwikiwiki.org/wiki/RationalWikiWiki:What_is_going_on_at_RationalWiki%3F . Leucosticte (talk) 19:31, 2 September 2012 (UTC)

Remove a node from a dump after import

Is there an "easy" way to edit Import.php to remove an XML node after it's been properly imported?

Right now my server is parsing the whole en dump, and every time I restart it has to read through everything it has already imported. Even at ~200 revs/sec, it'd take days of non-stop running just to read to the end of the file, let alone import it, so I wanted to try to delete everything from the compressed dump on the fly as it's imported. Any ideas?

69.179.85.49 21:40, 7 March 2013 (UTC)