Help:Export

Wiki pages can be exported in a special XML format, which can then be imported into another MediaWiki installation (if this function is enabled on the destination wiki, and the user is a sysop there) or used elsewhere, for instance for analysing the content. See also Syndication feeds for exporting information other than pages, and Help:Import on importing pages.

How to export
There are at least four ways to export pages:


 * Paste the names of the articles into the box on Special:Export, or fetch a single page directly via Special:Export/Page_name.
 * The backup script dumpBackup.php dumps all the wiki's pages into an XML file. dumpBackup.php only works on MediaWiki 1.5 or newer, and you need direct access to the server to run it. Dumps of Wikimedia projects are (more or less) regularly made available at http://download.wikipedia.org.
 * Note: you might need to configure AdminSettings.php in order to run dumpBackup.php successfully. See http://meta.wikimedia.org/wiki/MediaWiki for more information.
 * There is an OAI-PMH interface for regularly fetching pages that have been modified since a specific time. For Wikimedia projects this interface is not publicly available. OAI-PMH wraps the exported articles in its own container format.
 * Use the Python Wikipedia Robot Framework. This won't be explained here.
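The first route, Special:Export, can also be scripted. Below is a minimal sketch of how a client might build the HTTP request; the script path and the `pages`/`curonly` parameter names are the standard MediaWiki ones, but verify them against your own wiki before relying on this.

```python
# Sketch: building a POST request for Special:Export.
# Special:Export accepts a newline-separated list of page titles in
# the "pages" parameter; "curonly" restricts output to the current
# revision. The base URL here is an example - substitute your wiki's.
from urllib.parse import urlencode

def build_export_request(base_url, page_names, full_history=False):
    """Return (url, form_data) for a POST to Special:Export."""
    params = {
        "pages": "\n".join(page_names),
        "action": "submit",
    }
    if not full_history:
        params["curonly"] = "1"  # current revision only
    return base_url + "/index.php?title=Special:Export", urlencode(params)

url, data = build_export_request(
    "http://en.wikipedia.org/w", ["Help:Contents", "Help:Editing"]
)
```

POSTing `data` to `url` with any HTTP client then returns the XML document described under "Export format" below.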

By default only the current version of a page is included. Optionally you can get all versions with date, time, user name and edit summary.

Additionally, you can copy the SQL database. This is how dumps of the database were made available before MediaWiki 1.5, and it won't be explained here further.

Using 'Special:Export'
To export all pages of a namespace, for example:

1. Get the names of pages to export

 * 1) Go to Special:Allpages and choose the desired namespace.
 * 2) Copy the list of page names to a text editor
 * 3) Put all page names on separate lines
 * 4) You can achieve that relatively quickly if you paste the names into, say, MS Word (use Paste Special as unformatted text), then open the Replace dialog (CTRL+H), enter ^t in "Find what" and ^p in "Replace with", and click Replace All.
 * 5) Prefix the namespace to the page names (e.g. 'Help:Contents'), unless the selected namespace is the main namespace.
 * 6) Repeat the steps above for other namespaces (e.g. Category:, Template:, etc.)
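The tab-to-newline conversion from step 4 can equally be done with a few lines of script. A sketch, assuming the copied names are tab-separated as they typically are when pasted from a browser:

```python
# Sketch: the same tab-to-newline conversion as the MS Word trick.
def one_name_per_line(clipboard_text):
    """Replace tabs with newlines and drop empty entries."""
    names = clipboard_text.replace("\t", "\n").split("\n")
    return "\n".join(n.strip() for n in names if n.strip())

print(one_name_per_line("Help:Contents\tHelp:Editing\tHelp:Links"))
```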

Alternatively, a quick approach for those with access to a machine with Python installed:
 * 1) Go to Special:Allpages and choose the desired namespace.
 * 2) Save the entire webpage as index.php.htm
 * 3) Run export_all_helper.py in the same directory as the saved file.
 * 4) Save the page names output by the script.
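The contents of export_all_helper.py are not reproduced here, but a hypothetical equivalent is easy to sketch: scan the saved Allpages HTML for page links. The `href="/wiki/Page_name"` link format assumed below is the default Wikimedia layout and may differ on your wiki.

```python
# Sketch: extracting page titles from a saved Special:Allpages page,
# along the lines of what a helper like export_all_helper.py might do.
from html.parser import HTMLParser

class AllpagesTitles(HTMLParser):
    """Collect page titles from links of the form href="/wiki/Page_name"."""
    def __init__(self):
        super().__init__()
        self.titles = []

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href", "")
        if tag == "a" and href.startswith("/wiki/"):
            self.titles.append(href[len("/wiki/"):].replace("_", " "))

# Feed the contents of the saved index.php.htm here instead of this sample.
sample = '<li><a href="/wiki/Help:Contents">Help:Contents</a></li>'
parser = AllpagesTitles()
parser.feed(sample)
```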

2. Perform the export
 * Go to Special:Export and paste all your page names into the textbox, making sure there are no empty lines.
 * Click 'Submit query'.
 * Save the resulting XML to a file using your browser's save facility.
 * Open the XML file in a text editor. Scroll to the bottom to check for error messages.

Now you can use this XML file to perform an import.
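Before importing, a quick programmatic sanity check can complement the text-editor inspection above. A sketch using Python's standard XML library; the namespace URI matches the export-0.3 format described below:

```python
# Sketch: verify the exported file parses and list the page titles.
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.3/}"

def list_exported_titles(xml_text):
    """Parse the export XML and return the titles of all pages."""
    root = ET.fromstring(xml_text)
    return [page.find(NS + "title").text for page in root.iter(NS + "page")]

# In practice, read the text from your saved export file.
sample = """<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.3/">
  <page><title>Help:Contents</title></page>
</mediawiki>"""
print(list_exported_titles(sample))
```

If the file is truncated or contains an error message, `ET.fromstring` will raise a parse error instead of returning silently.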

Exporting the full history
A checkbox in the Special:Export interface selects whether to export the full history (all versions of an article) or the most recent version of articles. A maximum of 100 revisions are returned; other revisions can be requested as detailed in MW:Parameters to Special:Export.

Export format
The format of the XML file you receive is the same in all cases. It is codified in an XML Schema at http://www.mediawiki.org/xml/export-0.3.xsd. This format is not intended for viewing in a web browser, though some browsers show pretty-printed XML with "+" and "-" links to expand or collapse parts of it. Alternatively, the XML source can be viewed using the browser's "view source" feature, or, after saving the XML file locally, with a program of your choice. Reading the XML source directly, it is not difficult to find the actual wikitext. If you don't use a special XML editor, "<" and ">" appear as &lt; and &gt; to avoid a conflict with XML tags; to avoid ambiguity, "&" is coded as &amp;.

In the current version the export format does not contain an XML replacement of wiki markup (see Wikipedia DTD for an older proposal). You only get the wikitext, just as you would when editing the article.

Example
<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.3/">
  <page>
    <title>Page title</title>
    <restrictions>edit=sysop:move=sysop</restrictions>
    <revision>
      <timestamp>2001-01-15T13:15:00Z</timestamp>
      <contributor><username>Foobar</username></contributor>
      <comment>I have just one thing to say!</comment>
      <text>A bunch of text here.</text>
    </revision>
    <revision>
      <timestamp>2001-01-15T13:10:27Z</timestamp>
      <contributor><ip>10.0.0.2</ip></contributor>
      <comment>new!</comment>
      <text>An earlier revision.</text>
    </revision>
  </page>
  <page>
    <title>Talk:Page title</title>
    <revision>
      <timestamp>2001-01-15T14:03:00Z</timestamp>
      <contributor><ip>10.0.0.2</ip></contributor>
      <comment>hey</comment>
      <text>WHYD YOU LOCK PAGE??!!! i was editing that jerk</text>
    </revision>
  </page>
</mediawiki>

DTD
Here is an unofficial, short Document Type Definition version of the format. If you don't know what a DTD is just ignore it.

<!ELEMENT mediawiki (siteinfo,page*)>

<!ATTLIST mediawiki
  version CDATA #REQUIRED
  xmlns CDATA #FIXED "http://www.mediawiki.org/xml/export-0.3/"
  xmlns:xsi CDATA #FIXED "http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation CDATA #FIXED
    "http://www.mediawiki.org/xml/export-0.3/ http://www.mediawiki.org/xml/export-0.3.xsd"
>
<!ELEMENT siteinfo (sitename,base,generator,case,namespaces)>
<!ELEMENT sitename (#PCDATA)>
<!ELEMENT base (#PCDATA)>
<!ELEMENT generator (#PCDATA)>
<!ELEMENT case (#PCDATA)>
<!ELEMENT namespaces (namespace+)>
<!ELEMENT namespace (#PCDATA)>
<!ATTLIST namespace key CDATA #REQUIRED>
<!ELEMENT page (title,id?,restrictions?,(revision|upload)*)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT id (#PCDATA)>
<!ELEMENT restrictions (#PCDATA)>
<!ELEMENT revision (id?,timestamp,contributor,minor?,comment,text)>
<!ELEMENT timestamp (#PCDATA)>
<!ELEMENT minor EMPTY>
<!ELEMENT comment (#PCDATA)>
<!ELEMENT text (#PCDATA)>
<!ATTLIST text xml:space CDATA #FIXED "preserve">
<!ELEMENT contributor ((username,id) | ip)>
<!ELEMENT username (#PCDATA)>
<!ELEMENT ip (#PCDATA)>
<!ELEMENT upload (timestamp,contributor,comment?,filename,src,size)>
<!ELEMENT filename (#PCDATA)>
<!ELEMENT src (#PCDATA)>
<!ELEMENT size (#PCDATA)>

Processing XML export
There are undoubtedly many tools which can process the exported XML. If you process a large number of pages (for instance a whole dump), you probably won't be able to hold the document in main memory, so you will need a parser based on SAX or other event-driven methods.
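A minimal sketch of the event-driven approach, using Python's standard SAX module (the element names match the DTD above; namespaces are ignored here for brevity):

```python
# Sketch: counting pages and collecting titles from a dump with a
# SAX parser, so the whole document never sits in memory at once.
import xml.sax

class PageCounter(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.pages = 0
        self.in_title = False
        self.titles = []

    def startElement(self, name, attrs):
        if name == "page":
            self.pages += 1
        self.in_title = (name == "title")

    def characters(self, content):
        if self.in_title:
            self.titles.append(content)

    def endElement(self, name):
        self.in_title = False

# For a real dump, use xml.sax.parse("dump.xml", handler) instead.
handler = PageCounter()
xml.sax.parseString(
    b"<mediawiki><page><title>A</title></page>"
    b"<page><title>B</title></page></mediawiki>",
    handler,
)
```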

You can also just use regular expressions to directly process parts of the XML code. This may be faster than other methods, but it is not recommended because it's difficult to maintain.
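For illustration, a regex version of the title extraction; note how it silently depends on the exact markup, which is why the parser-based approach is preferable:

```python
# Sketch: extracting titles with a regular expression. Fast, but
# fragile - it breaks if attributes or whitespace appear in the tag.
import re

xml_text = "<page><title>Help:Contents</title></page>"
titles = re.findall(r"<title>(.*?)</title>", xml_text)
```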

Please list methods and tools for processing XML export here:


 * Parse::MediaWikiDump is a Perl module for processing the XML dump file.
 * Processing MediaWiki XML with STX - Stream based XML transformation
 * The IBM History flow project can read it after applying a small Python program, export-historyflow-expand.py.

Details and practical advice
 * To determine the namespace of a page, match its title against the prefixes defined in /mediawiki/siteinfo/namespaces/namespace.
 * Possible restrictions are:
 * sysop (protected pages)
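The prefix matching can be sketched as follows. The prefix-to-key mapping below is a hand-written example using the standard MediaWiki keys; in practice you would build it from the <namespace> elements of the dump's siteinfo.

```python
# Sketch: determining a page's namespace by matching the title prefix
# against the namespaces declared in /mediawiki/siteinfo/namespaces.
namespaces = {"Talk": 1, "Help": 12, "Category": 14}  # example subset

def namespace_key(title):
    """Return the namespace key for a title, 0 for the main namespace."""
    prefix, _, rest = title.partition(":")
    if rest and prefix in namespaces:
        return namespaces[prefix]
    return 0  # no known prefix: main namespace

print(namespace_key("Help:Contents"))
```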