MediaWiki is a great tool for collaborative document writing; however, it doesn't necessarily give you your finished document in a format suitable for use outside of a wiki context. This page explores the best ways of extracting MediaWiki content in a form suitable for publishing through other media.
Nowadays, non-digital formats are normally created from digital source materials, so this question largely boils down to 'what formats can I extract my data in'.
Types of content you may want to extract
There are generally four types of data that you may wish to publish from MediaWiki:
- Individual pages
- Collections of pages
- Individual media files (e.g. images)
- Collections of media files
In the case of the latter two, the files themselves will not normally have been created collaboratively on the wiki, though the wiki may have been used to collate them from various sources. Manipulating the files outside of MediaWiki is likely to give you the best results, whatever other medium you plan to publish in. Where an individual image/file is required, simply go to the file's description page and download the original from there. Where you want to download multiple files, follow the instructions on exporting all the files of a wiki, but filter the file list so it contains only the files you want.
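For the multiple-files case, one possible command-line approach (a sketch, assuming shell access to the server hosting the wiki and a Unix-like environment) is to combine the dumpUploads.php maintenance script with tar:

```shell
# Sketch: bundle all of a wiki's uploaded files into a tar archive.
# dumpUploads.php prints one path per uploaded file; depending on your
# MediaWiki version these may be mwstore:// URIs that need rewriting to
# real filesystem paths (e.g. under images/) before tar can read them.
cd /path/to/wiki            # placeholder: your wiki's root directory
php maintenance/dumpUploads.php \
  | tar -cf wiki-files.tar --files-from=-
```

To export only some of the files, filter the list (for example with grep) before piping it to tar.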
The rest of this page therefore focuses on the first two items: individual pages and collections of pages.
Built-in methods of exporting data via the interface
- You can export the HTML content of a page by appending ?action=render to the URL. This outputs just the rendered HTML content of the page, without any of the MediaWiki skin elements. Note that the result is not a valid HTML page but a page fragment, and it does not include any CSS styling.
- You can export one or more pages using Special:Export. This will give you the raw wikitext wrapped up in an XML structure. You will need to do further processing in order for this output to be useful.
- You can also extract page content (wikitext or rendered HTML) using the API, for example via its parse or query modules.
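Each of the interface methods above can also be driven from a script. A minimal sketch using curl, in which the base URL https://example.org/w is a placeholder for your wiki's script path and Main_Page a placeholder page title:

```shell
# Rendered HTML fragment of a page (no skin elements, no CSS):
curl 'https://example.org/w/index.php?title=Main_Page&action=render' \
  > Main_Page.html

# Raw wikitext wrapped in Special:Export XML (curonly=1 skips old revisions):
curl 'https://example.org/w/index.php?title=Special:Export&pages=Main_Page&curonly=1' \
  > Main_Page.xml

# Parsed HTML via the Action API's parse module, returned as JSON:
curl 'https://example.org/w/api.php?action=parse&page=Main_Page&prop=text&format=json' \
  > Main_Page.json
```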
Built-in methods of exporting data via the command-line
- /maintenance/dumpHTML.php allows you to publish the whole wiki as static HTML. This was removed from core in MediaWiki 1.12, and is now available as a separate extension instead.
- /maintenance/getText.php allows you to get the wikitext of a specific page.
- As a hack, the following command will output a page's HTML (run it in your maintenance directory, and replace Main_Page with the page you want):
echo '$a = new ApiMain( new FauxRequest( array( "action" => "parse", "page" => "Main_Page", "prop" => "text" ) ) ); $a->execute(); $d = $a->getResultData(); echo $d["parse"]["text"]["*"];' | php eval.php
- The above could be replaced by a proper maintenance script if there is demand (similar to getText.php for page text).
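Put together, the command-line methods above look something like this (a sketch: run from the wiki's root directory, with Main_Page as a placeholder title; dumpBackup.php is the command-line counterpart of Special:Export):

```shell
cd /path/to/wiki             # placeholder: your wiki's root directory

# Raw wikitext of a single page:
php maintenance/getText.php "Main_Page" > Main_Page.wikitext

# All pages (current revisions only) as Special:Export-style XML:
php maintenance/dumpBackup.php --current > wiki-pages.xml
```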
Extensions to help with exporting data
This list is not by any means exhaustive, nor should it be considered a recommendation to use any of these extensions. It is more a pointer to some extensions that may be worth investigating further.
- Extension:DumpHTML allows you to publish the whole wiki as static HTML.
- There are various extensions which you can install that allow exporting of individual pages as PDF files.
- Extension:EPubExport allows export in ePub format for e-readers.
- Extension:Collection allows you to publish individual pages or collections of pages in a number of formats.
- Extension:OpenDocument Export exports in ODF format.
- Category:Output extensions also lists some further options.
- Category:Data extraction extensions is currently a bit of a mixed bag, but contains some useful items not already covered by the above.