Manual:Grabbers

This page describes a series of grabber scripts designed to get a wiki's content without direct database access. If you don't have a database dump or access to the database and you need to move or back up a wiki, the MediaWiki API provides access to most of what you need.
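
For example, page text can be fetched through the standard action API; the grabber scripts below wrap requests roughly like the following and write the results into the database. An illustrative query for a page's full revision history, assuming the wiki's endpoint is at /w/api.php:

  $ curl 'https://example.org/w/api.php?action=query&prop=revisions&titles=Main_Page&rvprop=ids|timestamp|user|comment|content&rvlimit=max&format=json'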

Appropriate access on the target wiki is required to get private or deleted data. This document was originally compiled, and the scripts assembled, in order to move Uncyclopedia; because the overall goal was just to get the thing moved, polish was not a priority, and some of the scripts are still kind of a horrible mess.

Stuff to get

If you're moving an entire wiki, these are probably what you need to get. More information on the tables can be found at Manual:Database layout; the secondary tables can be rebuilt from the ones listed here. Otherwise you probably know what you want.

  • Revisions: text, revision, page, page_restrictions, protected_titles, archive (most hosts will provide an XML dump of at least the text, revision, and page tables, which make up the bulk of a wiki)
  • Logs: logging
  • Interwiki: interwiki
  • Files (including deleted) and file data: image, oldimage, filearchive
  • Users: user, user_groups, user_properties, watchlist, ipblocks
  • Other stuff, probably (this is just core; extensions often add their own tables).

Scripts

  • The PHP scripts should be in the code repository.
  • The Python scripts have been added to the repository too.
  • The only Java involved is MWDumper.
  • No Ruby is involved. So far.

PHP scripts

These are maintenance scripts and write their grab straight into the wiki's database; they are run from the command line like any other MediaWiki maintenance script.

The scripts require that you have sufficient privileges to create files in the directory from which you're executing them: curl attempts to write a cookie file to the current directory, and if it can't, the login attempt fails without any other explanation of what's going on.
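
For example (a sketch only: the directory below and the option names in the second invocation are assumptions, so run each script with --help to see what it actually accepts):

  # Run from a directory where curl is allowed to write its cookie file.
  $ cd /var/www/wiki/maintenance/grabbers
  $ php grabText.php --help

  # A typical invocation might look something like this (hypothetical option names):
  $ php grabText.php --url="https://example.org/w/api.php" --username="GrabberAdmin" --password="secret"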

Each entry below lists the script, what it grabs, and any notes.

grabText.php - Page content (live).
  • Separate calls for getting all live revisions or for starting from after a specified revision, though the latter may not actually be implemented.
  • Also lacks proper support for revision deletion and oversight; it skips affected revisions or inserts dummy content instead.
  • After running this, if page revisions include user ids (which they should), you will probably need to set the user table's auto-increment value to above the highest rev_user in the revision table (see the example after this list). Otherwise there may be some weird attribution issues with new accounts.
  • It is highly recommended to import a dump first if you can and just fill in the missing stuff with grabNewText.php instead - revisions are huge and take a long time to download. Use MWDumper for the import (see the example after this list); the importDump maintenance script included with MediaWiki was broken as of 1.20, and probably still is. Missing stuff generally includes deleted revisions (archive), protection information (page_restrictions, protected_titles), and obviously anything that changed since the dump was created.
grabNewText.php - New content, for filling in edits made after a dump was created and imported.
  • This script is horrible and works best if you use it before filling in the secondary tables from the initial dump import. The latest version has been thoroughly tested on MediaWiki 1.25 against a MediaWiki 1.19 remote wiki (25 August 2016).
grabDeletedText.php - Deleted content.
  • Lacks support for revision deletion and oversight, so it probably skips affected revisions (or possibly just dies; don't remember).
grabNamespaceInfo.php - Namespaces (no database tables).
  • Prints out a list to add to LocalSettings.php, because namespace information is not stored in the database.
grabLogs.php - Stuff that shows up on Special:Log.
  • Inserts dummy content for revision-deleted entries.
  • Uses the legacy log_params format, which is not technically correct for 1.19+.
grabInterwikiMap.php - Supported interwiki links; these show up on Special:Interwiki if Extension:Interwiki is installed.
  • Can import either all interwikis or just the interlanguage links, though getting all the interwikis is generally recommended to maintain compatibility.
grabFiles.php - Files and file info, including old versions (descriptions are page content).
  • Use this for a full dump - it imports files directly (so that log entries and file descriptions imported by the other scripts are actually used) and includes old revisions.
grabImages.php - Current file versions, without database info (no database tables).
  • If you only want to download the files off something and don't care about the descriptions or old revisions, use this to download them without affecting the database, and then use the importImages maintenance script that comes with core to import them into the wiki (see the example after this list). Otherwise use grabFiles.php, as that imports files directly as well as downloading them.
grabDeletedFiles.php - Deleted files and file info.
  • Only works if the target wiki uses a known (it assumes the default) deleted-file hashing configuration. If you don't know it, you will need a screen scraper, due to a lack of API support for actually downloading deleted files (at least as of when this was written).
grabUserGroups.php - Groups users belong to.
  • Assumes user ids are, or will be, the same on the source and target wikis; not much can be done about this, because it generally runs before the accounts are actually created.
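
A sketch of the dump-first workflow mentioned under grabText.php, assuming a database named wikidb, a database user wikiuser, and an MWDumper jar named mwdumper.jar (adjust the --format schema version and any table prefix to match your setup):

  # Import an XML dump straight into the database with MWDumper.
  $ java -jar mwdumper.jar --format=sql:1.5 pages_full.xml.bz2 | mysql -u wikiuser -p wikidb

  # After grabbing revisions, bump the user id auto-increment above the highest rev_user.
  $ mysql -u wikiuser -p wikidb -e "SELECT MAX(rev_user) FROM revision;"
  $ mysql -u wikiuser -p wikidb -e "ALTER TABLE user AUTO_INCREMENT = 12346;"  # one above the value reported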
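
Similarly, the grabImages.php route mentioned above, sketched out (the download directory is just an example; importImages.php ships with MediaWiki core):

  # Check grabImages.php's options, then download the current file versions.
  $ php grabImages.php --help
  # Import the downloaded files into the wiki without touching the grabber tables.
  $ php maintenance/importImages.php /path/to/downloaded/images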

Python scripts

  • The Python scripts currently populate the ipblocks, user_groups, page_restrictions, and protected_titles tables.

It's recommended that you use Python 2.7.2 or later. You will need to install oursql and requests.

You need to edit settings.py to set the site you want to import from and your database information.

The easiest way to run everything is just $ python python_all.py, which executes all four individual scripts. You can also run each script individually if you choose (for example, so you can run them concurrently).
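
A typical session might look like this (assuming pip is available for your Python 2 installation; $EDITOR stands in for whatever editor you use):

  $ pip install oursql requests   # required libraries
  $ $EDITOR settings.py           # set the source wiki and the database credentials
  $ python python_all.py          # runs all four scripts; they can also be run one at a time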

Note: Autoblocks will not be imported, since the data about which IP address is actually being blocked is not available.

Extension:MediaWikiAuth

Imports user accounts on login. Note that this requires the site you are copying from to still be online, since it authenticates users against it.

Affects the user, user_properties, and watchlist tables.

  • Uses screen scraping as well as the API, due to incomplete API functionality.
  • Updates user ids in most other tables to match the imported id, though apparently not the userid in log_params for user creation entries.

Other stuff

Not grabbers, but things to potentially worry about.

  • Configuration stuff - groups, namespaces, etc.
  • Extensions
  • Extension stuff - AbuseFilter, AJAXPoll, CheckUser, SocialProfile, and others have their own tables and data
  • Secondary tables - the above grabber scripts generally just fill the primary tables; secondary tables such as category, redirect, site_stats, etc. can be rebuilt using other maintenance scripts included with MediaWiki, usually rebuildall.php (see the example below).
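
For example, from the wiki's root directory (a sketch; rebuildall.php can take a very long time on a large wiki):

  $ php maintenance/rebuildall.php               # refreshes links tables, the search index and recent changes
  $ php maintenance/initSiteStats.php --update   # recalculates site_stats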