Extension:OAIRepository

About (from the README)
This is an extension to MediaWiki to provide an OAI-PMH repository interface by which page updates can be snarfed in a relatively sane fashion to a mirror site.

OAI-PMH protocol specs: http://www.openarchives.org/OAI/openarchivesprotocol.html
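As a concrete illustration of the protocol, a harvester asks the repository for records with plain HTTP GET requests. A minimal sketch, assuming the extension's usual Special:OAIRepository endpoint and a mediawiki metadataPrefix (both assumptions; check your installation), with hypothetical host and credentials:

```shell
# Build a ListRecords request for all updates since a given time.
# Special:OAIRepository and metadataPrefix=mediawiki are assumptions here.
base="http://example.org/w/index.php?title=Special:OAIRepository"
url="${base}&verb=ListRecords&metadataPrefix=mediawiki&from=2008-05-01T00:00:00Z"
echo "$url"
# Then fetch it, authenticating with a login from the oaiuser table:
# curl -u thename:thepassword "$url"
```

The verb, metadataPrefix, and from arguments are standard OAI-PMH request parameters from the protocol spec linked above.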

A harvester script forms the client half. Apply oaiharvest_table.sql on the client side to allow saving a checkpoint record; this ensures consistent update ordering.

At the moment this script is quite experimental; it may not implement the whole spec yet, and hooks for actually updating may not be complete.

The extension adds an 'updates' table which associates last-edit timestamps with cur_id values. A separate table is used so it can also hold entries for cur rows which have been deleted, allowing deletions to be explicitly reported to a harvester even if it comes back after quite a while.

Clients will get only the latest current update; this does not include complete old page entries by design, as basic mirrors generally don't need to maintain that extra stuff.

As of May 19, 2008, the updater will attempt to update the links tables on edits, and can fetch uploaded image files automatically.

(Uploads must be enabled locally with $wgEnableUploads = true; or no files will be fetched. Image table records will be updated either way.)

Settings
''From the talk page... This comes from the CommonSettings.php (similar to LocalSettings.php in most MediaWiki installations) on actual Wikimedia servers.''

Add to LocalSettings.php:

MySQL part
I did this from the command line, so bear with me and/or adapt to the graphical version. It is assumed here that you know the MySQL root password.

mysql wikidb -uroot -p < update_table.sql
 * Replace /*$wgDBprefix*/ in update_table.sql with the actual value of the prefix (which is set in LocalSettings.php).
 * update_table.sql is applied to the wiki database (replace wikidb with your wiki database name if necessary). NOTE: this will take a significant amount of time on rather large wikis.
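The placeholder substitution can be scripted with sed. A sketch, assuming a table prefix of mw_ (substitute your own, or an empty string if you use none); the sample file here only demonstrates the pattern, in practice run sed -i on the real update_table.sql:

```shell
# Create a one-line sample containing the placeholder, then substitute.
printf 'CREATE TABLE /*$wgDBprefix*/updates (...);\n' > update_table_sample.sql
sed 's|/\*\$wgDBprefix\*/|mw_|g' update_table_sample.sql
# prints: CREATE TABLE mw_updates (...);
```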

mysql oai -uroot -p < oaiaudit_table.sql
mysql oai -uroot -p < oaiharvest_table.sql
mysql oai -uroot -p < oaiuser_table.sql
 * oaiuser_table.sql, oaiharvest_table.sql, and oaiaudit_table.sql are applied to a separate OAI database, to which the wiki database user must have access.
 * If you want everything in the same database, follow the EITHER option below; otherwise follow the OR option.
 * EITHER change the following in LocalSettings.php:
 * OR create a separate database for the OAI info.
 * Log in to MySQL: mysql -uroot -p
 * Once inside, create the oai database and give your "wiki" user (the login used in your LocalSettings.php for MySQL connections) all rights on it:

CREATE DATABASE oai;
GRANT ALL PRIVILEGES ON oai.* TO 'wikiuser'@'localhost';
FLUSH PRIVILEGES;
exit
 * Go into the remaining .sql files and replace each instance of /*$wgDBprefix*/ with the table prefix that is set in your LocalSettings.php.
 * Create the tables by feeding the commands to MySQL (where "oai" is the database you are putting the data into and "root" is your MySQL user):

echo "INSERT INTO /*$wgDBprefix*/oaiuser(ou_name, ou_password_hash) VALUES ('thename', md5('thepassword') );" | mysql oai -uroot -p
 * To be able to log in to the OAIRepository, you will have to add a login to the oaiuser table. These credentials don't need to be the same as your wiki's database username and password, but you will need to know them in the next section, where you have to add them to lsearch.conf (again, remember to replace /*$wgDBprefix*/ with the table prefix for your wiki).
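If you prefer to compute the password hash outside of MySQL, md5sum produces the same value as MySQL's md5(). A sketch, where 'thename' and 'thepassword' are the placeholders from the command above:

```shell
# Hash the password exactly as MySQL's md5() would (no trailing newline),
# then assemble the INSERT statement that can be piped into mysql.
hash=$(printf '%s' 'thepassword' | md5sum | cut -d' ' -f1)
echo "INSERT INTO /*\$wgDBprefix*/oaiuser(ou_name, ou_password_hash) VALUES ('thename', '$hash');"
```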

Install on Lucene-search server

 * 1) Create a new MySQL database, e.g. lsearch, and make sure it is a UTF-8 database. It is needed to store the article ranking data; this data is normally recalculated by the importer at each import. This can be done by issuing the MySQL command: CREATE DATABASE lsearch DEFAULT CHARACTER SET utf8;
 * 2) Set up the Storage section in the local configuration (lsearch.conf). These should be the username/password and administrative username/password for accessing the databases.
 * This warning is only for sites which use master/slave replication: if you use the load balancing provided by the "Storage.slaves" option, you will need to make sure that the lsearch database created above is also replicated as part of your master/slave replication. This can be done by adding another line to the my.cnf on your master and slave servers. The slave should have a new line which says replicate-do-db=lsearch and the master should have a new line which says binlog-do-db=lsearch. Do not just add the database name to any existing [whatever]-do-db lines; each database should have its own line.
 * 3) Supply the username/password in the Log, ganglia, localization section as the OAI.username and OAI.password settings. This should be the username/password you created above in the step where you inserted into the oaiuser table.
 * 4) Rebuild the article rank data. You can put it on a cron job once a week, or once a month (article ranks typically change very slowly): php maintenance/dumpBackup.php --current --quiet > wikidb.xml && java -Xmx2048m -cp LuceneSearch.jar org.wikimedia.lsearch.ranks.RankBuilder wikidb.xml wikidb   The "-Xmx2048m" is optional and should only be used if you have 2 GB of RAM to devote to the loading. If you don't include this setting at all, you will likely run out of heap space during the update. If you don't have as much RAM to devote, just put in a smaller number of megabytes instead of 2048.
 * 5) Create the initial version of the index; you can do this using the importer described on the Lucene-search server page.
 * 6) Set up the OAI repository for the incremental updater: in the global config (lsearch-global.conf), set up a mapping of dbname : host, and in the local settings supply the username/password in OAI.username/OAI.password if any.
 * 7) Set up the [OAI] section in the global config (lsearch-global.conf) accordingly.
 * 8) Start the incremental updater with: java -Xmx1024m -cp LuceneSearch.jar org.wikimedia.lsearch.oai.IncrementalUpdater -n -d -s 600 -dt start_time wikidb   The parameters are:
 ** -n - wait for notification from the indexer that articles have been successfully added
 ** -d - daemonize, i.e. run updates in an infinite loop
 ** -s 600 - after one round of updates, sleep 10 minutes (600 s)
 ** -dt timestamp - default timestamp (e.g. 2007-06-17T15:00:00Z). This is the timestamp of your initial index build. You need to pass this parameter the first time you start the incremental updater, so it knows from what time to start the updates. Afterward the incremental updater will keep the timestamp of the last successful update in indexes/status/wikidb.
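Putting the OAI credential and mapping settings together, the relevant configuration fragments might look like the sketch below. Only OAI.username, OAI.password, and Storage.slaves are option names taken from this page; the exact layout and the URL form of the dbname : host mapping are assumptions, so check the lsearch.conf and lsearch-global.conf templates that ship with Lucene-search:

```
# lsearch.conf (local configuration) -- credentials from the oaiuser table:
OAI.username=thename
OAI.password=thepassword

# lsearch-global.conf -- hypothetical sketch of the dbname : host mapping:
[OAI]
wikidb : http://example.org/w/index.php
```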

An alternative to the ranking-related steps above (creating the lsearch database, setting up the Storage section, and rebuilding the rank data) is not to use ranking at all. You can do this by passing the --no-ranks parameter to the incremental updater, and it won't try to fetch ranks from the MySQL database. If your wiki is small and has a few hundred pages, you probably don't need any ranking. But if you have or plan to have hundreds of thousands of pages, you will definitely benefit from ranking data.

The above only sets up incremental updates to the index. To instruct the indexer to make a snapshot of the index periodically (snapshots are picked up by the searchers), put this into your cron job: curl http://indexerhost:8321/makeSnapshots
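For example, a crontab entry that takes a snapshot every two hours; the schedule is arbitrary, and indexerhost:8321 is the indexer address from the command above:

```
0 */2 * * * curl -s http://indexerhost:8321/makeSnapshots
```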

The indexer has an HTTP command interface; other commands include getStatus, flushAll, etc.