Incremental dumps

Name and contact information

 * Name: Petr Onderka
 * Email: gsvick@gmail.com
 * IRC or IM networks/handle(s):
   * Jabber: gsvick@gmail.com
 * Location: Prague, Czech Republic
 * Typical working hours: 15:00–22:00 CEST (13:00–20:00 UTC)

Synopsis
Mailing list thread

Currently, creating a database dump of the larger Wikimedia sites takes a very long time, because each dump is always built from scratch. Creating a new dump based on the previous one could be much faster, but that is not feasible with the current XML format. This project proposes a new binary format for the dumps that allows efficient modification, and thus makes it possible to build each new dump on top of the previous one.

Another benefit is that this format will allow seeking, so a user can directly access the data they are interested in. A similar format will also be created for incremental dumps, which contain only the changes since the last dump was made and can be applied to a previously downloaded dump.
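To illustrate the seeking idea, here is a minimal sketch in C++. It is not the actual proposed format; the layout (length-prefixed page payloads plus an in-memory index of page id to byte offset) and the names `write_pages`/`read_page` are hypothetical, chosen only to show how an index lets a reader jump straight to one page without scanning the whole file:

```cpp
// Hypothetical sketch of a seekable dump: page payloads are written
// sequentially as (length, bytes) records, and an index maps each
// page id to the byte offset of its record.
#include <cassert>
#include <cstdint>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Write pages and return an index mapping page id -> byte offset.
std::map<uint32_t, std::streamoff> write_pages(
    std::ostream& out,
    const std::vector<std::pair<uint32_t, std::string>>& pages) {
    std::map<uint32_t, std::streamoff> index;
    for (const auto& p : pages) {
        index[p.first] = out.tellp();
        uint32_t len = static_cast<uint32_t>(p.second.size());
        out.write(reinterpret_cast<const char*>(&len), sizeof(len));
        out.write(p.second.data(), len);
    }
    return index;
}

// Seek directly to one page via the index; no scan of other pages.
std::string read_page(std::istream& in,
                      const std::map<uint32_t, std::streamoff>& index,
                      uint32_t page_id) {
    in.seekg(index.at(page_id));
    uint32_t len = 0;
    in.read(reinterpret_cast<char*>(&len), sizeof(len));
    std::string text(len, '\0');
    in.read(&text[0], len);
    return text;
}
```

In a real dump the index itself would of course live inside the file (and page text would be compressed), but the access pattern, seek to an offset and read one record, is the same.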

Deliverables
I will start working right after my final exam on 27 June.

Required by midterm

 * script for creating dumps in the new format
   * using basic compression
   * includes full-history dumps, current-version dumps and “stub” dumps (which contain metadata only)
 * library that lets users read the dumps

Detailed timeline

 * 28 June – 30 June
   * create a proposal of the file format
   * set up my working environment
 * 1 July – 7 July
   * write code for creating and updating a stub current dump based on a range of revisions (getting the required information from PHP, saving it in C/C++)
 * 8 July – 14 July
   * add the other dump types (with history, with page text, with both); the text of each page will be compressed separately, using a general-purpose algorithm such as LZMA (the algorithm behind 7z)
 * 15 July – 21 July
   * handle articles and revisions that were deleted or undeleted
 * 22 July – 28 July
   * write a library for reading the dumps
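The stub-update step in the timeline above can be sketched as follows. This is a hypothetical data model, not the final format: `RevisionMeta` and `apply_revisions` are illustrative names, and the point is only that updating a “current” stub from a range of revisions means keeping, per page, the newest revision seen so far:

```cpp
// Sketch of updating a stub current dump: metadata only, one newest
// revision per page. The struct and function names are hypothetical.
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct RevisionMeta {
    uint32_t page_id;
    uint32_t rev_id;
    std::string timestamp;  // e.g. "2013-07-01T12:00:00Z"
};

// Apply a batch of revisions (as fetched via the PHP side) to the stub:
// for each page, keep only the revision with the highest id.
void apply_revisions(std::map<uint32_t, RevisionMeta>& stub,
                     const std::vector<RevisionMeta>& batch) {
    for (const auto& rev : batch) {
        auto it = stub.find(rev.page_id);
        if (it == stub.end() || rev.rev_id > it->second.rev_id)
            stub[rev.page_id] = rev;
    }
}
```

Running the same function over a later range of revisions updates the stub in place, which is exactly what makes the incremental approach cheaper than rebuilding from scratch.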

Required by final deadline

 * script for creating dumps
   * using smarter compression techniques for better space efficiency
   * will also create incremental dumps (for all three dump types) in a similar format, containing only the changes since the last dump
 * user script for applying an incremental dump to a previous dump
 * the file format of the dumps will be fully documented
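Conceptually, applying an incremental dump looks like the sketch below. The representation is hypothetical (the real incremental format is one of this project's deliverables): the incremental file is modeled as a list of changes, and replaying them over the previous dump's pages yields the new dump:

```cpp
// Hypothetical model of an incremental dump: a list of changes since
// the previous dump, applied in order to the old page set.
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

enum class ChangeType { AddOrUpdate, Delete };

struct Change {
    ChangeType type;
    uint32_t page_id;
    std::string text;  // empty for deletions
};

// Mutate the previous dump's pages in place according to the changes.
void apply_incremental(std::map<uint32_t, std::string>& pages,
                       const std::vector<Change>& changes) {
    for (const auto& c : changes) {
        if (c.type == ChangeType::AddOrUpdate)
            pages[c.page_id] = c.text;
        else
            pages.erase(c.page_id);
    }
}
```

The user script would do this at the file level rather than in memory, but the contract is the same: old dump plus incremental dump equals new dump.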

Optional

 * user library for downloading only required parts of a dump (and only if they changed since the last download)
 * script to convert from the new format to the old format
 * SQL dumps?

About you
For quite some time now, I've been interested in working with Wikipedia data, usually to create some sort of report out of it. Over time, I have accessed this data in pretty much every form: XML dumps, SQL dumps, the API, and direct SQL queries against the database on the Toolserver. I haven't done much in this area lately (partly due to work and school), but the interest is still there.

I am also interested in compression and file formats in general, specifically things like the format of Git's pack files or Protocol Buffers. Though this has been more of a passive interest, since I hadn't found anywhere I could use it. Until now.

I'm also among the top users on Stack Overflow for the mediawiki and wikipedia tags.

Participation
If it's clear what I'm supposed to do (or what I want to do), I tend to work alone. For example, in my last GSoC (for another project), where the interface was set in stone, I didn't communicate with my mentor (or the community) much. This project is much more open-ended though, so I plan to talk with my mentor more (and to a lesser degree, the community).

I will certainly publish my changes to a public Git repository at least daily. If I get access to my own branch in the official repo, I will push my changes there; otherwise, I will use GitHub. I already have some experience with Wikimedia's Gerrit, though I don't expect to use it much in this project.

Past open source experience
Lots of my work is open source in name only: I have published the code, but no one else has ever worked on it.

But for my bachelor's thesis, I needed to extend the MediaWiki API, so I did just that. While working on the thesis, I also noticed some bugs in the API and fixed them.

Last summer, I participated in GSoC with Mono. The goal was to finish the implementation of TPL Dataflow, a concurrency library, which I did successfully.