Wikimedia Developer Summit/2016/T114019/Minutes

Intro

 * Session name: Dumps 2.0 for realz (planning/architecture)
 * Meeting goal: Make headway on the question: "what should the xml/sql/other dumps infrastructure look like in order to meet current/future user needs, and how can we get there?"
 * Meeting style: Depending on participants, either Problem-solving or Strawman
 * Phabricator task link: https://phabricator.wikimedia.org/T114019

Topics for discussion

 * use cases for the dumps, known and desired
 * where we currently fall short or are expected to fall short in the future
 * an ideal architecture for dumps that would address the main issues would look like... what?

Problems/Desired Features

 * Ariel: would like to get to the list of big things that users lack from dumps, or will find they need soon, or things that are broken about dumps. How would we meet these needs in an ideal world, not bound by the existing code
 * Ariel: Things users dislike (Also listed here - https://phabricator.wikimedia.org/T114019 )
 * requests for different formats (JSON, for example) generally ignored
 * speed of generation, once a month is not enough, partial dumps are not enough.
 * reliability: broken "pretty often" is not fun to hear
 * download speed: pretty slow to download the dumps
 * incremental dumps don't exist
 * Wikidata perspective: Wikidata has its own dumps and it would be nice to have this integrated and have a machine generated index of all the dumps, and a nice user interface on top of all of it


 * Aaron: dumps available in "line-by-line" chunked format so analysis-type processing can be parallelized -> some kind of unique section marker. One use case is insurance against anything bad happening, protecting the corpus. Some data is not available; mediawiki core had to be changed to make some of it available
 * Mark: how does this relate to sanitization for labs?
 * Ariel: not sure, but we want to make sure anything that's not staff-only should be available
 * Mark: don't want to solve this problem twice, sanitization. A wiki farm doing the same dumping process would run into too many problems, so the next implementation of dumps should be friendly to third party use
 * Nuria: is this worth it for other sites that don't have the scale problem?
 * Mark: we should be consistent in our SOA approach, so even if this isn't the highest priority we should make sure it gets done just in case someone else needs it
 * Ariel: agree that we need to have the baseline dump script available for smaller wikis, and simple to use
 * Andrew: sort of a philosophical question about how far do we go to make our stack very easy to use for third parties.
 * Andrew: no XML please, hard to deal with in distributed systems (strings get cut off and the resulting pieces are invalid XML)
 * Ariel: when we talk about improvements we're talking about two areas: one python script that does the dumping, and one mediawiki code base that I (Ariel) am not as familiar with
 * Ariel: three layers of jobs, hideous and organic, doesn't account for any new dumps (wikidata, html), would love to chuck the code base (both python and mw code)

Thoughts on Architecture

 * Ariel: let's talk architecture
 * Marko: Event Bus could be a good solution, formats just become a matter of adaptors that dump in different ways
 * Dan: snapshot vs. full stream of changes
 * Ariel: I want to get the status at any arbitrary moment in time.
 * debating stream vs. snapshot
 * Dan: hybrid approach of dumping DB state for certain things (like categories), dumping the full event stream for other things (like revisions)
 * Andrew: would take a long time to recompute the state at an arbitrary period in time, might not be worth it for large wikis
 * Ariel: it'd be interesting to see those new streams, but I'd want to see some use cases before we invest effort
 * Nuria: can we do a prototype of doing dumps off of Event Bus?
 * Andrew: what takes a long time?
 * Andrew: do you have dedicated slaves that generate these dumps?
 * Ariel: yes, we have a separate cluster
 * Mark: this logic was created in 2006, and back then it made sense, but now it could be faster / better to just dump the full db every time
 * Andrew: thinks we should look into stopping the slaves and dumping
 * Ariel: revisions get deleted, so we have to re-generate full history sometimes
 * Mark: right, otherwise we could use a git style system
 * Mark: what's consuming the time
 * Ariel: file processing (reading the old dumps, uncompressing / re-compressing)
 * Mark: new compressions considered?
 * Ariel: yes, but we're not getting an order of magnitude better
 * How much time does querying the database take?
 * Ariel: we only query in a few cases, so it's not a bottleneck
 * For Wikibase, we just iterate over the database and read content from external storage. It takes about 11 hours to dump all current revisions of Wikidata; a lot of that is waiting for external storage, about 70% of it in the snapshot service.
 * Wikibase also produces bz2 dumps, which took hours to create, so we use pbzip2, though some libraries can't read pbzip2 output. Compression takes 4 or 5 hours
 * Nuria: many different projects
 * dumps getting slower and slower, performance
 * incremental dumps are separate from that, and more like a user requested feature
 * Andrew: why is the dump job not distributed
 * Ariel: don't want to complicate and add even more layers to a complicated system
 * manually figure out which jobs to run and babysit problems like PHP memory leaks
 * would love to just dump things in a centralized place that farms out the work to a distributed systems
 * Marko: Event Bus can be used to distribute the jobs to available consumers
 * Ariel: my plan: take 100,000 pages, send them off to run with a time limit of max 1 hour; if the job comes back having only done 20,000 pages, split the remainder up, throw the pieces back on the queue and run again
 * Nuria: seems like a logistics problem, because we'll have to spend resources working on a new solution and maintaining the old system
 * Andrew: 15 min. warning
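Ariel's splitting plan above can be sketched in a few lines. This is a hypothetical illustration; the queue shape and batch arithmetic are assumptions, not from the actual dumps code:

```python
from collections import deque

def requeue_unfinished(queue, start_page, batch_size, pages_done):
    """If a batch hit its time limit partway through, split the unfinished
    remainder in half and push both halves back onto the queue."""
    remaining_start = start_page + pages_done
    remaining = batch_size - pages_done
    if remaining <= 0:
        return  # batch completed, nothing to requeue
    half = max(remaining // 2, 1)
    queue.append((remaining_start, half))
    if remaining - half > 0:
        queue.append((remaining_start + half, remaining - half))

queue = deque()
# A 100,000-page batch came back with only 20,000 pages done:
requeue_unfinished(queue, start_page=1, batch_size=100000, pages_done=20000)
```

Each requeued piece is half the size of the failed remainder, so repeatedly slow ranges shrink until they fit inside the time limit.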

Next steps

 * Ariel: I want to walk out with a list of things we should investigate further, possible solutions, people interested in following up
 * Possible Ideas:
 * Event Bus for the incremental part of the dumps
 * Change Propagation system to coordinate the jobs
 * HTML dumps to use the html directly instead of asking mediawiki to render it
 * Ariel: would like to have a system where someone just sets up a little puppet configuration, and that's all that's needed to run a specific dump job
 * Mark: think there should be consideration about what is the actual format we want to dump. HTML and raw event dumps are different enough that transformations need to be thought of ahead of time (more so than XML to JSON)
 * Dan: idea: use heuristics to regenerate data only when it makes sense. For example, if most revisions are deleted within 5 days, don't do daily regeneration of revisions that are more than 5 days old. Those would still change, but we can regenerate them monthly and still keep the level of vandalism in the dumps nominally low
 * Mark: if we had rolling regeneration and de-coupled the streams of raw data from the output, we could have flexibility to generate different outputs and they would all be as clean as possible
 * Andrew: should look at hadoop because this problem matches the map-reduce paradigm pretty well
 * Nuria: maybe celery's easier as an interim solution, since Hadoop doesn't talk to mediawiki easily
 * (?) be able to view the state of a wiki page at that moment in time (not just wikitext but how the templates would expand, what category it's in, etc), what would be needed, are there use cases?
 * Post-meeting discussion: keep pages plus their revision content as single items in HDFS, incremental update means adding pages, removing pages? Regenerating means rolling these up into small downloadable files?
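Dan's age-based regeneration heuristic could be sketched as follows; the 5-day window and the function shape are illustrative assumptions, not a decided design:

```python
from datetime import datetime, timedelta

def needs_daily_regen(revision_ts, now, window_days=5):
    """Revisions newer than the window get regenerated daily; older ones
    fall back to a monthly pass, keeping vandalism in the dumps low."""
    return now - revision_ts <= timedelta(days=window_days)

now = datetime(2016, 1, 10)
recent = needs_daily_regen(datetime(2016, 1, 8), now)   # 2 days old
old = needs_daily_regen(datetime(2015, 12, 1), now)     # over a month old
```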

Jaime discussion notes

 * incremental dumps for sql tables: for some of these tables we could keep track of the last row dumped and then dump everything inserted afterwards.
 * some tables are insert only, these in particular can be easily managed for incrementals
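A minimal sketch of that high-water-mark idea for insert-only tables, using sqlite as a stand-in for MySQL so the example is self-contained; the table and column names are hypothetical, not real MediaWiki schema:

```python
import sqlite3

def dump_incremental(conn, table, pk_col, last_dumped_pk):
    """Return rows inserted since the previous dump, plus the new
    high-water mark to record for next time."""
    cur = conn.execute(
        "SELECT * FROM %s WHERE %s > ? ORDER BY %s" % (table, pk_col, pk_col),
        (last_dumped_pk,))
    rows = cur.fetchall()
    new_mark = rows[-1][0] if rows else last_dumped_pk  # pk is first column
    return rows, new_mark

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logging (log_id INTEGER PRIMARY KEY, action TEXT)")
conn.executemany("INSERT INTO logging VALUES (?, ?)",
                 [(1, "create"), (2, "edit"), (3, "delete")])
rows, mark = dump_incremental(conn, "logging", "log_id", last_dumped_pk=1)
```

This only works for tables that are never updated or deleted in place, which is exactly why the insert-only tables are called out as the easy case.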


 * private data and dumping only certain columns based on values of flags in other columns (or even fields in other tables)
 * would it make sense for performance to not rely on MW managing the decision about which data is public? do we want to rely on the views we use for replication to labs?
 * want a more formalized process for changes to schema, where every item in every table is annotated by the developer as private, public, or depends on certain flag values.
 * formalize the db schema change workflow. See: https://wikitech.wikimedia.org/wiki/Schema_changes This will not cover cases where the use of a field is changed; look at past changes for examples.


 * note that not all changes to the dbs are done via MW; some happen directly, out of band, and we need to track those.
 * note that TokuDB + mysqldump of all tables = huge speed win; takes 12 hours. Use this as a basis for filtering out private data? But there is still external storage (ES). Use the "labs view"
 * https://git.wikimedia.org/summary/operations%2Fsoftware%2Fredactatron : this is the production-side filtering, done with replication filters + triggers. Maintained by production DBAs (Jaime)
 * https://git.wikimedia.org/tree/operations%2Fsoftware.git/master/maintain-replicas: Maintained by labs (was Coren) Views restricting access to certain fields based on flags in other fields, or based on user privileges
 * if we had a db with only public data in it, how useful would that be? we could export it as full sql dbs and provide the data in that format. so bypass mwbzutils conversion tools!
 * sort revision content by primary key order and retrieve it that way; might be much faster, up to ten times, because mysql insert order means close physical proximity on disk, so reading one item pulls in its neighbors too
 * jaime has a mysql wrapper abstracting the lb class, for use by monitoring and db admin tools, in development; eventually we want to remove the dependence on MW so this would be a separate library. currently opens a new connection for every request (this needs to become optional)

Hoo discussion notes

 * wants an api for the user to find out which dumps are in progress or complete, one-click download of the latest dump of a wiki, etc.
 * desired: a way to capture data from history that is not revision/page metadata and would only be present in a full dump from that specific date (e.g. category changes; we don't have a history of those)

Adam Wight discussion notes
Coming soon

Gabriel Wicke discussion notes
The main needs seem to be to speed up the dump process by doing incrementals and to allow third parties to incrementally update their dumps.

For HTML dumps we resolved this by using compressed sqlite dbs. There are lots of tools for sqlite; the only issue is concurrency, but since updates take so little time you can do without it.

Current revisions for the main namespace of en wikipedia are 200GB uncompressed, 27.4GB with xz compression. It takes less than two hours to walk through all page titles and revisions, retrieve information via the api, and compare with the current sqlite db contents. Note that we uncompress the db first, run this, then recompress, but even so it's not that expensive. The db is currently keyed by page title instead of page id, so page renames are a bit of a drag. For large dbs (with revision history) we would want to divide the db into several smaller ones, probably by page id range. Storing each page's revisions together would also mean better compression.

We were looking into brotli compression, not enough support for it yet. Have a look at benchmarks: https://quixdb.github.io/squash-benchmark/

So once you have a full dump, an incremental works like this: uncompress the full db, get a working copy, get the changes for the page id range, apply them, recompress, and atomically move the result into place. If it fails you just get a new copy of the db and uncompress again; with many many shards this is fast. This assumes you get a feed of relevant events. There is no authentication for the events feed until we move to kafka 0.9; you use a kafka consumer, and there's a python library for that.
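The incremental flow above could be sketched like this. It is only an illustration: gzip stands in for the real compression, and the change format and file layout are assumptions:

```python
import gzip, os, shutil, sqlite3, tempfile

def apply_incremental(shard_gz, changes):
    """changes: iterable of (page_title, html) pairs for one shard."""
    workdir = tempfile.mkdtemp()
    db_path = os.path.join(workdir, "shard.sqlite")
    with gzip.open(shard_gz, "rb") as src, open(db_path, "wb") as dst:
        shutil.copyfileobj(src, dst)                  # uncompress full shard
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS pages "
                 "(title TEXT PRIMARY KEY, html TEXT)")
    conn.executemany("INSERT OR REPLACE INTO pages VALUES (?, ?)", changes)
    conn.commit()
    conn.close()
    tmp_gz = shard_gz + ".tmp"
    with open(db_path, "rb") as src, gzip.open(tmp_gz, "wb") as dst:
        shutil.copyfileobj(src, dst)                  # recompress
    os.replace(tmp_gz, shard_gz)                      # atomic move into place
    shutil.rmtree(workdir)

shard = os.path.join(tempfile.mkdtemp(), "pages-p1p1000.sqlite.gz")
with gzip.open(shard, "wb"):
    pass                                              # start from an empty shard
apply_incremental(shard, [("Main_Page", "<html>...</html>")])
```

The atomic rename at the end is what makes failure recovery simple: a crashed run leaves the old shard untouched, so you just start over with a fresh uncompress.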

Eventbus stuff:
 * See https://grafana-admin.wikimedia.org/dashboard/db/eventbus for what's happening.
 * https://phabricator.wikimedia.org/T114443 this is the tracking bug for our eventbus deployment
 * https://phabricator.wikimedia.org/T116247 discussion about the schema. soon there will be user related events like user suppression.
 * https://github.com/wikimedia/mediawiki-event-schemas/tree/master/jsonschema/mediawiki and the schema itself.
 * MW extension for this .... https://github.com/wikimedia/mediawiki-extensions-EventBus

Misc notes:
 * if people download dumps in a new format, how do we support people with xml-based tools? write a converter
 * not all fields in xml dumps are yet in the sql dbs, but adding additional fields is relatively easy
 * most folks have import step from xml to db or something else. some people also stream it, most researchers don't stream
 * note that db format allows parallel access right away (but you do have to uncompress first)

Supporting OpenZIM production:
 * zim format is good for random access into a file, has lzma compression
 * a while ago the openzim / kiwix folks switched to restbase; they want the html dumps instead of hitting the restbase api for everything
 * they need hardware to really get their job done; maybe share our server for the html dumps? or just rent a server somewhere; they have been doing this themselves so far
 * funding is covered right now; they are mainly interested in co-maintaining and being integrated into the normal dump process somehow
 * there's a ticket open about isolating services from each other on the network, which might help, but this is very manual right now; mark and faidon are worried about doing it for that reason: https://phabricator.wikimedia.org/T121240

For sql table dumps, maybe leave as is; if we need incrementals, we could consider providing a script that reads sqlite and writes each row to mysql (or generally $DB)
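Such a converter script might look roughly like this. sqlite stands in for both ends so the sketch is self-contained; a real MySQL target would use a different DB-API driver and `%s` placeholders:

```python
import sqlite3

def copy_table(src_conn, dst_conn, table):
    """Read every row of `table` from the source connection and insert it
    into the destination, creating the table there if needed."""
    cur = src_conn.execute("SELECT * FROM %s" % table)
    cols = [d[0] for d in cur.description]
    dst_conn.execute("CREATE TABLE IF NOT EXISTS %s (%s)"
                     % (table, ", ".join(cols)))
    placeholders = ", ".join("?" for _ in cols)
    dst_conn.executemany("INSERT INTO %s VALUES (%s)" % (table, placeholders),
                         cur.fetchall())
    dst_conn.commit()

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE page (page_id INTEGER, page_title TEXT)")
src.executemany("INSERT INTO page VALUES (?, ?)",
                [(1, "Main_Page"), (2, "Sandbox")])
dst = sqlite3.connect(":memory:")
copy_table(src, dst, "page")
```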

We might allow users to apply changes from eventbus themselves (we had talked about daily event logs for download with a tool to apply them). Most of those API requests (for recent changes) can be cached, so they shouldn't cause terribly high load on our backend infrastructure; varnish would cache and serve them

Milimetric notes
The whole team participated in a discussion so these notes are a summary of that discussion.

EventBus discussion
Concern was raised about a recent site outage related to MW failing to send events when the kafka server was down. This turned out to be an hhvm bug in the fsockopen implementation: when you try to connect to a dead host, it does not respect the timeout you pass it. Ticket: https://phabricator.wikimedia.org/T125084 So under normal conditions we should not see MW app servers backed up waiting to write events. We still have the problem, though, of how not to miss events from MW.

For MW, an event is 'committed' when the sql transaction completes, but the event may then fail to be emitted. One possibility: have a process comb through the dbs periodically looking for events that were not emitted; this would require a db schema change to add a field indicating that an action had its corresponding event emitted. Another, less likely option: have a process read the binlog and feed the entries to a script that generates the appropriate events.
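The "comb" process could take roughly this shape; the `rev_event_emitted` flag and the `emit()` callback are hypothetical additions for illustration, not existing MediaWiki schema:

```python
import sqlite3

def comb_unemitted(conn, emit):
    """Find rows whose event was never emitted, re-emit them, and mark
    them as emitted. Returns the number of events re-sent."""
    rows = conn.execute(
        "SELECT rev_id FROM revision WHERE rev_event_emitted = 0").fetchall()
    for (rev_id,) in rows:
        emit(rev_id)  # e.g. produce a revision-create event to the bus
        conn.execute("UPDATE revision SET rev_event_emitted = 1 "
                     "WHERE rev_id = ?", (rev_id,))
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revision "
             "(rev_id INTEGER PRIMARY KEY, rev_event_emitted INTEGER)")
conn.executemany("INSERT INTO revision VALUES (?, ?)",
                 [(1, 1), (2, 0), (3, 0)])
sent = []
count = comb_unemitted(conn, sent.append)
```

Marking the row only after a successful emit means the worst case is a duplicate event, which downstream consumers would need to deduplicate; that trade-off is what makes this at-least-once rather than exactly-once.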

The binlog approach is potentially hard because we have to translate from that format to get actual clean events. But it might be a good way if used in addition to MW emitting events, because events produced from these two sources could be mutually compared and cleansed.

kafka mysql binlog libs:
 * https://github.com/mardambey/mypipe
 * https://github.com/zendesk/maxwell
 * https://github.com/pyr/recordbus

Question: how well do the binlog producers replicate events? How many events do they miss? What if they're put under stress (kill -9, memory constraints, etc.)? Existing event stream, improvements pending: https://phabricator.wikimedia.org/T124741

Users might want only a subset of the dumps, e.g. a particular wiki project; do we want to apply filtering to events somehow, or is the api just 'fast enough'? We always have to provide full history dumps, though, because of the right to fork. Depending on the answers to the above, look into: storing pre-computed partial dumps (trading storage and computation for bandwidth, and potentially helping researchers); having a daily file (sanitized, reformatted) of events for users to grab and apply, and for processing incrementals, instead of having to process events in real time; converter scripts that turn our canonical format into the formats users want (maybe we can get them to write those scripts!)

Output formats: Aaron wanted a line-by-line format where each revision is its own atomic object (record, whatever) so it can be dealt with independently; json event-level data would be good, since you could partition it into folders and import it into hive. Note that we can't just do a straight-up grouping of revisions by page id, because some pages have too many revisions to deal with. We do, however, want to connect the revisions by page somehow and order them.
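The line-by-line format Aaron asks for might look like this; the field names and the month-based partitioning are illustrative assumptions, not a fixed schema:

```python
import json, os, tempfile

def write_revisions_jsonl(revisions, out_root):
    """Write each revision as its own JSON line, partitioned into
    per-month folders so e.g. Hive can import them directly."""
    for rev in revisions:
        month = rev["timestamp"][:7]                  # e.g. "2016-01"
        part_dir = os.path.join(out_root, month)
        os.makedirs(part_dir, exist_ok=True)
        with open(os.path.join(part_dir, "revisions.jsonl"), "a") as f:
            f.write(json.dumps(rev, sort_keys=True) + "\n")

out = tempfile.mkdtemp()
write_revisions_jsonl(
    [{"rev_id": 1, "page_id": 10, "timestamp": "2016-01-05T00:00:00Z"},
     {"rev_id": 2, "page_id": 10, "timestamp": "2016-02-01T12:30:00Z"}],
    out)
```

Because each line is a complete object, a consumer can split the file anywhere on a newline and still get valid records, which is exactly the parallelization property asked for earlier in these notes.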

A weird problem: sometimes we receive events that are a month old or even older. Why? How far back can they go? What happens when we receive these events, and how would we process them?

Question: what is the nature of the events that we need to store? Do we really get revision events from over a year in the past? Knowing this will help us decide on an incremental file storage scheme. Without knowing it, we might have to store files containing revisions in two separate directory hierarchies, so we can access both page-level and time-based information (the following is not a suggestion, just an illustration!). If we do know the nature of the events, we may be able to be more efficient:
 * <>/<>/<>_revision_metadata.json
 * <>/<>/<>_revision_metadata.json
 * <>/<>/<>/<>/revision_text.wiki

we want to have the text in one canonical location and then have multiple "pointers" (like indices) to it; otherwise we run the very real risk of the text copies getting out of sync, and then we're screwed. This also lets us avoid very expensive sorting operations. Timestamp-based? One wrinkle in how revisions are applied: imports of pages plus their revisions from other wikis, and subsequent merges, can really screw the pooch (just a thing to keep in mind). Note that undeleting a page now finally keeps the old page id; that change was recently merged.

Otto links for eventbus info

 * Schemas:
 * https://github.com/wikimedia/mediawiki-event-schemas


 * EventLogging:
 * https://github.com/wikimedia/eventlogging

service:
 * https://github.com/wikimedia/eventlogging/blob/master/bin/eventlogging-service
 * https://github.com/wikimedia/eventlogging/blob/master/eventlogging/service.py

Action items with owners

 * Create a project and get everyone on it for further discussion (Ariel)
 * identify other action items from discussion on project, coordinate further discussion (Ariel)
 * discussion items UNRANKED:
 * Can the work that other teams are doing on EventBus for MW events be used for incremental dumps?
 * Addresses: user desire for incrementals; speed issues with generation of the dumps
 * Consider having the slave cluster frozen while dumps across all wikis take place, for consistency reasons: how long would this take to run? How long before the slaves catch up? How many slaves would be needed? Would this be reasonable performance-wise?
 * Addresses: a dump of a wiki is currently inconsistent; the page table as dumped will not correspond to the page metadata produced in the 'stubs' xml file, etc.
 * How should job submission and management across multiple nodes be implemented? Is celery the best approach (vs. hadoop, standalone spark, other)? How could eventbus be used, e.g. for the job queue?
 * Addresses: speed of dumps generation (expansion across multiple nodes), reliability (rerun small jobs that fail automatically), unified infrastructure for all dumps produced (one job management system for all dumps, stored in the same location, etc)
 * How do we store dump output so that rolling regeneration is fast? Where in the dump generation do we want to apply compression for user-downloadable files? What about a raw internal format? How big would dump "items" be when stored in raw format: a single page with its revisions, or more? What storage system would handle this well?
 * Addresses: speed of dumps generation, smaller download files for users, faster uncompression for users, usability of dumps by researchers/analysts
 * How can we break up page content dumps into small pieces (e.g. each runs for an hour or less), without manual intervention?
 * Addresses: maintainability of dumps
 * What is the minimum that a new dumps producer would have to submit (configs? job wrapper + script + how to split up/reassemble jobs?) to get a new dump type into the system?
 * Addresses: unified infrastructure for all dump types; ease of adding new dumps and new formats
 * How can we speed up the wikidata entity dumps? Currently takes 11 hours plus 4-5 hours of recompression (or is it 11 hours total?) for entities' current revisions only.
 * Addresses: reliability of dumps (don't have long-running jobs)
 * How can we best provide partial dumps of tables with sensitive information? Can we piggyback off of the work done to provide 'sanitized' views of the dbs for labs?
 * Addresses: user requests for dumps of enough data that the 'right to fork' is not an empty right
 * How much of our dumps infrastructure must be easily usable by third parties? Would a small script suffice for one or a few wikis? (Loosely agreed: DEFER FOR NOW)
 * How can the new system get developed in a timely way without maintenance and support of the current system being adversely impacted?
 * How can we reasonably minimize the probability of dumps containing data that should have been deleted (and will be soon), whether spam or personal info that should not have been added?

This list of questions for further discussion will go on a phab ticket in the new project once it has been created; all people in the session are encouraged to edit/add/delete questions, rank them, etc.

DON’T FORGET: When the meeting is over, copy any relevant notes (especially areas of agreement or disagreement, useful proposals, and action items) into the Phabricator task.

See https://www.mediawiki.org/wiki/Wikimedia_Developer_Summit_2016/Session_checklist for more details.