User:Brion VIBBER/Compacting the revision table round 2

Overview
Per ongoing discussion in ArchCom and at WikiDev17 about performance, future requirements, and future-proofing for table size, it's proposed to do a major overhaul of the revision table, combining the following improvements:


 * Normalization of frequently duplicated data into separate tables, reducing the duplicated strings to integer keys
 * Separation of content-specific metadata from general revision metadata, to support multi-content revisions (MCR): storing multiple content blobs per revision
 * General reduction in revision table size, which will make future schema changes easier
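The normalization idea above can be sketched in miniature. This is an illustrative sqlite sketch, not the proposed schema -- the `user_entry`/`revision` column names here are made up for the example -- but it shows how duplicated strings collapse into one side-table row plus integer keys:

```python
import sqlite3

# Duplicated strings (here, user names) move into a side table and the
# revision row keeps only an integer key. Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_entry (
    ue_id   INTEGER PRIMARY KEY,
    ue_text TEXT UNIQUE NOT NULL
);
CREATE TABLE revision (
    rev_id   INTEGER PRIMARY KEY,
    rev_user INTEGER NOT NULL REFERENCES user_entry(ue_id)
);
""")

def intern_user(name):
    """Return the integer key for a user name, inserting it on first use."""
    conn.execute("INSERT OR IGNORE INTO user_entry (ue_text) VALUES (?)", (name,))
    row = conn.execute("SELECT ue_id FROM user_entry WHERE ue_text = ?", (name,))
    return row.fetchone()[0]

# The same editor making many edits stores the name string only once.
for rev_id, editor in [(1, "ExampleUser"), (2, "ExampleUser"), (3, "OtherUser")]:
    conn.execute("INSERT INTO revision VALUES (?, ?)", (rev_id, intern_user(editor)))

rows = conn.execute("SELECT COUNT(*) FROM user_entry").fetchone()[0]
print(rows)  # 2 distinct names for 3 revisions
```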

cf. other & older notes:
 * User:Brion VIBBER/Compacting the revision table
 * Multi-Content_Revisions/Content_Meta-Data
 * Wikimedia Developer Summit/2017/Scaling the Wikimedia database schema

Provisional
/tables.sql

Thoughts

 * That seems like a lot of tables!
 * Most of them are small lookup tables for normalizing strings -- content models, content formats, content slot roles for MCR, and user refs/IP addresses for user_entry. These should save a fair chunk of duplicated space. Additionally, the MCR split between revision & content makes each of the two tables smaller and more malleable.


 * What happened to rev_text_id?
 * content.cont_address replaces it.
 * It may be an open question whether we want to make that change immediately, or whether to change the 'text' table as well, etc.
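One way to think about cont_address is as a scheme-prefixed blob address, so new storage backends can be added later without another schema change. The `tt:<text_id>` form below is an assumption for illustration, not a decided format:

```python
# Hedged sketch: parse a hypothetical scheme-prefixed content address.
# "tt:" standing for a legacy text-table row id is an assumption here.
def parse_address(address):
    scheme, _, rest = address.partition(":")
    if scheme == "tt":  # legacy 'text' table row
        return ("text", int(rest))
    raise ValueError("unknown address scheme: " + scheme)

print(parse_address("tt:12345"))  # ('text', 12345)
```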


 * Why isn't rev_deleted moved to content?
 * rev_deleted is a bitfield and most of its options apply to things that aren't part of a Content object, such as the edit comment and the username. If separately "rev-deleting" just one content item is needed, a second bitfield or flag will need to be added on the content table too...
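To make the bitfield point concrete, these are the rev_deleted flag values MediaWiki uses (Revision::DELETED_TEXT etc.); note how a revision can have its comment and username suppressed while the text flag stays clear:

```python
# rev_deleted flag values, mirroring MediaWiki's Revision constants.
DELETED_TEXT       = 1  # content
DELETED_COMMENT    = 2  # edit summary
DELETED_USER       = 4  # username/IP
DELETED_RESTRICTED = 8  # suppressed even from sysops

rev_deleted = DELETED_COMMENT | DELETED_USER  # hide comment and username only

print(bool(rev_deleted & DELETED_TEXT))     # False -- content still visible
print(bool(rev_deleted & DELETED_COMMENT))  # True
```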


 * What about rev_len, rev_sha1 -- do they belong in content?
 * Not sure about this yet. Do we need to keep revision-level fields that sum/combine the values from multiple content objects?
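If the revision-level fields are kept, one possible derivation (an assumption for illustration, not a decided behavior) is to sum the slot lengths and hash the per-slot hashes in slot-role order:

```python
import hashlib

# Illustrative derivation of revision-level rev_len / rev_sha1 from
# multiple content slots; slot names and combination rule are assumptions.
slots = {
    "main":     b"article wikitext here",
    "metadata": b'{"some": "structured data"}',
}

rev_len = sum(len(blob) for blob in slots.values())

combined = hashlib.sha1()
for role in sorted(slots):  # fixed order so the hash is deterministic
    combined.update(hashlib.sha1(slots[role]).digest())
rev_sha1 = combined.hexdigest()

print(rev_len)  # 48
```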


 * How hard will it be to change queries?
 * Queries that WHERE on rev_user/rev_user_text directly, or read those fields directly, etc., will need to be updated. :(
 * Things that just use Revision::*FromConds and the accessor functions will be able to fall back to lazy loading without needing changes.
 * Stuff that touches rev_text_id directly will need changing.
 * Stuff that wants to pre-join a bunch of data may need changing. We may be able to add abstractions on the Revision functions to hide some of that, or build new abstractions that are less freaky.
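The accessor/lazy-load fallback can be sketched like this -- callers that go through an accessor keep working, and the normalized string is only fetched from its side table on first use. The class and method names are illustrative, not MediaWiki's actual API:

```python
# Sketch of the lazy-loading accessor pattern: the revision row carries an
# integer key, and the user name is looked up from the side table on demand.
class Revision:
    def __init__(self, rev_id, user_key, lookup_user_text):
        self._rev_id = rev_id
        self._user_key = user_key
        self._lookup = lookup_user_text  # hits user_entry when called
        self._user_text = None

    def getUserText(self):
        if self._user_text is None:  # lazy: one extra query, first call only
            self._user_text = self._lookup(self._user_key)
        return self._user_text

# Stand-in for the side-table lookup.
rev = Revision(1, 7, lambda key: {7: "ExampleUser"}[key])
print(rev.getUserText())  # ExampleUser
```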


 * What would a transition look like? What kind of background processing and what kind of downtime to expect?
 * We'll need a transitional mode in MediaWiki where a background process runs, filling out the new tables. This may take some time -- several weeks seems likely for the biggest sites. It may also increase load on servers and will require additional disk space.
 * Transitional mode may also require having (at least some) write updates to the revision table mirrored to the in-progress tables, or else recorded for some kind of playback later in the process. For instance, revision deletion may change the rev_deleted value of a revision that's already been transitioned to the new table, and that will need to be updated.
 * In principle, once the background process is complete, it should be possible to switch a wiki to read-only, flip its mode, and then switch back to read-write with little downtime for editors.
 * This could also allow quickly aborting/reverting to the previous state of revisions if things look bad in read-only... but if a rollback after going read-write is desired, that's a lot freakier to deal with.
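The background process above might look roughly like this -- walking the revision table in small primary-key batches and throttling on replication, so replica lag stays bounded. `migrate_batch` and `wait_for_replication` are stand-ins for the real work (cf. MediaWiki's wfWaitForSlaves), not actual functions from this proposal:

```python
# Hedged sketch of the transitional backfill: copy revisions into the new
# tables in bounded batches, pausing for replication between batches.

def migrate_batch(first, last):
    """Placeholder: copy revisions first..last (inclusive) into the new tables."""
    return last - first + 1

def wait_for_replication():
    """Placeholder: block until replicas catch up, to limit lag."""
    pass

def backfill(max_rev_id, batch_size=500):
    migrated = 0
    start = 0
    while start < max_rev_id:
        end = min(start + batch_size, max_rev_id)
        migrated += migrate_batch(start + 1, end)  # rows (start, end]
        wait_for_replication()                     # throttle between batches
        start = end
    return migrated

print(backfill(1234))  # 1234
```

Because each batch is committed independently, the process can be paused and resumed by remembering the last completed rev_id, which fits the mirror-or-replay bookkeeping described above.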