Flow/Architecture

Purpose of this Document
This document is intended as an early brainstorming session regarding the first Flow prototype. The final implementation will likely resemble what you see here only tangentially. In other words, this is a pre-implementation brain dump which will be referenced while building the first prototype. In my experience, once the ideas for internal architecture are laid out, the first implementation doesn't take long to actually convert into code.

DISCLAIMER: All decisions, especially with regard to storage layout, are fluid. We fully expect to delete the data created by the prototype on extensiondb1 before deploying the final solution.

Big Ideas
Flow is about workflow management. A "discussion" is a type of workflow - a very simple one.

Templates are bad, mmkay
In many cases templates are used to encourage workflow within local wikis. The goal for the various workflow models is to be dynamic enough to be managed from the wiki (by local administrators) to cover use cases currently handled by workflow suggestions inside templates. In other words, Flow will implement a whole bunch of Lego pieces, and the individual communities will stick them together into the various workflows they need.

WorkflowObject
Globally unique identifier of an individual instance of a workflow. A discussion topic is a FlowObject instance. A request for deletion of page Abc is a FlowObject instance. etc.


 * ID - maybe UUID
 * Home wiki?
   * Only needed if we decide to have a single combined flow that supports multiple wikis.
   * May not be needed even then.
 * Home page
   * Essentially where this flow is accessed from / was created.
 * createDate
 * lastUpdateTime
 * creatorId
   * Does this distinctly identify the user? It likely needs to be combined with homeWiki to decide exactly which user this is.
   * If we need two pieces of data to uniquely identify a user then any and all relationships (tags/revisions/etc.) must also store those two pieces of data.
   * At a minimum we will need some way to map from a creatorId into a displayable username.
     * The common solution to this is to denormalize and store the user's name in the record.
   * Could we piggyback off of whatever the single sign-on does?
 * title/name
   * Possibly tied into Wikidata for automatic localization?
 * summaryText
 * lockState
   * Enumeration, not boolean. Allows for multiple states beyond locked/unlocked.
 * workflowModel
 * contentLanguage
   * Workflows may occur in different languages; this can help act as a filter for workflows a user can understand.

Considerations
For example, if an open (based on lockState?) request for deletion exists on a specific article, a new one cannot be opened. If a closed RFD exists for a page, it should possibly be linked from a newly opened RFD for context. Various restrictions like this need to be definable in a generic way by the wiki community. The prototype implementation may hard-code these types of considerations, to be fleshed out at a later point.

ID Generation
The first step to enabling horizontal scalability is removing dependence on a central sequence to hand out id numbers. As we plan to use these ids within MySQL for inserts, they need to be monotonically increasing; randomly generated ids cause performance issues in MySQL's storage engine because inserts land at random positions in the primary key index rather than appending.

UUID

 * MediaWiki contains a UIDGenerator class which generates either 88 or 128 bit UIDs
 * Uses file locking on the application server to guarantee uniqueness in non-random UIDs
 * Can generate timestamp-prefixed UIDs, which fulfills the requirement for monotonically increasing id values
 * It would be pointless to store the GUIDs in their text representation; the column must be BINARY(16).
   * text->binary: UNHEX( '7b95ee9a9d8a4f3283e2c413cc764b2d' )
   * binary->text: HEX( binary_uuid_column )
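
As a minimal sketch, generating and storing such an id from PHP might look like the following. The flow_object table and object_id column are invented for illustration; only UIDGenerator and the Database wrapper are existing core APIs, and this assumes UIDGenerator's base argument behaves as it does today:

// Timestamp-prefixed 128-bit UID, returned as a base-16 string.
$uid = UIDGenerator::newTimestampedUID128( 16 );

// Store as BINARY(16) rather than 32 characters of hex text; pack() is the
// client-side equivalent of UNHEX().
$dbw = wfGetDB( DB_MASTER );
$dbw->insert(
    'flow_object',
    array( 'object_id' => pack( 'H32', str_pad( $uid, 32, '0', STR_PAD_LEFT ) ) ),
    __METHOD__
);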

Ticket Servers
http://code.flickr.net/2010/02/08/ticket-servers-distributed-unique-primary-keys-on-the-cheap/

tl;dr: The Tickets64 schema looks like:

CREATE TABLE `Tickets64` (
  `id` bigint(20) unsigned NOT NULL auto_increment,
  `stub` char(1) NOT NULL default '',
  PRIMARY KEY (`id`),
  UNIQUE KEY `stub` (`stub`)
) ENGINE=MyISAM

SELECT * FROM Tickets64 returns a single row that looks something like:

+-------------------+------+
| id                | stub |
+-------------------+------+
| 72157623227190423 | a    |
+-------------------+------+

When I need a new globally unique 64-bit ID I issue the following SQL:

REPLACE INTO Tickets64 (stub) VALUES ('a');
SELECT LAST_INSERT_ID();

To prevent a SPOF they run this on multiple servers, one chosen at random when an insert needs to be done; the servers hand out non-overlapping ids via MySQL's auto-increment-offset and auto-increment-increment settings.
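
Consuming that scheme from PHP could look roughly like this ($ticketServers as an array of mysqli connections is illustrative):

// Pick one ticket server at random; each is configured with a distinct
// auto-increment-offset/auto-increment-increment pair so ids never collide.
$db = $ticketServers[ array_rand( $ticketServers ) ];

// REPLACE reuses the single 'a' row over and over, bumping the counter.
$db->query( "REPLACE INTO Tickets64 (stub) VALUES ('a')" );
$newId = $db->insert_id; // the globally unique 64-bit id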

Time + shard + autoincr
http://instagram-engineering.tumblr.com/post/10853187575/sharding-ids-at-instagram

https://github.com/twitter/snowflake

tl;dr: Instagram combines 3 different values into a 64 bit number. The first 41 bits of the id are a timestamp in ms, so the keys naturally increase and provide a logical sorting order across shards. Snowflake from Twitter uses the same id generation method, but with a worker_id rather than a shard_id. Really it's about the same thing.


 * 41 bits for time in milliseconds (gives us 70 years of IDs with a custom epoch)
 * 13 bits that represent the logical shard ID
 * 10 bits that represent an auto-incrementing sequence, modulus 1024. This means we can generate 1024 IDs per shard per millisecond

This is done with a stored procedure in PostgreSQL in their version. Our setup with MariaDB as of 5.3 can serve up millisecond timestamps, and the REPLACE INTO method from above can be used for the auto-incrementing sequence. We should prefer to use the database's ms timestamp so the timestamp and auto-increment counter come from the same host, strengthening the uniqueness of the id. If we wish to separate id generation from the database connection we could ask ops to run the Snowflake id generation service, but the added complexity of yet another service may not be necessary. We can also shuffle the bit counts around a little to get different tradeoffs; the proposed 864 logical shards need only 10 bits for the shard id.

Other
As I see it, there are basically three ways to choose/store a shard_id:


 * 1) Generate a UUID and calculate a shard from it. That gets us random distribution of objects across the shards, which is unacceptable for our use case.
 * 2) Store an independent column that holds the shard id, or something a shard id can be calculated from, like page_id or wiki+ns+text.
 * 3) Choose your own shard id and integrate it into the UUID.

By choosing our own shard key in options 2 & 3 we can co-locate all FlowObjects related to the same article onto the same shard. By integrating the shard id into the UUID as in option 3 we eliminate an extra field from all relationships and queries that interface with the FlowObject.

Proposed ID Generation Scheme
Because we want to shard on a hash of wiki+ns+title we need to either create an additional shard_id column, or we need to integrate the shard_id that we have chosen into the identifier. I propose we go with the latter, and use an approach like Instagram and Twitter Snowflake to generate ids.

I propose we use a two layer scheme for generating shard ids (a bit-packing sketch follows this list):
 * 864 logical shards. Evenly divides into 2, 3, 4, 6, 8, 12, 16, etc. shards.
 * Gives us 10 bits for the logical shard, 10 bits for the db sequence number, and 44 bits for the timestamp (557 years in ms; no need for a custom epoch like Instagram when using 44 bits for time).
 * Logical shard determined via consistent hashing: crc32( json_encode( array( 'wiki' => $wiki, 'nstext' => $nstext, 'text' => $titleText ) ) ) % 864
 * Logical shards are forever; they do not change.
 * Logical shards mapped to physical shards with a ring list.
   * e.g. Server A serves 0-431, server B serves 432-863.
 * Splitting a shard involves promoting 2 slaves to master, with each receiving half (or really, an arbitrary portion) of the previous master's ring allocation.
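
To make the bit layout concrete, here is a sketch of packing and unpacking the proposed 64-bit value (function names are invented; assumes a 64-bit PHP build):

// [44 bits: ms timestamp][10 bits: logical shard][10 bits: per-shard sequence]
function flowPackId( $timestampMs, $logicalShard, $sequence ) {
    return ( $timestampMs << 20 )
        | ( ( $logicalShard % 1024 ) << 10 )
        | ( $sequence % 1024 );
}

// Recover the logical shard an object lives on from nothing but its id.
function flowShardFromId( $id ) {
    return ( $id >> 10 ) & 1023;
}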

Benefits:
 * Flow requests that are not for a user's 'Flow' page (subscriptions) will have the wiki+ns+title available in the request parameters, making the choice of shard happen once at the beginning of the request.
 * ID generation does not require any new servers; it uses an auto-increment field on the shard itself.
 * Shard ID, creation timestamp, and unique identifier are compacted into a single 64 bit value.
 * The object and any optional tables (summary, topic list, etc.) that have a 1 to 1 relationship with the object can have a single primary key rather than a more complex composite key which includes the shard_id.
 * Unique: the ID format supports 1024 new ids per ms per shard. A quick test against a laptop MariaDB instance (so, not representative) generated 1 id per 2ms with a single client, and 1 id per 0.5ms with 16 clients (but each client waited ~6ms). This basically indicates the format supports many more unique ids than we can generate, leaving room for growth.
 * Available: if the shard you want to write to is available, then the ID generation service is also up. No mismatch.
 * Uncoordinated: each shard is effectively its own id generation service.
 * Compact: other systems use 128 bit ids which contain a timestamp and some random info. If you need a shard id that isn't based on the main id, you need 128 bits + shard id bits. This uses 64 bits for the timestamp and the shard id combined.
 * (Roughly) time ordered: (according to Twitter Snowflake) the ids are guaranteed to be k-sorted within 1s.
 * Directly sortable: the ids should be sortable without loading the full objects that they represent. This sorting should match the above ordering.

Downsides:
 * A clock moving backwards on the database server could trigger insert errors with duplicate keys.
   * When splitting shards you have to ensure the old servers and new servers have synced clocks, or you risk duplicates.
   * Not too hard to just generate a new ID and try again; shouldn't happen very often.
 * Could use the real Twitter Snowflake service, which stalls id generation when it detects the clock moving backwards.
   * It uses a slightly different balance of bits than proposed above, but is likely easily patchable (or our ideas can change even more easily).
 * Ties ids to the current architecture?

I think the benefits outweigh the downsides; other opinions?

For more information about sharding, see the Reference Material section at the end of this document.

WorkflowMetadata
Each flow object maps to some type of workflow metadata. The metadata is specific to the type of workflow that is being worked with. The specific type of metadata to expect is defined by the workflowModel field of the FlowObject.

The various models need to be programmable such that local wikis can use them as they need and not be locked into pre-programmed ideas. Templates currently allow a great deal of flexibility with no automated enforcement. These models must represent a middle ground between template flexibility and pre-programmed strict workflows.


 * 1 to Many mapping between FlowObject and the Workflow Metadata rows.
 * Each row of the metadata is a key/value pair plus the object id.
 * Metadata needs to be rather abstract to support all the different use cases. It could likely be implemented the way user options are currently.
 * Collecting all the metadata for an object would be something like SELECT metadata_key, metadata_value FROM flow_metadata WHERE metadata_object_id = '12345' (see the sketch below).
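
Through MediaWiki's Database abstraction (and ignoring sharding for the moment) that collection might look like this; the table and field names are the sketch above, not a final schema:

$dbr = wfGetDB( DB_SLAVE );
$res = $dbr->select(
    'flow_metadata',
    array( 'metadata_key', 'metadata_value' ),
    array( 'metadata_object_id' => $objectId ),
    __METHOD__
);
$metadata = array();
foreach ( $res as $row ) {
    $metadata[$row->metadata_key] = $row->metadata_value;
}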

WorkflowModel
Each wiki will be able to define its own models. In the initial prototype the discussion model will be a hardcoded implementation, but the model storage will store the info needed to turn the wiki's model definition into an instantiated object capable of understanding the metadata and performing actions.

For wiki defined models there are at least two options:
 * Real wiki page containing the definition in the WDL format
 * Special page and corresponding model table in database

Possible workflows to support include but are not limited to:
 * 2 way user conversation (user talk page owner<->talker)
 * 3 way user conversation (talk page any<->any)
 * Request for deletion
 * Request for adminship
 * General consensus discussions
 * Village Pump, Forum, etc.
 * AN/I
 * Help desk
 * Barnstars/Wikilove (and other templates of this variety)
 * Block Notices (you've been blocked, click this button to appeal)
 * moar

The special page implementation could offer a hook for models to provide options needed to configure the model on wiki. A DB table can then store the configured model under a unique name, allowing it to be used as a model on any article's flow. A model should not be limited to a single type of block; it should be able to string together multiple blocks into a single result page. For example, the Discussion is a Summary block at the top of the page and a TopicList block below that. The TopicList is a Block acting as a wrapper around a set of Topic Blocks. A Topic Block is a set of SinglePost Blocks arranged in a tree. (This may be too abstract.)

The implementation must be generic enough that each Block is its own subscribable workflow object, gaining subscriptions, tagging, and other properties of the object. The implementation must gracefully handle updates to topics triggering notifications at the discussion level. One possible shape for this abstraction is sketched below.
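
Every name in this sketch is invented for illustration; it is only meant to make the composition idea concrete:

// Each block is itself a workflow object, so it can accrue subscriptions,
// tags, etc., and composite blocks simply wrap child blocks.
interface Block {
    /** @return string UUID of the workflow object this block represents */
    public function getWorkflowObjectId();

    /** @return array Child blocks, e.g. a TopicList returns its Topic blocks */
    public function getChildren();

    /** @return string Rendered output for this block and its children */
    public function render();
}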


 * FlowDiscussion
   * 0 or 1 Summary
   * 1 to Many: FlowPostSingular

 * FlowRequestForDeletion
   * 1 Summary (reason for request)
   * 1 to Many: FlowEnumeratedLines
 * FlowBlockNotice
   * 1 Summary (block reason)
   * Functional Elements
     * Button for 'appeal this block'
     * Completely dynamic and described by a 'Workflow Description Language', but not a part of the initial prototype implementation.
   * 1 to Many: FlowPostSingular

 * And Many More

WorkflowAction
This typically represents an action performed by a single user. In some cases, like the Summary object, there is a single object representing all actions, but internally it references the Revisions table, which maintains the single-action relationship. In general this could be a reply to a message, a vote in a consensus discussion, etc. There will typically be a 1 to Many relationship between a WorkflowObject and WorkflowActions, as defined by the WorkflowModel.


 * FlowSummary - Summary objects are 1 to 1 with a particular model, allowing for arbitrary content to be displayed with the models that require a summary.
   * objectId
   * text
   * comment


 * FlowEnumeratedLines
   * objectId
   * Enum value (e.g. vote yes/no, etc.)
   * text
   * comment


 * FlowPostSingular - A tree structure representing a discussion. Possible methods of storing trees are described at Flow_Portal/Architecture/Discussion_Storage.
   * objectId
     * If sharding by object_id, it may be required for consistency's sake to always include the object uuid.
   * createdByUserId
   * replyToFlowPostSingularId?
   * revision
   * content
   * summary
   * etc.

Revision
Posts, Summaries, and possibly other content within Flow need to be revisioned much like normal wiki pages. Ideally we want to reuse the existing revisions code that is already written. Revisions within Flow will be slightly different from revisions of a wiki page though; most notably, wiki revisions store a full page's content, but we will have many small fragments that should be individually stored.

The simplest way to store all these fragments would be to create a wiki page for each possible FlowObject. The wiki page's revision content would then be a multi-part content type that stores the various fragments.
 * In an ideal world we should use a multi-part content type that allows us to pull individual fragments out of the document using simple string indexing, rather than having to parse the content of the entire revision before being able to pluck out just the fragments which are interesting to us.
 * The parsoid team will be starting on something along these lines in July: https://bugzilla.wikimedia.org/show_bug.cgi?id=49143

Store Flow revisions in the master database in the standard Revisions table

 * Flow revisions don't map directly to a pageId or Title object; we would need a fake page (probably in a custom namespace that's not user editable)
 * The total number of new revisions per day/week/etc. shouldn't change very much versus existing workflows that also edit wiki pages directly
   * Currently every new post or summary update is already a new revision
 * Flow will most likely be storing the HTML+RDFa from Visual Editor rather than the wikitext most of the core revision entries use
   * ContentHandler exists and can be used to store 'different' types of data in the revision text
 * The enwiki primary database is already heavily loaded, even if we are only using it for the revisions table and pushing the content to ExternalStorage
 * Would lose some portion of the benefit from sharding the Flow database

Copy just the revision functionality that Flow needs from the core Revision object into a new object?

 * There are many years of built-up bug fixes and knowledge within the current Revision implementation; ideally we want to benefit from that instead of re-implementing.
 * Any future upgrades/bugfixes/etc. to the core Revision object will also have to be duplicated into the Flow revision to maintain parity.
   * In other words: maintaining feature parity will be a constant battle; perhaps not a great idea.

Start from scratch with a brand new revisions implementation?

 * Could attempt to define an interface that is shareable between Flow and core Revisions.
 * Still copy the userCan series of functions; the API for this is reasonable and should be continued.
 * Gives the opportunity to simplify revision functionality to that which Flow requires.
 * Flow revisions have a different use case than article revisions:
   * Articles have hundreds of revisions.
   * If Flow stores a revision per post, most posts will only ever have a single revision.
   * Other pieces of Flow, like discussion Summaries, will have perhaps dozens or hundreds of revisions, but still not nearly as many as the article they relate to.
 * My intuition is Flow objects will have a fraction of the number of revisions that the same text in wiki articles would, but how to be more certain?

Other thoughts

 * With Flow's sharded database model we don't need to push revision content to external storage?
   * Content storage is a separate concern from the revisions; the revisions should continue to point somewhere else for the actual storage. See Flow_Portal/Architecture/Discussion_Storage

Revisions, Abuse Filter, and Patrolling
If we move outside the core Revision scheme, we still need to handle tasks like the AbuseFilter, patrolling, and more. I met Krinkle while he was in SF and got a quick 20 minute rundown on how some of this works, and all I can say at this point is that it's an organically designed system that it would be significantly difficult to come up with a new solution for. It would also be significantly difficult to integrate into the existing system.

I'm kind of hoping that we can ignore this part at the beginning, and later with the help of Krinkle find some way to integrate with the existing solution.

Problems which need to be addressed:
 * Flow wants to store the HTML+RDFa directly from VE. The current system for auto-magically detecting abuse works by regexing against wikitext. The two are not compatible.
 * Something about bots/IRC/etc. Basically the bots.
 * Anyone who knows more about this system than I do (everyone), feel free to expand this section.

Where does a post's content actually get stored?
FlowPostSingular should probably just be metadata about the post. The literal post content should likely be stored as either wiki markup or the Parsoid DOM representation of the wiki markup. It should be stored in a separate table, comparable to revisions, to facilitate moderation requirements like tweaking the content of a post.

See Content Storage

A Users Flow (or feed of interesting things)
All flow objects will be subscribable by any number of users. Generating a user's feed is a sort on the subscribed flow objects ordered by last updated date. SQL is going to hate us; MySQL can't answer this with a single index, and even if it could, the objects are sharded across multiple database clusters.
 * Independent subscription table implementing a Many to One relationship between flow objects and users
 * Tables sharded by the object_id field, or the user_id field?
   * Basically depends on which set of queries we want to optimize for, and which will have to fan out.

That's OK, because we plan to always read it from cache. We will pre-build this cache as the objects are updated. This requires maintaining two sets of lists: a simple set, per object, of all users subscribed to that object, and a sorted set, per user, containing object ids they are subscribed to, ordered by last update time and capped to a reasonable size (100?). Redis sorted sets never contain duplicate ids; adding the same value twice just updates its score (timestamp). A sketch follows.
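
Here is what the write-time fan-out might look like with a phpredis-style client; the key names and the 100-item cap are illustrative:

function onObjectUpdated( Redis $redis, $objectId, $timestampMs ) {
    // Simple set, per object, of subscribed user ids.
    $subscribers = $redis->sMembers( "flow:subs:$objectId" );
    foreach ( $subscribers as $userId ) {
        $feedKey = "flow:feed:$userId";
        // Re-adding an existing member just updates its score (timestamp).
        $redis->zAdd( $feedKey, $timestampMs, $objectId );
        // Trim the sorted set so only the 100 most recent entries remain.
        $redis->zRemRangeByRank( $feedKey, 0, -101 );
    }
}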

In relation to the above, we must investigate the performance implications of a flow object that has 50k subscriptions. I don't see any particular performance issues for the user with 50k subs, although their usability may decline, as the proposed 100 item list will only hold a short period of time. I don't think you can reasonably expect usability from 50k subscriptions anyway.

Remembering what a user has already seen
In addition to individual objects being subscribable, to generate feeds matching the current UI prototype we also need to know which WorkflowModels have been previously seen by the user (a small sketch follows this list).
 * Remembering a boolean 'seen' status for every model instance multiplied by all editors is a ton of data and seems sub-optimal.
 * Could remember the last date a user viewed items from that WorkflowObject within the subscription and only display items newer than that.
   * Each topic within a discussion is its own WorkflowObject, so the memory is more granular than just the main page, but is it enough?
 * Need to figure out how that interacts with the redis implementation of subscriptions suggested above.
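
If we go the 'last viewed date' route, one possible shape is a per-user Redis hash keyed by object id (key names illustrative, same phpredis-style client as above):

// Record the view when the user loads the workflow object.
$redis->hSet( "flow:seen:$userId", $objectId, $nowMs );

// When rendering the feed, an item is 'new' if it changed after that mark.
$lastSeen = (int)$redis->hGet( "flow:seen:$userId", $objectId );
$isNew = $lastUpdatedMs > $lastSeen;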

Suppressed Revisions
Need comparable functionality.
 * Could use real wiki pages à la LiquidThreads
 * Could implement a 'work-alike' along with some sort of interface to generalize the current suppressed revision code (likely rather difficult)

Search
Handled user side in JavaScript, in the backend, or both (probably both)?
 * Hooking into the lucene extension?
 * Hooking into SolrCloud?
 * What elements does the UI call for in search results?

Tags
Individual discussion topics must be taggable. How should the tagging implementation work?
 * Tags can be public or private
   * Should public and private tags be stored in the same table?
   * Conceptually it might be simpler to store them separately.
   * Storing them separately also provides a much stronger guarantee of not accidentally displaying private tags.
 * If sharding by object_id, where do we store the table mapping tag_name to object_id? We don't want to have to query every shard.

So you want to find a talk page
MediaWiki should work great without flow. After installing the flow extension it should continue to work great and the talk pages will be flow discussions.

Currently you take the title of the page and look it up within the NS_TALK namespace. With flow we need something different, but conceptually similar?

Old talk pages: should the current urls still point to the old talk pages, or should they move, with Flow replacing them at those urls?
 * Current talk page comments likely (?) link to Talk:Something directly. For best results should they go to the new flow discussions?
 * If so, there are also existing links to Talk:Something, and we would like those to continue pointing to the correct data.
 * Same concern as the 2 points above, but with urls from the internet at large. Do talk pages get linked directly from outside with any frequency?

Talk page urls:
 * An article in NS_MAIN: /wiki/Talk:Volcano
 * An article in any other NS: /wiki/NameOfNamespace_Talk:Volcano

KISS flow urls for the first prototype:
 * /wiki/Special:Flow/NameOfNamespace:Volcano?flow=discussion

Possible Flow urls? Not currently supported in core.
 * An article in NS_MAIN: /wiki/Flow:Volcano/Talk
 * An article in any other NS: /wiki/NameOfNamespace_Flow:Volcano/Talk

Mapping URL's to Flow objects
The path Flow:Volcano/Talk should not be some super special case; there should be some way to attach specific types of flows to default paths. Visiting the page when nothing currently exists must work much like the current system, where the user is given the opportunity to create that specific thing. While we may initially hard-code /Talk into the flow prototype, it would be much better if the mappings from a name to some defaulted type of flow object were managed on wiki by the community. Additionally, i18n and l10n considerations need to be taken into account, as not every wiki uses the word 'Talk' for their talk pages. Mostly this matters in relation to GUID generation and what we use as the GUID namespace/name.

For the prototype I propose the following:

/wiki/Special:Flow/NameOfNamespace:Volcano?flow=discussion&name=foo


 * The flow query parameter refers to the name of a Workflow Definition. Every workflow definition must have a unique name (on a per-wiki basis). Flow will have, by default, a few Workflow Definitions pre-defined. One of these will be a 'discussion' workflow, and if the flow query parameter is not provided it defaults to 'discussion'.
   * I'm uncertain at this point how i18n and l10n considerations come into play with this parameter, partially because it is defined on-wiki in the general case.


 * The name query parameter refers to the Flow Object's name. The flow object table will have a unique index on ( wiki, namespace, title, definitionId, name ). This allows multiple instances of a specific workflow to exist at the same time. Workflows that require only a single object instance to exist must use a blank string as their name.
   * TBH I'm not entirely convinced object names are a great solution here, but I'm undecided on what a better solution is. We need some method of allowing multiple Flow Objects to exist for a given wiki+title+definition, to account for workflows like RFD which can be completed and locked. In the future it must be possible to create a new RFD and keep the old one around for historical purposes.

Performance Considerations
How much data will Flow need to store? For estimation purposes, Wikipedia Statistics shows there are approximately 26M articles across all wikis. Not all of these will have a talk page, but many will. Assuming they all have talk pages, ranging from just a post or two to a couple thousand posts on the largest, we should expect a lower bound of perhaps 100M individual replies stored in perhaps 20M separate discussion graphs. If each reply consumes 1KB of space, that puts a lower bound of at least 100GB of post content before we even get into metadata, indexes, etc.

And that is just the discussions; Flow will need to handle many more workflows than just discussions.

To help get an idea of the space required, I applied the EchoDiscussionParser to one of the enwiki database dumps ( enwiki-latest-pages-meta-current10.xml-p000925001p001325000 ). Within this file it detected:


 * 43952 pages in either Talk: or *_Talk: namespaces
 * 211904 individual section headers
 * 514916 user signatures

That works out to an average of:


 * 5 sections per talk page
 * 2.5 signatures per section
 * 12 signatures per talk page

These pages were of course built up over time, but they give us a general idea of the size of the problem we need to handle. It may be worthwhile to re-run this code and split the stats between article talk pages and user talk pages. There will be several orders of magnitude more article talk pages than user talk pages, so if they have different characteristics that may be useful information.

ext/mysqlnd
When sharding, it is nearly impossible to get away from having a small selection of queries that must be answered by querying multiple shards. To cover this use case Flow can query the shards serially with the current ext/mysql extension, but we should not settle for such a mediocre solution.

As of PHP 5.4, ext/mysqlnd has replaced ext/mysql. ext/mysqlnd is fully backwards compatible with the mysql_* series of functions and is also available in packaged form for use with PHP 5.3. ext/mysqlnd includes asynchronous query support, giving us the ability to contact all shards we are interested in at the same time. This still suffers from long-tail distribution as the number of shards contacted increases, but is obviously preferred over serial queries. A fan-out sketch follows.
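
A rough sketch of what an async fan-out could look like with mysqli on top of mysqlnd ($shardLinks as an array of mysqli connections and $sql are illustrative):

// Fire the same query at every shard without waiting on any of them.
foreach ( $shardLinks as $link ) {
    $link->query( $sql, MYSQLI_ASYNC );
}

// Collect results as each shard finishes rather than one-by-one in series.
$rows = array();
$remaining = $shardLinks;
while ( $remaining ) {
    $read = $error = $reject = array_values( $remaining );
    if ( !mysqli_poll( $read, $error, $reject, 1 ) ) {
        continue; // nothing ready yet
    }
    foreach ( $read as $link ) {
        if ( $result = $link->reap_async_query() ) {
            $rows = array_merge( $rows, $result->fetch_all( MYSQLI_ASSOC ) );
            $result->free();
        }
        // This connection is finished; stop polling it.
        foreach ( $remaining as $key => $candidate ) {
            if ( $candidate === $link ) {
                unset( $remaining[$key] );
            }
        }
    }
}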

I propose we develop a testing protocol to certify 5.3 + mysqlnd for use on the foundation's production application servers. This is basically a backwards compatible driver change, and should require absolutely no changes to existing MediaWiki code.

What needs to be done to get the ball rolling on this, as it will likely take at least a month to certify?


 * First step is likely a puppet config and a labs instance. But this is a far cry from a full test.
 * To get full advantage, Database.php needs to be mysqli aware: https://gerrit.wikimedia.org/r/#/c/62173/7
 * To take advantage of async queries, some sort of API to utilize them will need to be developed.

Sharding key
The most natural key choices for flow are
 * the page id
 * or a hash of wiki+ns+title

Independent of the key choice is how to store the shard_id. With the page_id we can either store a composite key of (uuid, page_id) or we can integrate the determined logical shard id into a 10 bit portion of the uuid.

With the hash of wiki+ns+title we could store that as a composite key, but I don't see any benefit there. We would likely integrate the logical shard id determined from the hash into a 10 bit portion of the uuid.

For Flow's use case there are a couple types of user requests:


 * 1) Requests that contain the namespace + title
 * 2) Requests that contain the Flow Object UUID (and probably still contain namespace+title)
 * 3) Requests for an individual user's flow (or feed)
 * 4) Requests involving search/tagging/etc.

With the first two types of request, to choose a shard when sharding by page_id we must first look up namespace+title to find the page_id it refers to. When sharding by wiki+ns+text, the information needed to locate the shard is contained within the request.

For the last two types of request, which will be storing information about flow objects in cache/Solr/etc., I think complicating things by storing a composite (uuid, page_id) is unnecessary. We could do ourselves a great favor by instead pre-calculating and integrating the shard_id directly into the uuid, giving a single value that identifies not only the object but the logical shard it exists on.

For all the reasons described above, I encourage us to utilize the same methods as Instagram and Twitter Snowflake for id generation.

Sharding schemes

 * ranges - assign different ranges of ids to different servers. With GUIDs this is reasonable; with the sequential ids MySQL wants, not so much.
 * hash or modulus - preferred when using sequential ids; gives fairly even distribution. Downsides include it being fairly difficult to add new shards or rebalance existing shards.

We can combine the above two options to get the best of both worlds: two layers, with a hash+modulus as the first layer mapping wiki+ns+text to logical shards (or a modulus of the page_id to a logical shard), both of which are forever. The second layer uses a ring topology to distribute those logical shards to specific database clusters.
 * Possible issues with redirects on wiki+ns+text?

Number of Shards
How many shards should we start with? Using the two layer solution we can create 864 logical shards up front. That feels like an arbitrary number, but it evenly divides into 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, etc., allowing for a variety of choices when it comes to shard splits.

Shard Colocation
Do shards coexist within a DB schema (same table) or within a MySQL instance (different table/database, same server)?
 * Upon initial review, co-locating shards within the same MySQL instance but in independent tables may provide the most natural forward progression in terms of redistributing logical shards between servers. It would make ops' life much easier when it comes time to split a shard and delete unrelated data from the new masters.
 * On the other hand, fan-out queries which are perhaps just annoying when you have 4 physical shards become a serious performance concern when all logical shards must be queried explicitly.
   * Which will encourage developers to find non-fan-out solutions.
 * This decision primarily affects ops, both in fan-out query volume and shard split requirements, along with how much data individual indexes contain. We should likely defer this answer until Asher weighs in with his opinion.

Data that doesn't naturally shard
There are a couple pieces of data which cannot be sharded by user_id or object_id. An example is the Workflow Definitions, which are defined per wiki. A workflow definition is all the information describing a user-created workflow.

The simple method is to use the primary wiki database for this data; this feels like a perfectly valid solution for such a small amount of data. The other option is to replicate this data to all shards so it can be joined against. From an operations perspective this can get a bit hairy: it involves slaving the shard masters to a 'global shard master' that contains only the shared data. I don't think it's worth the complexity. It is a trade off though; the simple method necessitates client side collection (join) of the definitions when performing an action like generating a user's flow.

Other sharding implementations
Does anyone offer servers or libraries that could simplify our implementation?
 * MySQL NDB Cluster?
   * There are no durable commits on disk. Commits are replicated, but there is no guarantee that logs are flushed to disk on commit.
   * Not as free as MariaDB.
   * Its use case is very narrow: it's good at primary key lookups. Anything else and it falls on its face.
 * MariaDB Galera Cluster
   * All data is stored on all machines; a write is only complete when it is committed to every active machine in the cluster.
   * Uses a custom commit protocol that saves time vs. 2-phase commit.
   * Doesn't scale writes nearly as well as partitioning/sharding.


 * External to MySQL, there are horizontally sharding proxies like HarvestDB (not OSS, uses a custom protocol), Twitter's Gizzard (OSS but no longer maintained), CitusDB (not OSS), and Spock Proxy (OSS, no updates since 2009).


 * There is one horizontally sharding proxy still receiving active updates: shard-query, but it attempts to transform MySQL into a column store for OLAP processing; not quite what we are looking for.


 * While not a sharding implementation, MariaDB does have something called multi-source replication. We can set up one server which slaves to all existing shards and offers a consistent view across the entire sharded data set for certain types of queries. If we utilize this feature it should be strictly for analytics/reporting capabilities and not used as part of the main Flow implementation.

Another option would be to move to a data store that gracefully and naturally handles our multi-master sharding. Other data stores may be better targeted at our use case, but there is also perhaps a significant knowledge gap, and not only in application development: supporting client libraries may need to be improved, and operations would have to build out new standards for running a different kind of data store cluster.

When is a shard chosen?
There are a few possibilities with regards to when, in the code, to choose a shard:


 * Session based sharding. In this case we look at the data in the incoming request and decide which shard it should run against. With the required guarantee that all data the request is interested in is on a single shard, this is by far the simplest to implement.


 * Transaction based sharding. Determines which shard to use based on the beginning of a transaction. Can be achieved either through an explicit API, or by parsing and extracting the shard id from the query. Other schemes include embedding comments in the SQL, but MediaWiki does not, for the most part, deal directly with SQL, and instead uses the Database abstraction.


 * Statement based sharding. Determines which shard to use based on each individual statement. By far the most complex, and the most versatile.

Splitting logical shards
I'm envisioning a shard split working something like the following, but tests would have to be done to determine whether this works:


 * 1) Attach two new slaves (your future masters) to a current shard master.
 * 2) Possibly delete the data on the new slaves that's on the wrong side of the split, if that doesn't break MySQL replication. This will make step 5 (and hence the time spent in read-only mode) shorter.
 * 3) When the new slaves are ready, adjust the Flow configuration to enter read-only mode, and at the same time change the lookup table so that writes go to the new servers and the old server is no longer a shard.
 * 4) Once there are no more writes going to the old master (meaning all application servers have the updated lookup table and are in read-only mode), sever the link between the old master and the new masters, and turn off the old master.
 * 5) On each new shard, delete the data that doesn't belong to this master.
 * 6) Adjust the Flow configuration to leave read-only mode.

Steps 4 & 5 might get swapped around; would have to test.

I don't think the lookup table becomes a SPOF if things are run this way, but I could certainly be missing something. It should be possible to develop Flow in such a way that only the specific cluster within the shard pool being split must be set read-only, rather than the whole thing.

So, what should we do?
It may add complexity, but the only way to scale with regard to both users and articles is to have two data stores for Flow. One data store must be sharded by article; the other must be sharded by user_id. Flow will also make some small use of the global wiki db (the one from wfGetDB).

We should shard these databases into a future proof number of logical shards which map onto a configurable, concrete set of database clusters. We must develop the map from logical shards to clusters in a way that does not become a SPOF.

To accomplish this in application code I propose we implement a simple DbFactory object which defaults to session based shard access, set once per request. This must be available to each component that reads or writes to the database.

The constructor signature could be:

public function __construct( array $shardClusters, ShardChooser $shardChooser, ShardChooser $logicalShardChooser = null );

The ShardChooser should be a super simple interface:

interface ShardChooser {
    public function pickShard( $shardKey, DbFactory $factory );
}

We can then instantiate ShardChoosers configured to properly apply our methods: going from a Title object or user_id/user_ip to a logical shard key, and going from a logical shard key to a literal database cluster (a sketch follows below). We can instantiate two different DbFactory objects: one for the database sharded by user_id, and the other for the database sharded by Title. These objects will be thin wrappers around wfGetLB, returning the proper LB/DB associated with the logical id.
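
A minimal sketch of one possible chooser pair under that interface; the class names and ring layout are invented here, applying the 864-shard hashing and ring mapping proposed earlier:

// Maps wiki+ns+title to one of the 864 permanent logical shards.
class TitleShardChooser implements ShardChooser {
    public function pickShard( $shardKey, DbFactory $factory ) {
        list( $wiki, $nstext, $text ) = $shardKey;
        $blob = json_encode( array( 'wiki' => $wiki, 'nstext' => $nstext, 'text' => $text ) );
        return crc32( $blob ) % 864;
    }
}

// Maps a logical shard onto whichever cluster currently serves its ring range.
class RingShardChooser implements ShardChooser {
    protected $ring;

    // e.g. array( array( 0, 431, 'flow-cluster1' ), array( 432, 863, 'flow-cluster2' ) )
    public function __construct( array $ring ) {
        $this->ring = $ring;
    }

    public function pickShard( $shardKey, DbFactory $factory ) {
        foreach ( $this->ring as $range ) {
            list( $lo, $hi, $cluster ) = $range;
            if ( $shardKey >= $lo && $shardKey <= $hi ) {
                return $cluster;
            }
        }
        throw new MWException( "No cluster serves logical shard $shardKey" );
    }
}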

Most requests within Flow will only need to access a single shard from each factory, pre-chosen when the request is initialized. Both the User making the request and the Title the flow is being constructed for are available at the very beginning of the request. We must initialize the DbFactorys with the correct sharding keys as early as possible and not muck around with changing shards in the middle of the request.

Completely independent from the DbFactory, to prevent 'accidentally' changing the active shard, we should create some class capable of querying multiple shards via a fan-out strategy. It will be provided with the same list of clusters and the same ShardChooser instances as the DbFactory, but provide some sort of fan-out abstraction. I'm uncertain at this point how the fan-out queries should operate. We really don't want to resort to serially accessing all shards, but we cannot easily utilize MYSQLI_ASYNC yet.
 * If we can migrate production from ext/mysql to ext/mysqlnd while on 5.3, we could use mysqli async queries in certain special case scenarios.
   * ext/mysqlnd still contains the mysql_* series of functions; a quick test shows MediaWiki loads on 5.3 + ext/mysqlnd.
   * PHP 5.4 and all future releases have ext/mysqlnd by default.
   * Need to develop a testing protocol to verify mysqlnd can be used in production without introducing any regressions.
 * The beginnings of mysqli support have been merged to Database.php here: https://gerrit.wikimedia.org/r/#/c/62130/; actual support for mysqli is being developed in branches linked from there.
   * This has impedance mismatch issues with async queries. We will need some bit of extended API to use the Database.php query building abilities with the async fan-out queries.
   * There are existing solutions within PHP to parse SQL into some form of AST. They may be heavy to run, and will be difficult to cache unless we use bound parameters to normalize the query string instead of escaping.
   * We will already be using mysqli to access async queries. We can utilize client side bound parameters, but we will lose the current query building functionality of Database.php, like IN generation, which also performs parameter escaping.

Caching possibilities
See Flow_Portal/Architecture/Memcache

Crazy Idea
One Flow instance, many wikis. What if Flow was an independent wiki used as a service by other wikis? This is mostly brainstorming, and probably too complex to tackle effectively, but ideas here could propagate into the main implementation if viable.

Benefits:
 * The same flow can be referred to from any wiki, for example commons and en.wiki
 * The database can be independent from any specific wiki
 * All wikis share the benefit of the ops work necessary to provide sharded database support
 * It could (possibly) replace the mapping of url -> page, giving Flow full control of its URL structure

Downsides:
 * cross-site javascript requests?
 * Difficulty for the project to be used in general mediawiki installs outside wikimedia?
 * Many, many more that I probably don't know yet

Questions:
 * How would you refer to a flow from a different wiki via markup?
   * Use standard interwiki links?
 * What do the urls look like for those flows? Do user browsers communicate directly with the central flow instance, or do users talk to the wiki and the wiki talks to flow?
 * If flow is independent, how does flow notify Echo to generate notifications?

Inconsequential Things to Consider (or not)

 * use php namespaces?
 * one class per file?

Reference Material

 * en.wiki info about flow
 * Tumblr mysql sharding presentation
 * Ruby scripts for handling operations side of tumblr sharding