Flow/Architecture

Purpose of this Document
This document is an early brainstorming session regarding the first Flow prototype. The final implementation will likely resemble what you see here only tangentially. In other words, this is a pre-implementation brain dump which will be referenced while building the first prototype. In my experience, with the ideas for the internal architecture laid out, the first implementation doesn't take long to actually convert into code.

Big Ideas
Flow is about workflow management. A "discussion" is a type of workflow - a very simple one.

Templates are bad, mmkay
In many cases templates are used to encourage workflow within local wikis. The goal for the various workflow models is to be dynamic enough to be managed from the wiki (by local administrators) to cover the use cases currently handled by workflow suggestions inside templates. In other words, Flow will implement a whole bunch of Lego pieces, and the individual communities will stick them together into the various workflows they need.

WorkflowObject
A globally unique identifier for an individual instance of a workflow. A discussion topic is a FlowObject instance. A request for deletion of page Abc is a FlowObject instance. Etc.
 * GUID
 * Home wiki?
   * Only needed if we decide to have a single combined Flow that supports multiple wikis.
   * May not be needed even then.
 * Home page
   * Essentially where this flow is accessed from / was created
 * createDate
 * lastUpdateTime
 * creatorID
 * title/name
   * Possibly tied into Wikidata for automatic localization?
 * summaryText
 * lockState
   * Enumeration, not boolean. Allows for multiple states beyond locked/unlocked.
 * workflowModel
 * contentLanguage
   * Workflows may occur in different languages; this can help act as a filter for workflows a user can understand.
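The field list above can be sketched as a record. Python is used here purely for illustration (the eventual implementation would be PHP inside MediaWiki); field names are taken from the list and everything remains subject to change:

```python
import enum
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

class LockState(enum.Enum):
    # Enumeration rather than a boolean, per the lockState note above.
    OPEN = "open"
    LOCKED = "locked"

@dataclass
class WorkflowObject:
    guid: uuid.UUID = field(default_factory=uuid.uuid4)
    home_page: str = ""   # where this flow is accessed from / was created
    create_date: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    last_update_time: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    creator_id: int = 0
    title: str = ""
    summary_text: str = ""
    lock_state: LockState = LockState.OPEN
    workflow_model: str = "discussion"  # selects the metadata/model classes
    content_language: str = "en"        # filter for workflows a user can understand
```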

Considerations
For example, if an open (based on lockState?) request for deletion exists on a specific article, a new one cannot be opened. If a closed RFD exists for a page it should possibly be linked from a newly opened RFD for context. Various restrictions like this need to be definable in a generic way by the wiki community. The prototype implementation may hard-code these types of considerations, to be fleshed out at a later point.

Why GUID?

 * First step to enabling horizontal scalability by removing dependence on a central sequence to give out id numbers
 * Assuming a somewhat even distribution of GUIDs, we can bucket and horizontally shard by GUID (more on this towards the end of the document)
 * The flow object and flow metadata will be used more like a key/value store mapping from GUID to the appropriate hash of information. The relational aspect is limited to the WorkflowModels.
 * Not decided yet, could certainly use standard auto-increment ids

WorkflowMetadata
Each flow object maps to some type of workflow metadata. The metadata is specific to the type of workflow that is being worked with. The specific type of metadata to expect (and therefore the type of Metadata and Model objects to use) is defined by the workflowModel field of the FlowObject.

The various models need to be programmable such that local wikis can use them as they need and not be locked into pre-programmed ideas. Templates currently allow a great deal of flexibility with no automated enforcement. These models must represent a middle ground between template flexibility and pre-programmed strict workflows.


 * 1 to 1 mapping between FlowObject and the workflow metadata
   * Use the same GUID for both?
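Since the workflowModel field determines which metadata class to use, one plausible mechanism is a simple registry mapping model names to classes. This is a sketch only; the class names echo the sections below, and the registry pattern is an assumption, not a decided design:

```python
# Map the workflowModel value stored on the FlowObject to the
# metadata class that knows how to interpret that workflow's data.
METADATA_CLASSES = {}

def register_model(name):
    """Class decorator registering a metadata class under a model name."""
    def wrap(cls):
        METADATA_CLASSES[name] = cls
        return cls
    return wrap

@register_model("discussion")
class FlowDiscussion:
    pass

@register_model("rfd")
class FlowRequestForDeletion:
    pass

def metadata_class_for(workflow_model: str):
    """Resolve a FlowObject's workflowModel field to its metadata class."""
    return METADATA_CLASSES[workflow_model]
```

Local wikis could then add workflow types by registering new models rather than patching core code, which is the "Lego pieces" idea above.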

Possible workflows to support include but are not limited to:
 * 2 way user conversation (user talk page owner<->talker)
 * 3 way user conversation (talk page any<->any)
 * Request for deletion
 * Request for adminship
 * General consensus discussions
 * Village Pump, Forum, etc.
 * AN/I
 * Help desk
 * Barnstars/Wikilove (and other templates of this variety)
 * Block Notices (you've been blocked, click this button to appeal)
 * moar

FlowDiscussion

 * 1 to Many: FlowPostSingular

FlowRequestForDeletion

 * Reason for request
 * 1 to Many: FlowEnumeratedLines

FlowBlockNotice

 * Block Reason
 * Functional Elements
   * Button for 'appeal this block'
   * Completely dynamic and described by a 'Workflow Description Language', but not a part of the initial prototype implementation.
 * 1 to Many: FlowPostSingular

WorkflowModel
The models represent an action performed by a single user. This could be a reply to a message or a vote in a consensus discussion, etc.

There will typically be a 1 to Many relationship between a WorkflowObject and WorkflowSupportObjects. Support objects model actions by individual users like discussing a topic, voting on an RFD, etc.

FlowEnumeratedLines

 * flowGUID
 * Enum value (e.g. vote yes/no, etc.)
 * text
 * comment

FlowPostSingular

 * flowGUID
 * createdByUserId
 * replyToFlowPostSingularId?
 * revision
 * content
 * summary
 * etc.

FlowPostSingular really forms a DAG (directed acyclic graph). Alternatively, a tree is a special case of a DAG that likely works for this use case.
 * Are there any benefits to using a DAG over a tree?
   * I don't know.
 * In the mathematical sense a tree is an undirected graph in which any two vertices are connected by exactly one simple path.
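In the tree case, the replyToFlowPostSingularId field is a parent pointer, and a thread can be reconstructed in memory from the flat list of posts. A minimal sketch (Python for illustration; the tuple shape is invented):

```python
from collections import defaultdict

def build_thread(posts):
    """posts: iterable of (post_id, parent_id) pairs, parent_id None for
    top-level posts. Returns (roots, children) where children maps a
    post id to the ids of its direct replies."""
    children = defaultdict(list)
    roots = []
    for post_id, parent_id in posts:
        if parent_id is None:
            roots.append(post_id)
        else:
            children[parent_id].append(post_id)
    return roots, children

roots, children = build_thread(
    [("a", None), ("b", "a"), ("c", "a"), ("d", "b")]
)
# roots == ["a"]; children["a"] == ["b", "c"]; children["b"] == ["d"]
```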

Possible methods of storing graphs/trees in sql:


 * Moved to Flow_Portal/Architecture/Discussion_Storage

What happens to content the article maintainers wish to display above a talk page, like guidelines or whatever?
There will be an "unstructured" area to handle that. In an ideal world we wouldn't need it (maintenance templates would become tags, etc.), but we are where we are. Eventually we will want to remove this unstructured area, but it seems like a reasonable middle ground.

Possibly within the discussion, as opposed to at the very top, users may wish to have a scratchpad where multiple users can tweak a bit of wiki text that is intended to be added to the article being discussed or some such. This should be gracefully handled.

Where does a post's content actually get stored?
FlowPostSingular should probably just be metadata about the post. The literal post content should likely be stored in either wiki markup or the Parsoid DOM representation of the wiki markup. It should be stored in a separate table, perhaps comparable to revisions?


 * In LiquidThreads the posts are stored as literal wiki pages within the main wiki master database
   * This is not scalable. Loading more and more things onto a single master database just spells doom and gloom for everyone.
   * Horribly slow. Pages are parsed on the way out every time.
     * The individual pages shouldn't be parsed every time; the parser cache should hold onto the HTML for 30 days.
     * If LT is just reading from the parser cache, and it's way too slow, that means simply that we cannot store the HTML per posting in SQL. Some sort of optimizations would need to be applied that cache at a higher level.
     * It is counterintuitive that the parser cache wouldn't work here; some investigation is necessary into what exactly makes LQT slow.


 * Duplicate the wiki page table structure into an independent master
   * Likely impossible from a code perspective. The literal names of tables, and the question of which database connection to use, are decided hundreds of times throughout the code base and can't easily be swapped around depending on where we want to load pages from.


 * Storing post data as pre-parsed HTML may be faster to assemble on the fly

A Users Flow (or feed of interesting things)
All flow objects will be subscribable by any number of users. Generating a user's flow is a sort over the subscribed flow objects, ordered by last updated date.
 * SQL is going to hate you; MySQL can't answer this with a single index
 * Real answer: Echo
   * Echo already has everything in place for generating a flow; we just need to fire the events so Echo knows about them and remembers them

Suppressed Revisions
Need comparable functionality
 * Could use real wiki pages ala LiquidThreads
 * Could implement a 'work alike' along with some sort of interface to generalize the current suppressed revision code (likely rather difficult)

Possible caching ideas / issues to keep in mind

 * In an ideal world we can fetch the rendered HTML fragments without having to ask the main db about the actual content
 * Can store the adjacency list for each top-level post (a FlowPostSingular with no parent id) in memcache to prevent the recursive query inside the db
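The adjacency-list caching idea could look roughly like this. A sketch only: memcache is stubbed with a dict, and the key scheme and function names are invented for illustration:

```python
cache = {}  # stand-in for memcache

def get_adjacency_list(root_guid, load_from_db):
    """Fetch the reply adjacency list for a top-level post, falling back
    to the database (the recursive query) only on a cache miss."""
    key = f"flow:adjlist:{root_guid}"
    adjacency = cache.get(key)
    if adjacency is None:
        adjacency = load_from_db(root_guid)
        cache[key] = adjacency  # real memcache would set a TTL here
    return adjacency
```

The cache entry would need to be invalidated (or updated in place) whenever a reply is added under that top-level post.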

So you want to find a talk page
MediaWiki should work great without Flow. After installing the Flow extension it should continue to work great, and the talk pages will be Flow discussions.

Currently you take the title of the page and look it up within the NS_TALK namespace. With flow we need something different, but conceptually similar?

Old talk pages: should the current urls still point to the old talk pages, or should they move, with Flow replacing them at those urls?
 * Current talk page comments likely(?) link to Talk:Something directly. For best results they should go to the new Flow discussions?
 * If so, then there is also Talk:Something, and we would like to continue pointing to the correct data.
 * Same concern as the two points above, but with urls from the internet at large. Do talk pages get linked directly from outside with any frequency?

Talk page urls:
 * An article in NS_MAIN: /wiki/Talk:Volcano
 * An article in any other NS: /wiki/NameOfNamespace_Talk:Volcano

New flow urls for prototype:
 * An article in NS_MAIN: /wiki/Flow:Volcano/Talk
 * An article in any other NS: /wiki/NameOfNamespace_Flow:Volcano/Talk
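A naive title-to-URL mapping for the prototype scheme above might be sketched like this (illustration only; real routing would go through MediaWiki's title handling, and the hardcoded /Talk suffix is exactly what the next section questions):

```python
from typing import Optional

def flow_url(title: str, namespace: Optional[str] = None) -> str:
    # NS_MAIN articles get the bare "Flow:" prefix; articles in any
    # other namespace get "NameOfNamespace_Flow:" per the examples above.
    prefix = "Flow" if namespace is None else f"{namespace}_Flow"
    return f"/wiki/{prefix}:{title}/Talk"

flow_url("Volcano")           # "/wiki/Flow:Volcano/Talk"
flow_url("Volcano", "User")   # "/wiki/User_Flow:Volcano/Talk"
```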

Mapping URLs to Flow objects
The path Flow:Volcano/Talk should not be some super special case; there should be some way to attach specific types of flows to default paths. Visiting the page when nothing currently exists must work much like the current system, where the user is given the opportunity to create that specific thing. While we may initially hard-code /Talk into the Flow prototype, it would be much better if the mappings from a name to some defaulted type of flow object were managed on-wiki by the community. Additionally, i18n and l10n considerations need to be taken into account, as not every wiki uses the word 'Talk' for its talk pages. Mostly this matters in relation to GUID generation and what we use as the GUID namespace/name.

How to move forward? Unsure at this juncture. Possibly the prototype will defer this solution for later and hardcode.

Performance Considerations
How much data will Flow need to store? For estimation purposes, Wikipedia Statistics shows there are approximately 26M articles across all wikis. Not all of these will have talk pages, but many will. Assuming they all have talk pages ranging from just a post or two to a couple thousand posts on the largest, we should expect a lower bound of perhaps 100M individual replies that will need to be stored in perhaps 20M separate discussion graphs. If each reply consumes 1 KB of space, that puts a lower bound of at least 100 GB of post data.

And that is just the discussions, flow will need to handle many more workflows than just discussions.

To help get an idea of the space required, I applied the EchoDiscussionParser to one of the enwiki database dumps ( enwiki-latest-pages-meta-current10.xml-p000925001p001325000 ). Within this file it detected:


 * 43952 pages in either Talk: or *_Talk: namespaces
 * 211904 individual section headers
 * 514916 user signatures

That works out to an average of:


 * ~5 sections per talk page
 * ~2.4 signatures per section
 * ~12 signatures per talk page
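The averages follow directly from the raw counts above:

```python
pages = 43952       # pages in Talk: or *_Talk: namespaces
sections = 211904   # individual section headers
signatures = 514916 # user signatures

print(round(sections / pages))      # ~5 sections per talk page
print(round(signatures / sections, 1))  # ~2.4 signatures per section
print(round(signatures / pages))    # ~12 signatures per talk page
```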

These pages were of course built up over time, but they give us a general idea of the size of the problem we need to handle. It may be worthwhile to re-run this code and split the stats between article talk pages and user talk pages. There will be several orders of magnitude more article talk pages than user talk pages, so if they have different characteristics that may be useful information.

Horizontally sharding the master database
With GUIDs we could, for example, divide the GUID key space into some future-proof number of buckets (4096?). At first all buckets could be assigned to a single master db, but as things grow buckets could be distributed across multiple masters.
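The bucketing scheme could be as simple as reducing the GUID into a fixed key space and looking up which master currently owns that bucket. A sketch under the assumptions above (4096 buckets, one initial master; all names invented):

```python
import uuid

NUM_BUCKETS = 4096

# Initially every bucket maps to a single master; as data grows,
# ranges of buckets can be reassigned to newly added masters.
bucket_to_master = {b: "db-master-1" for b in range(NUM_BUCKETS)}

def bucket_for(guid: uuid.UUID) -> int:
    """Reduce a GUID into the fixed bucket key space."""
    return guid.int % NUM_BUCKETS

def master_for(guid: uuid.UUID) -> str:
    """Find the database master currently responsible for this GUID."""
    return bucket_to_master[bucket_for(guid)]
```

Because the bucket count is fixed up front, moving a bucket between masters never changes which bucket a GUID hashes to, only where that bucket lives.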

If we horizontally shard by GUID, a user's flow (essentially a timeline?) cannot easily be built without querying all possible masters. Long-tail distribution basically says this will always be slow, especially since PHP does not support async queries.
 * Would likely need to pre-build the timelines as they happen (like Twitter?) rather than using a join query between the user's subscribed flows and the active flows sorted by the flows' most recent update.
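The pre-built-timeline idea (fan-out on write, the Twitter-style approach) might be sketched like so. All names and structures here are invented for illustration:

```python
from collections import defaultdict

subscribers = defaultdict(set)   # flow GUID -> user ids subscribed to it
timelines = defaultdict(list)    # user id -> list of (timestamp, flow GUID)

def subscribe(user_id, flow_guid):
    subscribers[flow_guid].add(user_id)

def on_flow_updated(flow_guid, timestamp):
    # Fan out on write: push the update into each subscriber's
    # pre-built timeline, so reading a timeline never has to
    # query across all the sharded masters.
    for user_id in subscribers[flow_guid]:
        timelines[user_id].append((timestamp, flow_guid))

subscribe("alice", "flow-1")
on_flow_updated("flow-1", 100)
# timelines["alice"] == [(100, "flow-1")]
```

The trade-off is classic: writes get more expensive (one timeline append per subscriber) so that timeline reads stay cheap and local.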

Caching possibilities
Too early to really know anything about what this will look like. Profile, Review, Cache as required.

Crazy Idea
One Flow instance, many wikis. What if Flow were an independent wiki used as a service by other wikis? This is mostly brainstorming, and probably too complex to tackle effectively. Ideas here could possibly propagate into the main implementation ideas if viable.

Benefits:
 * The same flow can be referred to from any wiki, for example Commons and en.wiki
 * The database can be independent from any specific wiki
 * All wikis share the benefit of the ops team's work to provide sharded database support.
 * It could (possibly) replace the mapping of url -> page, giving Flow full control of its URL structure.

Downsides:
 * cross-site javascript requests?
 * Difficulty for the project to be used in general mediawiki installs outside wikimedia?
 * Many, many more that I probably don't know yet

Questions:
 * How would you refer to a flow from a different wiki via markup?
 * What do the urls look like for those flows? Are they fetched from the central flow instance, or do users talk to the wiki and the wiki talks to flow?
 * If Flow is independent, how does Flow notify Echo to generate notifications?

Inconsequential Things to Consider (or not)

 * use php namespaces?
 * one class per file?

Reference Material

 * en.wiki info about flow
 * Flow Portal
 * Experienced User Responses