Flow/Architecture/Internals



data models

 * Flow data models manage their own state; there are no setter methods. For example, to generate a new revision that is a reply to a specific post, you call $post->reply( ... ) rather than new'ing up your own object and setting its state externally.
 * Model the problem domain, and do not directly interact with the database beyond converting between database rows and domain objects
 * This simplifies tests: domain state can be created in tests without touching any global state.
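Flow itself is PHP, but the pattern is language-agnostic. As a hedged sketch (the class name, fields, and method signature here are invented for illustration, not Flow's actual API), an immutable domain object with a factory-style reply method might look like:

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass(frozen=True)
class PostRevision:
    author: str
    content: str
    rev_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    reply_to: Optional[str] = None

    def reply(self, author: str, content: str) -> "PostRevision":
        # A reply is a brand-new object linked to this revision; callers
        # never mutate an existing revision's fields from outside.
        return PostRevision(author=author, content=content, reply_to=self.rev_id)

# In a test, domain state is built without touching the database or any globals:
root = PostRevision(author="spage", content="Topic title")
child = root.reply(author="spage", content="First reply")
assert child.reply_to == root.rev_id
```

Because the object creates its successors itself, tests never need to reach into its internals or set up global state first.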

code resources

 * container.php provides resources.
 * container.php is the configuration; includes/Container.php is the implementation class
 * centralizes all access to global variables in one file instead of spreading it across the application
 * Global variables are provided to objects via constructor or setter injection
 * centralizes service object (as opposed to domain model) instantiation, providing one place to look while refactoring
 * uses the third-party Pimple library to provide lazily-evaluated closures
 * The container should not be accessed statically, but it is in a variety of places within Flow. We need to evaluate and remove these where possible.
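The core Pimple idea — services registered as closures and evaluated lazily on first access, with constructor injection of other services — can be sketched in Python (this loosely mirrors Pimple's array-style API; it is not the real implementation):

```python
class Container:
    """Minimal sketch of a Pimple-style lazy service container."""

    def __init__(self):
        self._factories = {}
        self._instances = {}

    def __setitem__(self, name, factory):
        # Registering a service stores only the closure; nothing runs yet.
        self._factories[name] = factory

    def __getitem__(self, name):
        # The closure runs the first time the service is requested,
        # and the result is shared thereafter.
        if name not in self._instances:
            self._instances[name] = self._factories[name](self)
        return self._instances[name]

c = Container()
c['db'] = lambda c: object()                   # stand-in for a connection factory
c['storage'] = lambda c: ('storage', c['db'])  # constructor injection of 'db'
assert c['storage'][1] is c['db']              # both see the same shared 'db'
```

Defining everything as closures in one file gives the "one place to look" property noted above, while deferring all instantiation cost until a service is actually used.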

actions
Likewise, the top-level FlowActions.php is the configuration, and includes/Actions.php is the implementation class.

This configures how different actions work, e.g.
 * moderation actions get logged in Special:Log (log_type)
 * nearly everything gets written to RecentChanges (rc_insert)

permissions: if no permissions entry is present for an action, it's blocked altogether
 * there is currently no support for moderation of the header

Links (for queries) and actions (for writes) are outputs. The configuration says: if the change type of a revision is this, then the API output will include URLs for the specified actions. E.g. the response to create-header outputs these links. Even though e.g. 'board-history' isn't used, the page chrome adds the ?action

API post parameters
I edited the header of a new board (not on ), adding some links. After I solved the captcha, the POST was:
 * action=flow
 * ehcontent=Second try: Creating a header just to see what the API hands back. Links to User:spage
 * format=json
 * page=User_talk:JunkTest2
 * submodule=edit-header
 * token=34d7a48e72f868bcd5a6f98372796be6+\
 * wpCaptchaId=NNN
 * wpCaptchaWord=SEKRIT

API response
The response body, reformatted, for this action is below. Note how it has the URLs for the links and actions that FlowActions.php specifies; the front-end uses (some of) these to create links and buttons in the UI. The  key's HTML, reformatted, is below; this is the HTML that the front-end code inserts into the page. Note that the data-parsoid attribute from Parsoid can change at any time, while data-mw contains stable, repeatable data. The data-parsoid attribute should generally be ignored.

Caching
There are multiple layers of caching:
 * Varnish caches full HTML pages of boards and topics for anonymous users. These are purged whenever actions are performed against a related board or topic.
 * Flow extends a BagOStuff implementation to remember the keys that have previously been requested; this means requesting the same cache key a second time does not incur a network round trip.
 * Flow additionally extends that BagOStuff implementation to offer transactional-like writes to the cache. Specifically:
   * While inside a transaction, all write commands to the BagOStuff are deferred by appending them to an array
   * When the transaction completes successfully, all deferred write commands are flushed to memcached
   * If any of those write commands fail, all cache keys that were already written out are deleted so that they properly repopulate on read
   * If the transaction does not complete, the deferred commands are discarded
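The transactional-like write behaviour can be sketched as follows (class and method names here are invented; the real code is a PHP BagOStuff subclass):

```python
class BufferedCache:
    """Sketch: defer cache writes inside a transaction; flush on commit,
    delete already-written keys if any flush fails, discard on rollback."""

    def __init__(self, backend):
        self.backend = backend   # a dict stands in for memcached
        self.buffer = None       # None = not inside a transaction

    def begin(self):
        self.buffer = []

    def set(self, key, value):
        if self.buffer is not None:
            self.buffer.append((key, value))   # deferred until commit
        else:
            self.backend[key] = value

    def commit(self):
        written = []
        try:
            for key, value in self.buffer:
                self.backend[key] = value
                written.append(key)
        except Exception:
            # A write failed: delete keys already written out so they
            # properly repopulate on the next read.
            for key in written:
                self.backend.pop(key, None)
            raise
        finally:
            self.buffer = None

    def rollback(self):
        self.buffer = None   # transaction did not complete: discard

cache = BufferedCache({})
cache.begin()
cache.set('k1', 'v1')
assert 'k1' not in cache.backend   # still deferred
cache.commit()
assert cache.backend['k1'] == 'v1'
```

Deferring the writes keeps the cache consistent with the database: nothing hits memcached until the surrounding database transaction has succeeded.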
 * Flow has a concept of Indexes stored within a BagOStuff. This is all driven by the code found in the  namespace.
   * Indexes receive individual database rows; each Index instance buckets those rows based on a set of column names provided in its constructor
   * Individual buckets are maintained through BagOStuff::merge operations, which retrieve the current cache value of the matching bucket and insert/remove the provided row before writing the data back out.
   * Indexes are updated in-process after write operations to keep them in a consistent state
 * There are two primary kinds of Index: UniqueIndex and TopKIndex
   * There is one UniqueIndex per domain model; it provides a direct lookup from the domain model's primary key to the database row representing that model.
   * TopKIndex buckets database rows by a set of columns provided in the constructor, and is typically used to answer queries like 'first 100 posts on board X'.
   * TopKIndex utilizes the  interface. Using this interface, a TopKIndex can be backed by a UniqueIndex, so a single bucket of the TopKIndex holds only a list of primary keys. At query time that list of primary keys is resolved into the original database rows by querying the related UniqueIndex.
   * This is done to ensure consistency: each domain model has a single representation within the cache, and other parts of the cache just point to that single representation.
 * There are additionally some special-case Indexes extending TopKIndex
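A hedged sketch of the two index kinds (names and structures are simplified; the real TopKIndex also sorts by configured options rather than simply truncating):

```python
class UniqueIndex:
    """One per domain model: primary key -> full database row."""

    def __init__(self):
        self.rows = {}

    def insert(self, row):
        self.rows[row['id']] = row

    def find_multi(self, ids):
        return [self.rows[i] for i in ids]

class TopKIndex:
    """Buckets rows by a set of columns; each bucket stores only primary
    keys, resolved through the UniqueIndex at query time."""

    def __init__(self, unique, bucket_columns, k=100):
        self.unique = unique
        self.bucket_columns = bucket_columns
        self.k = k
        self.buckets = {}

    def insert(self, row):
        key = tuple(row[c] for c in self.bucket_columns)
        bucket = self.buckets.setdefault(key, [])
        bucket.append(row['id'])
        del bucket[self.k:]   # keep only the first K entries

    def find(self, **query):
        key = tuple(query[c] for c in self.bucket_columns)
        return self.unique.find_multi(self.buckets.get(key, []))

unique = UniqueIndex()
topk = TopKIndex(unique, ['board'], k=100)
for i in range(3):
    row = {'id': i, 'board': 'X', 'content': 'post %d' % i}
    unique.insert(row)
    topk.insert(row)
assert [r['id'] for r in topk.find(board='X')] == [0, 1, 2]
```

Storing only primary keys in the TopKIndex buckets is what gives each row a single cached representation: updating the row in the UniqueIndex is visible through every bucket that points at it.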

All of this is in includes/data/; ObjectManager drives it.

From a list of queries:
 * determine the cache keys
 * if all the keys are found in the cache, return the results
 * otherwise go to the "backing store", the database.
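That lookup flow, sketched in Python (make_key and backing_store are invented stand-ins, not Flow's actual functions):

```python
def make_key(query):
    # Derive a deterministic cache key from the query conditions.
    return tuple(sorted(query.items()))

def find_multi(queries, cache, backing_store):
    keys = [make_key(q) for q in queries]
    if all(k in cache for k in keys):          # every key found: return from cache
        return [cache[k] for k in keys]
    results = []
    for key, query in zip(keys, queries):      # otherwise hit the backing store
        if key not in cache:
            cache[key] = backing_store(query)  # repopulate for next time
        results.append(cache[key])
    return results

db_calls = []
def backing_store(query):
    db_calls.append(query)
    return ['row for %s' % (sorted(query.items()),)]

cache = {}
first = find_multi([{'board': 'X'}], cache, backing_store)
second = find_multi([{'board': 'X'}], cache, backing_store)
assert first == second and len(db_calls) == 1  # second call served from cache
```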

The posts that are output map to memcached keys according to index classes, e.g. includes/Data/Index/FeatureIndex.php

ObjectManager objects implement either Find or FindMulti

rowCompactor removes keys you don't need.

Tries to reduce the number of joins so that we can eventually shard the data.

Example
A topic list query: includes/Block/TopicList.php

In general, it's split into two parts: a query and a formatter. TopicList also has a paginator that decides what the list is.

Talk:Flow has a workflow identifier, and each topic has a workflow identifier.
 * One memcached key holds the topics in the page
 * trim to a slice of 10 of these
 * do a multi-get to fetch those 10 topics
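The topic-list read path could be sketched as (key names are invented, and the multi-get is simulated with single lookups here):

```python
def topic_page(board_id, cache, offset=0, limit=10):
    # One cached list of topic workflow ids per board.
    topic_ids = cache['topic_list:' + board_id]
    # Trim to a slice of 10 of these.
    page = topic_ids[offset:offset + limit]
    # The real code issues a single multi-get for the whole slice.
    return [cache['topic:' + t] for t in page]

cache = {'topic_list:board1': ['t%d' % i for i in range(25)]}
for i in range(25):
    cache['topic:t%d' % i] = {'id': 't%d' % i}

page = topic_page('board1', cache)
assert len(page) == 10
assert page[0]['id'] == 't0'
```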

FormatterRow holds all the information. It may turn out that some of it can't be viewed by the user; RevisionFormatter will remove those. Then the pager will

To debug what is cached
The purge.php script performs the query, but swaps the HashBagOStuff for the memcached BagOStuff, forcing a repopulation; you can then examine the keys.

Future
Store entire topics, but then we would have to filter out moderated content the current user isn't supposed to see.

Listeners
Updates of links, IRC updates, etc. are handled by listeners on

Parsoid directory
Extractors find references in the Parsoid output.

Core doesn't know about Parsoid output, so we have to get content (links, etc.) out of Parsoid and hand it to core. Eventually our code should be useful to the Parsoid extension.