Requests for comment/Workflows editable on-wiki

This is a draft overview of a potential design for the workflow description system. It is written for an audience of MediaWiki system architects and workflow description authors.

Stakeholders

 * Editor community: Replace ad-hoc on-wiki processes with well-defined workflows, making them transparent and consistent. We need to describe processes in a way that is readable and easy to change. Workflows can be customized per site, or even per user when desired.
 * Extension authors: Tools like UploadWizard could be made customizable.
 * Fundraising: We need a formal and verifiable way of enforcing rules about how we handle donations.
 * Product designers: We need to have a consistent user experience across wikis, while accounting for necessary local differences in workflows.

Alternatives
Less intrusive tools to help with on-wiki process: for example, parser functions to set a 7-day reminder alarm, or queue managers to help with pages that list work items.

Considerations

 * This engineered approach might seriously damage wiki process discussions, by adding a layer of arcana that only tech wizards can manipulate.
 * The machine-executable state machine is rarely the simplest or most readable way to explain a process. It can be thoroughly confusing, even when graphed as a picture. Also, capturing process descriptions will introduce odd-looking artifacts like parallel subprocesses and extra states.
 * Resource management might not be appropriate to implement in a library, and would then have to go into the core engine.
 * Documentation must be kept in sync with the implementation, especially with regard to the workflow description syntax.

Don't expose another Turing-complete DSL! Workflow customizability is entirely defined by the supporting code (engine and implementation). The idea is that with each workflow we are creating a small DSL (as in Forth), which we can use to write very concise descriptions of workflow solutions covering a small set of problems.

Complex workflows should always be decomposed into a set of smaller, self-contained workflows, which can be executed in parallel or in sequence.

No job will change state unless it's moving along a predefined transition. There will be no UI or admin tool which can set a job to an arbitrary state. Otherwise, we lose verifiability.
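As a minimal sketch (all names here are hypothetical, not the actual engine API), the engine's only state mutator could look up the declared transition table and refuse anything else:

```javascript
// Hypothetical sketch: jobs move only along transitions declared in the description.
const spec = {
  states: {
    Start: { transitions: { open: 'Discussion' } },
    Discussion: { transitions: { extend: 'Discussion' } }
  }
};

function signal(job, name) {
  const transitions = spec.states[job.state].transitions || {};
  if (!(name in transitions)) {
    // No UI or admin tool gets a back door around this check.
    throw new Error(`No transition "${name}" from state "${job.state}"`);
  }
  job.state = transitions[name];
  return job;
}

const job = { id: 1, state: 'Start' };
signal(job, 'open');        // ok: Start -> Discussion
// signal(job, 'delete');   // would throw: not a declared transition
```

Because the transition table is the only path to a state change, every job history is verifiable against the description.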

The unconditional processing done on entry to a state simplifies the state machine graph so that it can always be trivially transformed to a Petri net. This type of graph has the lovely properties of being both pretty and easy to understand.

Discuss: The configuration may be inlined with the spec, or stored as a separate file so that we can regulate user edit access independently.

State variables are strictly serializable so that jobs are recoverable, may be paged out, or may be migrated between servers. Data scope is still under discussion.
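Strict serializability means a job can be checkpointed with a plain JSON round trip. The snapshot shape below is an assumption for illustration, not a defined format:

```javascript
// Hypothetical job snapshot: everything a server needs to resume the job.
const snapshot = {
  jobId: 42,
  specVersion: 'sha256:abc123',   // placeholder for the version lock
  state: 'Discussion',
  variables: { article: 'Example', opened: '2012-01-01T00:00:00Z' }
};

// Page out / migrate: serialize...
const wire = JSON.stringify(snapshot);

// ...and recover, possibly on another server.
const recovered = JSON.parse(wire);
```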

Node/browser JavaScript implementation

Control flow
These sequence diagrams are examples of how the workflow system can be driven from MediaWiki extensions, and how user interactions take place.



Workflow state can be loaded directly and used like an ordinary variable. Here, a page is rendered from templated wikitext or a SpecialPage, and state variables are pulled in to generate the page content. The job is retrieved by ID, and perhaps the ID or a token (if the job does not belong to the current user) are embedded in the rendered content or stored in the session, to enable further user interaction.
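A sketch of that read path, with hypothetical function names standing in for the real extension code:

```javascript
// Hypothetical job store, keyed by job ID.
const jobs = new Map([
  [7, { id: 7, state: 'Discussion', variables: { article: 'Example' } }]
]);

function getJob(id) { return jobs.get(id); }

function renderPage(jobId) {
  const job = getJob(jobId);
  // In MediaWiki this would be templated wikitext or a SpecialPage; the job ID
  // is embedded in the output so later interactions can find the job again.
  return `Deletion discussion for [[${job.variables.article}]] ` +
         `(state: ${job.state}, job: ${job.id})`;
}

const page = renderPage(7);
```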



When a workflow defines consecutive user interactions, we need a mechanism for the workflow engine to determine what page is rendered, or where to redirect the browser.



MediaWiki can be used to schedule tasks which affect a workflow, for example a task which simply fires a signal at a given job_id. When the target workflow transitions, it may make calls back into MediaWiki.
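A sketch of such a scheduled task, assuming (hypothetically) that the task body does nothing but fire a named signal at a job_id when due:

```javascript
// Record of signals sent into the workflow engine.
const fired = [];
function sendSignal(jobId, name) { fired.push({ jobId, name }); }

// Hypothetical task queue entry: at its due time it just fires a signal.
const taskQueue = [
  { due: Date.parse('2012-01-08'), run: () => sendSignal(42, 'grace_period_expired') }
];

// Run by a cron-like job runner; tasks past their due time are executed.
function runDueTasks(now) {
  for (const task of taskQueue) {
    if (task.due <= now) task.run();
  }
}

runDueTasks(Date.parse('2012-01-09'));
```

When the target workflow then transitions, any callbacks into MediaWiki happen from within the transition, not from the scheduler.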

Extensibility
The extension points are:
 * Writing libraries that define new actions
 * Writing state machine descriptions, which may call actions
 * We could allow jobs to dependency-inject a new workflow engine core, as long as it complies with IStateMachine.
 * Hooks into and out of the engine. This is still a TODO, but also a must-have.

Asynchronous vs. self-stimulating states
States are asynchronous by default, meaning that the job will be paused indefinitely upon entering the state. The job will remain in that state until it receives another signal. A self-stimulating state provides itself with a signal.

Any user interaction step should guarantee that it will self-stimulate if granted control flow; see yieldSignal(signal). The engine should throw an exception if the machine enters an asynchronous wait during such a flow.
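A sketch of that contract, with a hypothetical onEntry callback standing in for yieldSignal: a self-stimulating state queues its own next signal on entry, and the driver throws if an interactive flow would otherwise park in an asynchronous wait:

```javascript
// Sketch: drive the machine until no self-stimulated signal is pending.
function runMachine(spec, job) {
  let pending = job.pendingSignal;
  while (pending) {
    job.state = spec.states[job.state].transitions[pending];
    pending = null;
    const entered = spec.states[job.state];
    // Self-stimulating states yield their own next signal (yieldSignal analogue).
    if (entered.onEntry) pending = entered.onEntry(job);
    if (!pending && entered.interactive) {
      throw new Error(`Interactive flow stalled in asynchronous state "${job.state}"`);
    }
  }
  return job;
}

const interactiveSpec = {
  states: {
    AskName: { transitions: { submitted: 'Validate' } },
    // Self-stimulating: validation immediately yields its own signal.
    Validate: { onEntry: () => 'valid', transitions: { valid: 'Done' } },
    Done: {}
  }
};

const job2 = runMachine(interactiveSpec, { state: 'AskName', pendingSignal: 'submitted' });
```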

Concurrency
The base workflow is single-threaded: it is either not running (sitting in an asynchronous state) or performing a transition transaction. Transitions are synchronized using an exclusive mutex for each job instance.
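A minimal sketch of the per-job mutex (names hypothetical; a real engine would block or queue rather than fail fast):

```javascript
// Set of job IDs currently inside a transition transaction.
const held = new Set();

// Each job instance gets an exclusive lock around its transitions.
function transition(jobId, fn) {
  if (held.has(jobId)) throw new Error(`job ${jobId} is already transitioning`);
  held.add(jobId);
  try { return fn(); } finally { held.delete(jobId); }
}

// A second transition on the same job cannot interleave with the first.
let reentered = false;
transition(42, () => {
  try { transition(42, () => {}); } catch (e) { reentered = true; }
});
```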

Multithreading and joins will be defined in a library.

Data
There is only one scope for data, at the job level. The default data storage is a map of keyed values, accessed using $workflow->getValue(key) and setValue. These state variables are backed by the database and behave transactionally across transitions. Configuration (statically defined in the workflow description) and state variables share the same namespace, so be aware that a state variable will override any identically named configuration value.
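The shared namespace can be modeled as a simple lookup with configuration as the fallback (a sketch mirroring the PHP accessors, not the actual implementation):

```javascript
// Sketch: state variables and configuration share one namespace;
// a state variable shadows an identically named configuration value.
function makeWorkflowData(configuration) {
  const variables = {};
  return {
    setValue(key, value) { variables[key] = value; },
    getValue(key) {
      return key in variables ? variables[key] : configuration[key];
    }
  };
}

const data = makeWorkflowData({ grace_period: '7 days' });
const fromConfig = data.getValue('grace_period');   // falls back to configuration
data.setValue('grace_period', '10 days');           // state variable now shadows it
const shadowed = data.getValue('grace_period');
```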

Each machine instance is solely responsible for its variables; there are no assumptions about the naming or structure of the data, and nothing in the engine will mess with your data.

Access Control
Workflows have rules regarding who can initiate and interact. Access control could potentially vary at state or signal granularity. Perhaps libraries should be whitelisted for use in workflow descriptions; are there potentially dangerous libraries?

Tokens may be issued for special variances, redeemable via API or on the server. Tokens have an expiration time.

Exception handling
A specification may define global signals, which can be sent to a job in any state. This is a shortcut which expands to an implicit transition from every state in the workflow to a special exception state. An example would be a workflow in which the user can "cancel" at any step, firing cleanup processing and transitioning to the exit node.
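The expansion is mechanical, as this sketch shows (the `globals` key is a hypothetical spelling for the global-signal declaration):

```javascript
// Sketch: a global signal is sugar for a transition from every state.
function expandGlobalSignals(spec) {
  const out = JSON.parse(JSON.stringify(spec));
  for (const [signal, target] of Object.entries(spec.globals || {})) {
    for (const state of Object.values(out.states)) {
      state.transitions = state.transitions || {};
      // Explicit per-state transitions win over the global default.
      if (!(signal in state.transitions)) state.transitions[signal] = target;
    }
  }
  return out;
}

const expanded = expandGlobalSignals({
  globals: { cancel: 'Cleanup' },
  states: {
    Start: { transitions: { open: 'Discussion' } },
    Discussion: {},
    Cleanup: {}
  }
});
```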

If a job becomes uncompletable for any reason, it should be flagged as permanently frozen, and cleanup performed outside of the workflow system. There is no "universal" exception mechanism to catch unexpected errors.

ACID
The easiest way to understand atomicity in workflows is to look at the steady state: the point at which a job has arrived in a state and processing stops. Every transition between these steady states must be atomic; the machine cannot be paused, and changes will be rolled back in case of error.

Each transition will be protected by a database transaction on the state variables, and any side-effects should be transactional as well. Library actions are responsible for guaranteeing that any side-effects are rolled back if the transition fails, even if it fails from another action. TODO: design the interface for actions with side-effects.

Versioning
Modifications to a production workflow are tricky, because there may be jobs in the queue already. The base behavior is that the system caches each revision of a workflow description, and jobs are version-locked to the description used to initiate them. Job migrations are always explicit, even when they are a no-op.

If no version is available (for instance, because the description came from the filesystem), then a SHA-256 hash of the description contents is used in its place.

Any workflow description that is used in production will be cached in the database indefinitely.

Diagnostics
Jobs will be logged as they move through a workflow, including any signals or actions. We could cache state variables in debug mode.

Implementing a workflow
Components:
 * Libraries - you will only have to write a new library for side effects that must be implemented in PHP.
 * Default descriptions - your base workflows, which may be extended downstream
 * Default configuration - reasonable default values

Description
name: Articles for Deletion Queue

# TODO: flesh out
#
# The AfD extension will hook on article save, and will check the article
# content for new deletion tags.  If this condition is present, we
# instantiate a new AfD job with the new revision as its argument, and
# begin the workflow.
#
# The workflow is split up into parallel and child workflows, a strategy
# that should be used liberally, everywhere.  We use the same implementation
# for all specifications here out of laziness, but there are really three
# archetypes: discussion queue, provisional endorsement, and admin review.
#
# Pages will be wired to send the following signals to this workflow:
#   extend
#   keep
#   delete
libraries:
  # Allows us to respond to user requests with a wiki page
  - WikiPages
  # Enables synchronous states by sending self a signal
  - SelfStimulating
  # Send self a signal in the future
  - Alarm
  # Tag pages and fork depending on existing tags
  - TaggedPage
  # Provides limit_jepoardy and delete_in
  - ArticlesForDeletion
states:
  Start:
    initial: true
    actions:
      # Append this article to the AfD discussion page, then signal "open"
      add_to_afd_queue
      # This is a soft keep. If a child workflow later acts on "delete_in",
      # the expiration date and default outcome will be overridden.
      keep_in: normal_grace_period
      # Takes a map from deletion tag name to workflow specification title.
      # A child workflow is begun, which can signal back to this machine.
      fork_on_tag:
        PROD: Proposed Deletion
        BLP-PROD: Proposed Deletion, Biographies
        CSD: Speedy Deletion Queue
        Copyvio: Copyright investigation
    transitions:
      open: Discussion

  Discussion:
    transitions:
      # There is logic in here to limit total time open to maximum_discussion.
      extend: Discussion

  # Proposed Deletion
  PROD:
    actions:
      # Only allow PROD once per article. On successive invocations,
      # automatically send a "keep" signal and wait for admin review.
      limit_jepoardy: 1
      scan_
      # Sets an alarm to run
      delete_in: normal_grace_period
    transitions:
      # signaled by the implementation when the template is removed from the article, or
      keep: Keep
      # delete: Delete

  Keep:
    actions:
      signal: review
    # No transitions, this is a final state

  Delete:
    actions:
      signal: review
    # No transitions, this is a final state

  Review:
    transitions:
      endorse: End
      reverse: End

exceptions:
  # Shoot us out of the state machine if premature renomination for deletion is demonstrated.
  early_renomination: Keep

# Constants to be customized
configuration:
  normal_grace_period: 7 days
  longer_grace_period: 10 days
  maximum_discussion: 21 days
  afd_queue_page: "Wikipedia:Articles for deletion"
  deletion_review_queue_page: "Wikipedia:Deletion review"