Requests for comment/Workflows editable on-wiki

This is a draft overview of a potential design for the workflow description system. It is written for an audience of MediaWiki system architects and workflow description authors.

Stakeholders

 * Editor community: Replace ad-hoc on-wiki processes with a well-defined workflow, while leaving it customizable. They need to describe processes in a way that is readable and easy to change.
 * Extension authors: Tools like UploadWizard could be made customizable.
 * Fundraising: We need a formal and verifiable way of enforcing rules about how we handle donations.
 * Product designers: We need to have a consistent user experience across wikis, while accounting for necessary local differences in workflows.

Alternatives
Less intrusive tools to help with on-wiki processes. For example, parser functions to set a 7-day reminder alarm, or queue managers to help with pages that list work items.

Considerations

 * This approach might seriously damage wiki process discussions, by adding a layer of arcana that only tech wizards can manipulate.
 * The machine-executable state machine is rarely the simplest and most readable way to explain a process. Transcription will introduce weird complications like parallel subprocesses and extra states.
 * Documentation must be kept in sync with the implementation.

Architecture decisions
Don't expose another Turing-complete DSL! Workflow customizability is entirely defined by the supporting code (engine and implementation). The idea is that we are creating a DSL (like Forth ;) which we use to write very concise descriptions of workflow solutions to a small set of problems.

Complex workflows should always be decomposed into a set of smaller, self-contained workflows, which can be executed in parallel or in sequence.

No job will change state unless it's following a defined transition. There is no UI tool which will set a job to an arbitrary state.

Restricting "actions" to only unconditional processing done on entry to a state simplifies the state machine graph so that it can always be trivially transformed to a Petri net. These are both pretty and easy to understand.
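A minimal sketch of what this restriction buys us, in Python with illustrative names (this is not the real engine API): actions are fired exactly once, on entry to a state, and a job can only move along a defined transition.

```python
# Sketch: actions run unconditionally on state entry; state changes
# happen only through named transitions. All names are illustrative.

class Job:
    def __init__(self, spec, start):
        self.spec = spec      # {state: {"actions": [...], "transitions": {...}}}
        self.state = None
        self.log = []
        self._enter(start)

    def _enter(self, state):
        self.state = state
        # Entry actions are the ONLY place processing happens.
        for action in self.spec[state].get("actions", []):
            action(self)

    def signal(self, name):
        # No UI tool can set an arbitrary state: only defined
        # transitions are allowed to fire.
        transitions = self.spec[self.state].get("transitions", {})
        if name not in transitions:
            raise ValueError(f"no transition {name!r} from {self.state!r}")
        self._enter(transitions[name])

spec = {
    "Start": {
        "actions": [lambda job: job.log.append("queued")],
        "transitions": {"open": "Discussion"},
    },
    "Discussion": {"actions": [], "transitions": {}},
}

job = Job(spec, "Start")
job.signal("open")
```

Because every state reduces to "entry actions, then wait for a labeled transition," the graph maps directly onto Petri-net places and transitions.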

The configuration may be inlined with the spec, or stored as a separate file so that we can regulate user edit access independently.

State variables are strictly serializable so that jobs are recoverable, may be paged out, or migrated between servers.
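Strict serializability can be enforced mechanically. A sketch, assuming state variables live in a plain dict and JSON as the (hypothetical) wire format:

```python
import json

def checkpoint(job_state):
    # JSON round-trips only plain data. Sockets, file handles, or live
    # objects would make the job unrecoverable, so we let json.dumps
    # raise on anything it cannot serialize.
    return json.dumps(job_state, sort_keys=True)

def restore(blob):
    return json.loads(blob)

state = {"state": "Discussion", "revision": 12345, "alarm": "2011-06-08"}
blob = checkpoint(state)
```

Any job whose variables survive this round trip can be paged out, recovered after a crash, or migrated to another server.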

Only one signal can be queued at a time; the last signal sent wins. This is a tricky call. Machines must be self-stimulating, but I think the only case for a deeper signal stack would be "continue 2"-style evil magic, something like "make the next default action something different than the ordinary default." Not much of a case. Also, queue vs. stack behavior is a mind-rending paradox to even consider. As for a signal overriding any previous one, I think this is the behavior desired from a state with multiple actions.
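The single-slot, last-wins behavior is small enough to sketch directly (hypothetical names, not the real engine):

```python
class SignalSlot:
    """Holds at most one pending signal; the last signal sent wins."""

    def __init__(self):
        self._pending = None

    def send(self, name):
        # Overwrites any previously queued signal rather than stacking.
        self._pending = name

    def take(self):
        # Consume the pending signal, leaving the slot empty.
        name, self._pending = self._pending, None
        return name

slot = SignalSlot()
slot.send("keep")
slot.send("delete")   # overrides "keep"
```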

Exception handling
A specification may define global signals, which can be sent to a job in any state. This is a shortcut which expands to an implicit transition from every state in the workflow to a special exception state. An example would be a workflow in which the user can "cancel" at any step; the cancel signal fires cleanup processing and transitions to the exit node.
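The expansion itself is a simple spec rewrite. A sketch, with an assumed spec shape of `{state: {"transitions": {...}}}`:

```python
def expand_global_signals(states, global_signals):
    """Rewrite a spec so each global signal becomes an explicit
    transition from every state to its exception state."""
    expanded = {}
    for name, body in states.items():
        transitions = dict(body.get("transitions", {}))
        for signal, target in global_signals.items():
            # A state's own transition for the same signal wins, so a
            # workflow can override the global behavior locally.
            transitions.setdefault(signal, target)
        expanded[name] = {**body, "transitions": transitions}
    return expanded

states = {
    "Start": {"transitions": {"open": "Discussion"}},
    "Discussion": {"transitions": {"close": "End"}},
    "End": {"transitions": {}},
}
expanded = expand_global_signals(states, {"cancel": "Cleanup"})
```

Letting a per-state transition shadow the global one is an assumption on my part, but it gives workflow authors a local escape hatch.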

If a job becomes uncompletable for any reason, it should be flagged as permanently frozen, and cleanup performed outside of the workflow system. There is no "universal" exception mechanism to catch unexpected errors.

Transactionality
The easiest way to understand atomicity in workflows is to look at the steady state: the point at which a job has arrived in a state and processing stops. Every transition between these steady states should be atomic: it cannot be paused, and it will be rolled back in case of error.

Transitions will be protected by a database transaction.

The workflow implementation is responsible for guaranteeing that any effects outside of the database are rolled back if the transition fails.
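A sketch of the split in responsibilities, with a fake in-memory database standing in for the real one: the engine rolls back database state automatically, while the implementation supplies an `undo` for effects outside the database.

```python
import contextlib

class FakeDB:
    """Stand-in for a real database; rolls a dict back on error."""

    def __init__(self):
        self.rows = {}

    @contextlib.contextmanager
    def transaction(self):
        shadow = dict(self.rows)
        try:
            yield
        except Exception:
            self.rows = shadow  # roll back everything in this transaction
            raise

def transition(db, job_id, to_state, side_effect=None, undo=None):
    # The database half rolls back automatically; the implementation
    # must provide `undo` to reverse any outside effect itself.
    with db.transaction():
        db.rows[job_id] = to_state
        if side_effect is not None:
            try:
                side_effect()
            except Exception:
                if undo is not None:
                    undo()
                raise

db = FakeDB()
transition(db, "job-1", "Discussion")
```

If the side effect fails, the compensating `undo` runs, the database change is rolled back, and the job stays in its previous steady state.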

Asynchronous vs. synchronous states
States are asynchronous by default, meaning that the job will be paused after entering this state, and will remain in that state until receiving a new signal. A synchronous state is one in which the implementation provides a callback which runs upon entering this state. This callback will usually perform processing, and then send its own job a signal.
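A sketch of the driving loop this implies (illustrative names): the engine keeps advancing as long as a synchronous state's callback produces a signal, and parks the job at the first asynchronous state.

```python
def run(job, spec, signal):
    """Advance the job until it parks in an asynchronous state."""
    while signal is not None:
        job["state"] = spec[job["state"]]["transitions"][signal]
        callback = spec[job["state"]].get("callback")
        # No callback means the state is asynchronous: stop here and
        # wait for an external signal.
        signal = callback(job) if callback else None
    return job

spec = {
    "Start": {"transitions": {"open": "Tally"}},
    # Synchronous state: does its processing, then signals its own job.
    "Tally": {"transitions": {"done": "Closed"},
              "callback": lambda job: "done"},
    "Closed": {"transitions": {}},
}
job = run({"state": "Start"}, spec, "open")
```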

Versioning
Modifications to a production workflow are tricky, because there may be jobs in the queue already. Let's do something like semantic versioning, where a minor version change is merely informational, but a major version bump means that the engine must provide a migration to upgrade older jobs.
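Under that rule, deciding whether an in-flight job needs migration is just a major-version comparison. A sketch, assuming versions are dotted strings:

```python
def needs_migration(job_version, spec_version):
    """Semantic-versioning-style rule: a major bump means in-flight
    jobs must be migrated; a minor bump is informational only."""
    job_major = int(job_version.split(".")[0])
    spec_major = int(spec_version.split(".")[0])
    return spec_major > job_major
```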

Diagnostics
Jobs will be logged as they move through a workflow, including any signals or actions.
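One structured record per signal, action, or transition is enough for diagnostics. A sketch of a possible record shape (the field names are assumptions, not a decided format):

```python
import json

def log_event(job_id, event, detail):
    """Produce one structured log line per signal, action, or
    transition; in MediaWiki this would go to a log table or file."""
    entry = {"job": job_id, "event": event, "detail": detail}
    return json.dumps(entry, sort_keys=True)

line = log_event("afd-123", "signal", "open")
```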

Implementing a workflow
Components:
 * Implementation
 * Base specification
 * Default configuration
 * Customized specification and configuration

Example: Articles for Deletion
Overview:

Specification
  name: Articles for Deletion Queue
  implementation: ArticlesForDeletion
  states:
    Start:
      actions:
        # Append this article to the AfD discussion page, then signal "open"
        add_to_afd_queue
        # This is a soft keep. If a child workflow later acts on "delete_in",
        # the expiration date and default outcome will be overridden.
        keep_in: normal_grace_period
        # Takes a map from deletion tag name to workflow specification title.
        # A child workflow is begun, which can signal back to this machine.
        fork_on_tag:
          PROD: Proposed Deletion
          BLP-PROD: Proposed Deletion, Biographies
          CSD: Speedy Deletion Queue
          Copyvio: Copyright investigation
      transitions:
        open: Discussion
    Discussion:
      transitions:
        # There is logic in here to limit total time open to maximum_discussion.
        extend: Discussion
    # Proposed Deletion
    PROD:
      actions:
        # Only allow PROD once per article. On successive invocations,
        # automatically send a "keep" signal and wait for admin review.
        limit_jeopardy: 1
        scan_
        # Sets an alarm to run
        delete_in: normal_grace_period
      transitions:
        # signaled by the implementation when the template is removed from the article, or
        keep: Keep
        # delete: Delete
    Keep:
      actions:
        signal: review
    Delete:
      actions:
        signal: review
    Review:
      transitions:
        endorse: End
        reverse: End
  exceptions:
    early_renomination: Keep
  configuration:
    normal_grace_period: 7 days
    longer_grace_period: 10 days
    maximum_discussion: 21 days
    afd_queue_page: "Wikipedia:Articles for deletion"
    deletion_review_queue_page: "Wikipedia:Deletion review"
The AfD extension will hook on article save, and will check the article content for new deletion tags. If this condition is present, we instantiate a new AfD job with the new revision as its argument, and begin the workflow.

The workflow is split up into parallel and child workflows, a strategy that should be used liberally, everywhere. We use the same implementation for all specifications here out of laziness, but there are really three archetypes: discussion queue, provisional endorsement, and admin review.

Pages are wired to send the following signals to this workflow:
 * extend
 * keep
 * delete