Requests for comment/Workflows editable on-wiki

This is a draft overview of a potential design for the workflow description system. It is written for an audience of MediaWiki system architects and workflow description authors.

Stakeholders

 * Editor community: Replace ad-hoc on-wiki processes with a well-defined workflow, rendering it transparent and consistent. We need to describe processes in a way that is readable and easy to change.  Workflows can be customized on a per-site basis, or even per-user if desired.
 * Extension authors: Tools like UploadWizard could be made customizable.
 * Fundraising: We need a formal and verifiable way of enforcing rules about how we handle donations.
 * Product designers: We need to have a consistent user experience across wikis, while accounting for necessary local differences in workflows.

Glossary

 * Action: Custom code called during job execution. All actions are defined in a library. Actions may accept arguments passed in the description. Actions can be associated with a transition, or be run upon entering a state.

 * Configuration: Constant inputs to the workflow that are edited on-wiki. These variables may be referenced in the description, or stored and loaded from action code.

 * Description: Workflow model and behaviors, expressed as nested attribute-value pairs. A description may be hosted and edited on-wiki, or stored on the filesystem. This data provides an outline of the workflow's actual states, transitions, and actions. Its contents are interpreted by the specific workflow implementation, so the exact details vary widely.

 * Engine: The MediaWiki extension which interprets and executes workflow specifications.

 * Library: PHP code defining a set of potential actions. These are broken into small components which can be included by a workflow.

 * Job: Active process that moves through a workflow. Jobs can be suspended and resumed during multitasking, or according to a schedule.

 * Signal: Event name sent to a job. The job will move along a transition of the same name. If there is no transition matching the signal, an error is raised.

 * State: A node in the workflow graph, in which a job may rest. Also refers to the string value identifying which state a job is in.

 * State variables: Job-specific data accessed from action code. This data must be serializable and have transactional behavior.

 * Transition: Named, unidirectional association (aka, an arrow) from one state to the next. When a signal of the same name is sent to a job at the transition's root state, the job moves along the transition.

 * Workflow: An overloaded term, usually referring to the business process we are modelling.
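To ground these terms, here is a minimal, illustrative Python sketch of how they fit together. (The real engine is a MediaWiki PHP extension; the class and method names below are invented for illustration only.)

```python
# Glossary terms in miniature: a job moves along named transitions when it
# receives a signal of the same name. Illustrative sketch, not the engine.

class Job:
    def __init__(self, description, initial_state):
        self.description = description  # {state: {signal: next_state}}
        self.state = initial_state      # string identifying the current state
        self.variables = {}             # job-specific state variables

    def signal(self, name):
        transitions = self.description.get(self.state, {})
        if name not in transitions:
            # No transition matching the signal: an error is raised.
            raise ValueError(f"no transition {name!r} from state {self.state!r}")
        self.state = transitions[name]

afd = Job({"Start": {"open": "Discussion"},
           "Discussion": {"keep": "Keep", "delete": "Delete"}}, "Start")
afd.signal("open")
afd.signal("keep")  # afd.state is now "Keep"
```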

Alternatives
Less intrusive tools could help with on-wiki process: for example, parser functions to set a 7-day reminder alarm, or queue managers to produce pages that list work items.

Considerations

 * This engineered approach might seriously damage wiki process discussions, by adding a layer of arcana that only tech wizards can manipulate.
 * The machine-executable state machine is rarely the simplest and most readable way to explain a process. It can be confusing as bloody hell, even when graphed as a picture.  Also, capturing process descriptions will introduce weird-looking artifacts like parallel subprocesses and extra states.
 * Resource management might not be appropriate to implement in a library, and would then have to go into the core engine.
 * Documentation must be kept in sync with the implementation, especially with regard to the workflow description syntax.

Don't expose another Turing-complete DSL! Workflow customizability is entirely defined by the supporting code (engine and implementation). The idea is that with each workflow we are creating a small DSL (like in Forth ;), which we can use to define very concise descriptions of workflow solutions covering a small set of problems.

Complex workflows should always be decomposed into a set of smaller, self-contained workflows, which can be executed in parallel or in sequence.

No job will change state unless it's moving along a predefined transition. There will be no UI or admin tool which can set a job to an arbitrary state. Otherwise, we lose verifiability.

The unconditional processing done on entry to a state simplifies the state machine graph so that it can always be trivially transformed to a Petri net. This type of graph has the lovely properties of being both pretty and easy to understand.

Discuss: The configuration may be inlined with the spec, or stored as a separate file so that we can regulate user edit access independently.

State variables are strictly serializable so that jobs are recoverable, may be paged out, or migrated between servers. Data scope is an open question.

Node/browser javascript implementation

Overview
Workflows are captured as text, on-wiki in the Workflow: namespace, or on the filesystem. These text contents describe process models as network graphs, like the familiar state machine, plus some associated configuration constants. When one of these networks is executed, it waits for signal words, and may respond by making calls into a tightly restricted set of custom PHP helper libraries. Complex workflows should be broken down into a cohort of sub-networks.

The goal of this process language is to provide a readable, minimalistic, and provably secure authoring tool, expressive enough to represent the majority of controller logic present in MediaWiki, while rendering it customizable in a standard way.

Control flow
These sequence diagrams are examples of how the workflow system can be driven from MediaWiki extensions, and how user interactions take place.



Workflow state can be loaded directly and used like an ordinary variable. Here, a page is rendered from templated wikitext or a SpecialPage, and state variables are pulled in to generate the page content. The job is retrieved by ID, and perhaps the ID or a token (if the job does not belong to the current user) are embedded in the rendered content or stored in the session, to enable further user interaction.
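As a sketch of this read path (all names here, including load_job and render_status_page, are invented for illustration and are not the proposed API):

```python
# Hypothetical read-only access pattern: a page renderer loads a job by ID
# and pulls state variables into the rendered content.

JOBS = {17: {"state": "Discussion", "vars": {"article": "Foo", "votes": 3}}}

def load_job(job_id):
    # Stand-in for retrieving the job record from the database.
    return JOBS[job_id]

def render_status_page(job_id):
    job = load_job(job_id)
    # State variables are used like ordinary template variables.
    return (f"Deletion discussion for {job['vars']['article']}: "
            f"{job['vars']['votes']} votes, currently in {job['state']}")

page = render_status_page(17)
```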



When a workflow defines consecutive user interactions, we need a mechanism for the workflow engine to determine what page is rendered, or where to redirect the browser. Control flow is yielded to the workflow engine, and we are responsible for routing the next page.



MediaWiki can be used to schedule tasks which affect a workflow, for example a task which simply fires a signal at a given job_id. When the target workflow transitions, it may make calls back into MediaWiki.

Extensibility
The extension points are:
 * Writing state machine descriptions, which may call actions
 * Writing libraries that define new actions
 * Hooks into and out of the engine. (Still a TODO, but a must-have.)

Asynchronous vs. self-stimulating states
States are asynchronous by default, meaning that the job will be paused indefinitely upon entering the state. The job will remain in that state until it receives another signal. A self-stimulating state, by contrast, will provide itself with a signal.

Any user interaction steps should guarantee they will self-stimulate if granted control flow; see yieldSignal(signal). The engine will throw an exception if the machine enters an asynchronous wait while a user interaction is pending.
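The distinction can be sketched in a few lines (illustrative Python; the self_signals table below stands in for however the description marks a state as self-stimulating, and is an assumption of this sketch):

```python
# Asynchronous states pause until an external signal arrives; a
# self-stimulating state immediately provides itself with a signal.

class Machine:
    def __init__(self, transitions, self_signals, state):
        self.transitions = transitions    # {state: {signal: next_state}}
        self.self_signals = self_signals  # {state: signal} for self-stimulating states
        self.state = state

    def signal(self, name):
        self.state = self.transitions[self.state][name]
        # Keep moving as long as we land in self-stimulating states.
        while self.state in self.self_signals:
            next_signal = self.self_signals[self.state]
            self.state = self.transitions[self.state][next_signal]

m = Machine(
    transitions={"Start": {"go": "Validate"},
                 "Validate": {"ok": "Waiting"}},
    self_signals={"Validate": "ok"},  # Validate stimulates itself with "ok"
    state="Start",
)
m.signal("go")  # passes through Validate without pausing, ends in Waiting
```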

Concurrency
The base workflow is single-threaded: it is either not running (sitting in an asynchronous state) or performing a transition transaction. Transitions are synchronized using an exclusive mutex for each job instance.
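A minimal sketch of the per-job mutex (Python stand-in for whatever locking primitive the engine actually uses):

```python
import threading

# Transitions are serialized with one exclusive mutex per job instance, so
# a job is either idle in a state or performing exactly one transition.

class Job:
    def __init__(self, state):
        self.state = state
        self._mutex = threading.Lock()  # one lock per job instance
        self.transition_count = 0

    def transition(self, next_state):
        with self._mutex:  # only one transition transaction runs at a time
            self.state = next_state
            self.transition_count += 1

job = Job("Start")
threads = [threading.Thread(target=job.transition, args=("Discussion",))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All 8 transitions ran, one at a time, under the mutex.
```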

Multithreading and joins will be defined in a library.

Data
There is only one scope for data: StateVariables, which are accessible through the job's (StateMachine) get/setValue(key...) methods. These state variables are backed by the database, and behave transactionally across transitions.

Configuration (statically defined in the workflow description) and state variables share the same namespace, so be aware that state variables will override any identically-named configuration.
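A sketch of the shadowing rule (Python stand-ins for the get/setValue methods; the lookup order is the point, not the names):

```python
# Configuration supplies defaults; a state variable with the same key
# overrides ("shadows") the identically-named configuration constant.

class Job:
    def __init__(self, configuration):
        self.configuration = configuration  # constants from the description
        self.variables = {}                 # job-specific state variables

    def get_value(self, key):
        # State variables override identically-named configuration.
        if key in self.variables:
            return self.variables[key]
        return self.configuration[key]

    def set_value(self, key, value):
        self.variables[key] = value

job = Job({"grace_period": "7 days"})
before = job.get_value("grace_period")    # from configuration
job.set_value("grace_period", "10 days")  # state variable shadows the constant
after = job.get_value("grace_period")
```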

Each machine instance is solely responsible for its variables: there are no assumptions about the naming or structure of the data, and nothing in the engine will modify or read from your data.

Access Control
Workflows have rules regarding who can initiate and interact with them. Access control could potentially vary at state or signal granularity. Maybe libraries should be whitelisted for use in workflow descriptions; are there potentially dangerous libraries?

Tokens may be issued for special variances, redeemable via API or on the server. Tokens have an expiration time.
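One possible token scheme, as a hedged sketch: a signed, expiring token that lets a non-owner interact with a specific job. The HMAC construction and the field layout are assumptions of this sketch, not the proposal's actual design.

```python
import hashlib
import hmac

# Hypothetical expiring token: job_id and expiry are signed so the server
# can verify the token without storing it.

SECRET = b"wiki-secret"  # illustrative key, not a real secret

def issue_token(job_id, now, ttl=3600):
    expires = now + ttl
    msg = f"{job_id}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{job_id}:{expires}:{sig}"

def check_token(token, now):
    job_id, expires, sig = token.split(":")
    msg = f"{job_id}:{expires}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    # Valid only if the signature matches and the token has not expired.
    return hmac.compare_digest(sig, good) and now < int(expires)

t0 = 1_000_000
token = issue_token(42, now=t0)  # valid until t0 + 3600
```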

Exception handling
A specification may define global signals, which can be sent to a job in any state. This is a shortcut which expands to an implicit transition from every state in the workflow to a special exception state. An example would be a workflow in which the user can "cancel" at any step, which fires cleanup processing and transitions to the exit node.
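The expansion described above can be sketched directly (illustrative Python; the transition-table shape is an assumption):

```python
# A global signal expands into an implicit transition of the same name
# from every state to the designated exception state. An explicit
# transition with the same name takes precedence.

def expand_global_signals(transitions, global_signals):
    expanded = {state: dict(arrows) for state, arrows in transitions.items()}
    for signal, target in global_signals.items():
        for state in expanded:
            expanded[state].setdefault(signal, target)
    return expanded

machine = expand_global_signals(
    {"Start": {"open": "Discussion"}, "Discussion": {}, "Cancelled": {}},
    {"cancel": "Cancelled"},
)
# Every state now accepts "cancel" and routes to the Cancelled state.
```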

If a job becomes uncompletable for any reason, it should be flagged as permanently frozen, and cleanup performed outside of the workflow system. There is no "universal" exception mechanism to catch unexpected errors.

ACID
The easiest way to understand atomicity in workflows is to look at the steady-state: the condition in which a job has arrived in a state and processing stops. Every transition between these steady-states must be atomic: the machine cannot be paused, and changes will be rolled back in case of error.

Each transition will be protected by a database transaction on the state variables, and any side-effects should be transactional as well. Library actions are responsible for guaranteeing that any side-effects are rolled back if the transition fails, even if it fails from another action.
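The transactional behavior can be sketched with a working copy of the state variables standing in for a database transaction (illustrative only; real side-effects need real rollback handling):

```python
# A transition commits its changes only if every action succeeds;
# otherwise the original state variables are left untouched.

def run_transition(variables, next_state, actions):
    working = dict(variables)  # stand-in for a database transaction
    try:
        for action in actions:
            action(working)
    except Exception:
        return variables, None  # roll back: original variables untouched
    working["state"] = next_state
    return working, next_state  # commit

def count_vote(sv):
    sv["votes"] = sv.get("votes", 0) + 1

def always_fails(sv):
    raise RuntimeError("side-effect failed")

ok_vars, ok_state = run_transition({"votes": 0}, "Discussion", [count_vote])
bad_vars, bad_state = run_transition({"votes": 0}, "Discussion",
                                     [count_vote, always_fails])
# The failing transition leaves the vote count at 0 and the state unchanged.
```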

Versioning
Modifications to a production workflow are tricky, because there may be jobs in the queue already. The base behavior is that the system caches each revision of a workflow description, and jobs are version-locked to the description used to initiate them. Job migrations are always explicit, even when they are a no-op.

If no version is available, for instance, the description came from the filesystem, then a SHA-256 hash of the description contents is used in its place.
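The version-lock key might be derived like this (Python sketch; the function name is invented, but the SHA-256 fallback is as described above):

```python
import hashlib

# A job is version-locked to the description revision it was started with.
# Filesystem descriptions have no revision ID, so a SHA-256 of the
# contents stands in for one.

def description_version(revision_id, contents):
    if revision_id is not None:
        return str(revision_id)
    return hashlib.sha256(contents.encode()).hexdigest()

v1 = description_version(12345, "states: ...")  # on-wiki revision ID
v2 = description_version(None, "states: ...")   # filesystem: content hash
v3 = description_version(None, "states: ...")   # same contents, same version
```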

Any workflow description that is used in production will be cached in the database indefinitely.

Diagnostics
Jobs will be logged as they move through a workflow, including any signals received or actions performed.

EventLogging schemas
In debug mode, we might want to version state variables or dump them at each step.

Proposed implementation






Components

 * Default descriptions - your base workflows, which may be overridden downstream
 * Default configuration - reasonable default values for all parameters
 * Libraries - If the workflow involves anything novel, you may have to implement it as an action in PHP.

Description
name: Articles for Deletion Queue
# TODO: flesh out
#
# The AfD extension will hook on article save, and will check the article
# content for new deletion tags.  If this condition is present, we
# instantiate a new AfD job with the new revision as its argument, and
# begin the workflow.
#
# The workflow is split up into parallel and child workflows, a strategy
# that should be used liberally, everywhere.  We use the same implementation
# for all specifications here out of laziness, but there are really three
# archetypes: discussion queue, provisional endorsement, and admin review.
#
# The AfD workflow can receive the following signals:
#   extend
#   keep
#   delete

libraries:
  # Allows us to respond to user requests with a wiki page
  - WikiPages
  # Enables synchronous states by sending self a signal
  - SelfStimulating
  # Send self a signal in the future
  - Alarm
  # Tag pages and fork depending on existing tags
  - TaggedPage
  # Provides limit_jepoardy and delete_in
  - ArticlesForDeletion

states:

  Start:
    initial: true
    actions:
      # Append this article to the AfD discussion page, then signal "open"
      add_to_afd_queue
      # This is a soft keep. If a child workflow later acts on "delete_in",
      # the expiration date and default outcome will be overridden.
      keep_in: normal_grace_period
      # Takes a map from deletion tag name to workflow specification title.
      # A child workflow is begun, which can signal back to this machine.
      fork_on_tag:
        PROD: Proposed Deletion
        BLP-PROD: Proposed Deletion, Biographies
        CSD: Speedy Deletion Queue
        Copyvio: Copyright investigation
    transitions:
      open: Discussion

  Discussion:
    transitions:
      # There is logic in here to limit total time open to maximum_discussion.
      extend: Discussion

  # Proposed Deletion
  PROD:
    actions:
      # Only allow PROD once per article. On successive invocations,
      # automatically send a "keep" signal and wait for admin review.
      limit_jepoardy: 1
      scan_
      # Sets an alarm to run
      delete_in: normal_grace_period
    transitions:
      # signaled by the implementation when the template is removed from the article, or
      keep: Keep
      #
      delete: Delete

  Keep:
    actions:
      signal: review
    # No transitions, this is a final state

  Delete:
    actions:
      signal: review
    # No transitions, this is a final state

  Review:
    transitions:
      endorse: End
      reverse: End

exceptions:
  # Shoot us out of the state machine if premature renomination for deletion is demonstrated.
  early_renomination: Keep

# Constants to be customized
configuration:
  normal_grace_period: 7 days
  longer_grace_period: 10 days
  maximum_discussion: 21 days
  afd_queue_page: "Wikipedia:Articles for deletion"
  deletion_review_queue_page: "Wikipedia:Deletion review"