JADE



Jade is a MediaWiki extension designed to allow editors to annotate articles, revisions, diffs, and other wiki entities using structured data. Wikipedia editors make difficult judgment calls all the time. For example: "Is this edit vandalism?", "What is the quality level of this article?", "What type of changes are happening in this edit?", and "Is this newcomer a vandal or a good-faith contributor in need of help?" Jade is a system designed to capture those judgments in a central repository to support collaboration and re-use. As Wikipedia invests more heavily in algorithmic strategies (e.g., ORES), human judgment and consensus need to remain the gold standard. Jade provides an effective means for correcting the mistakes of AIs and calling attention to problematic biases.

How does Jade work?
See the glossary for an overview of terminology.

Jade is a MediaWiki extension that adds a new namespace to the wiki called "Jade". Each Jade page represents a wiki entity and contains labels/annotations relevant to that entity. For example, the page with the title "Jade:Diff/123456" represents the edit described at Special:Diff/123456, and it can contain labels describing whether the edit was "damaging", whether it appears to have been saved in "good faith", or whether it is "vandalism". Similarly, "Jade:Revision/123456" represents the entire page as of revision 123456, and it can contain labels describing the quality level of the page, from "Stub" to "Featured Article".
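The page-title scheme above can be sketched as a small helper (the function name is hypothetical; the titles follow the examples in this section):

```python
def jade_title(entity_type, entity_id):
    """Build a Jade page title for a wiki entity.

    Follows the naming scheme described above, e.g. "Jade:Diff/123456"
    for an edit and "Jade:Revision/123456" for a page as of a revision.
    """
    if entity_type not in ("Diff", "Revision"):
        raise ValueError("unsupported entity type: %r" % entity_type)
    return "Jade:%s/%d" % (entity_type, entity_id)
```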

Jade pages work like regular pages, so they have a history and changes to them can be reverted. Jade activity also shows up in Special:RecentChanges, so it can be monitored.

Jade has an en:application programming interface so that labels/annotations can be submitted directly from tools without manually visiting the target Jade page.
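For illustration, a tool might assemble an API request like the sketch below. The action name and parameter names here are assumptions for the sake of example, not the documented interface; see Extension:JADE for the actual API modules.

```python
import json

def build_label_request(entity_type, entity_id, schema, value, token):
    """Assemble POST parameters for submitting a label via the API.

    The "jadeproposeorendorse" action and its parameters are
    illustrative assumptions; consult Extension:JADE for the real
    module name and parameter list.
    """
    return {
        "action": "jadeproposeorendorse",
        "title": "Jade:%s/%d" % (entity_type, entity_id),
        "labeldata": json.dumps({schema: value}),
        "token": token,
        "format": "json",
    }
```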

Coordination between patrollers
When a patroller reviews an edit and decides that it is good, it is a waste of time for anyone to review that edit again. Currently the patrolled flag can help with coordination between patrollers, but it only records that something has been done -- not what judgment was made. Labels stored in Jade create a record of the judgments that people make about wiki entities. Jade provides a flexible strategy for managing backlogs of review work.
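As a minimal sketch of how such a record avoids duplicated effort, a patrolling tool could filter its review queue against already-labeled edits (the data shapes here are hypothetical):

```python
def filter_review_queue(rev_ids, jade_labels):
    """Return only the revisions that still need review.

    rev_ids: revision IDs awaiting patrol.
    jade_labels: hypothetical mapping of rev_id -> consensus label
    already recorded in Jade (e.g. {"damaging": False}).
    """
    return [rev_id for rev_id in rev_ids if rev_id not in jade_labels]
```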

Training new AIs
Jade stores human judgment, so it provides valuable examples for training new AIs. Generally, when training AIs, gathering high-quality labeled examples is the most difficult task. Systems like Wiki Labels allow editors to generate AI training data, but only as a dedicated activity outside of their regular work. Jade allows editors to record their judgments while doing their regular wiki-work. This makes it possible to build large datasets of high-quality training data without demanding extra time from editors.

Auditing/Refuting of AI predictions
AI systems like ORES make predictions about the subtle qualities of edits, articles, and editors themselves. By their nature, many of these predictions are wrong. Jade provides a mechanism for recording specific instances where humans disagree with AIs. This provides a reliable strategy for humans to refute the algorithms and correct mistakes in the record. It also provides a mechanism for tracking trends in what AIs get right and wrong. This is essential for identifying and addressing en:algorithmic bias.

How to get involved

 * Sign up to be contacted about discussions and deployments: JADE contact list

Background
See JADE/Background.

What are judgments?
Judgments in Jade begin as individuals' subjective opinions about wiki entities, for example that a given article has reached Featured Article quality, or that an edit is damaging to the article. These judgments can be refined through collaborative authorship, complete with a talk namespace. This enables collaborative auditing – a wiki-style collaborative process for reviewing changes to the wiki.

In the first phase of deployment, Jade will be used to judge revisions (individual versions of wiki pages) and diffs (differences between revisions). Later, we'll want to judge other entities such as pages (regardless of revision), admin actions (via log entries), users, and more. Each entity type can be judged according to several quantitative schemas, for example the "Wikipedia 1.0" assessment scale.

Judgments consist of free text, an evaluation on one of our quantitative scales, and, indirectly, the associated talk page and metadata about the judgment's edit history. Currently, each scale maps to an ORES model:

 * damaging – whether an edit causes damage to the article (i.e. it is vandalism or otherwise inappropriate). The values for this are true or false.
 * goodfaith – an educated guess as to whether an edit was made in good faith (rather than with the intent of causing harm). This field is useful for clarifying whether a "damaging" edit was intentional vandalism or an accidental mistake (e.g. by a newcomer). The values for this are true or false.
 * contentquality – the quality of a given wiki page (as of a given revision). The values for this can be configured per-wiki. For example, on English Wikipedia it would use the Wikipedia 1.0 Assessment Scale.
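The scales above can be modeled as a simple validation table (a sketch; "contentquality" values are configurable per wiki, so the classes shown are just the English Wikipedia example):

```python
# Allowed values per scale. "contentquality" is configured per wiki;
# the classes below are the English Wikipedia 1.0 assessment example.
SCHEMAS = {
    "damaging": {True, False},
    "goodfaith": {True, False},
    "contentquality": {"Stub", "Start", "C", "B", "GA", "A", "FA"},
}

def validate_label(schema, value):
    """Return True if the value is allowed for the given scale."""
    if schema not in SCHEMAS:
        raise KeyError("unknown schema: %r" % schema)
    return value in SCHEMAS[schema]
```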

In the future, we may support other kinds of annotations, including sentence-level annotations.

Judgment content will normally be authored using tools, although it can be edited and administered as raw wiki pages when needed. Jade data is stored as regular wiki pages in a special-purpose namespace, e.g. https://en.wikipedia.beta.wmflabs.org/wiki/Judgment:Diff/376901. The content format is a bit unpleasant to write by hand, and we're still prototyping the reference UI, so we expect a small ecosystem of user interfaces to develop over time.

The current technical implementation is documented in Extension:JADE and won't be treated further here.

Community governance of AI systems
Jade will serve several purposes: giving rich structure to patrolling and assessment workflows, and producing high-quality feedback for the ORES AIs. Most importantly, Jade provides a powerful tool for editors to monitor the behavior of AIs running on the wikis.

Jade enables editors to directly critique specific predictions made by various AIs. For example, if the ORES "damaging" model thinks an edit is damaging but a human editor does not, Jade is the place where that editor can file a rebuttal. After collecting a large number of such confirmations and rejections of the ORES "damaging" model, editors can use Jade's data to monitor trends in fitness and bias. Before Jade, this work was done ad hoc on wiki pages; see, e.g., it:Progetto:Patrolling/ORES. Jade represents basic infrastructure to better support these auditing and monitoring processes.
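Monitoring those trends is a straightforward aggregation over paired predictions and labels. A sketch, assuming hypothetical record dicts that pair an ORES "damaging" prediction with the Jade consensus label:

```python
def disagreement_rate(records):
    """Fraction of edits where the Jade consensus label contradicts
    the ORES "damaging" prediction.

    Each record is assumed to look like:
        {"ores_damaging": bool, "jade_damaging": bool}
    """
    if not records:
        return 0.0
    disagreements = sum(
        1 for r in records if r["ores_damaging"] != r["jade_damaging"]
    )
    return disagreements / len(records)
```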

What does Jade support?

 * MediaWiki integration
   * Allows editors to propose/endorse labels for wiki entities (edits, pages, etc.)
   * Labels can be submitted from key points in the interface: Special:Diff, action=undo, action=rollback, etc.
   * Public API for tool developers and extension developers (Huggle, RC Filters, RTRC, etc.)
 * Consensus building patterns
   * Multiple labels can be proposed and endorsed. One label is marked as the "consensus" or "preferred" label.
   * Jade pages have talk pages that can be used for deeper discussion.
 * Collaborative analysis
   * Jade's labels are openly licensed and publicly accessible.
   * Machine-readable dumps/API for generating fitness and bias trend reports.
 * Curation and suppression
   * Recent activity in Jade appears in Special:RecentChanges.
   * Jade pages can be reverted like any other page.
   * Basic suppression actions are supported (hide comment, user, etc.)

Open discussions
See JADE/Implementations for alternative potential technical implementations.