Jade is a MediaWiki extension designed to allow editors to annotate articles, revisions, diffs, and other wiki entities using structured data. Wikipedia editors make difficult judgment calls all of the time. For example, "Is this edit vandalism?", "What is the quality level of this article?", "What type of changes are happening in this edit?", and "Is this newcomer a vandal or a good-faith contributor in need of help?" Jade is a system designed to capture those judgments in a central repository to support collaboration and re-use. As Wikipedia invests more heavily in algorithmic strategies (e.g., ORES), human judgment and consensus need to remain the gold standard. Jade provides an effective means for correcting the mistakes of AIs and calling attention to problematic biases.
How does Jade work
See the glossary for an overview of terminology.
Jade is a MediaWiki extension that adds a new namespace to the wiki called "Jade". Each Jade page represents a wiki entity and contains labels/annotations relevant to that entity. For example, the page with the title "Jade:Diff/123456" represents the edit described at Special:Diff/123456, and it can contain labels describing whether the edit was "damaging", whether it appears to have been saved in "good faith", or whether it is "vandalism". Similarly, "Jade:Revision/123456" represents the entire page as of rev_id 123456, and it can contain labels describing the quality level of the page, from "Stub" to "Featured Article".
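The naming convention above can be sketched as a simple mapping. The helper below is hypothetical (it is not part of the extension), but the title format follows the examples given:

```python
def jade_title(entity_type: str, entity_id: int) -> str:
    """Build the Jade page title for a wiki entity.

    entity_type follows the convention shown above:
    "Diff" for an edit, "Revision" for a page version.
    """
    if entity_type not in ("Diff", "Revision"):
        raise ValueError(f"unsupported entity type: {entity_type}")
    return f"Jade:{entity_type}/{entity_id}"

# The edit described at Special:Diff/123456 is annotated at:
print(jade_title("Diff", 123456))      # Jade:Diff/123456
# The page as of rev_id 123456 is annotated at:
print(jade_title("Revision", 123456))  # Jade:Revision/123456
```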
Jade pages work like regular pages: they have a history, and changes to them can be reverted. Jade activity also shows up in Special:RecentChanges, so it can be monitored.
Jade has an en:application programming interface so that labels/annotations can be submitted directly from tools, without needing to visit the target Jade page manually.
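As a minimal sketch of programmatic submission, the function below builds parameters for MediaWiki's standard action=edit API to write label data to the entity's Jade page. Note that Jade may also expose dedicated API modules, and the JSON content layout here is illustrative, not the extension's actual content schema:

```python
import json

def build_label_edit(entity_type, entity_id, label_data, note=""):
    """Build parameters for a MediaWiki action=edit API call that
    writes structured label data to an entity's Jade page.

    The JSON layout of the page content below is illustrative only.
    """
    content = {"labels": label_data}
    if note:
        content["note"] = note
    return {
        "action": "edit",
        "title": f"Jade:{entity_type}/{entity_id}",
        "text": json.dumps(content),
        "format": "json",
        # A real request also needs authentication and a CSRF token:
        # "token": csrf_token,
    }

params = build_label_edit(
    "Diff", 123456,
    {"damaging": False, "goodfaith": True},
    note="Routine copyedit; clearly constructive.",
)
```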
Coordination between patrollers
When a patroller reviews an edit and decides that it is good, it is a waste of time for anyone to review that edit again. Currently the patrolled flag can help with coordination between patrollers, but it only records that something has been done -- not what judgment was made. Labels stored in Jade create a record of the judgments that people make about wiki entities. Jade provides a flexible strategy for managing backlogs of review work.
Training new AIs
Jade stores human judgment, so it provides valuable examples for training new AIs. Generally, when training AIs, gathering high-quality labeled examples is the most difficult task. Systems like m:wiki labels allow editors to generate AI training data as a dedicated activity outside of their normal work. Jade lets editors record their judgments while they are doing their regular wiki work. This makes it possible to build large datasets of high-quality training data without wasting editors' time.
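As a sketch of how stored judgments become training data, the snippet below turns label records into (rev_id, target) rows for a "damaging" model. The record layout and field names are assumed for illustration, not Jade's actual schema:

```python
# Hypothetical label records as they might be extracted from Jade pages.
labels = [
    {"entity": "Diff/1001", "damaging": True,  "goodfaith": False},
    {"entity": "Diff/1002", "damaging": False, "goodfaith": True},
    {"entity": "Diff/1003", "damaging": True,  "goodfaith": True},   # honest mistake
]

# Turn each record into a (rev_id, label) training row.
training_rows = [
    (int(rec["entity"].split("/")[1]), rec["damaging"])
    for rec in labels
]
print(training_rows)  # [(1001, True), (1002, False), (1003, True)]
```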
Auditing/Refuting of AI predictions
AI systems like ORES make predictions about the subtle qualities of edits, articles, and editors themselves. By their nature, many of these predictions are wrong. Jade provides a mechanism for recording specific instances where humans disagree with AIs. This provides a reliable strategy for humans to refute the algorithms and correct mistakes in the record. It also provides a mechanism for tracking trends in what AIs get right and wrong. This is essential for identifying and addressing en:algorithmic bias.
How to get involved
- Sign up to be contacted about discussions and deployments: JADE contact list
What are labels?
A label in Jade begins as an individual's subjective opinion about a wiki entity. For example, "Is this edit damaging?" or "Has this version of an article reached Featured Article status?" An editor can propose a label and mark it as consensus. Another editor can disagree, propose another label, and mark that label as consensus. If there is disagreement, it can be managed via the talk page. In this way, Jade allows editors to manage the production and maintenance of "labels" as first-order wiki entities.
A label consists of structured data along with a free-form note. Jade supports the following types of label:
- Edit quality
  - damaging – does this edit cause damage to the article (i.e., is it vandalism or otherwise inappropriate)? The values for this are true or false.
  - goodfaith – an educated guess as to whether an edit was made in good faith (or with the intent of causing harm). This field is useful for clarifying whether a "damaging" edit was intentional vandalism or an accidental mistake (e.g., by a newcomer). The values for this are true or false.
- Article quality
  - the quality of a given wiki page (as of a given revision). The values for this can be configured per-wiki. For example, English Wikipedia would use the Wikipedia 1.0 Assessment Scale.
- Subject matter expertise needed
  - does this edit make a subtle change that a subject matter expert will need to review? This should be marked true when a patroller will not be able to effectively review the edit without specific knowledge or research to verify its claims.
- Citation needed
  - does this sentence need a citation?
- Concept topic
  - which topics is this Wikidata item relevant to? E.g., d:Q7251 maps to en:Alan Turing; relevant topics might include "Military and Warfare", "Computer Science", "Mathematics", "Northern Europe", and "Biography".
- Newcomer quality
  - Is this new editor already doing productive work? Are they at least trying? Or are they a vandal?
- Edit type
  - What type of change is happening in this edit? A copyedit, refactoring, elaboration, etc.?
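Putting the edit-quality fields together, a single label might look like the following. The field names and true/false values come from the schema above, but the exact serialization (and the "schema" key) is illustrative:

```python
edit_quality_label = {
    "schema": "editquality",        # assumed schema name, for illustration
    "damaging": True,               # the edit harms the article...
    "goodfaith": True,              # ...but it was an honest mistake
    "note": "Newcomer broke a template while trying to add a source.",
}

# A damaging + goodfaith combination signals "help this user"
# rather than "block this vandal".
needs_mentoring = edit_quality_label["damaging"] and edit_quality_label["goodfaith"]
```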
Community governance of AI systems
Jade will serve several purposes: giving rich structure to patrolling and assessment workflows, and producing high-quality feedback for the ORES AIs. Most importantly, Jade provides a powerful tool for editors to monitor the behavior of AIs running on the wikis.
Jade enables editors to directly critique specific predictions made by various AIs. For example, if the ORES "damaging" model thinks an edit is damaging but a human editor disagrees, Jade is the place where that human can file a rebuttal. After collecting a large number of such confirmations and rejections of the ORES "damaging" model's predictions, editors can use Jade's data to monitor trends in fitness and bias. Before Jade, this work was done ad hoc on wiki pages; see, e.g., it:Progetto:Patrolling/ORES. Jade represents basic infrastructure to better support these auditing and monitoring processes.
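Monitoring fitness trends from this data can be sketched as a toy calculation over (prediction, human label) pairs. The pairs below are made up for illustration:

```python
# Each pair: (ORES "damaging" prediction, Jade consensus label).
pairs = [
    (True, True), (True, False), (False, False),
    (True, False), (False, False), (True, True),
]

# False-positive rate: how often ORES flags an edit that humans judged fine.
flagged = [human for pred, human in pairs if pred]
false_positive_rate = flagged.count(False) / len(flagged)
print(f"{false_positive_rate:.0%}")  # 50%
```

Tracked over time (or broken down by editor group or topic), the same calculation surfaces the bias trends mentioned above.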
What does Jade support?
- MediaWiki integration
- Allows editors to propose/endorse labels for wiki entities (edits, pages, etc.)
- Labels can be submitted from key points in the interface: Special:Diff, action=undo, action=rollback, etc.
- Public API for tool developers and extension developers (Huggle, RC Filters, RTRC, etc.)
- Consensus building patterns
- Multiple labels can be proposed and endorsed. One label is marked as the "consensus" or "preferred" label
- Jade pages have talk pages which can be used for deeper discussion
- Collaborative analysis
- Jade's labels are open-licensed and publicly accessible
- Machine-readable dumps/API for generating fitness and bias trend reports
- Curation and suppression
- Recent activity in Jade appears in Special:RecentChanges
- Jade pages can be reverted like any other page.
- Basic suppression actions are supported (hide comment, user, etc.)
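The propose/endorse/preferred pattern above could be modeled roughly as follows. This is a sketch of the consensus-building behavior, not the extension's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    label: dict                                    # e.g. {"damaging": True}
    endorsements: set = field(default_factory=set)
    preferred: bool = False                        # the "consensus" label

@dataclass
class JadeEntity:
    proposals: list = field(default_factory=list)

    def propose(self, label, user):
        proposal = Proposal(label=label, endorsements={user})
        self.proposals.append(proposal)
        return proposal

    def set_preferred(self, proposal):
        # Only one proposal may be marked as the consensus label.
        for p in self.proposals:
            p.preferred = False
        proposal.preferred = True

entity = JadeEntity()
a = entity.propose({"damaging": True, "goodfaith": False}, user="PatrollerA")
b = entity.propose({"damaging": True, "goodfaith": True}, user="PatrollerB")
entity.set_preferred(a)
entity.set_preferred(b)   # a later editor overrides the consensus
```

Deeper disagreement about which proposal should be preferred is handled socially, on the Jade page's talk page.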
See JADE/Implementations for alternative potential technical implementations.
- "Best practices for AI in the social spaces: Integrated refutations"
- Technical work: task T148700 and its subtasks.
- Past examples of (manual, wiki-based) ORES auditing:
- JADE/Content schemas
- JADE/Implementations/Archive 1
- JADE/Intro blog
- JADE/Intro blog/Short story
- JADE/MCR example
- JADE/MCR example/Edit/1234
- JADE/MCR example/Edit/1234/damaging
- JADE/MCR example/Edit/1234/edittype
- JADE/MCR example/Edit/1234/goodfaith
- JADE/MCR example/Revision/1234
- JADE/MCR example/Revision/1234/draft quality
- JADE/MCR example/Revision/1234/topic
- JADE/MCR example/Revision/1234/wp10
- JADE/Open questions
- JADE/Pilot postings
- JADE/Political economy
- JADE/Scalability FAQ
- JADE/Use cases
- JADE/Wikimania 2018 presentation