Jade, the Judgment and Dialogue Engine, is a system for wiki communities to annotate individual revisions and diffs. It is implemented as a MediaWiki extension.
The Jade extension to MediaWiki provides a new namespace called "Judgment" that stores annotations to the wiki's content. Pages follow a standardized naming convention. For example, the page "Judgment:Revision/2893543" would include statements on the version of the page at revision number 2893543.
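The naming convention above can be sketched as a small helper. This is an illustrative sketch only: the `Judgment` namespace and the `Revision`/`Diff` entity types come from this page, but the function and its validation are hypothetical, not part of the extension.

```python
# Illustrative sketch of Jade's judgment-page naming convention.
# The "Judgment" namespace and entity types come from the text above;
# the helper itself is hypothetical, not part of Extension:JADE.

ENTITY_TYPES = {"Revision", "Diff"}  # entity types in the first deployment phase

def judgment_page_title(entity_type: str, entity_id: int) -> str:
    """Build a judgment page title such as 'Judgment:Revision/2893543'."""
    if entity_type not in ENTITY_TYPES:
        raise ValueError(f"unsupported entity type: {entity_type}")
    return f"Judgment:{entity_type}/{entity_id}"

print(judgment_page_title("Revision", 2893543))  # Judgment:Revision/2893543
```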
This namespace allows you to provide labels and write notes regarding individual edits made to a wiki and versions of wiki pages. These machine-readable annotations are wiki pages, and they are editable just like any other wiki page. Users can interact with these judgments directly as wiki pages or through the use of specialized tools made available by the Jade extension and by third-party tool developers.
Annotations provided through Jade can be used to provide feedback to automated systems like ORES, for example by challenging the assertions that ORES makes. ORES is a machine learning service run by the Wikimedia Foundation which makes instant predictions on matters such as a given article's quality or whether a given edit to a page damages that page. It helps make the work of human reviewers more efficient by providing data which helps those reviewers triage their work. ORES is not Jade, and Jade is not ORES. However, human contributions made through Jade can be used to help make ORES better.
- Sign up to be contacted about discussions: JADE contact list
What are judgments?
Judgments in Jade begin as individuals' subjective opinions about wiki entities, for example that a given article has reached Featured Article quality, or that an edit is damaging to the article. These judgments can be refined through collaborative authorship, complete with a talk namespace. This enables collaborative auditing – a wiki-style collaborative process for reviewing changes to the wiki.
In the first phase of deployment, Jade will be used to judge revisions (individual versions of wiki pages) and diffs (differences between revisions). Later, we'll want to judge other entities such as pages (regardless of revision), admin actions (via log entries), users, and more. Each entity type can be judged according to several quantitative schemas, for example the "Wikipedia 1.0" assessment scale.
Judgments consist of free text and an evaluation on one of our quantitative scales; indirectly, they also include the associated talk page and metadata about the judgment's edit history. Currently, the scales each map to an ORES model:
- damaging – whether an edit causes damage to the article (i.e. it is vandalism or otherwise inappropriate). The values for this are true or false.
- goodfaith – an educated guess as to whether an edit was made in good faith (as opposed to with the intent of causing harm). This field is useful for clarifying whether a "damaging" edit was intentional vandalism or an accidental mistake (e.g. by a newcomer). The values for this are true or false.
- contentquality – the quality of a given wiki page (as of a given revision). The values for this can be configured per-wiki. For example, on English Wikipedia it would use the Wikipedia 1.0 Assessment Scale.
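The three scales above can be sketched as a simple validation routine. This is a hypothetical illustration of the constraints described in the list (boolean values for damaging and goodfaith, a per-wiki scale for contentquality); the function name and the example scale values are assumptions, not the actual Extension:JADE schema.

```python
# Hypothetical sketch of the value constraints for Jade's three scales.
# The example contentquality scale loosely follows the Wikipedia 1.0
# assessment classes; the real scale is configured per wiki.

WIKIPEDIA_1_0 = ["Stub", "Start", "C", "B", "GA", "FA"]  # illustrative per-wiki scale

def validate_judgment(schema: str, value, contentquality_scale=WIKIPEDIA_1_0) -> bool:
    if schema in ("damaging", "goodfaith"):
        return isinstance(value, bool)        # values are true or false
    if schema == "contentquality":
        return value in contentquality_scale  # configured per wiki
    return False

print(validate_judgment("damaging", True))        # True
print(validate_judgment("contentquality", "GA"))  # True
print(validate_judgment("goodfaith", "yes"))      # False
```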
In the future, we may support other kinds of annotations, including sentence-level annotations.
Judgment content will normally be authored using tools, although it can be edited and administered as raw wiki pages when needed. Jade data is stored as regular wiki pages in a special-purpose namespace, e.g. https://en.wikipedia.beta.wmflabs.org/wiki/Judgment:Diff/376901. The content format is a bit unpleasant to write by hand, and we're still prototyping the reference UI, so we expect a small ecosystem of user interfaces to develop over time.
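Because judgments are ordinary wiki pages, they can be fetched with the standard MediaWiki action API like any other page. The sketch below only builds the request URL (no request is made); the host is the beta-cluster wiki mentioned above, and the helper name is illustrative.

```python
# Build a standard MediaWiki action API URL to fetch a judgment page's
# content. Judgments are regular wiki pages, so no Jade-specific API is
# needed for this; the helper name itself is hypothetical.
from urllib.parse import urlencode

def judgment_api_url(host: str, title: str) -> str:
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "titles": title,
        "format": "json",
        "formatversion": "2",
    }
    return f"https://{host}/w/api.php?{urlencode(params)}"

url = judgment_api_url("en.wikipedia.beta.wmflabs.org", "Judgment:Diff/376901")
print(url)
```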
The current technical implementation is documented in Extension:JADE and won't be treated further here.
Why is Jade important?
Jade will serve several purposes: giving rich structure to patrolling and assessment workflows, and producing high-quality feedback for the ORES AIs. Other uses will likely emerge.
It will facilitate a community of people overseeing our AIs, perhaps even in "partnership" with the AI. Jade is needed so that editors can effectively challenge the AIs' automated judgments. Currently this work is done ad hoc on wiki pages, e.g. it:Progetto:Patrolling/ORES. Jade represents basic infrastructure to better support this auditing process. The goal is to put more power into the hands of the people whom ORES' predictions affect.
We hope that Jade will become useful to the patroller community especially, as a way to coordinate work across workflows. For example, edits that have been patrolled as good can be input to ORES's AI training as non-damaging examples.
Jade data should also become an important reservoir of counterexamples which help challenge assumptions and mitigate biases in ORES or other AIs. Judgments in Jade can be used to audit ORES (e.g. tracking bias) as well as to retrain ORES. Doing this in an open and collaborative way will encourage democratic oversight, rather than a handful of technical staff making all the decisions about how to build ORES.
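One simple auditing signal of the kind described above is the rate at which human judgments disagree with ORES's "damaging" predictions. The sketch below uses made-up sample rows for demonstration; a real audit would read Jade dumps or query the API.

```python
# Illustrative auditing sketch: measure how often human judgments in Jade
# disagree with ORES's "damaging" predictions. The sample data is invented
# for demonstration purposes only.

samples = [
    # (revision_id, ores_damaging_prediction, human_damaging_judgment)
    (101, True,  True),
    (102, True,  False),   # a human refutes the AI's prediction
    (103, False, False),
    (104, False, True),    # a human flags damage the AI missed
]

def disagreement_rate(rows) -> float:
    """Fraction of rows where the human judgment contradicts ORES."""
    disagreements = sum(1 for _, ores, human in rows if ores != human)
    return disagreements / len(rows)

print(disagreement_rate(samples))  # 0.5
```

Tracking a statistic like this over time (and across editor demographics or topic areas) is one concrete way counterexamples collected in Jade could surface bias in the model.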
Continuous collaborative auditing has been explored in industry and is a promising method. Our approach is unique because of the massive public collaboration possible in wiki projects, so we're eager to see what emerges.
What will Jade support?
- MediaWiki integration
- Allows users to review a judgment for a wiki entity (revisions, pages, etc.)
- Allows users to submit and edit judgments
- Public API for tool developers and extension developers (Huggle, RC Filters, etc.)
- ORES integration
- Judgments returned along with predictions.
- Consensus patterns
- Users file dissenting judgments
- Structured discussions (or talk pages) for every wiki entity
- Collaborative analysis
- Judgments openly licensed and publicly accessible
- Machine-readable dumps/API for generating fitness and bias trend reports
- Curation and suppression
- Recent judgments appear in Special:RecentChanges
- Basic suppression actions supported (hide comment, user, etc.)
See JADE/Implementations for alternative potential technical implementations.
- "Best practices for AI in the social spaces: Integrated refutations"
- Technical work: task T148700 and its subtasks.
- Past examples of (manual, wiki-based) ORES auditing:
- JADE/Content schemas
- JADE/Implementations/Archive 1
- JADE/Intro blog
- JADE/Intro blog/Short story
- JADE/MCR example
- JADE/MCR example/Edit/1234
- JADE/MCR example/Edit/1234/damaging
- JADE/MCR example/Edit/1234/edittype
- JADE/MCR example/Edit/1234/goodfaith
- JADE/MCR example/Revision/1234
- JADE/MCR example/Revision/1234/draft quality
- JADE/MCR example/Revision/1234/topic
- JADE/MCR example/Revision/1234/wp10
- JADE/Open questions
- JADE/Political economy
- JADE/Scalability FAQ
- JADE/Use cases
- JADE/Wikimania 2018 presentation