JADE

What are judgments?
Judgments in JADE begin as individuals' subjective opinions about wiki entities, for example that a given article has reached Featured Article quality, or that an edit is damaging to the article. These judgments can be refined through collaborative authorship, complete with a talk namespace, into what we expect to be a new gold standard for data quality: collaborative auditing.

Judgments consist of free text and an evaluation on one of our quantitative scales; indirectly, they also include the associated talk page and metadata about the judgment's edit history. Currently, each scale maps to an ORES model (damaging, goodfaith, wp10, itemquality, or drafttopic), which makes the judgments directly suitable for retraining the AIs.

Judgment content will normally be authored using tools, although it can be edited and administered as raw wiki pages when needed. JADE data is stored as regular wiki pages in a special-purpose namespace, e.g. https://en.wikipedia.beta.wmflabs.org/wiki/Judgment:Diff/376901. The content format is a bit unpleasant to write by hand, and we're still prototyping the reference UI, so we expect a small ecosystem of user interfaces to develop over time.
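Because judgments live on ordinary wiki pages, tools can read them through the standard MediaWiki action API. The sketch below builds such a request for the example page mentioned above; the helper name is hypothetical, but the query parameters are the standard MediaWiki revision-content query.

```python
import urllib.parse

def judgment_request_url(wiki_base, title):
    """Build a MediaWiki action API URL that fetches the raw content of a
    judgment page. Hypothetical helper; the parameters are the standard
    MediaWiki query API for revision content."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "titles": title,
        "format": "json",
    }
    return wiki_base + "/w/api.php?" + urllib.parse.urlencode(params)

# Example page from the text above, on the beta cluster wiki.
url = judgment_request_url(
    "https://en.wikipedia.beta.wmflabs.org", "Judgment:Diff/376901"
)
```

Fetching that URL returns the page's wikitext/JSON content inside the usual `query.pages` envelope, so existing MediaWiki client libraries work unchanged.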

The current technical implementation is documented in Extension:JADE and won't be treated further here.

Why is JADE important?
JADE will serve several purposes: giving rich structure to patrolling and assessment workflows, and producing high-quality feedback for the ORES AIs. Other uses will likely emerge.

It will facilitate a community of people overseeing our AIs, perhaps even in "partnership" with the AI. JADE is needed so that editors can effectively challenge the AIs' automated judgments. Currently this work is done ad hoc on wiki pages, e.g. it:Progetto:Patrolling/ORES. JADE represents basic infrastructure to better support this auditing process. The goal is to put more power into the hands of the people that ORES' predictions affect.

We hope that JADE will become useful to the patroller community especially, as a way to coordinate work across workflows. For example, edits that have been patrolled as good can be input to ORES's AI training as non-damaging examples.

JADE data should also become an important reservoir of counterexamples which help challenge assumptions and mitigate biases in ORES or other AIs. Judgments in JADE can be used to audit ORES (e.g. tracking bias) as well as to retrain ORES. Doing this in an open and collaborative way will encourage democratic oversight, rather than a handful of technical staff making all the decisions about how to build ORES.

Continuous collaborative auditing has been explored in the industry and is a promising method. Our approach is unique due to the massive public collaboration possible in wiki projects, so we're eagerly looking forward to seeing what emerges.

What will JADE support?

 * MediaWiki integration
   * Allows users to review judgments of a wiki entity (revisions, pages, etc.)
   * Allows users to submit and edit judgments
   * Public API for tool developers and extension developers (Huggle, RC Filters, etc.)
 * ORES integration
   * Judgments returned along with predictions
 * Consensus patterns
   * Users can file dissenting judgments
   * Structured discussions (or talk pages) for every wiki entity
 * Collaborative analysis
   * Judgments openly licensed and publicly accessible
   * Machine-readable dumps/API for generating fitness and bias trend reports
 * Curation and suppression
   * Recent judgments appear in Special:RecentChanges
   * Basic suppression actions supported (hide comment, user, etc.)
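The ORES integration above would surface judgments alongside model predictions. The existing ORES v3 scores API is the natural place for this; the sketch below builds such a scores request, assuming the current public URL scheme (the helper name itself is illustrative).

```python
import urllib.parse

def ores_scores_url(context, rev_id, models):
    """Build an ORES v3 scores request for one revision. Illustrative
    helper; the /v3/scores/ URL scheme is the existing public ORES API,
    which the plan above would extend with attached judgments."""
    params = {"models": "|".join(models), "revids": str(rev_id)}
    return (
        "https://ores.wikimedia.org/v3/scores/"
        + context
        + "/?"
        + urllib.parse.urlencode(params)
    )

# Score a hypothetical English Wikipedia revision with two of the
# models named earlier in this document.
url = ores_scores_url("enwiki", 123456, ["damaging", "goodfaith"])
```

A client such as Huggle or the RC Filters UI could then read both the prediction and any human judgments from a single response, rather than querying two services.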

Open discussions
Sign up to be contacted about discussions: JADE contact list

See JADE/Implementations for alternative potential technical implementations.