Article feedback/Extended review

Because of the lack of a standard, readily available tool to create and store quality reviews of Wikipedia content, several groups and organizations have created their own ad-hoc tools for this purpose. This page describes a standard system to conduct open quality review of Wikipedia content and surface the results on the article page.

This system is primarily intended for Wikipedia, but could also be used on other Wikimedia projects.

System
The system will be implemented as a MediaWiki extension: this makes integration with Wikipedia user accounts easier, and allows review data to be stored locally. The Article feedback tool provides an existing framework that could be extended to support a more detailed quality review process.

Authentication
E-mail is the safest assumption we can make about the partner organization's infrastructure.

Organization members get an invitation by e-mail to confirm their affiliation with the organization on Wikipedia; the invitation contains a link that leads them to a special page to confirm their affiliation. The link contains a token identifying the partner organization (mandatory) and, if possible, other information to prefill the fields.
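One way to build such a link is to embed an HMAC-signed token that encodes the partner organization, so the confirmation page can trust it without a database lookup. This is only a sketch: the special page name, parameter names, and key handling below are illustrative assumptions, not part of the design.

```python
import hmac
import hashlib
from urllib.parse import urlencode

# Assumption: a secret known only to the server generating and verifying tokens.
SECRET_KEY = b"replace-with-a-server-side-secret"

def make_invitation_url(org_id, email=""):
    """Build a confirmation link whose token identifies the partner
    organization and cannot be forged without the secret key."""
    payload = f"{org_id}:{email}"
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    params = {"org": org_id, "email": email, "sig": signature}
    # "Special:ConfirmAffiliation" is a hypothetical page name.
    return "https://en.wikipedia.org/wiki/Special:ConfirmAffiliation?" + urlencode(params)

def verify_token(org_id, email, signature):
    """Server-side check that the org/email pair matches the signature."""
    expected = hmac.new(SECRET_KEY, f"{org_id}:{email}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

With this scheme the organization identifier is mandatory, while the e-mail is optional prefill data, matching the requirements above.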


 * If the user is logged in, the page displays the fields required for the confirmation:
   * Organization (non-editable, filled)
   * Real name (mandatory, editable, filled if possible)
   * Credentials (optional, editable, filled if possible)
   * Link to university page or biography (optional, editable, filled if possible)
 * If the user isn't logged in, the page offers the possibility to log in or sign up in place. Once that is done, it displays the interface for logged-in users.

If the partner organization agrees to provide us with a structured document containing this information (e.g. a CSV file), a script can be run to generate these e-mail invitations.

If the partner organization prefers not to share this information with us, we would provide them with a modified script that would only identify the organization; their members would then enter the rest of the information themselves.

The reviewer should be able to attach their existing Wikipedia account if they have one, instead of creating a new one.

Tooling requirements

 * Script which sends confirmation e-mails, generates the list of tokens, and imports the tokens into the Wikimedia database
   * Can optionally be run by the partner organization without the final step: instead of importing into the database, it just generates the list of tokens for Wikimedia
   * Can optionally be run by Wikimedia to just import the tokens
 * Should be highly portable and easy to use
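The three modes of operation above could be sketched as follows. The CSV column names and token format are assumptions, and e-mail delivery and the database import are passed in as callbacks so either step can be skipped, as the requirements describe.

```python
import csv
import io
import secrets

def generate_tokens(csv_text):
    """Read the partner's structured member list (assumed columns:
    name, email, credentials) and attach a random token to each row."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["token"] = secrets.token_urlsafe(16)
    return rows

def run(csv_text, send_email=None, import_token=None):
    """Full pipeline: generate tokens, optionally send invitations,
    optionally import tokens into the Wikimedia database.
    Skipping `import_token` lets the partner organization run the
    script without database access; skipping `send_email` lets
    Wikimedia run only the import step on a received token list."""
    rows = generate_tokens(csv_text)
    for row in rows:
        if send_email:
            send_email(row["email"], row["token"])
        if import_token:
            import_token(row["token"])
    return rows
```

Keeping the script dependent only on the standard library would also satisfy the portability requirement.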

Review location and submission
A voluntary model, where people can review any page they want, is the simplest implementation, and the most likely to fit within the existing article feedback infrastructure. Restricting the scope of reviewable articles doesn't appear to be necessary: existing expert review systems show that reviewers usually stick to their field of expertise when their name is publicly associated with the review.

Because articles can grow fairly long, it would be better to allow the reviewer to scroll through the article while they're reviewing it, while keeping the review fields always visible (some suggested a setup similar to the common fixed-position "feedback tab").

Specifics of implementation
The following locations can be considered for the tool:
 * at the bottom of articles (current location of the article feedback interface)
 * in the left sidebar (this might not be very visible)
 * as a fixed panel (top, bottom or side of the browser window) that doesn't move when scrolling.

Testing these locations and comparing their success will help determine which one is best. Criteria may include user engagement, quality of reviews, and community perception.

Regardless of the implementation, it is strongly recommended that the review interface not hide the content being reviewed; the reviewer needs to be able to see the article while reviewing it (this means the review shouldn't be displayed in a modal overlay). Furthermore, the interface will likely need at least two states of expansion (collapsed & unfolded).

Two UIs are to be A/B tested: a left-hand sidebar invoked by means of an engagement rating, and a right-hand sidebar invoked by means of a "feedback tab" (in which case the engagement rating will only be visible after the tab is clicked).

Preliminary considerations
Analysis of the Article feedback experiment shows that Wikipedians have a consistent grasp of what criteria like "neutrality" and "well-sourced" mean, and rate them fairly consistently. The general public, however, which accounts for about 95% of the feedback provided, doesn't share the same model and provides ratings that vary greatly.

For the same reason, readers rarely rely on numeric ratings like Likert scales, as suggested by UX research on the current Article feedback tool. They're more interested in reading well-written reviews. They're also interested in information about who the reviewer is, so as to gauge the relevance of the comment or review for their personal situation.

Some criteria, like the well-sourcedness of an article, can be assessed by an aggregate of automated quantitative metrics (like the number of references relative to the length of the article) and human-generated qualitative feedback (like the appropriateness of the references, and their reliability).
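As an illustration, the quantitative half of such a metric could be computed from the wikitext alone. This is a rough sketch: counting `<ref>` tags is a simplification that ignores list-defined and reused references, and the per-1000-character normalization is an arbitrary choice.

```python
import re

def reference_density(wikitext):
    """Rough well-sourcedness indicator: number of <ref> citations
    per 1000 characters of wikitext."""
    # Matches both "<ref>" and "<ref name=...>" style openings.
    refs = len(re.findall(r"<ref[ >]", wikitext))
    length = max(len(wikitext), 1)
    return 1000.0 * refs / length
```

The human-generated qualitative feedback (appropriateness and reliability of the references) would then be aggregated alongside this number.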

Based on these considerations, it appears it would be better to move to a system where the reviewer is invited to answer a series of questions (some open-ended) to help readers and editors identify possible issues (and thus areas of improvement) with the article.

"Simple" readers and subject-matter "experts" (whether they're credentialed or not) have different uses for the article, and can provide different levels of feedback. A reader's main purpose may be to quickly find a specific piece of information, while an expert may want to check the quality of the whole article. Asking reviewers whether they believe they have knowledge of the topic could be used to ask different questions, relevant to each profile.

Review interface
There are different ways to present the goals and issues. Testing can help determine what works best in terms of engagement and quality of feedback.

Specifics of implementation
Implementation choices may be:
 * whether or not to include a Likert scale; for example, clicking stars may help with user engagement, even if the results are not terribly useful.
 * whether review items should be single words ("Completeness"), questions ("Does the article provide an exhaustive coverage of the topic?"), statements of issues ("The article doesn't provide an exhaustive coverage of the topic."), or a combination of them. User engagement, as well as consistency and quality of reviews, will be the determining factors.
 * the number of review items, and the overall length of the review, depending on how they impact user engagement, completion rate and the quality of reviews.
 * how best to encourage the reviewer to improve the page themselves
 * whether or not to allow the reviewer to disclose a conflict of interest (e.g. if they have significantly edited the article themselves, which could be automatically assessed, or if they're particularly biased on the topic itself).
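The automatic part of the conflict-of-interest check — whether the reviewer has significantly edited the article — could be sketched like this. The revision-record format and the threshold of 5 edits are assumptions for illustration only.

```python
def has_significant_edits(reviewer, revisions, threshold=5):
    """Return True if `reviewer` authored at least `threshold`
    revisions of the article. `revisions` is assumed to be a list
    of dicts with a 'user' key, as a page-history query might return."""
    count = sum(1 for rev in revisions if rev.get("user") == reviewer)
    return count >= threshold
```

A check like this could pre-tick or suggest the conflict-of-interest disclosure, while still leaving the final declaration to the reviewer.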



Review management

 * Where and how is the review published?
 * How are useful reviews surfaced, and useless or off-topic reviews handled?

Preliminary considerations
There are multiple reasons to integrate reviews with the existing talk page framework:
 * The talk page is the appropriate place to discuss improvements of the article; editors who watch the page will be notified of new reviews.
 * Few readers currently know of the talk page; by making it more discoverable, more readers may realize the information it contains is useful to assess an article's quality.
 * Reviewers are likely to appreciate feedback on their review and to have a venue to discuss further with editors; the talk page provides this opportunity.

However, there is also a risk that the talk page turns into a forum, or that the sheer volume of useless or irrelevant comments overwhelms editors on the talk page. Processes will be necessary to assess the usefulness and relevance of a review/comment to the article's improvement.

Furthermore, some users may be interested in reading reviews of an article, even if they're not actionable items that belong on a talk page.

Review triage


Triagers will be responsible for assessing the incoming reviews and acting depending on their content. The goal will be to surface the particularly relevant content from the quantity of reviews.

Existing processes for treating inappropriate text (personal attacks, personally identifiable non-public information, libel, etc.) will continue to apply.

Actions available for reviews:
 * Mark as patrolled: The review doesn't require follow-up, is unspecific, or mentions an issue that was resolved.
 * Move to the recycle bin: The review consists of spam, nonsense, or test edits.
 * Promote to the talk page (and thus autopatrol): The review is relevant and useful, and raises an actionable issue that needs to be addressed.
 * (for administrators only) Delete / restore

Automatic actions:
 * Automatically patrol reviews that consist only of ratings (no text)
 * Automatically patrol reviews that were promoted to the talk page.
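The two automatic rules can be expressed as a single function. The field names (`text`, `promoted_to_talk`, `patrolled`) are assumptions about how a review record might be stored.

```python
def auto_patrol(review):
    """Apply the two automatic patrol rules:
    - a review promoted to the talk page is patrolled;
    - a review with ratings only (no free text) is patrolled."""
    if review.get("promoted_to_talk"):
        review["patrolled"] = True
    elif not review.get("text", "").strip():
        review["patrolled"] = True
    return review
```

Reviews with free text that haven't been promoted are left untouched, so they stay in the triage queue.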

Review list


Users should have the ability to sort or filter the list of reviews for a given article, for example by date (to show the latest review first), by reviewer (to show self-identified "experts" first), by usefulness, by status, etc.

Reviews can be classified in categories, depending on their usefulness for the user:
 * new reviews & praise
 * patrolled reviews & praise
 * reviews and praise promoted to the talk page
 * trash / recycle bin: spam, personal attacks, etc.: automatically deleted after a time, or manually before that.
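The sorting options and the category buckets above could be sketched as follows; the field names and category labels are assumptions about the review record, not a defined schema.

```python
def sort_reviews(reviews, key="date", descending=True):
    """Sort reviews by date, reviewer expertise, usefulness score,
    or any other stored field."""
    return sorted(reviews, key=lambda r: r.get(key), reverse=descending)

def categorize(review):
    """Map a review to one of the buckets described above."""
    if review.get("recycled"):
        return "trash"
    if review.get("promoted_to_talk"):
        return "promoted"
    if review.get("patrolled"):
        return "patrolled"
    return "new"
```

The same fields would drive the filters (by date, reviewer, usefulness, status) in the review list interface.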

Promotion of the review to the talk page
Constructive criticism and particularly relevant reviews about an article should be promoted to the talk page, for the reasons provided above. Each community will need to agree on guidelines for promoting a review to the talk page, but the following principles are suggested:
 * The review should indicate that it was promoted to the talk page, when, and by whom.
 * The status of the review should remain independent from the promotion to the talk page; for example, promoting an actionable review to the talk page should keep it "pending", and not close it.
 * The text appearing on the talk page should contain:
   * A link to the review
   * The name of the reviewer, and the date of the review
   * The free-text comment of the review
   * The name of the promoter, and the date of the promotion

It seems superfluous to include numerical or binary ratings in the text that goes to the talk page, since it's really the comments that should start the discussion.
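Assembling the talk-page text from those fields could look like this. The section heading and link format are assumptions; the point is that only the free-text comment is carried over, not the numeric ratings.

```python
def promotion_wikitext(review_url, reviewer, review_date, comment,
                       promoter, promotion_date):
    """Build the talk-page section for a promoted review.
    Includes the link, reviewer, comment, and promoter, but
    deliberately omits any numeric or binary ratings."""
    return (
        f"== Review by {reviewer} ({review_date}) ==\n"
        f"{comment}\n\n"
        f"[{review_url} Link to the full review]. "
        f"Promoted by {promoter} on {promotion_date}.\n"
    )
```

Posting this as a new section means editors watching the page are notified, as described in the preliminary considerations.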

Other features

 * Public list of one's reviews: Being able to showcase one's work on Wikipedia is a factor encouraging the participation of some "experts". This could take the form of a special page (e.g. )
 * API to access the entirety of the reviews and their specifics
 * In the future: if the volume of reviews warrants it, investigate automated review aggregation


 * filter by levels of expertise?

API

 * standards / policies for integration of data from external review systems with ours

Temporal evolution and aggregation
Some people would like to be able to measure the evolution of the quality of an article over time. Quality depends very much on each reader's perspective and needs; no absolute, one-size-fits-all metric will ever satisfy all readers.

A possibility would be to plot the evolution of non-expired positive reviews (praise) against non-expired negative reviews (issues); see example below. Generally, it is better to present well-designed quality information and charts than a single, somewhat arbitrary quality index.
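The plot data could be derived from the review list as follows. This assumes each review carries a kind ("praise" or "issue"), an ISO date, and an expired flag; the monthly bucketing is an illustrative choice.

```python
from collections import Counter

def praise_vs_issues(reviews):
    """Count non-expired praise and issues per month, returning
    {month: (praise_count, issue_count)} ready for plotting."""
    praise = Counter()
    issues = Counter()
    for r in reviews:
        if r.get("expired"):
            continue
        month = r["date"][:7]  # "YYYY-MM"
        if r["kind"] == "praise":
            praise[month] += 1
        elif r["kind"] == "issue":
            issues[month] += 1
    months = sorted(set(praise) | set(issues))
    return {m: (praise[m], issues[m]) for m in months}
```

Plotting the two series against each other shows the trend without collapsing it into a one-size-fits-all quality score.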

Quality indicators box
The goal of the Quality indicators box is to provide a summary of quality information "at a glance" (see example opposite). It is an entry point leading to more detailed information such as reviews (see Summary quality screen below).

The quality indicators box's default state is collapsed: it only displays the most important information. Expanding the box displays additional information, though still in summarized form. It contains a link to the summary quality screen.

The quality indicators box can be part of the same interface as the Article review tool for readers, like the two sides of the same coin. Only one of them should be shown at any time: opening the feedback/review interface should collapse the quality indicators box.

Main information

 * community assessment status (featured, good, stub, etc.)
 * red flag / "heads-up", e.g.:
   * The article was last changed X (months|days|hours|minutes) ago by X (and hasn't been patrolled since).
   * A certain threshold of issues was reported through reviews.
 * other?

Detailed information

 * The talk page was last changed X days ago.
 * Over the last X days (X = {1, 7, 30, ∞}):
   * number of changes (to give an idea of the activity on the article)
   * number of contributors (to give an idea of the diversity of participants to the article)
   * number of patrolled praise items (to give a general measure of positive feedback)
   * number of patrolled issues (to give a general measure of negative feedback)
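The windowed counts above can be computed in one pass over the relevant events (edits, reviews, etc.), each represented here simply by its timestamp; `now` is passed in for testability.

```python
from datetime import datetime, timedelta

WINDOWS = (1, 7, 30, None)  # days; None means "all time" (∞)

def windowed_counts(events, now):
    """Count events over the last 1, 7, and 30 days, and over all
    time. `events` is a list of datetimes."""
    counts = {}
    for days in WINDOWS:
        if days is None:
            counts["all"] = len(events)
        else:
            cutoff = now - timedelta(days=days)
            counts[f"{days}d"] = sum(1 for e in events if e >= cutoff)
    return counts
```

Running this separately over changes, contributors, patrolled praise and patrolled issues would fill the four detailed indicators.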

Other information
This area could also host other quality-related information and tools, such as:
 * warning templates that are currently displayed as banners on top of articles (POV, verifiability, etc.)
 * Flagged revisions / Pending changes information
 * WikiTrust switch on/off or similar tools

Summary quality screen


More detailed metrics and information extracted from the aggregation of reviews can be displayed in a "Summary" tab in the review interface for a given article. This summary would include the information present in the Quality indicators box; other possible information may include:

Ratings & reviews:
 * evolution of overall ratings over time
 * number of overall ratings over time
 * evolution of new reviews over time
 * evolution of patrolled reviews over time
 * evolution of promoted reviews over time
 * evolution of recycled reviews over time

filter by:
 * time period
 * level of expertise

praise:
 * evolution of praise over time

issues:
 * evolution of issues over time
 * breakdown by type

other relevant metrics:
 * number of edits
 * number of views
 * evolution of the size of the article over time
 * evolution of the number of citations over time
 * evolution of the citation density (citations / article size)

Note: reviews automatically expire after some time / revisions.
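The expiry rule could be sketched as a combination of age and revision drift. The thresholds of 180 days and 50 revisions below are illustrative assumptions, not part of the proposal.

```python
from datetime import datetime, timedelta

def is_expired(review_date, reviewed_revision, current_revision, now,
               max_age=timedelta(days=180), max_revisions=50):
    """A review expires once it is too old, or once the article has
    moved on by too many revisions since the review was written."""
    too_old = now - review_date > max_age
    too_stale = current_revision - reviewed_revision > max_revisions
    return too_old or too_stale
```

Only non-expired reviews would then feed the praise/issues plots and the quality indicators.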

If users can rate specific dimensions like objectivity or completeness, these ratings can be aggregated and charts generated, like those used on Mozilla Input.

Other (non-quality-specific) metrics:
 * date created
 * Contributors who added the most content are: X