Article feedback/Public Policy Pilot/Design Phase 2

This page describes the design for phase 2 of the Article Feedback Tool pilot. It is a companion to the phase 1 design document and describes only the changes for phase 2; it assumes that the reader is familiar with the phase 1 design.

Scope
For phase 2 of the Article Feedback pilot, we want to address the following issues:

 * 1) "Expired" ratings. Even if we do not define the formula itself, we should at least develop a system through which such a formula can be applied. This is both a feature modification and a behavior change.
 * 2) Clearing ratings. The ability for users to "clear" any rating values they had previously applied. This is a feature modification.
 * 3) Instant Submit. Modify the feature's widget to allow for instant submission and reveal. This is part of an A/B test against the current behavior (requiring a "Submit" button). This is a behavior change.
 * 4) Modified Survey. Different questions, perhaps more clearly targeted.
 * 5) Post-rating "calls to action". We want to see whether the simple act of rating an article can serve as a gateway toward editing or creating an account. This is a feature modification, a behavior change, and a data-tracking experiment.
 * 6) Post-rating Survey Push. We want to be more aggressive about asking people to take our survey. This is a behavior change.

Stretch Goals

 * 1) Performance increase. It would be good to know that the tool will not collapse beyond a certain usage threshold.

Out of Scope
The following features were deemed out of scope for phase 2, either because of resource availability constraints or design readiness:

 * 1) Rating histograms. There is currently insufficient data to determine which histograms, if any, would be of most value. Accordingly, spending our limited resources on an ill-defined feature component is inadvisable.
 * 2) Component placement. Without better data on the community's opinion of the tool's importance or usefulness, changing its position and presentation at this time would be premature.

A/B/C/D Styles
Four styles will be tested. These styles can be mapped into a two-by-two matrix based on two primary feature axes:

 * 1) The location of the Article Feedback Tool (or its "activation" mechanism):
   * The bottom of the page, below the reference lists, or
   * Activated from a link in the page's sidebar.
 * 2) Whether or not the Article Feedback Tool utilizes a "submit" button:
   * The tool requires the user to click the "submit" button for any ratings to be sent to the server, or
   * Individual ratings are submitted to the server upon selection.

This table describes the matrix:

                   With submit button    Without submit button
 Page bottom       Style 1               Style 2
 Sidebar link      Style 3               Style 4

Each of the axes is designed to determine optimal completion rates for the tool.

A/B/C/D Test Support
A primary goal of phase 2 is the inclusion of a series of A/B tests based on the tool's display and the display's position. To support this, the following changes must happen:

 * 1) One or more columns (as needed) must be added to the database tables. This column will store which "style" of the tool the user was given and rated. The style will be sent to the server through the API.
   * This could probably be an integer. There are five total defined "styles" for the tool:
     * 0: the "phase one" style (all old entries will be given this value)
     * 1: phase 2 style, page-bottom location, with submit button
     * 2: phase 2 style, page-bottom location, without submit button
     * 3: phase 2 style, sidebar location, with submit button
     * 4: phase 2 style, sidebar location, without submit button
 * 2) A mechanism to store a cookie on the client indicating which style the user has been bucketed into.
 * 3) A mechanism to generate the style selected for the user.

While it would be ideal to store style values for logged-in users on the server (thus ensuring that they have the same experience during the testing phase regardless of their client), the development timeframe prevents this.
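As a sketch of the integer style values and the bucketing mechanism (the extension itself is PHP/JavaScript; all names here are illustrative, not from its actual code):

```python
import random
from enum import IntEnum

class FeedbackStyle(IntEnum):
    """Hypothetical enum mirroring the five defined style values."""
    PHASE_ONE = 0        # all pre-phase-2 database rows get this value
    BOTTOM_SUBMIT = 1    # page bottom, with submit button
    BOTTOM_INSTANT = 2   # page bottom, without submit button
    SIDEBAR_SUBMIT = 3   # sidebar link, with submit button
    SIDEBAR_INSTANT = 4  # sidebar link, without submit button

def generate_style() -> FeedbackStyle:
    """Bucket a new user into one of the four phase-2 styles at random."""
    return random.choice([
        FeedbackStyle.BOTTOM_SUBMIT,
        FeedbackStyle.BOTTOM_INSTANT,
        FeedbackStyle.SIDEBAR_SUBMIT,
        FeedbackStyle.SIDEBAR_INSTANT,
    ])
```

The generated value would be written both to the client-side cookie and to the new database column on each rating submission.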

Determining Applied Style
When a user visits a page that should display the tool, the following happens:

 * 1) The server checks the value of the article feedback "style" cookie. If the cookie is present and its value is valid, the system produces that style of display.
 * 2) If the value is missing or invalid (outside the range 1-4), a new style value will be generated and stored on the client.

The value of the style cookie should be overridable via a URL parameter to enable testing.

TBD: Storage of style counts for statistical analysis
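The resolution order above (URL override, then cookie, then a fresh bucket) could be sketched as follows; this is a hypothetical helper, not the extension's actual code:

```python
import random

def resolve_style(cookie_value, url_override=None):
    """Return the style (1-4) to render for this request.

    Order of precedence, per the rules above:
      1. A valid URL parameter override (for testing).
      2. A valid existing cookie value.
      3. A freshly generated style (the caller must then set the cookie).
    """
    def valid(value):
        try:
            return 1 <= int(value) <= 4
        except (TypeError, ValueError):
            return False

    if valid(url_override):
        return int(url_override)
    if valid(cookie_value):
        return int(cookie_value)
    return random.randint(1, 4)  # new bucket; caller stores it client-side
```

Note that 0 (the phase-one style) is deliberately treated as invalid here: it exists only to label pre-phase-2 database rows, so a cookie carrying it forces re-bucketing.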

Fresh/Stale/Expired Calculation
The current "is this rating stale" system must be adjusted to add an "is this rating expired" state as well. The calculations for both should be modified as follows:

A user's ratings are considered "stale" when:
 * The article has accumulated 10 revisions, or
 * The article's size has changed by +/- 20%, or
 * The article has accumulated 5 revisions and its size has changed by +/- 15%.

A user's ratings are considered "expired" when:
 * The article has accumulated 30 revisions, or
 * The article's size has changed by +/- 35%, or
 * The article has accumulated 15 revisions and its size has changed by +/- 20%.

Ideally, both state calculations could be defined in the server's LocalSettings configuration file, allowing them to be changed without redeploying the extension. Even being able to define just the threshold numbers there would be useful.
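The two rule sets above can be expressed as a single classification function. This is an illustrative sketch (in the extension itself the thresholds would ideally be read from LocalSettings):

```python
def rating_state(revisions, size_delta):
    """Classify a rating as fresh, stale, or expired.

    revisions  -- revisions made since the user rated the article
    size_delta -- fractional size change since then (0.25 = 25%),
                  sign ignored, matching the +/- rules above
    """
    change = abs(size_delta)

    # Expired thresholds: 30 revs, 35% change, or 15 revs AND 20% change.
    if revisions >= 30 or change >= 0.35 or (revisions >= 15 and change >= 0.20):
        return "expired"
    # Stale thresholds: 10 revs, 20% change, or 5 revs AND 15% change.
    if revisions >= 10 or change >= 0.20 or (revisions >= 5 and change >= 0.15):
        return "stale"
    return "fresh"
```

Because the expired check runs first, a rating that satisfies both rule sets is reported as expired, not stale.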

Visual Design Changes
Several changes have been made to the visual design. Many changes are specific to a "style" of design while others affect all designs.


 * The "thanks/okay" dialog square has changed to a white background with a border of color #029202.
 * The "error" dialog square has changed to a white background with a border of color #a31205.
 * The stars themselves have undergone some visual changes:
   * The size of an "unselected" star has been decreased and its color muted slightly. This makes it easier for color-blind users to distinguish selected from unselected stars, and provides an additional visual cue for all users.
   * The background color and border of the surrounding "beds" for the stars have been lightened.
 * The survey feedback link has been removed.
 * A "clear all ratings" text link has been added.
 * Both boxes (from phase 1) have been merged into a single box with a divider.

Clear All Ratings
The "Clear all Ratings" link will not be visible until the user has clicked on at least one value in at least one rating axis.

Clicking this link will cause all four ratings to be blanked out.

Depending on the style of the control available to the user, the following will happen:

If the user has the "Submit Button" style, the submit button will become "active". Clicking the button will then cause four values of "0" to be sent to the server and saved for the user. Clicking the button at this time will cause the Survey functionality to fire (if the user has not filled out the survey).

If the user does not have the submit button, four values of "0" will be immediately sent to the server (an auto-submit). This action will not cause the Survey functionality to fire.
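The two branches above can be modeled in a small sketch. The axis names and return shape are placeholders, not the extension's actual API:

```python
RATING_AXES = ("axis1", "axis2", "axis3", "axis4")  # placeholder names

def clear_all_ratings(has_submit_button):
    """Model the 'Clear all ratings' click for both widget styles.

    With a submit button: nothing is sent yet; the zeroes are queued and
    the survey prompt may fire when the user clicks Submit.
    Without a submit button: the four zeroes are auto-submitted at once,
    and the survey prompt does NOT fire.
    """
    zeroes = {axis: 0 for axis in RATING_AXES}
    if has_submit_button:
        return {"sent_now": None, "pending": zeroes, "survey_on_submit": True}
    return {"sent_now": zeroes, "pending": None, "survey_on_submit": False}
```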

Call-to-Action Behavior Changes
There are two major post-rating calls-to-action that we wish to create:


 * 1) Invite the user to fill out our survey.
 * 2) Invite the user to create an account or edit the page.

The "post-rating" trigger fires differently depending on whether or not the Tool includes the submit button.

 * If the Tool contains a submit button, the post-rating trigger will fire upon the click of the "submit" button.
 * If the Tool does not contain a submit button, the post-rating trigger will fire when the fourth rating has been submitted.

When the trigger fires, the tool's box will expand to reveal a new message. The contents of this message are intended to be variable and based on what information we know about the user. If the user has not filled out the survey, a call-to-action to answer the survey will be included in the message.

Ideally, we will be able to tailor the message based on whether or not the user is logged in (has an account) or whether or not they have a history of edits (e.g., we don't want to suggest that a user learn to edit pages if they have thousands of edits, or create an account if they are logged in).
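The tailoring rules above might be sketched like this; the message keys are illustrative, and exactly which user facts are available client-side is still an assumption:

```python
def pick_calls_to_action(logged_in, edit_count, survey_done):
    """Select post-rating messages based on what we know about the user.

    Per the rules above: never suggest creating an account to a
    logged-in user, and never suggest learning to edit to a user
    with an edit history. The survey call-to-action is included
    whenever the survey has not been completed.
    """
    ctas = []
    if not survey_done:
        ctas.append("answer-survey")
    if not logged_in:
        ctas.append("create-account")
    if edit_count == 0:
        ctas.append("learn-to-edit")
    return ctas
```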

Sidebar Activation Behavior
When the tool is activated from the sidebar, a link ("Rate this Article") will be present. Clicking the link will load the tool in a modal window within the page. In this case the tool will also include a "close" button. (Note: the terminology to be used here requires work.)