Article feedback/Public Policy Pilot/Technical

Page Load
This is the process that occurs when a user goes to a page initially.


 * 1) Page request is received.
 * 2) If pageID is within the pilot's purview, we continue:
 * 3) After full page load, the AA javascript fires.
 * 4) * If the browser has a cookie indicating "has ever given a rating", or the user is logged in, the javascript-fired request attempts to fetch this user's previous rating values.
 * 5) The server catches this request and returns the correct data:
 * 6) * The values for the current aggregates for all four questions, as well as the number of ratings received.
 * 7) * If there are individual ratings to be returned (user-level):
 * 8) ** The value given by the user for each question
 * 9) ** Whether or not the values are "stale"
 * 10) ** The number of revisions that have passed since the user gave these ratings.
 * 11) Upon receipt of the data from the server, the client-side javascript assembles the visuals as needed and injects the data into the page.
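The server-side response in steps 5–10 can be sketched as follows. This is a minimal illustration of the data flow, not the extension's actual wire format; all key and function names here are assumptions.

```python
# Sketch of assembling the page-load payload (steps 5-10).
# Field names are illustrative assumptions, not the real schema.

def build_page_load_response(aggregates, count, user_ratings=None, revisions_since=0):
    """Assemble the data the client-side javascript needs to render the widget."""
    response = {
        "aggregates": aggregates,  # current per-question averages, keyed 1-4
        "count": count,            # number of ratings received
    }
    if user_ratings is not None:
        # Individual (user-level) ratings, returned only when they exist.
        response["userratings"] = {
            "values": user_ratings,            # the value this user gave per question
            "stale": revisions_since > 0,      # do the ratings predate the current revision?
            "revisionssince": revisions_since, # revisions since the user rated
        }
    return response

payload = build_page_load_response(
    {1: 3.2, 2: 4.0, 3: 2.8, 4: 3.5}, count=17,
    user_ratings={1: 4, 2: 4, 3: 3, 4: 5}, revisions_since=2)
```

On receipt, the client-side javascript would read these fields to build the visuals and inject them into the page.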

Rating Submission
This is the process that occurs when a user submits ratings. It is effectively the same process whether the ratings are fresh, stale, or a re-rating of the same revision.


 * 1) User selects values for the questions.
 * 2) * User is not required to submit values for all four ratings.
 * 3) User clicks the submit button.
 * 4) If this is a new rating for the user on this revision, a new row will be inserted into the database. If it is a "re-rating" for the revision, an update will be made (so this is basically an upsert on the userid/pageid/revisionid key)
 * 5) It is at this point that the aggregate rating values are calculated.
 * 6) Data is returned to the client in the same format as on page load, and the components are updated in place (not reloaded).
 * 7) A "this user has given a rating to something" cookie is set on the client.

list=articleassessments
where pageid is the page ID of the desired page

This will return a cached result in an object nested like so

Where 1-4 are the indices for the dimensions of the review, count is the number of reviews, and total is the sum of the reviews.
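The nested object might look like the sketch below. The exact key names and nesting are assumptions made for illustration; only the 1-4 / count / total structure comes from the description above.

```python
# Illustrative shape of the cached list=articleassessments result.
# Keys 1-4 index the four rating dimensions; "count" is the number of
# reviews and "total" the sum of the reviews for that dimension.
result = {
    "query": {
        "articleassessments": {
            "pageid": 42,
            "ratings": {
                1: {"count": 17, "total": 54},  # e.g. completeness
                2: {"count": 17, "total": 61},
                3: {"count": 15, "total": 40},
                4: {"count": 16, "total": 48},
            },
        }
    }
}

# A dimension's average is then total / count:
q1 = result["query"]["articleassessments"]["ratings"][1]
avg_q1 = q1["total"] / q1["count"]
```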

Database
The data stored will be in the following tables:

article_assessment_rating
Maps metrics to ids.

article_assessment
Will hold four rows per user per revisionID (one per question). If a user does not provide a value (0 stars), a 0 is entered; that row will not be counted in aggregation values (for now).
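The aggregation rule described here (a 0 means "no value given" and is excluded, for now) can be sketched as:

```python
# Sketch of the aggregation rule: 0-star entries exist as rows but are
# excluded from the aggregate values (for now).

def aggregate(values):
    """Compute count and total for one question, skipping 0 ("no value")."""
    rated = [v for v in values if v != 0]
    return {"count": len(rated), "total": sum(rated)}

agg = aggregate([4, 0, 3, 5, 0])  # two users skipped this question
```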

article_assessment_pages
Will hold one entry per page per rating we're measuring, where rating refers to the rating dimension being measured (1 = completeness, etc.).

Assumptions

 * The page's pageid and current revid need to be exposed to JS as variables.
 * Historical information will be retained per user per article. That is:  if a user rates a given article 5 times on 5 different revisions, all 5 ratings will be stored to facilitate "over time" statistical analysis.
 * Note that if the user re-rates the same revision, the data will be an update and not an insert. So if the user rates a given article 6 times on 5 revisions, only 5 entries will be stored.

Limitations

 * Any anonymous user who uses a different browser or clears their cookies will be treated by the system as a different user.
 * We need to limit the MWHooks to a single call, so that code is only injected on pages that have AA enabled and that comparison is made only once.

Open Issues

 * Cache invalidation - On every new rating, the cached version of this page will have to be invalidated.
 * We already mark pages as invalid after an edit; the short-term solution is to do the same after each new rating.
 * Long-term solution is to get the ratings info from an API call that has cache-able resources
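The short-term strategy above (invalidate the cached ratings whenever a new rating arrives, just as after an edit) amounts to the following sketch; the cache layer and names are assumptions.

```python
# Sketch of the short-term cache strategy: drop the cached entry on each
# new rating so the next read recomputes the aggregates.

cache = {}
calls = []  # records each recompute, for illustration

def compute(page_id):
    calls.append(page_id)            # stand-in for the aggregate query
    return {"count": len(calls)}

def get_ratings(page_id):
    """Return cached aggregates for a page, computing them on a miss."""
    if page_id not in cache:
        cache[page_id] = compute(page_id)
    return cache[page_id]

def on_new_rating(page_id):
    """A rating arrived: invalidate, so the next read recomputes."""
    cache.pop(page_id, None)

get_ratings(42)     # miss -> compute
get_ratings(42)     # hit  -> served from cache
on_new_rating(42)   # invalidate
get_ratings(42)     # miss -> recompute
```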


 * Only to appear on select pages
 * Short-term solution is to have a configurable list of page titles or IDs that this extension will be visible on, and optimize to exit out if the page is not among these
 * A long-term solution is not necessary, as this will eventually be enabled on all pages.
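The short-term gating above (a configurable list of pages, with an early exit for everything else) reduces to a check like this; the variable and function names are assumptions.

```python
# Sketch of the short-term gating: exit early unless the page is in the
# configured list of pages the pilot is enabled on.

AA_ENABLED_PAGE_IDS = {42, 1337}  # configurable list (illustrative values)

def should_inject_aa(page_id):
    """Return True only for pages that have AA enabled."""
    return page_id in AA_ENABLED_PAGE_IDS
```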