Moderator Tools/Automoderator/Testing

To help communities test and evaluate Automoderator's accuracy, we are making a test spreadsheet available with data on past edits and Automoderator's decisions.

The decisions Automoderator would make are a combination of a machine learning model score and internal configuration. The model will be retrained and improve over time, but we also want to understand what internal configuration rules we can apply to improve Automoderator's accuracy. For example, we have found that Automoderator often misjudges users reverting their own edits as vandalism, so it will not take action on these edits. We would like to find other examples of these kinds of edits, and we need your help to do so.
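To make the combination concrete, here is a minimal sketch in Python of the overall decision, assuming a hypothetical threshold and field names (this is illustrative, not Automoderator's actual implementation):

 from dataclasses import dataclass
 
 # Hypothetical threshold; the real value lives in Automoderator's
 # internal configuration and may change as the model is retrained.
 REVERT_RISK_THRESHOLD = 0.97
 
 @dataclass
 class Edit:
     revert_risk_score: float  # model score in [0, 1]
     is_self_revert: bool      # user reverting their own edit
 
 def would_revert(edit: Edit) -> bool:
     # Act only when the model score is high AND no internal
     # configuration rule exempts the edit.
     if edit.is_self_revert:
         # Self-reverts are routinely misjudged by the model,
         # so internal configuration skips them entirely.
         return False
     return edit.revert_risk_score >= REVERT_RISK_THRESHOLD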

How to test Automoderator

 1. Make a copy of this spreadsheet by clicking File > Make a copy.
 2. Select the 'Share it with the same people' option before clicking 'Make a copy' so that we can aggregate data from your responses.
 3. Follow the instructions in the sheet to select a random dataset, review 30 edits, and then uncover what decisions Automoderator would make for each edit.
 4. Join the discussion on the talk page.

Details
Automoderator's model is only trained on main namespace pages, so the dataset is limited to Wikipedia article edits. In the current version of the dataset, regardless of the model score, Automoderator does not take action on:

 * Edits made by administrators
 * Edits made by bots
 * Edits which are self-reverts
 * New page creations

The list above will be updated if we change the dataset as testing progresses.
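As a sketch of how these exemptions work, the rules above amount to a filter that runs before the model score is even considered; the field names below are hypothetical, chosen only for illustration:

 def is_exempt(edit: dict) -> bool:
     # True if internal configuration exempts this edit from any
     # action, regardless of its model score.
     return (
         "sysop" in edit.get("user_groups", [])  # administrator
         or edit.get("is_bot", False)            # bot account
         or edit.get("is_self_revert", False)    # self-revert
         or edit.get("is_page_creation", False)  # new page creation
     )
 
 # Example: keep only the edits Automoderator would actually consider.
 edits = [
     {"user_groups": ["sysop"]},
     {"user_groups": [], "is_self_revert": True},
     {"user_groups": []},
 ]
 actionable = [e for e in edits if not is_exempt(e)]
 print(len(actionable))  # 1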

Score an individual edit
If you want to get a Revert Risk score for an individual edit, you can do so with the LiftWing API ... TODO
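As a sketch, one way to request a score from the language-agnostic Revert Risk model on LiftWing from Python (the endpoint and payload follow the public LiftWing documentation; check the Wikimedia API portal for the current schema, rate limits, and authentication requirements):

 import requests
 
 # Language-agnostic Revert Risk model hosted on LiftWing.
 URL = ("https://api.wikimedia.org/service/lw/inference/v1/models/"
        "revertrisk-language-agnostic:predict")
 
 def score_revision(rev_id: int, lang: str = "en") -> dict:
     # Request a Revert Risk score for a single revision ID.
     response = requests.post(URL, json={"rev_id": rev_id, "lang": lang})
     response.raise_for_status()
     return response.json()
 
 # Example with a placeholder revision ID; substitute a real one.
 result = score_revision(rev_id=12345)
 # The probability that the edit will be reverted is in the output, e.g.:
 print(result["output"]["probabilities"]["true"])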

Note that this is just the model score, and does not take into account Automoderator's internal configuration, such as the exemptions listed above.