ORES/Sessions

Purpose
ORES, as a predictive algorithm, can already predict the quality of single edits and articles; this project aims to extend that capability to sessions of multiple related edits. Being able to predict session quality paves the way for potential future tools, such as automatically detecting promising new editors or edit wars on pages. This idea is not new: since 2014, Snuggle has been trying to detect new editors who may have been bitten by vandal-fighters, but its infrastructure relies on pre-ORES technology and is not easily generalizable. Continuing that stream of work, we plan to use session quality to focus on newcomer retention by conducting a small experiment.

Experiment 1
This experiment tries to determine the relationship between the "damaging" and "goodfaith" predictions of single edits and the same labels applied to the edit sessions of newcomers. We don't yet know whether the average, the maximum, or some other machine learning function of edit quality could be used to predict session quality. The experiment can be found at https://labels-experiment.wmflabs.org/ui/ and is being conducted with enwiki, frwiki, and plwiki for now.
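To make the question concrete, the candidate aggregations mentioned above could be sketched as session-level features computed from per-edit prediction scores. This is a hypothetical illustration, not the experiment's actual implementation; the feature names and the choice of mean/max are assumptions.

```python
# Hypothetical sketch: turning per-edit "damaging" probabilities
# (e.g. from ORES) into candidate session-level features. Whether any
# of these aggregations actually predicts session quality is exactly
# what the experiment is meant to find out.

def session_features(edit_scores):
    """edit_scores: list of per-edit damaging probabilities for one session."""
    return {
        "mean_damaging": sum(edit_scores) / len(edit_scores),
        "max_damaging": max(edit_scores),
        "min_damaging": min(edit_scores),
        "n_edits": len(edit_scores),
    }

# A session of three edits, one of which looks suspicious:
feats = session_features([0.05, 0.72, 0.10])
```

A downstream model (or a simple threshold) could then be trained on features like these against the human session labels collected in the experiment.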

Labelling Instructions
Each task presents you with a session of edits by a newcomer. A session is one or more edits, each occurring less than an hour after the previous edit. A newcomer is an editor whose sessions happen on the same day as their date of registration. You are asked to make two decisions about each session. Decision 1) "Damaging/Not Damaging": decide whether or not you would revert all of these edits. Decision 2) "Goodfaith/Badfaith": decide whether you think the author was trying to contribute productively.
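The one-hour session definition above can be sketched in a few lines. This is a minimal illustration of the cutoff rule, assuming a user's edit timestamps are already sorted; it is not Wikilabels code.

```python
from datetime import datetime, timedelta

# A new session starts whenever an edit arrives an hour or more
# after the previous edit by the same user.
CUTOFF = timedelta(hours=1)

def split_sessions(timestamps):
    """timestamps: one user's edit times, sorted ascending."""
    sessions = []
    current = []
    for ts in timestamps:
        if current and ts - current[-1] >= CUTOFF:
            sessions.append(current)
            current = []
        current.append(ts)
    if current:
        sessions.append(current)
    return sessions

edits = [datetime(2018, 5, 1, 9, 0),
         datetime(2018, 5, 1, 9, 20),   # 20-minute gap: same session
         datetime(2018, 5, 1, 11, 0)]   # 100-minute gap: new session
```

Under this rule the three edits above form two sessions: the first two edits together, and the third on its own.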

Technical Details
The session labelling apparatus is an extension of the Wikilabels tool. This generalization is deliberate, paving the way for other types of "sessions" that are simply a set of edits. For instance, a "session" on a page could be all the edits occurring on that page within a given timespan, which could be used to detect an edit war.

Work Disclaimer
I, Max Klein, am carrying out this work as a contractor of the Wikimedia Foundation, under the Scoring Team. The work is also mutually beneficial to my role with CivilServant.io. I don't believe there to be any conflict of interest between those two roles. I am glad to answer questions about it.