The page asked that people contact the team through said email address, but attempting to do so gives "550 Address firstname.lastname@example.org does not exist". I've removed the address from the page for the time being.
Thanks for noticing! I'm replacing it with the correct address, scoring-internal
New question: What information does ORES use to evaluate an edit?
Does it just use the content of the edit itself? Does it consider whether the editor is logged-in or not? The age of the editor's account? The permissions of the account? The time of day the edit was made?
@EpochFail @ACraze (WMF) - I bet y'all know the answer to this.
Yes to all of the above. Check out https://ores.wikimedia.org/v3/scores/enwiki/123457/damaging?features
You can experiment with the local gradients for each feature by injecting counterfactuals: ORES/Feature injection
For a more systematic way of exploring how the features lead to a given prediction, there are libraries which vary each feature and show its impact on the output. For example: https://github.com/adamwight/ores-lime/blob/master/Explain%20edit%20quality.ipynb
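As a small sketch of the two approaches above: the first URL lists the features ORES extracted for a revision, and feature injection lets you re-score the same edit with a feature overridden via `feature.<name>` query parameters. The feature name below is illustrative; list the real names for your wiki's model with `?features` first.

```python
from urllib.parse import urlencode

ORES = "https://ores.wikimedia.org/v3/scores"

def score_url(wiki, rev_id, model, injected=None):
    """Build an ORES score URL, optionally injecting counterfactual
    feature values via feature.<name> query parameters."""
    url = f"{ORES}/{wiki}/{rev_id}/{model}?features"
    if injected:
        url += "&" + urlencode(injected)
    return url

# Actual score, with the extracted features included in the response:
print(score_url("enwiki", 123457, "damaging"))

# Counterfactual: same edit, but pretend the editor was logged out.
# (Feature name is an assumption -- check the ?features output.)
print(score_url("enwiki", 123457, "damaging",
                {"feature.revision.user.is_anon": "true"}))
```

Comparing the two scores shows the local effect of that one feature on the prediction, which is exactly what the LIME notebook above does systematically for every feature.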
Explanation of ArticleQuality.js script
I just talked through the ArticleQuality script and how to interpret it with another Wiki Education staffer, and we thought a similar explanation would be good to include in the FAQ.
We were looking at the result for https://en.wikipedia.org/wiki/Rachel_Carson: FA (4.7), and the question was, what does the 4.7 mean? (47% likely to be an FA? 4.7% likely to be an FA?)
The answer I gave is that it's based on the probabilities that ORES gives for the possible quality levels: https://ores.wmflabs.org/v3/scores/enwiki/?models=wp10&revids=877812769
So it says FA because FA is the most likely class (at 42%), but the 4.7 is the weighted average of the class probabilities, on a scale from 1 (definitely a Stub) to 6 (definitely an FA), which puts this article closer to GA.
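To make the arithmetic concrete: only the 0.42 for FA below comes from the actual score above; the probabilities for the other classes are made up for illustration.

```python
# Hypothetical class probabilities; only FA's 0.42 is from the
# actual Rachel Carson score -- the rest are illustrative.
probs = {"Stub": 0.02, "Start": 0.04, "C": 0.13,
         "B": 0.22, "GA": 0.17, "FA": 0.42}
# Scale from the FAQ: 1 = definitely a Stub, 6 = definitely an FA.
weights = {"Stub": 1, "Start": 2, "C": 3, "B": 4, "GA": 5, "FA": 6}

prediction = max(probs, key=probs.get)               # most likely class
weighted = sum(probs[c] * weights[c] for c in probs)  # weighted average

print(prediction, round(weighted, 1))  # FA 4.7
```

So the label comes from the argmax of the probabilities, while the number is the probability-weighted position on the 1-6 scale.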
Help answer ORES expert-level questions...
Here are some questions we would still like to answer! Feel free to share answers or ask something that hasn't been covered in the FAQ!
What hosts are used for ORES and where do they fit in the architecture?
Where are the deployment repositories?
How do I install the ORES package?
How do I develop ORES?
How do I start a dev instance?
How do I submit a PR to this repository?
How do I use ORES' API client?
What's a ScoringSystem and how does it work?
In what sense does the ORES wp10 model measure the "quality" of an article? Does it differentiate between high- and low-quality sources? Good and bad writing? Can I use it for grading Wikipedia editing assignments?
We're incorporating more WP10-based info into the Wiki Education Dashboard these days... I actually just deployed a new feature that links to this very FAQ for background... and these are the types of questions that academics are likely to ask. More explanation of what features do and do not go into the models would be most welcome.
Ooh. Good Q. I'm happy to have a more substantial writeup about the wp10 model.
Sage, it's not much, but check out what I've added to ORES#Assessment_scale_support. What do you think? I think we could write a whole page about what the model directly and indirectly accounts for. But I figured this was a good start.
Yeah, good start!
Another question we still need to answer:
- Which models are used by each known consumer? For example, what is using wp10 predictions?