User:Htriedman

I'm a Privacy Engineer with the Wikimedia Foundation's Security and Privacy Team. I focus on algorithmic approaches to privacy and fairness.

Algorithmic Accountability Sheets Proof of Concept
Following my previously-authored guidelines for good algorithmic governance, I've written three proof-of-concept algorithmic accountability sheets — one for each level in the algorithmic decision-making stack (dataset, model, service).

For intelligibility across WMF and the broader Wikimedia community, I've focused on English-language modeling. I'll be analyzing the enwiki edit quality dataset, the damaging/good faith edit models, and the larger ORES service.

In principle, these analyses can be replicated for any algorithmic component, in any language. If/when this process is formalized and put into production, algorithmic components and their analyses will be posted on Wikidata in order to ensure consistency and translatability between languages.

What is the motivation for creating this dataset?

 * The motivation for creating this dataset is to train an English-language model that predicts edit quality for the ORES service. Specifically, the dataset contains labels indicating whether each of ~20,000 edits from 2014-2015 is “damaging” and whether it was made “in good faith”.

Who created this dataset?

 * Aaron Halfaker (aaron.halfaker@gmail.com)

Who currently owns/is responsible for this dataset?

 * The WMF ML team

Who are the intended users of this dataset?

 * WMF employees, community members, and stakeholders

What should this dataset be used for?

 * Training English language models to predict damaging and good faith edits for a production context

What should this dataset not be used for?

 * Training models to predict English language text quality outside the context of MediaWiki services
 * Training models to predict Wikipedia edit quality for any language other than English

What community approval processes has this dataset gone through?

 * It was labeled through a crowd-sourced human computation effort on wikilabels, and ORES, the service that it helped train, is used for reviewing edits every day. Besides that (which may count as an implicit community approval process), there was no official approval process.

What internal or external changes could make this dataset deprecated or no longer usable?

 * The passage of a significant amount of time
 * Changes to downstream use cases
 * Changes to upstream data sources

How should this dataset be licensed?

 * Creative Commons Attribution ShareAlike 3.0

How is the data collected?

 * Unclear, but the data appears to have been randomly sampled from all revisions made between April 2014 and April 2015

Is the data continuously updated? If it is, are there links to older versions of the dataset?

 * No, the data is not continuously updated.

If the data is labeled, how does that process work?

 * This data was labeled in 2015-2016 using wikilabels, a distributed human computation engine. Individual volunteers receive batches of 50 revisions to rate, and are asked to judge 1) whether a revision is in good faith or bad faith and 2) whether a revision is damaging or not damaging.
 * It is unclear whether individual judgments serve directly as final labels for the dataset, or whether multiple judgments per revision are aggregated into a single label (a purely hypothetical aggregation sketch follows below).
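
Purely as an illustration of the second possibility, here is a minimal sketch of what majority-vote aggregation over multiple wikilabels judgments could look like. The record layout (rev_id, damaging, goodfaith) and the tie-breaking rule are assumptions for illustration, not a description of the actual labeling pipeline.

```python
from collections import defaultdict

# Hypothetical per-judgment records: (rev_id, damaging, goodfaith).
# The real wikilabels output format may differ.
judgments = [
    (123456, False, True),
    (123456, True,  True),
    (123456, False, True),
    (654321, True,  False),
]

def majority(votes):
    """Majority vote over booleans; ties resolve to False."""
    return sum(votes) > len(votes) / 2

by_rev = defaultdict(lambda: {"damaging": [], "goodfaith": []})
for rev_id, damaging, goodfaith in judgments:
    by_rev[rev_id]["damaging"].append(damaging)
    by_rev[rev_id]["goodfaith"].append(goodfaith)

labels = {
    rev_id: {name: majority(votes) for name, votes in vote_lists.items()}
    for rev_id, vote_lists in by_rev.items()
}
print(labels)
# {123456: {'damaging': False, 'goodfaith': True},
#  654321: {'damaging': True, 'goodfaith': False}}
```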

If the data is preprocessed or cleaned, how does that process work?

 * The revscoring package dynamically fetches the dataset and extracts features about (see the usage sketch after this list):
   * the type of page a revision occurred on
   * the parent of a revision
   * characters, words, tokens, links, etc. added to or removed from a page
   * changes in the number of bad words, dictionary words, and non-dictionary words
   * user privileges of the revision author
 * Some columns encode the natural log of the underlying feature
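
The extraction step follows revscoring's documented usage pattern. A minimal sketch is below; the model filename and revision ID are placeholders, and the exact feature set is whatever the loaded model definition declares.

```python
import mwapi
from revscoring import Model
from revscoring.extractors.api.extractor import Extractor

# Load a trained edit quality model (placeholder path).
with open("models/enwiki.damaging.model") as f:
    scorer_model = Model.load(f)

# The API extractor fetches revision data live from the MediaWiki API and
# computes the model's feature vector (page type, parent revision,
# added/removed tokens, bad-word deltas, user rights, etc.).
extractor = Extractor(mwapi.Session(
    host="https://en.wikipedia.org",
    user_agent="algorithmic accountability sheet demo"))

rev_id = 123456789  # placeholder revision ID
feature_values = list(extractor.extract(rev_id, scorer_model.features))
print(scorer_model.score(feature_values))
```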

How is the data distributed statistically?
Total samples: 19,264

Time distribution
Range: 2014-04-15 to 2015-04-14

Minimum month: 1,481 revisions (2014-06-14 to 2014-07-15)

Maximum month: 1,880 revisions (2015-03-15 to 2015-04-14)

Time since registration
Range: 0 to 13.566 years

Revision tags and mobile edits
Total number of revisions with tags: 759

Total number of revisions with at least one mobile edit tag: 497

Distribution of tags:

Geographic analysis of anon revisions
Total number of anon revisions: 3,467

IPv4 anon revisions: 3,305

IPv6 anon revisions: 162

So far, I've been unable to link anonymous revisions made from IPv6 addresses to their originating countries. Luckily, they make up only 4% of the anonymous revisions we're considering, so they are unlikely to change the broad distribution of edits. Looking at countries that have contributed more than 25 revisions (76.8% of the available data), the data appears to be well distributed across the English-speaking world.
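
For reference, the IPv4/IPv6 split above can be computed with Python's standard library alone, since anonymous revisions record the editor's IP address in the username field. Resolving those addresses to countries is a separate step that requires an IP geolocation database and is not shown here; the field name below is an assumption.

```python
import ipaddress

def ip_version(username):
    """Return 4 or 6 if 'username' is an IP address (an anonymous
    editor), or None for registered usernames."""
    try:
        return ipaddress.ip_address(username).version
    except ValueError:
        return None

print(ip_version("203.0.113.7"))   # 4    -> IPv4 anonymous revision
print(ip_version("2001:DB8::1"))   # 6    -> IPv6 anonymous revision
print(ip_version("ExampleUser"))   # None -> registered editor
```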

Text data distributions
Among all revisions: n = 19,264
Among revisions judged as damaging: n = 745
Among revisions judged as good faith: n = 18,758
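
A minimal pandas sketch of how counts like those above could be recomputed from the labeled dataset. The file path and column names (rev_timestamp, damaging, goodfaith) are assumptions about the export format, not a documented schema.

```python
import pandas as pd

# Placeholder path; adjust to wherever the labeled TSV actually lives.
df = pd.read_csv("enwiki.labeled_revisions.tsv", sep="\t")

print(len(df))                                  # total samples

# Assuming ISO-style timestamps; MediaWiki-style timestamps may need
# format="%Y%m%d%H%M%S".
df["rev_timestamp"] = pd.to_datetime(df["rev_timestamp"])
monthly = df.set_index("rev_timestamp").resample("MS").size()
print(monthly.min(), monthly.max())             # least/most active months

print(df["damaging"].value_counts())            # damaging vs. not damaging
print(df["goodfaith"].value_counts())           # good faith vs. bad faith
```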

Are there any sensitive attributes contained in the dataset?
Sensitive attributes include:

 * username (and if no username, then IP address)
 * time since registration
 * page edited
 * exact timestamp of edit

This dataset can also be linked with other on-wiki data (timestamp, comments, mobile edits, etc.) through mwapi, and IP addresses can be resolved to (relatively precise) locations, as sketched below.
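
To make the linkage risk concrete, here is a minimal sketch using the mwapi Python client to pull additional metadata for a single labeled revision; the revision ID is a placeholder.

```python
import mwapi

session = mwapi.Session("https://en.wikipedia.org",
                        user_agent="algorithmic accountability sheet demo")

# Fetch public metadata for one labeled revision (placeholder rev ID).
response = session.get(
    action="query",
    prop="revisions",
    revids=123456789,
    rvprop="ids|timestamp|user|comment|tags")

for page in response["query"]["pages"].values():
    for rev in page.get("revisions", []):
        # 'user' is an IP address for anonymous revisions.
        print(rev["user"], rev["timestamp"], rev.get("tags"))
```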

New editors (<1 year since account creation)
Total: 6,215

Anonymous editors
Total: 3,467

Mobile editors
Total: 497

Non-US anonymous editors
Total: 2,087

Which models and services rely on this dataset?
The enwiki edit quality good faith/damaging model (described below) trains on this dataset. That model is a part of the ORES service, which indirectly relies on this dataset.
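
For reference, both models can be queried (as of this writing) through the public ORES scoring API. A minimal sketch, with a placeholder revision ID:

```python
import requests

rev_id = 123456789  # placeholder revision ID
resp = requests.get(
    "https://ores.wikimedia.org/v3/scores/enwiki/",
    params={"models": "damaging|goodfaith", "revids": rev_id},
    headers={"User-Agent": "algorithmic accountability sheet demo"})
resp.raise_for_status()

scores = resp.json()["enwiki"]["scores"][str(rev_id)]
print(scores["damaging"]["score"]["probability"])   # e.g. {'false': 0.97, 'true': 0.03}
print(scores["goodfaith"]["score"]["probability"])
```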

Enwiki Edit Quality Good Faith/Damaging Model Card
tk

ORES Service Sheet
tk