Wikimedia Developer Summit/2017/AI ethics


Session Overview

 * Title: Algorithmic dangers and transparency -- Best practices
 * Day & Time: Tuesday, January 10th at 1:10PM PST
 * Room: Hawthorne
 * Phabricator Task Link: https://phabricator.wikimedia.org/T147929
 * Facilitator(s): Aaron Halfaker
 * Note-Taker(s): Michael H, Ejegg
 * Remote Moderator:
 * Advocate: Pau
 * Stream link: https://www.youtube.com/watch?v=myB278_QthA
 * IRC back channel: #wikimedia-ai (http://webchat.freenode.net/?channels=wikimedia-ai)

Detailed Summary
https://etherpad.wikimedia.org/p/devsummit17-AI_ethics

Purpose
Advance our principles regarding what is acceptable or problematic in the use of advanced algorithms.

Agenda

 * What's an AI?
 * What do we use them for?
 * Why are we worried about AIs?
 * Some thoughts:
 * Stack protocol: one hand (new topic), two hands (continue the current thread)
 * Gatekeeper: Please use your judgement
 * What happens next?
 * Method: https://en.wikipedia.org/wiki/Grounded_theory
 * Results: https://meta.wikimedia.org/wiki/Research:Algorithmic_dangers_and_transparency
 * Victory!  _o/  \o/  \o_

Style
Problem solving (discussion)

Discussion Topics

 * 1) People affected by a prediction model should have the means to discuss it and to identify aspects that seem problematic.
 * 2) Training data given to models should be proper wiki artifacts, so that it can be edited and discussed.
 * 3) How do we implement policies based on the "protected class" concept?
 * 4) Any AI model should account for change in the dynamics of the social system -- models should reflect our values as they change.
 * 5) With vandalism tools, there is a tradeoff between educating users about why they were flagged and training vandals to be more subtle.