Wikimedia Scoring Platform team
Welcome to the home of the Wikimedia Scoring Platform team. For our past work as an ad-hoc, volunteer project, see m:Research:Revision_scoring_as_a_service/Archived. As of July 2017, we are an officially funded team operating within the Technology Department at the Wikimedia Foundation.
- Principal Research Scientist
- Software Engineer (WMDE)
- Associate Product Manager (WMF)
- とある白い猫 (IEG)
- Arthur Tilley (IEG)
- He7d3r (IEG/Volunteer)
- Yuvipanda (Volunteer)
- Sumit (Volunteer)
- Ewitch (Research Intern)
- Marius Hoch (Software engineer)
- Max Klein (Software engineer)
- Adam Wight (Software engineer)
There are two divergent conversations about artificial intelligence: in one, robots will save us from ourselves; in the other, they destroy us. AI has great potential to help our projects scale by reducing the work that our editors need to do and enhancing the value of our content to readers, but AIs also have the potential to perpetuate biases and silence voices in novel and insidious ways. Imagine a world where AIs are powerful, open, accessible, auditable tools that Wikimedians use to make their work easier. We develop and maintain AI services (like m:ORES) and the related technologies as a means to unlocking that future. We are an experimental, research-focused, community-supported, AI-as-a-service team. Our work focuses on balancing efficiency and accuracy with transparency, ethics, and fairness.
- We deliver advanced machine prediction in real time through an easy-to-use web service
- We focus on supporting all wiki communities that collaborate with us
- We develop strategies and technological support for identifying and mitigating hidden biases
- We publish, communicate, and promote what we've learned so that others can benefit
- ORES -- Machine learning prediction as a web service (see the list of tools that use ORES)
- m:Wiki labels -- Training interface where Wikipedians teach machines how to perform important tasks
- revscoring -- A machine prediction "scoring" framework for building prediction models used by ORES
- JADE -- A robust system for gathering feedback and false-positive reports, allowing humans to review and refute ORES scores
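As an illustration of how tools consume ORES, scores are fetched over plain HTTP from the public v3 endpoint and returned as nested JSON keyed by wiki, revision ID, and model. The sketch below builds a request URL and parses a sample response; the revision IDs and score values are made up for illustration, and only the endpoint shape follows the public ORES API.

```python
import json
from urllib.parse import urlencode

ORES_HOST = "https://ores.wikimedia.org"

def ores_scores_url(wiki, model, rev_ids):
    """Build a request URL for ORES scores for the given revisions.

    Multiple revision IDs are joined with "|", which urlencode
    percent-encodes as %7C in the query string.
    """
    query = urlencode({"models": model,
                       "revids": "|".join(str(r) for r in rev_ids)})
    return f"{ORES_HOST}/v3/scores/{wiki}/?{query}"

# Hypothetical revision IDs, scored by the "damaging" model on English Wikipedia.
url = ores_scores_url("enwiki", "damaging", [123456, 123457])
print(url)

# A v3 response resembles the JSON below (the probabilities here are
# illustrative, not real scores).
sample = json.loads("""
{"enwiki": {"scores": {"123456": {"damaging": {"score":
  {"prediction": false,
   "probability": {"false": 0.92, "true": 0.08}}}}}}}
""")

# Drill down to the probability that revision 123456 is damaging.
prob = sample["enwiki"]["scores"]["123456"]["damaging"]["score"]["probability"]["true"]
print(prob)
```

A tool would typically threshold on that probability (rather than the bare prediction) so each community can tune its own trade-off between catching damage and flagging good-faith edits.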
We're an open project team. There are many ways you can get in contact with us.