Wikimedia Technology/Goals/2019-20 Q2

Technology Department Team Goals and Status for Q2 FY19/20 in support of the Medium Term Plan (MTP) Priorities and Annual Plan for FY19/20



Analytics
Team Manager: Nuria Ruiz
 * Reduce Platform Complexity. Modern Event Platform
 * Build a reliable, scalable, and comprehensive platform for creating services, tools, and user-facing features that produce and consume event data
 * Resolve the Kafka Connect HDFS licensing issue and decide whether we will use Kafka Connect
 * Initial (Stream) Config Service implementation in Vagrant


 * Smart Tools for Better Data. Make it easier to understand the history of all Wikimedia projects
 * Release MediaWiki History in JSON/CSV or MySQL dump format (the best dataset to date for measuring content and contributors)
 * Deploy the Hadoop client to dump hosts so the public MediaWiki History dataset can reach the dumps in a reasonable timeframe


 * Smart Tools for Better Data. Make it easier to understand how Commons media is used across our projects.
 * Announce the deployment of the mediarequests API
 * Add mediarequests metrics to the Wikistats UI


 * Smart Tools for Better Data. Increase Data Quality, Privacy, and Security
 * Deploy entropy-based alarms for data issues that could indicate bugs, traffic drops due to censorship, or inconsistencies (work continued from Q1)
 * Productionize the Kerberos service
 * Create test Kerberos identities/accounts for selected Analytics Team users in the test cluster (T212258)
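The entropy-based alarm idea above can be illustrated with a minimal sketch (a hypothetical illustration only, not the team's actual implementation; the function names and the 0.5-bit threshold are assumptions):

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a categorical distribution given raw counts,
    e.g. pageviews per country for one hour."""
    total = sum(counts)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def entropy_alarm(current_counts, baseline_entropy, threshold=0.5):
    """Flag the current period if its entropy deviates from the historical
    baseline by more than `threshold` bits. A sudden concentration of traffic
    (possible bug) or the disappearance of one source (possible censorship)
    both shift the entropy even when total volume looks normal."""
    return abs(shannon_entropy(current_counts) - baseline_entropy) > threshold
```

The appeal of entropy over raw-count thresholds is that it tracks the shape of a distribution, so it can surface anomalies without alerting on every ordinary fluctuation in volume.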


 * Core. Operational Excellence. Increase Resilience of Systems
 * New ZooKeeper cluster for tier-2


 * Core. Operational Excellence. Reduce Operational Load by Phasing Out Legacy Systems/Technologies
 * Sunset the MySQL data store for EventLogging (work continued from Q1)
 * Migrate EventLogging to Python 3

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Core Platform
Team Manager: Corey Floyd

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Fundraising Tech
Team Manager: Erika Bjune

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Performance
Team Manager: Gilles Dubuc
 * - Provide performance expertise to the FAWG outcome
 * - Hold 3 or more workshops and training sessions with 1 engineering team
 * - Hire and onboard Systems Performance Engineer
 * - Publish 2 blog posts about performance
 * - Organise and run the Web Performance devroom at FOSDEM 2020
 * - Create performance alerts for 12 different wikis
 * - Create synthetic tests for backend editing with XHGui profile comparison
 * - Expand coverage of metrics from synthetic testing (introducing user journeys). Add 5 new user journeys and a minimum of 7 new metrics
 * - Add a new Graphite instance for synthetic metrics. It needs to be connected with our current Grafana instance and documented.
 * - Migrate ResourceLoader dependency tracking off the RDBMS

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Release Engineering
Team Manager: Tyler Cipriani
 * Build and support a fully automated and continuous Code Health and Deployment Infrastructure
 * - Update the weekly branch-cut script for MediaWiki to allow for automation
 * - Production configuration is compiled into static files on deployment servers
 * - Seakeeper (New CI) proposal for a dedicated CI cluster submitted for feedback
 * - A demonstration MediaWiki development environment hosts the full TimedMediaHandler front-end and back-end workflow
 * Improve and maintain the Wikimedia code review system
 * - Migrate Gerrit master from Cobalt to Gerrit1001
 * - Migrate from Gerrit version 2.15 to 2.16

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Quality and Test Engineering
Team Manager: JR Branaa

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Research
Team Manager: Leila Zia

Content Integrity

-  A comprehensive literature review of disinformation, published on arXiv and Meta (completing the work started in Q1).

-  Build a prioritized list of actions to take (tools to build, datasets to release, etc.) for combating disinformation (through discussions with the community of editors and developers, internal consultation, and possibly external researchers).

-  Build one formal collaboration in the disinformation space to start the research for building solutions in Q3.

Foundational

- Prepare the Research Internship proposal.

-  Finalize the research brief for the cross-lingual topic model, laying out the work that will be done in this space starting in Q3.

-  Literature review of reuse.

-  Review of the different types of reuse and what we know about their effect on traffic to Wikimedia.

-  Review of what data is and is not available to us, and which questions we can and cannot currently answer.

-  Initiate monthly or quarterly office hours for the community (trial for 6 months if monthly, 12 months if quarterly).

-  Wiki Workshop 2020 proposal submission.

-  Plan for a challenge: come up with an initial format, put a committee together, choose a venue for presentations.

Address Knowledge Gaps

-  Finalize the taxonomy of readership gaps

-  Make significant progress towards building the taxonomy of search (usage gaps). (We expect the research part of this work to conclude in Q3, as a stretch in Q2).

-  Literature review of identified content gaps in Wikipedia

-  Taxonomy of the causes of content gaps in Wikipedia

-  Build a series of hypotheses for the possible causes of the skewed demographic representation of Wikipedia readers (specific to gender). Identify possible formal collaborations for research and testing starting in Q3, if relevant based on learnings from the list of hypotheses.

✅ -  Submit the citation usage paper to TheWebConf 2020.

-  (Via mentoring an Outreachy intern) start work on the development of the dataset of statements in need of citations.

- Supervise a student evaluating methods to recommend images to Wikipedia pages.

- Train from scratch and evaluate an end-to-end (simple) classification model using Wikimedia Commons categories, optimized for GPU usage.

- Conduct a literature review, plan and set up collaborations for projects about understanding engagement with Wikimedia images around the world.

Core

-  Complete two 30-60-90 day plans.

-  Finalize a proposal for changes in Research based on learnings about Research's audience, what they expect from the team, our positioning within the WMF, the Movement, and the research community, and the opportunities for impact.

-  Document and communicate with the team: expectations of the Research Scientist role and trajectory in the IC track.

-  Research Showcase feedback collection, assessment, and proposal for changes if relevant.

- A half-yearly newsletter for Research, with the goal of making it quarterly if bandwidth allows and/or the project is successful.

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Scoring Platform
Team Manager: Aaron Halfaker
 * - Jade Entity Page UI
 * - Newcomer quality session models
 * - Expansion of Topic Model to ar, ko, and cswiki

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Search Platform
Team Manager: Guillaume Lederrey

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Security
Team Manager: John Bennett

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Site Reliability Engineering
Directors: Mark Bergsma and Faidon Liambotis

Dependencies on:

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -



Technical Engagement
Team Manager: Birgit Müller

Core
 * - [IaaS] All out of warranty hardware used for offsite backups of Cloud Services data in the codfw datacenter is replaced
 * - [IaaS] 60% of the remaining Debian Jessie systems in the hardware layer underlying Cloud VPS are upgraded to Debian Buster or Stretch
 * - [IaaS] All Debian Jessie instances are removed/replaced in 95% of Cloud VPS hosted projects
 * - [IaaS] Deploy a minimum viable Ceph cluster in eqiad and convert 1+ cloudvirt servers to use it for instance storage
 * - [IaaS] Measure IOPS at the instance level, IOPS at the Ceph cluster level, and the network activity generated in delivering IOPS at the backbone network level, to produce a forecast of the impact of fully converting cloudvirt servers to Ceph instance storage.
 * - [IaaS] Create a shared understanding of system and service continuity and availability constraints in the current Cloud VPS product, which can be used to design follow-on projects that reduce single points of failure and establish practices for testing and maintaining continuity and availability of Cloud VPS core services.
 * - [IaaS] OpenStack APIs and services are upgraded to the "Ocata" release
 * - [PaaS] Deploy a Kubernetes 1.15.2+ cluster in Toolforge which will be used to provide a more modern, secure, and performant PaaS baseline to Tool maintainers.
 * - [PaaS] Migrate 5+ early adopter/beta tester tools from legacy Kubernetes cluster to new Kubernetes cluster to validate integration with ingress proxy layer and sandboxing/isolation of new Kubernetes cluster deployment.
 * - [PaaS] Create timeline and operational plan for migrating all Kubernetes workloads in Toolforge to the new Kubernetes cluster and decommissioning the legacy cluster by the end of FY19/20.
 * - [Docs] Create a functional template and content checklist for Help pages in the Toolforge and Cloud VPS technical content collections.
 * - [Docs] Establish a technical content review process with developers on WMCS team.
 * - [Docs] Noticeably improve readability for 5 instances of Toolforge and Cloud VPS "Help" documentation on Wikitech.

Increased visibility & knowledge of technical contributions, services and consumers across the Wikimedia ecosystem (Reduce Complexity of the Platform, Movement Diversity)
 * - Create a blog by and for technical audiences where members of the technical community can post about their technical work
 * - Publish 6 (min) technical blog posts
 * - Coordinate Tech Talks and increase Tech Talk views by 10% per quarter
 * - Prepare the release of the 2nd edition of the Tech Community Newsletter (publishing date: Jan 2020)
 * - A dashboard for Wikimedia Cloud Services edit data is available to the Wikimedia movement
 * - Provide a “showroom” introducing newcomers to a variety of tools, showing what developers can do in Toolforge, by Q3
 * - Find out what is needed to get data on all technical contributions/contributors
 * - Coordinate with Bitergia and get data on "Avg. Time Open (Days)" for Gerrit patchsets per affiliation and "time to first review" data for patches (by end of Q4).
 * - Gather and publish current numbers on technical contributions provided by Bitergia in the Quarterly Tech Community newsletter (by Jan 2020)

Support Wikimedia's diverse technical communities (Reduce Complexity of the Platform; Movement Diversity)
 * - Develop workshop concept with partner community for technical workshops in Q3
 * - Conduct workshop and document the technical challenges small wikis face in North America
 * - Coordinate Google Code-in (GCI). In Q2/Q3, >35 mentors volunteer to provide tasks and mentor students in >70 task instances
 * - Coordinate Outreachy round 19. At least 5 featured projects are accepted for Outreachy round 19 by Oct 1st ✅. At least five projects are successfully completed by Outreachy interns by end of Q3.
 * - Prepare and hold session on Wikimedia's Tech internships at WikiCon North-America

Dependencies for core work are on: SRE/Data Center Operations team

 Status 
 * October 2019 status -
 * November 2019 status -
 * December 2019 status -