Wikimedia Technology/Goals/2019-20 Q2

Technology Department team goals and status for Q2 FY19/20 in support of the Medium Term Plan (MTP) priorities and Annual Plan for FY19/20



Analytics
Team Manager: Nuria Ruiz
 * Reduce Platform Complexity. Modern Event Platform
 * Build a reliable, scalable, and comprehensive platform for creating services, tools, and user-facing features that produce and consume event data
 * Resolve the Kafka Connect HDFS licensing issue and decide whether we will use Kafka Connect
 * Initial (Stream) Config Service implementation in Vagrant


 * Smart Tools for Better Data. Make it easier to understand the history of all Wikimedia projects
 * Release MediaWiki History in JSON/CSV or MySQL dump format (the best dataset to date to measure content and contributors)
 * Deploy a Hadoop client to the dump hosts so the MediaWiki History public dataset can reach the dumps within a reasonable timeframe


 * Smart Tools for Better Data. Make it easier to understand how Commons media is used across our projects.
 * Announce the deployment of the mediarequests API
 * Add mediarequests metrics to the Wikistats UI


 * Smart Tools for Better Data. Increase Data Quality, Privacy and Security
 * Deploy entropy-based alarms for data issues that could indicate bugs, traffic drops due to censorship, or inconsistencies (this work continues from Q1)
 * Productionize the Kerberos service
 * Create test Kerberos identities/accounts for some selected users from the Analytics team in the test cluster (T212258)
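The entropy-based alarms rely on a simple idea: the Shannon entropy of a traffic distribution (e.g. pageviews per country) is stable under normal conditions, so a sudden shift suggests a bug, an inconsistency, or a source of traffic disappearing (as in censorship-driven drops). The sketch below is only an illustration of that idea, not the team's actual implementation; the data, function names, and threshold are all hypothetical.

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a distribution given as raw counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def entropy_alarm(baseline_counts, current_counts, threshold=0.2):
    """Fire when entropy shifts by more than `threshold` bits vs. the baseline.

    A sharp entropy drop can mean traffic has collapsed onto fewer sources
    (e.g. one country's traffic vanishing); a sharp rise can indicate bugs
    or inconsistencies flooding the distribution. The threshold is a made-up
    example value; a real alarm would be tuned against historical variance.
    """
    delta = abs(shannon_entropy(current_counts) - shannon_entropy(baseline_counts))
    return delta > threshold

# Hypothetical pageview counts per country: one source vanishes entirely.
baseline = [5000, 4800, 5100, 4900]
current = [5000, 4800, 0, 4900]
entropy_alarm(baseline, current)  # → True: entropy dropped noticeably
```

In practice the baseline would be a rolling window of past distributions rather than a single snapshot, so seasonal variation does not trip the alarm.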


 * Core. Operational Excellence. Increase Resilience of Systems
 * New ZooKeeper cluster for tier-2


 * Core. Operational Excellence. Reduce Operational Load by Phasing Out Legacy Systems/Technologies
 * Sunset the MySQL data store for EventLogging (this work continues from Q1)
 * Migrate EventLogging to Python 3

Dependencies on:

 Status 
 * October 28, 2019 status:
 * Finalize productionizing the Kerberos service, and then possibly enable it ✅
 * Set up a generic workflow to create Kerberos accounts
 * Create test Kerberos identities/accounts for some selected users from the Analytics team in the test cluster ✅
 * Deprecate eventlogging-service-eventbus ✅
 * Bot detection: “Remove automated traffic not identified as such from readers' data”
 * November 2019 status -
 * December 2019 status -



Core Platform
Team Manager: Corey Floyd
 * Reduce Platform Complexity
 * Migrate service: changeprop
 * Modernizing front end project planning (from Front End Working Group)
 * Add API Integration tests and decouple components
 * Initial librarization of MediaWiki
 * Frontend Architecture Group Planning for Desktop Refresh
 * Tech and Product Partnerships
 * Implement MediaWiki REST APIs for MVP
 * Integrate OAuth 2.0 into API
 * Prototype Documentation Portal

Dependencies on:

 Status 
 * October 28, 2019 status:
 * Modernizing front end project planning (from Front End Working Group)
 * Implement MediaWiki REST APIs for MVP
 * Prototype Documentation Portal
 * November 2019 status -
 * December 2019 status -



Fundraising Tech
Team Manager: Erika Bjune
 * Core Work
 * Support high revenue/high risk campaigns
 * Extra attention paid to security and privacy during the highest-revenue campaigns

Dependencies on:

 Status 
 * October 28, 2019 status:
 * all goals
 * November 2019 status -
 * December 2019 status -



Performance
Team Manager: Gilles Dubuc
 * Core Work
 * Provide performance expertise to FAWG outcome
 * Hold 3 or more workshops and training sessions with 1 engineering team
 * Hire and onboard a Systems Performance Engineer
 * Publish 2 blog posts about performance
 * Organise and run the Web Performance devroom at FOSDEM 2020
 * Reduce Complexity of the Platform
 * Create performance alerts for 12 different wikis
 * Create synthetic tests for backend editing with XHGui profile comparison
 * Expand coverage of metrics from synthetic testing (introducing user journeys). Add 5 new user journeys and a minimum of 7 new metrics
 * Add a new Graphite instance for synthetic metrics. It needs to be connected to our current Grafana instance and documented.
 * Migrate ResourceLoader dependency tracking off the RDBMS

Dependencies on:

 Status 
 * October 28, 2019 status:
 * Hire Systems Performance Engineer and create onboarding material, ensuring that this new hire has a shared understanding of the team’s performance culture.
 * Organise and run the Web Performance devroom at FOSDEM 2020
 * Add a new Graphite instance for synthetic metrics. It needs to be connected to our current Grafana instance and documented.
 * MachineVision extension performance review
 * November 2019 status -
 * December 2019 status -



Quality and Test Engineering
Team Manager: JR Branaa
 * Core Work
 * A clear set of unit, integration, and system testing tools is available for all supported engineering languages.
 * Update WebdriverIO from version 4 to 5 for Core.
 * Core Work
 * Actionable code health metrics are provided for code stewards
 * Add all applicable repos to the Code Health pipeline (Code Health Metrics).
 * Solicit feedback from current users of CHM POC and define phase 2 enhancements.
 * Improve Code Review experience
 * Interview engineering teams to understand their current code review practices
 * Relaunch the Code Review Office Hours
 * Put in place Code Review performance metrics
 * Reduce complexity of the platform to make it easier for new developers to contribute
 * Actionable code health metrics are provided for code stewards
 * Make CI warn about slow tests, and publish a collated list of slow tests

Dependencies on:

 Status 
 * October 28, 2019 status:
 * Solicit feedback from current users of CHM POC and define phase 2 enhancements
 * Relaunch the Code Review Office Hours
 * Put in place Code Review performance metrics (We've defined things, but need to implement)


 * November 2019 status -
 * December 2019 status -



Release Engineering
Team Manager: Tyler Cipriani
 * Reduce Complexity of Platform
 * Build and support a fully automated and continuous Code Health and Deployment Infrastructure
 * Update weekly branchcut script for MediaWiki to allow for automation
 * Production configuration is compiled into static files on deployment servers
 * Seakeeper (New CI) proposal for a dedicated CI cluster submitted for feedback
 * A demonstration MediaWiki development environment hosts the full TimedMediaHandler front-end and back-end workflow
 * Other service deployment pipeline migrations as prioritized between SRE/RelEng and relevant teams.
 * Core Work
 * Improve and maintain the Wikimedia code review system
 * Migrate Gerrit master from Cobalt to Gerrit1001
 * Migrate from Gerrit version 2.15 to 2.16
 * Continuation of Phabricator and Gerrit improvement (in conjunction with SRE)

Dependencies on:

 Status 
 * October 28, 2019 status:
 * Migrate Gerrit master from Cobalt to Gerrit1001 ✅ (Completed on 2019-10-22; needed to be done early in the quarter to ensure we could also jump Gerrit versions this quarter)
 * Update weekly branchcut script for MediaWiki to allow for automation
 * Production configuration is compiled into static files on deployment servers
 * Seakeeper (New CI) proposal for a dedicated CI cluster submitted for feedback
 * November 2019 status -
 * December 2019 status -



Research
Team Manager: Leila Zia
 * Content Integrity
 * A comprehensive literature review of disinformation published on arXiv and Meta (completing the work started in Q1).
 * Build a prioritized list of actions to take (tools to build, datasets to release, etc.) for combating disinformation (through discussions with the community of editors and developers, internal consultation, and possibly with external researchers).
 * Build one formal collaboration in the disinformation space to start the research for building solutions starting Q3.
 * Foundational
 * Prepare the Research Internship proposal.
 * Finalize the research brief for the crosslingual topic model, laying out the work that will be done in this space starting Q3.
 * Literature review of reuse.
 * Review of the different types of reuse and what we know about their effect on traffic to Wikimedia.
 * Review of what data is and is not available to us, and which questions we can and cannot currently answer.
 * Initiate monthly or quarterly office hours for the community (trial for 6 months if monthly, 12 months if quarterly).
 * Wiki Workshop 2020 proposal submission. ✅
 * Plan for a challenge: come up with an initial format, put a committee together, choose a venue for presentations.
 * Address Knowledge Gaps
 * Finalize the taxonomy of readership gaps.
 * Make significant progress towards building the taxonomy of search (usage gaps). (We expect the research part of this work to conclude in Q3, as a stretch in Q2.)
 * Literature review of identified content gaps in Wikipedia.
 * Taxonomy of the causes of content gaps in Wikipedia.
 * Build a series of hypotheses for the possible causes of skewed demographic representation of Wikipedia readers (specific to gender). Identify possible formal collaborations for research and testing starting Q3, if relevant based on the learnings from the list of hypotheses.
 * Submit the citation usage paper to TheWebConf 2020. ✅
 * (Via mentoring an Outreachy intern) start work on the development of the dataset of statements in need of citations.
 * Supervise a student evaluating methods to recommend images to Wikipedia pages.
 * Train from scratch and evaluate an end-to-end (simple) classification model using Wikimedia Commons categories, optimized for GPU usage.
 * Conduct a literature review, and plan and set up collaborations for projects about understanding engagement with Wikimedia images around the world.
 * Core Work
 * Complete two 30-60-90 day plans.
 * Finalize a proposal for changes in Research based on learnings about Research's audience, what they expect from the team, our positioning within the WMF, the Movement, and the research community, and the opportunities for impact.
 * Document and communicate with the team the expectations of the Research Scientist role and trajectory in the IC track.
 * Research Showcase feedback collection, assessment, and proposal for changes if relevant.
 * A half-yearly newsletter for Research, with the goal of making it quarterly if bandwidth allows and/or the project is successful.

Dependencies on:

 Status 
 * October 28, 2019 status:
 * Finalize the research brief for crosslingual topical model laying out the work that will be done in this space starting Q3.
 * Finalize the taxonomy of readership gaps
 * Make significant progress towards building the taxonomy of search (usage gaps). (We expect the research part of this work to conclude in Q3, as a stretch in Q2).
 * Literature review of identified content gaps in Wikipedia
 * Build a series of hypotheses for the possible causes of skewed demographic representation of Wikipedia readers (specific to gender). Identify possible formal collaborations for research and testing starting Q3 if relevant based on the learnings from the list of hypotheses.
 * Submit the citation usage paper to TheWebConf 2020. ✅
 * (via mentoring an Outreachy) start work on the development of the data-set for statements in need of citation.
 * Supervise a student evaluating methods to recommend images to Wikipedia pages.
 * Build a prioritized list of actions to take (tools to build, datasets to release, etc.) for combating disinformation (though discussions with the community of editors and developers, internal consultation, and maybe with external researchers)
 * A comprehensive literature review of disinformation published in arxiv and meta (completing the work started in Q1)
 * Prepare the Research Internship proposal.
 * Literature review of reuse.
 * Initiate monthly or quarterly office hours for the community. (trial for 6 months if monthly and 12 months if quarterly)
 * Wiki Workshop 2020 proposal submission. ✅
 * Complete two 30-60-90 day plans.
 * November 2019 status -
 * December 2019 status -



Machine Learning / Scoring Platform
Team Manager: Aaron Halfaker
 * Core Work
 * Hire ML Engineer
 * Machine Learning Infrastructure
 * Jade use, maintenance, and user-research
 * Deployment of session-based models
 * Jade Entity Page UI
 * Newcomer quality session models
 * Expansion of Topic Model to ar, ko, and cswiki

Dependencies on:

 Status 
 * October 28, 2019 status:
 * (no updates available)
 * November 2019 status -
 * December 2019 status -



Search Platform
Team Manager: Guillaume Lederrey
 * Address Knowledge Gaps
 * Any new data retention requirements are implemented
 * Core Work
 * New query parser is used in production by the end of Q2
 * WDQS storage expansion
 * CirrusSearch writes are split into per-cluster Kafka partitions to isolate clusters from each other by end of Q2
 * Get "explore similar" running again, with whatever has changed since we last looked at it
 * Increase understanding of our work outside our team, and outside the Foundation
 * Improve search quality, especially for non-English wikis by prioritizing community requests - Positive feedback from speakers/community on changes made
 * CirrusSearch writes can be paused during cluster operations without causing excessive stress on change propagation infrastructure by end of Q2
 * Rerun "explore similar" A/B test with rigorous analysis of results
 * Enable cross-wiki searching for 3+ new languages/projects (stretch)
 * Machine Learning Infrastructure
 * Glent method 0 (session reformulation) A/B tested and deployed by end of Q2
 * Learning to Rank (LTR) applied to additional languages and projects to improve ranking (needs experimentation, might not work at all)
 * Glent method 1 (comparison to other users' queries) offline tested, tuned, A/B tested and possibly deployed end of Q2
 * Structured Data
 * Proof of Concept SPARQL endpoint for SDoC is available on WMCS and updated weekly. (stretch)

Dependencies on:

 Status 
 * October 28, 2019 status:
 * WDQS storage expansion (Quote requested, waiting for feedback from vendor)
 * Glent method 0 (session reformulation) A/B tested and deployed by end of Q2 (A/B test running, still need to evaluate results and activate in production (provided the results are positive))
 * Glent method 1 (comparison to other users' queries) offline tested, tuned, A/B tested, and possibly deployed by end of Q2. (Some quality issues were identified in offline tests and need to be addressed before we can move forward. The biggest problems are that we are computing edit distance per-string rather than per-token (probably because we thought too much about single-word queries, where per-string and per-token are the same thing), and that Method 1 is too ready to add spaces or change the first letter of a word, all of which can make the "semantic distance" between a query and a suggestion much bigger.)
 * Proof of Concept SPARQL endpoint for SDoC is available on WMCS and updated weekly. (SPARQL endpoint for SDC (Commons Query Service - CQS) is blocked on having dumps from SDC that we can load on the endpoint.)
 * November 2019 status -
 * December 2019 status -
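The per-string vs. per-token edit-distance problem noted in the October status can be made concrete with a small sketch. This is only an illustration of the general issue, not Glent's actual scoring code; `per_token_distance` and its token-count penalty are invented for the example.

```python
def levenshtein(a, b):
    """Classic dynamic-programming Levenshtein edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def per_token_distance(query, suggestion):
    """Sum of edit distances between aligned tokens, plus a crude penalty
    for a differing token count. For single-word queries this equals the
    per-string distance, which is why the discrepancy is easy to miss."""
    q, s = query.split(), suggestion.split()
    dist = sum(levenshtein(x, y) for x, y in zip(q, s))
    return dist + abs(len(q) - len(s))

# Adding a space looks cheap per-string but is a big change per-token:
levenshtein("catfish", "cat fish")         # → 1: one inserted character
per_token_distance("catfish", "cat fish")  # → 5: every token changed
```

This is the gap described above: a suggestion that merely inserts a space scores as nearly identical under per-string distance, even though the tokens (and likely the meaning) have changed substantially.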



Security
Team Manager: John Bennett
 * Core Work
 * Security Engineering and Governance
 * Create initial version of PHP security toolkit
 * Deploy StopForumSpam
 * Create privacy engineering charter
 * Incident response Table Top and updates to security after action reports and improvement plans
 * Release of Phan 2.x
 * Update and publish data classification policy
 * Create initial set of security measurements and metrics
 * Publish data protection and retention guidelines (goal is being refined)
 * Bug Bounty SOP
 * Draft new employee security awareness content
 * Publication of privacy review template
 * Finalize and publish Security services catalog
 * Vulnerability Management
 * ERM implementation
 * Supplier assessments
 * Draft 3 new Security Incident Response playbooks Q2
 * Draft 3 new security policies Q2
 * Security release Q2
 * Assess, produce, and socialize Security documentation
 * Create or improve language-based best security practices documentation
 * Perform 2 phishing campaigns and provide awareness content
 * Assess / Refine Phab Usage and Workflows
 * Facilitate Agile / Scrum adoption
 * Develop Security PM Best Practices

Dependencies on:

 Status 
 * October 28, 2019 status:
 * Create initial version of PHP security toolkit
 * Deploy StopForumSpam
 * Create privacy engineering charter
 * Incident response Table Top and updates to security after action reports and improvement plans
 * Release of Phan 2.x
 * Update and publish data classification policy
 * Create initial set of security measurements and metrics
 * Publish data protection and retention guidelines (goal is being refined)
 * Draft new employee security awareness content
 * Publication of privacy review template
 * Finalize and publish Security services catalog
 * ERM implementation
 * Draft 3 new Security Incident Response playbooks Q2
 * Draft 3 new security policies Q2
 * Security release Q2
 * Assess, produce, and socialize Security documentation
 * Create or improve language-based best security practices documentation
 * Perform 2 phishing campaigns and provide awareness content
 * Assess / Refine Phab Usage and Workflows
 * Facilitate Agile / Scrum adoption
 * Develop Security PM Best Practices
 * November 2019 status -
 * December 2019 status -



Site Reliability Engineering
Directors: Mark Bergsma and Faidon Liambotis
 * Cross-cutting
 * Begin hiring for the SRE Engineering Manager roles and ensure at least 4 candidates are interviewed by the end of Q2, so that we are positioned to fill our remaining IC positions
 * Deliver 80% of the asks set by the System of Performance project by EOQ

Service Operations
Team Manager: Mark Bergsma
 * Core Work
 * Finish what we started: clean up remnants of HHVM from our infrastructure by end of Q2
 * Migrate core software components of the Deployment Pipeline to current major releases

Data Persistence
Team Manager: Mark Bergsma
 * Core Work
 * Ensure general backup service is migrated to new hardware infrastructure by end of Q2 and general backup runs are monitored for basic success/failure criteria

Traffic
Team Manager: Brandon Black
 * Core Work

Infrastructure Foundations
Team Manager: Faidon Liambotis
 * Core Work
 * Integrate with Netbox for device selection and topology data gathering
 * Assist with adoption of at least 2 additional services into the Deployment Pipeline by service owners by end of Q2
 * Develop a new alert notification, escalation and paging capability to accommodate the increased needs of the team and department.
 * Enable opt-in 2FA for web services SSO
 * Extend security vulnerability tracking for container images
 * Upgrade the Elastic/Logstash version to >= 7.2
 * Replace/renew the internal Certificate Authority (expires Jun 2020)
 * Reduce the number of service clusters running a soon-to-be unsupported Debian release by 8
 * Reduce the number of manual steps involved in the provisioning and decommissioning of new services by 1
 * Drive the configuration of the networking infrastructure via automated means & ensure multiple team members are able to deploy new configuration

Observability
Team Manager: Faidon Liambotis
 * Core Work

Data Center Operations
Team Manager: Willy Pao
 * Core Work
 * Deliver 80% of new installs by their requested need-by date.
 * Complete decommissioning of at least 50% (currently 48 tasks) of existing decommission tasks in eqiad, with servers completely unracked, to make room for new installs.
 * Grant root access to Papaul, so he can take over the remote portion of decommissioning servers in eqiad.
 * Complete the rebuild/refresh of the esams caching facility in/near Amsterdam by end of October.
 * Upgrade all PDUs in eqiad to new Servertech models (15 racks total) by end of November.
 * Return all servers to Cisco from previous server donations by end of Q2.
 * Identify at least 3 new vendors as potential options for future disposition and sale of goods/services.
 * Order and upgrade all PDUs in eqsin by end of quarter.
 * Provide proper training for the DC Ops team for receiving equipment in Coupa.
 * Partner with Finance and determine a point person for submitting orders in Coupa.
 * Utilize bi-weekly meetings with Finance to target and resolve all issues within Coupa that may impede our current hardware procurement process.

Dependencies on:

 Status 
 * October 28, 2019 status:
 * Deliver 80% of new installs by their requested need-by date.
 * Complete decommissioning of at least 50% (currently 48 tasks) of existing decommission tasks in eqiad, with servers completely unracked, to make room for new installs.
 * Grant root access to Papaul, so he can take over the remote portion of decommissioning servers in eqiad.
 * Complete the rebuild/refresh of the esams caching facility in/near Amsterdam by end of October.
 * Upgrade all PDUs in eqiad to new Servertech models (15 racks total) by end of November.
 * Return all servers back to Cisco from previous server donations by end of Q2.
 * Identify at least 3 new vendors as potential options for future disposition and sale of goods/services.
 * Order and upgrade all PDUs in eqsin by end of quarter.
 * Utilize bi-weekly meetings with Finance to target and resolve all issues within Coupa that may impede our current hardware procurement process.
 * Ensure general backup service is migrated to new hardware infrastructure by end of Q2 and general backup runs are monitored for basic success/failure criteria
 * Finish what we started: clean up remnants of HHVM from our infrastructure by end of Q2
 * November 2019 status -
 * December 2019 status -



Technical Engagement
Team Manager: Birgit Müller
 * Core Work
 * [IaaS] All out-of-warranty hardware used for offsite backups of Cloud Services data in the codfw datacenter is replaced
 * [IaaS] 60% of the remaining Debian Jessie systems in the hardware layer underlying Cloud VPS are upgraded to Debian Buster or Stretch
 * [IaaS] All Debian Jessie instances are removed/replaced in 95% of Cloud VPS hosted projects
 * [IaaS] Deploy a minimum viable Ceph cluster in eqiad and convert 1+ cloudvirt servers to use it for instance storage
 * [IaaS] Measure IOPS as seen at the instance level, IOPS as seen at the Ceph cluster level, and network activity generated in delivering IOPS at the backbone network level, to produce a forecast of the impact of fully converting cloudvirt servers to Ceph instance storage
 * [IaaS] Create a shared understanding of the service continuity and availability constraints in the current Cloud VPS product, which can be used to design follow-on projects to reduce single points of failure and establish practices for testing and maintaining continuity and availability of Cloud VPS core services
 * [IaaS] OpenStack APIs and services are upgraded to the "Ocata" release
 * [PaaS] Deploy a Kubernetes 1.15.2+ cluster in Toolforge, which will be used to provide a more modern, secure, and performant PaaS baseline to tool maintainers
 * [PaaS] Migrate 5+ early adopter/beta tester tools from the legacy Kubernetes cluster to the new Kubernetes cluster to validate integration with the ingress proxy layer and the sandboxing/isolation of the new Kubernetes cluster deployment
 * [PaaS] Create a timeline and operational plan for migrating all Kubernetes workloads in Toolforge to the new Kubernetes cluster and decommissioning the legacy cluster by the end of FY19/20
 * [Docs] Create a functional template and content checklist for Help pages in the Toolforge and Cloud VPS technical content collections
 * [Docs] Establish a technical content review process with developers on the WMCS team
 * [Docs] Noticeably improve readability for 5 instances of Toolforge and Cloud VPS "Help" documentation on Wikitech


 * Reduce Complexity of the Platform, Movement Diversity
 * Increased visibility & knowledge of technical contributions, services and consumers across the Wikimedia ecosystem
 * Create a blog by and for technical audiences where members of the technical community can post about their technical work
 * Publish 6 (min) technical blog posts
 * Coordinate Tech Talks and increase views on Tech Talks by 10%/quarter
 * Prepare release of the 2nd edition of the Tech Community Newsletter (publishing date: Jan 2020)
 * A dashboard for Wikimedia Cloud Services edit data is available to the Wikimedia movement
 * Provide a "showroom" introducing newcomers to a variety of different tools, to show what developers can do in Toolforge, by Q3
 * Find out what is needed to get data on all technical contributions/contributors
 * Coordinate with Bitergia and get data on "Avg. Time Open (Days)" for Gerrit patchsets per affiliation and "time to first review" data for patches (by end of Q4)
 * Gather and publish current numbers on technical contributions provided by Bitergia in the Quarterly Tech Community Newsletter (by Jan 2020)


 * Reduce Complexity of the Platform, Movement Diversity
 * Support Wikimedia's diverse technical communities
 * Develop a workshop concept with a partner community for technical workshops in Q3
 * Conduct a workshop and document the technical challenges small wikis face in North America
 * Coordinate GCI. In Q2/Q3, in Google Code-in, >35 mentors volunteer to provide tasks and mentor students in >70 task instances
 * Coordinate Outreachy round 19. At least 5 featured projects are accepted for Outreachy round 19 by Oct 1st ✅. At least five projects are successfully completed by Outreachy interns by end of Q3.
 * Prepare and hold a session on Wikimedia's tech internships at WikiCon North America

Dependencies for core work is on: SRE/Data Center Operations team

 Status 
 * October 28, 2019 status:
 * All Debian Jessie instances are removed/replaced in 95% of Cloud VPS hosted projects (annual unused project/instance purge)
 * 60% of the remaining Debian Jessie systems in the hardware layer underlying Cloud VPS are upgraded to Debian Buster or Stretch (Cloud VPS domain name(s) migration)
 * Create a shared understanding of systems and service continuity and availability constraints in the current Cloud VPS product which can be used to design follow-on projects to reduce single points of failure and establish practices for testing and maintaining continuity and availability of Cloud VPS core services.
 * Deploy a Kubernetes 1.15.2+ cluster in Toolforge which will be used to provide a more modern, secure, and performant PaaS baseline to Tool maintainers.
 * Technical internships + mentoring - Q2
 * Coordinate new rounds, GCI
 * Create a blog by and for technical audiences where members of the technical community can post about their technical work.
 * Publish 6 (min) technical blog posts
 * Coordinate Tech Talks and increase views on tech talks by 10%/quarter
 * A dashboard for Wikimedia Cloud Services edit data is available to the Wikimedia movement
 * Coordinate with Bitergia and get data on "Avg. Time Open (Days)" for Gerrit patchsets per affiliation and "time to first review" data for patches (by end of Q4).
 * Coordinate GCI. In Q2/Q3, in Google Code-in, > 35 mentors volunteer to provide tasks and mentor students in >70 task instances
 * Coordinate Outreachy round 19. At least 5 featured projects are accepted for Outreachy round 19 by Oct 1st ✅. At least five projects are successfully completed by Outreachy interns by end of Q3.
 * Prepare and hold session on Wikimedia's Tech internships at WikiCon North-America
 * November 2019 status -
 * December 2019 status -