Wikimedia Technology/Goals/2019-20 Q1

Technology Department Team Goals and Status for Q1 FY19/20 in support of the Medium Term Plan (MTP) Priorities and Annual Plan for FY19/20



Analytics
Team Manager: Nuria Ruiz


 * Make it easier to understand the history of all Wikimedia projects
 * Release MediaWiki History in JSON/CSV or MySQL dump format (the best dataset for measuring content and contributors)


 * Make it easier to understand how Commons media is used across our projects.
 * Work starting on the mediarequests API to get view statistics for individual Wikimedia images.


 * Increase Data Quality
 * Entropy-based alarms for data issues; work should continue in Q2


 * Increase Data Privacy and Security
 * Make Kerberos infrastructure production-ready; will continue into Q2 as well


 * Modern Event Platform
 * * Continue moving events from the job queue to EventGate main. ✅
 * * Development work for Kafka Connect ❌ moved to next quarter
 * * Schema Repository CI for convention and backwards compatibility enforcement ✅
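To illustrate the Event Platform conventions the schema CI enforces, a minimal sketch of assembling an EventGate-style event envelope follows. Field names (`$schema`, `meta.stream`, `meta.dt`) follow the common Event Platform layout; the schema URI, stream name, and endpoint shape here are illustrative assumptions, not the production configuration.

```python
import datetime
import json

def make_event(schema_uri, stream, payload):
    """Assemble an EventGate-style event envelope (sketch).

    Event Platform events carry their JSON schema URI in $schema and
    routing metadata under meta; a real producer takes the stream and
    schema values from the stream configuration.
    """
    event = {
        "$schema": schema_uri,
        "meta": {
            "stream": stream,
            # ISO 8601 event time, as carried in the meta subobject.
            "dt": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
        },
    }
    event.update(payload)
    return event

# EventGate accepts a JSON array of events via HTTP POST (endpoint illustrative).
body = json.dumps([make_event("/test/event/1.0.0", "test.event", {"test": "ok"})])
```

The schema CI mentioned above checks that such schemas follow naming conventions and that new versions stay backwards compatible with events already in flight.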


 * Operational Excellence. Increase Resilience of Systems
 * * New ZooKeeper cluster for tier-2
 * Operational Excellence. Reduce Operational Load by Phasing Out Legacy Systems
 * * Sunset the MySQL data store for EventLogging; this quarter and next.

 Status 
 * July 23, 2019 -
 * We will be moving any work that has to do with Kafka Connect to next quarter due to licensing issues, and are thus marking it as not done for this quarter.
 * Migration of events to EventGate main has been rolling out without issues.
 * Work on the mediarequests API has started; the API will probably be in service next quarter.
 * Entropy-based alarms for data issues: work for this quarter is done and will be picked up again next quarter.
 * August 2019 -
 * We are waiting for survey responses to finalize the format of the MediaWiki History release.
 * Work continues on the mediarequests API; we are backfilling this data in Hive, and once that is done we can move it to Druid or Cassandra so it can be served via the API.
 * Migration of events from the job queue to EventGate main is ahead of schedule, with no production issues.
 * September 2019 -
 * Overall, we will be finishing much of the work in progress early next quarter.


 * MediaWiki History data is public; we are working on the Hadoop process that will publish the files, as the current dumps infrastructure is not sufficient to publish them fast enough.


 * New mediarequests API work planned for this quarter is done; a deployment is pending to make it public.
 * Slightly blocked on hardware for the ZooKeeper task; working with DC Ops.
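For context, a per-file query against the planned mediarequests API would take roughly this shape. The sketch below only builds the REST URL; the path layout mirrors the existing wikimedia.org metrics endpoints, and the exact parameters of the final API are an assumption here, as is the example file path.

```python
from urllib.parse import quote

# Base path mirrors the wikimedia.org metrics REST APIs (assumed layout).
BASE = "https://wikimedia.org/api/rest_v1/metrics/mediarequests/per-file"

def mediarequests_url(file_path, start, end, referer="all-referers",
                      agent="all-agents", granularity="daily"):
    """Build a per-file mediarequests query URL (illustrative).

    file_path is the upload.wikimedia.org path of the media file; it is
    percent-encoded so its slashes survive as a single path segment.
    """
    return "/".join([BASE, referer, agent,
                     quote(file_path, safe=""), granularity, start, end])

url = mediarequests_url("/wikipedia/commons/a/a9/Example.jpg",
                        "20190901", "20190930")
```

Serving such queries is why the status notes above mention backfilling the data in Hive and then loading it into Druid or Cassandra: the API layer only reads from the serving store.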



Core Platform
Team Manager: Corey Floyd
 * - Kick off Front End Working Group to explore recommendations from the Q4 research and identify a project to begin working on in Q2 (PE, Reduce Complexity of the Platform)
 * - Build out platform infrastructure to support partner APIs to support better access and increased load (PE, Tech and Product Partnerships)
 * - Develop Multi-DC storage solution(s) to hold the remaining content in the main stash in order to unblock the move to Multi-DC reads (Core)

Dependencies on: Product (front end working group and API work), SRE and Performance (Multi-DC main stash)

 Status 
 * July 25, 2019 -
 * Kickoff of the working group is going well; it started this week with Technology and Product
 * Platform infrastructure build-out is in progress; we are waiting on ongoing Parsoid work to be completed (deploying APIs)
 * Multi-DC storage solutions work is in progress; we are figuring out possible alternate solutions
 * August 2019 -
 * September 2019 -



Fundraising Tech
Team Manager: Erika Bjune
 * - Get India form to first 1 hour test and continue further development
 * - Get recurring up-sell to first 1 hour test and continue further development
 * - Support ongoing fundraising activities

Dependencies on: Advancement team, Dlocal, Ingenico

 Status 
 * July 2019 - on track on all 3 points.
 * August 21, 2019 - on track on all 3 points.
 * September 2019 -



Performance
Team Manager: Gilles Dubuc

Platform Evolution: Reduce complexity of the platform to make it easier for new developers to contribute.

 * - Improve the filtering of obsolete domains in GTIDs to avoid timeouts on GTID_WAIT (get reviewed and merged).
 * ✅ - Support Parsing Team with performance insights on Parsoid-php roll out.
 * - Reduce reliance on master-DB writes for RL file-dependency tracking (Multi-DC prep). T113916
 * ✅ - Audit use of CSS image-embedding (improve page-load time by reducing the size of stylesheets) T121730
 * - Figure out the right store to use for the main stash (dynamo? mcrouter?). T212129
 * ❌ - Swift cleanup + WebP ramp up. T211661
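The GTID filtering goal can be illustrated with a small sketch: MariaDB GTID positions are comma-separated domain-server-sequence triples, and waiting on a position from a decommissioned replication domain makes MASTER_GTID_WAIT time out, so obsolete domains are stripped from the set before waiting. The function and variable names below are illustrative, not the actual MediaWiki code.

```python
def filter_gtid_domains(gtid_set, active_domains):
    """Keep only GTID positions whose domain id is still active.

    gtid_set is a MariaDB-style GTID list such as "0-171-23,5-9-1";
    active_domains is the set of domain ids that still have a live
    primary. Positions from obsolete domains are dropped so a
    subsequent MASTER_GTID_WAIT cannot block on them until timeout.
    """
    keep = [pos for pos in gtid_set.split(",")
            if int(pos.split("-", 1)[0]) in active_domains]
    return ",".join(keep)
```

The production change being reviewed and merged addresses the same problem inside MediaWiki's database layer.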

Core: Maintain libraries for which Performance is currently responsible, evaluate libraries to determine whether they should be owned by another team, and perform handoffs to other teams when possible.

 * ✅ - [Ongoing] Support and maintenance of MediaWiki's object caching and data access components.
 * ✅ - [Ongoing] Support and maintenance of WebPageTest and synthetic testing infrastructure.
 * ✅ - [Ongoing] Support and maintenance of MediaWiki's ResourceLoader.
 * ✅ - [Ongoing] Support and maintenance of Fresnel.
 * ✅ - Support AbuseFilterCachingParser development. T156095

Core: Quickly detect performance regressions and better detect potential ones prior to deployment.

 * ✅ - Add Grafana dashboard for WANObjectCache stats. T197849

Core: Create a culture of performance in Wikimedia

 * ✅ - Write two performance topic blog posts.
 * ✅ - Line up interested speakers for a FOSDEM Web Performance devroom proposal.

Dependencies on: SRE, CPT, Parsing

 Status 
 * July 2019 -
 * August 2019 -
 * September 2019 -



Release Engineering
Team Manager: Greg Grossmeier

Priority: Reduce complexity of the platform to make it easier for new developers to contribute.

 * - All applicable new and existing services (and partially MediaWiki) exist in the Deployment Pipeline
 * Migrate restrouter ✅
 * (Stretch): Migrate MobileContentService
 * (Stretch): Preparatory MediaWiki config clean-up & static loading work
 * - Actionable code health metrics are provided for code stewards
 * Scope out requirements for a self-hosted version of SonarQube for our use ❌
 * Expand set of repositories covered by code health metrics (via SonarQube) ✅
 * - Provide a standardized local MediaWiki development environment
 * Migrate local-charts to deployment-charts
 * Instantiate testing and linting of helm charts
 * Preliminary work on a CLI for setup/management

Dependencies on: SRE, Code Health Metrics WG

Core: Developers have a consistent and dependable deployment service.

 * - Iteratively improve our deployment tooling, service, and processes.
 * Streamline the Kibana -> Phab error reporting workflow (using client-side code, at first)
 * - Align developer services with SRE best practices.
 * Work with SRE to identify and implement needs of Phabricator and Gerrit (expected to last into Q2)

Dependencies on: SRE, Performance

Core: Maintain and improve the Continuous Integration and Testing services

 * - Maintain CI and testing services
 * Scope updated CI/testing KPIs
 * Set up an experimental Elasticsearch instance to store and analyze CI logs and metrics ❌
 * - Evaluate, select, and implement a new CI infrastructure.
 * POCs of GitLab and Zuul3 systems (as well as Argo); evaluate options
 * Document an implementable architecture for what we want in new CI

Dependencies on: SRE/Others invested in CI architecture choices

Core: A clear set of unit, integration, and system testing tools is available for all supported engineering languages.

 * - Update the existing system test tooling and developer education.
 * Update existing Selenium documentation (https://www.mediawiki.org/wiki/Selenium/Node.js) ✅

Dependencies on: none.

 Status 
 * July 25, 2019 -
 * Migrate restrouter is ✅ and is now in the Services team's hands
 * Some portions of SonarQube are not open sourced, so we're looking into options
 * Streamline the Kibana -> Phab error reporting workflow - has a POC now and should be deployed soon
 * August 27, 2019 -
 * For the work to streamline the Kibana -> Phab error reporting workflow, we're looking at deploying Phatality
 * POC for GitLab is ✅ and the Zuul3 POC is in progress
 * Scope out requirements for a self-hosted version of SonarQube is no longer stalled. We have a strategy that will use a combination of self-hosted and cloud-hosted depending on the data. Essentially, the self-hosted open source version will not do branch-level analysis. We don't believe that will keep us from using it for non-branch-based analysis.
 * Expand set of repositories covered by code health metrics (via SonarQube) - we will have three new extensions added by the end of this month, and will add 3-6 more next month.
 * Set up an experimental elastic search instance to store and analyze CI logs and metrics: We met to discuss this under the "Data ^3" project and laid out some basic objectives for a POC, and this work will continue into next quarter.
 * Update the existing system test tooling and developer education:
 * We worked with the Core Platform team in the analysis and selection of an integration test tool. The expectation is for the Quality and Test Engineering team to take responsibility for this tooling once a SET is in place.
 * Code Health Metrics WG has spun off effort to separate existing MediaWiki Unit Tests from Integration tests (driven by WMDE)
 * September 2019 -
 * Actionable code health metrics are provided for code stewards
 * Decided that prior to investigating self-hosting of SonarQube, we wanted to assess the current perceived value. As such, we will be interviewing teams that are currently using SonarQube/SonarCloud as part of the Code Health Pipeline.
 * We've been incrementally adding new repos to the Code Health Pipeline in order to avoid overloading the CI. No issues so far. We're looking to add all applicable repos by the end of Q2.
 * A clear set of unit, integration, and system testing tools is available for all supported engineering languages.
 * To date we've established a set of tools that are used across the organization for unit- and system-level automated testing. The CPT team has evaluated and deployed an integration testing tool that we look to make available more broadly. However, due to lack of SET staffing, that is not likely to happen this FY. As the new Quality and Test Engineering team has been formed, we will be assessing the state of tools across teams throughout the Foundation.
 * The Selenium documentation has been updated.
 * Webdriver IO has been upgraded from 4 to 5 for Core. Will need to start planning the migration for the other repos.



Research
Team Manager: Leila Zia
 * ✅ - [P-O14-D4] Run a series of interviews, office hours, or surveys to gather volunteer editor community's input on citation needed template recommendations. The result of this work will inform the specifications of an API (to be developed) to surface citation needed recommendations as well as future directions for this research.
 * ✅ - [P-O14-D4] Complete the research on characterizing Wikipedia citation usage. (Why We Leave Wikipedia). This goal will continue in Q2 and depending on the submission results potentially in Q3.
 * ✅ - [W-O6-D3] Computer vision consultation as part of Structured Data on Commons
 * ❌ - [P-O14-D6] Building a pipeline for image classification based on Commons categories.
 * ✅ - [P-O14-D4] Make substantial progress towards a comprehensive literature review about automatic detection of misinformation and disinformation on the Web. We expect this work to be completed in Q2 and inform the work in this direction in Q3+.
 * ✅ - [P-O14-D4] Understand patrolling on Wikipedia. A write-up describing how patrolling is being done on Wikipedia across the languages. This work may be extended further by understanding the patrolling on Wikipedia in the context of Wikipedia's interaction with other projects such as Wikidata, Wikimedia Commons, ...
 * ✅ - Conduct the analysis on reader surveys to understand the relation between demographics and the consumption of content on Wikipedia across languages. (Why We Read Wikipedia + Demographics). This research will be concluded in Q2 and we expect substantial progress in Q1.
 * ✅ - Hiring and onboarding. We expect 1-2 scientists to join the team in Q1 and the onboarding work will need to happen. We also expect to open an engineering position in the team.
 * ✅ - [T-O12-D3] Determine important features of articles with respect to level of reader interest across different demographic groups (as motivation for what aspects a general article category model should capture).
 * ✅ - Wrap up editor gender work.

Dependencies on: Product, Community Liaisons, and Structured Data teams

 Status 
 * July 23, 2019 - notes:
 * Complete the research on characterizing Wikipedia citation usage -- bulk of the work will be done in Q1 and Q2, and submitted in Q3.
 * Computer vision consultation as part of Structured Data on Commons -- more continued work on this, deadline is end of calendar year, currently waiting on word from Product on direction.
 * Building a pipeline for image classification based on Commons categories -- this work is ongoing through this quarter and next.
 * Comprehensive literature review about automatic detection of misinformation and disinformation -- this work will go on, but is not sustainable long term without addition of headcount for the team.
 * Analysis on reader surveys to understand the relation between demographics and the consumption of content -- we hope to present this at Wikimania 2019
 * August 27, 2019 - All goals are on track to be met by the end of September (end of quarter).
 * September 2019 -



Scoring Platform
Team Manager: Aaron Halfaker
 * - Build out the Jade API to support user-actions
 * - Build/improve models in response to community demand
 * - Support operations infrastructure improvements (k8s, redis SPOF)

Dependencies on: SRE

 Status 
 * July 2019 -
 * August 2019 -
 * September 2019 -



Search Platform
Team Manager: Guillaume Lederrey

Reduce complexity of the platform: Reduce technical debt and increase automation to reduce workload and make it easier to add new search features
 * Refactor query highlighting to make it extensible by other extensions
 * Refactor Mjolnir jobs into separate smaller jobs

Core work: Maintain CirrusSearch and the Search API and WDQS


 * Core maintenance work (always in progress)
 * Improve WDQS updater performance by writing custom code for updates
 * Full data reimport for WDQS to enable optimizations that were done last quarter ✅
 * Work through the backlog of bugs and performance improvements for WDQS with our contractor ✅
 * Start the hiring process for a new WDQS Engineer ✅
 * Hardware renewal: replace elastic1017-1031

Continue to identify and enable machine learning and natural language processing techniques to improve the quality of search
 * "Did you mean" suggestions: deploy method0 to production

Underserved communities benefit from search techniques that to date are only used on big wikis, such as machine learning–aided ranking, word embeddings, or language-specific analyzers: language analysis / Phab work
 * Work on highest-priority language tickets (Discovery Search board / Language Stuff—always in progress)

Structured Data on Commons support (as needed)
 * RDF export
 * Address the indexing issues of MediaInfo (labels vs descriptions) ✅

Dependencies on: RDF export: WMDE / Wikidata, Hardware renewal: DC Ops, MediaInfo indexing: SDoC

 Status 
 * July 30, 2019 -
 * Hiring process is in full swing for WDQS engineer - lots of folks applying!
 * Hardware renewal is in progress; we're getting quotes
 * August 27, 2019 -
 * Refactor query highlighting is still in progress, with lots of patches being uploaded (Phab ticket added above)
 * Refactor Mjolnir jobs into separate smaller jobs was slow going over the last month, but we should be able to tackle it in Sept.
 * Improve WDQS updater performance by writing custom code for updates should be done by end of Sept
 * We'll be taking another look at the backlog of bugs and performance improvements for WDQS this week.
 * Hardware renewal is ongoing, we're waiting on them to be racked and set up
 * Language work is continuing:
 * Slovak diacriticless search is waiting for community feedback T223787
 * Highlighting for CirrusSearch results now respects grapheme clusters T35242 ✅
 * Related patches for the jQuery and OOUI libraries are awaiting feedback (same task)
 * Improvements to Khmer searching are ongoing T185721
 * RDF export is also still in progress, with patches being tested and merged when ready; we're working with WMDE on this
 * September 2019 -
 * Refactoring query highlighting—still in progress, but expect to be done (last patch in progress)
 * Refactoring Mjolnir— almost done, expect 1-2 weeks into Q2
 * Improve WDQS updater performance—still in progress
 * Work through the backlog of WDQS bugs—✅ (bugs got worked)
 * Hiring process for WDQS Engineer—✅ (process started)
 * Hardware renewal—hardware delivered but not racked; will spill into Q2
 * DYM, deploy method0—A/B test will run this week
 * RDF export—unclear who owns it



Security
Team Manager: John Bennett

Core


 * - Finalize and publish service catalog
 * - Draft new employee security awareness content
 * - Create initial set of security measurements and metrics
 * - Create initial version of PHP security toolkit
 * ❌ - Create design document for how DAST will work
 * ✅ - Create team learning circles
 * - Publication of security team roadmap
 * - Release of Phan 2.x
 * - Security release
 * - Bug Bounty SOP
 * - Deploy StopForumSpam
 * - Draft 3 new security policies
 * - Draft 3 new Security Incident Response playbooks
 * - Socialize Corrective Action plan for Security Incidents
 * ❌ - Incident response Table Top and updates to security after action reports and improvement plans
 * - Discovery ticket for ElastAlert detection and alerting
 * ❌ - Phishing Security Awareness, at least 2 completed Phishing campaigns
 * ✅ - Team retro, implement agile ceremonies for appsec related projects
 * - Publish data protection and retention guidelines
 * - Create privacy engineering charter
 * - Update data classification policy
 * - Publication of privacy review template

Dependencies on: New employee security awareness needs OIT onboarding and new account process integration.

 Status 
 * July 25, 2019 -
 * Draft service catalog and 4-5 service descriptions are being drafted, scheduled for release at the end of the quarter
 * New employee security awareness content will bolt on to the OIT new employee process. Content being prepared, hope to deploy this quarter.
 * Initial measurements around the number of concept reviews for both appsec and privacy engineering will be collected this quarter.
 * Ongoing work in the creation of some appsec automation via PHP security toolkit
 * Ongoing work and investigation on how a DAST solution could fit into our appsec pipeline
 * Security team roadmap is being built in Asana and will be published on office wiki this quarter.
 * Lots of work in the data protection and privacy engineering space.
 * August 26, 2019 -
 * Draft service descriptions and catalog created, on target for publication at the end of the quarter
 * We will be sidelining DAST work for this quarter due to bandwidth
 * Learning circles/skill matrix created
 * Draft version of roadmap created, on target for publication at the end of the quarter
 * Security table top exercise will be sidelined this quarter to work on corrective action plan and playbooks
 * Phishing campaign stalled and will likely be abandoned this quarter
 * Team retro completed and monthly appsec retro created.
 * All privacy related work currently on target.
 * September 2019 -



Site Reliability Engineering
Directors: Mark Bergsma and Faidon Liambotis

Cross-cutting

 * Firefighting improvements, ONFIRE (continuation)
 * Produce a standardized template for a status document for ongoing major incidents ✅
 * Iterate on a process for running the incident documentation review board; review 90% of incident documents written this quarter
 * [stretch] Research possible implementations for synchronizing team contact information to everyone's phone


 * ✅ Database automation (continuation)
 * Productionize dbctl (deploy, import data, set up alerts)
 * Set up MediaWiki to optionally read the database configuration from etcd
 * Gradually migrate all MediaWiki instances to read the database configuration from etcd
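The intent of the etcd migration can be sketched as a fetch-with-fallback: prefer the dbctl-managed configuration in etcd, but keep a static file so MediaWiki still boots when etcd is unreachable. This is a Python sketch with illustrative names; the real implementation is MediaWiki's PHP configuration layer.

```python
import json

def load_db_config(fetch_from_etcd, static_path):
    """Load the database configuration, preferring etcd (sketch).

    fetch_from_etcd is any callable returning the raw JSON stored by
    dbctl; on any failure we fall back to the static file so the
    application keeps working without etcd.
    """
    try:
        raw = fetch_from_etcd()
    except Exception:
        with open(static_path) as f:
            raw = f.read()
    return json.loads(raw)
```

Making the etcd read optional first, then migrating instances gradually, matches the two bullets above: the fallback path stays available throughout the rollout.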

Service Operations
Team Manager: Mark Bergsma


 * Complete the transition to PHP 7 in production
 * Move all application server & API traffic to PHP 7
 * Move maintenance scripts to PHP 7 ✅
 * Move jobrunners to PHP 7 ✅
 * [stretch] Remove HHVM from production


 * Self-service Deployment Pipeline
 * Define and document the process for service owners to deploy a new service onto the pipeline
 * Support migration of the RESTrouter and wikifeeds services by their service owners

Dependencies on: Release Engineering, Core Platform, Performance

Data Persistence
Team Manager: Mark Bergsma


 * Address Database infrastructure blockers on datacenter switchover
 * Order, rack and setup 10 new hosts in codfw ✅
 * Failover all codfw masters
 * Failover eqiad masters to new hosts and decommission old masters
 * [stretch] Deploy codfw non-Mediawiki database proxies ✅


 * Strengthen backup infrastructure and support
 * Deploy new Bacula hardware
 * Transfer ownership and knowledge of Bacula backup infrastructure
 * [stretch] Migrate general backup service from old to new host(s)

Traffic
Team Manager: Brandon Black


 * Create usable TLS ciphersuite dashboard (continued)
 * Decide on Prometheus vs Webrequest
 * Send all the right data from the cp boxes upstream
 * Make useful charts and graphs that can correlate ciphers to UA, Geo, ASN, etc.


 * Finish TLS deployment via ATS
 * Continuation of previous Q goal
 * Switch production edge TLS termination to ATS
 * [stretch] Support TLS 1.3 (to do)


 * ATS Backends: Test live cache_text traffic
 * Implement basic TLS termination for cache_text services (may not be final solution w/ real PKI)
 * Begin testing a small fraction of live cache_text traffic through ATS backends ✅


 * AuthDNS: Implement smooth geoip repooling solution
 * Design new dynamic response architecture for future needs
 * MVP/Draft code for geoip smooth repooling using above
 * [stretch] release code, use in production


 * Deploy anycast recdns to all production
 * Finish evaluating current running implementation under live test ✅
 * Implement any minor improvements we need ✅
 * Switch most production hosts to using anycast recdns @ 10.3.0.1

Infrastructure Foundations
Team Manager: Faidon Liambotis


 * Puppet 5 (continuation & wrap-up)
 * Upgrade all production Puppetmasters to Puppet 5.5
 * Upgrade production PuppetDB to 6.2 in both data centers


 * Configuration management for network operations
 * Productionize existing configuration management software (jnt)
 * Integrate with Netbox for device selection and topology data gathering
 * Add safe push method for the configuration: interactive and sequential
 * [stretch] Evaluate Netbox to store network secrets ✅


 * Bare metal cloud
 * Import existing management interfaces IPs into Netbox ✅
 * Automate the assignment of new host's management interface IP
 * Automate the generation of management interface DNS records


 * Identity Management & Single Sign On
 * Build a production prototype of an Apereo CAS identity provider ❌
 * Switch (at least) one service to authenticate against the identity provider ❌

Observability
Team Manager: Faidon Liambotis


 * Improve our alerting capabilities
 * Produce and circulate an alerting infrastructure roadmap ✅
 * Establish periodic alerts reviews, complete one by EOQ
 * Reduce Icinga alert noise for good


 * Tech debt: sunsetting of Graphite (part 1)
 * Deprecate statsd: fully migrate >= 30% of producers off statsd
 * [stretch] Deploy Thanos (long-term storage) stateless components: sidecar and query
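The statsd deprecation is essentially a wire-format and instrumentation migration; a minimal sketch of the two formats follows (metric names here are made up for illustration).

```python
def statsd_line(name, value, mtype="c"):
    """Render a statsd plaintext datagram: <name>:<value>|<type>.

    This push-style datagram is what a statsd producer emits over UDP;
    "c" is a counter, "ms" a timer, "g" a gauge.
    """
    return f"{name}:{value}|{mtype}"

def prometheus_name(statsd_name):
    """Translate a dotted statsd name into a valid Prometheus metric
    name (dots and dashes are not allowed in Prometheus names)."""
    return statsd_name.replace(".", "_").replace("-", "_")
```

Migrating a producer off statsd means replacing emitted datagrams like the first form with native Prometheus instrumentation exposed for scraping, normally via a client library rather than hand-built strings.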

Data Center Operations
Team Manager: Willy Pao


 * Refine procurement process
 * Improve average end-to-end turnaround time from hardware request to hardware delivery (always in progress)
 * Tighten up procurement cycle by implementing regularly scheduled deadlines for quotes, approvals, and purchase orders
 * Implement general template form for service owners to fill in ✅


 * Improve turnaround times on repair/break-fix tasks
 * Implement a new hardware repair template & refine existing triaging processes
 * Enforce regular use of hardware troubleshooting runbook
 * Hire and on-board a contractor for additional support in eqiad ✅
 * Identify 3rd party contractor to take care of straightforward tasks at remote caching sites ✅


 * Operational excellence: resolve all inventory inconsistencies
 * Clean up existing backlog of Netbox inconsistencies and data errors
 * Keep all Netbox reports in a "passed" state
 * Maintain zero error reports going forward


 * Recycle all existing decommissioned hardware
 * Clear out existing decommissioned hardware in ulsfo and codfw
 * Determine alternative disposition company for Juniper equipment

 Status 
 * July 23, 2019 -
 * Complete the transition to PHP 7 in production is partially ❌ and still in progress
 * Self-service Deployment Pipeline draft has been posted
 * Refine procurement process is in testing right now (2-week cycle)
 * Improve turnaround times on repair/break-fix tasks is also in progress with a new hire
 * Recycle all existing decommissioned hardware is in progress; we are getting quotes for work to be done
 * September 10, 2019 -
 * Firefighting improvements are in progress and database automation is ✅
 * Moving all application server traffic to PHP 7 is at about 33% and should reach 50% later this week
 * Support migration of services for the deployment pipeline is in progress and might move into next quarter
 * Strengthen backup infrastructure and support is now underway
 * Additional notes above (inline)
 * September 2019 -



Technical Engagement
Team Manager: Birgit Müller

Core
 * ✅ - HA for OpenStack API endpoints (keystone, glance, nova, designate)
 * - OpenStack version upgrade(s) - tbc in Q2
 * - Jessie deprecation (infra + Cloud VPS) - tbc in Q2
 * ❌ - Ceph cluster POC
 * - Improve Cloud VPS documentation (for users) - tbc in Q2
 * - Toolforge Kubernetes redesign/upgrade
 * - Improve Toolforge documentation - tbc in Q2

Increased visibility & knowledge of technical contributions, services and consumers across the Wikimedia ecosystem (Reduce Complexity of the Platform, Movement Diversity)
 * - Continue Tech Talks
 * ✅ - Conduct Coolest Tool Award
 * - Publish Technical Contributors Map
 * - Blog posts on Small Wiki Toolkits & Coolest Tool Award
 * ✅ - Design & publish Tech Engagement quarterly newsletter
 * - Develop visualization tool for WMCS edit data - tbc in Q2
 * - Publish Developer Metrics

Support Wikimedia's diverse technical communities (Reduce Complexity of the Platform; Movement Diversity)
 * - Develop support formats: Coordinate Small Wiki Toolkits focus area; Create toolkits & experiment, evaluate
 * - Technical internships and mentoring: Mentor students in GSOD, GSOC, Outreachy
 * Always - Provide continuous bug management support in Phabricator

Dependencies for core work: SRE / Data Center Operations team

 Status 
 * July 23, 2019 - as marked above
 * August 23, 2019
 * HA for OpenStack API endpoints - we'll have a good plan going forward after the offsite and should be able to finish everything up by the end of the quarter.
 * Jessie deprecation (talking with the community) will get started in the next week or so; it is still in a planning status for now and will continue into Q2 as the conversation continues.
 * Ceph cluster - currently waiting on hardware to be installed; stalled for now
 * Toolforge Kubernetes redesign/upgrade is still in progress (Puppetization is in place, and we are now working on customizations) but will extend into Q2 due to conferences, etc.
 * We've also been onboarding a new team member this quarter.
 * September 2019
 * Completed HA for OpenStack API endpoints. Glance is active/passive rather than active/active for now due to lack of good shared storage option. Will revisit after Ceph cluster project is complete.
 * OpenStack upgrades from Mitaka to Newton are partially complete. Expect to finish by mid-October.
 * Jessie deprecation goal for Q1 of notifying community of project and creating a tracking dashboard is complete. Project continues in Q2.