Wikimedia Performance Team

This page is a general introduction to the Performance Team. For a technical deep dive, see our page on Wikitech.

Mission
As the Wikimedia Foundation’s Performance Team, we create value for readers and editors by making it possible to retrieve and render content at the speed of thought, from anywhere in the world, on the broadest range of devices and connection profiles.

We focus on providing equal access to a frustration-free experience, regardless of whether someone is using a brand-new laptop on a fast network in a large metropolitan area, or if they're using an inexpensive mobile device in a rural area with unreliable internet connectivity.

Team
Follow our progress on quarterly goals, or take a look at our Phabricator workboard.

Values
Outreach. Our team strives to develop a performance-first culture within the Wikimedia movement. We ensure that performance is a prime consideration in technological and product developments across the movement.

Monitoring. By developing better tooling, designing better metrics, and automatically tracking regressions, all in a way that anyone can reuse, we monitor the right metrics and surface issues that can otherwise be hard to detect.

Empower others. We help the organization make better choices regarding performance.

Improvement. Some performance gains require deep expertise and substantial groundwork before they become possible. We undertake such complex projects because they can yield significant performance gains in the long run.

Intake process

 * Performance Review. The main prerequisite for a performance review is to have the changes or features deployed to a Beta Cluster wiki.
 * Libera Chat IRC: #wikimedia-perf
 * Email: performance-team@wikimedia.org
 * Reach out directly to Larissa Gaulia

Self-service guidelines and tools
Here you can find most of our available tools and guides. Feel free to reach out to us if you can't find what you are looking for.
 * Diagrams, runbooks, available tools and guides
 * Page load performance guidelines
 * Backend performance guidelines
 * Grafana dashboards
 * Further reading – a list of recommended performance blogs and articles

Tools
Below is an overview of the various applications, tools, and services we use for collecting, processing, and displaying our data.

Processing and display
Maintained by Wikimedia:

 * performance.wikimedia.org (view live | source) – Static website that serves as a portal to flame graphs, profiling, and other dashboards.
 * navtiming (source) – [Python] Processes data from Navigation Timing beacons and submits it to Statsd/Graphite (see the sketch after this list).
 * EventLogging – [Python] Platform for schema-based event data.
 * coal (view live | source) – [Python] Custom Graphite writer and web API. Frontend graphs are made with D3.
 * Statsv – [Python] Receiver that accepts simple statistics over HTTP and forwards them to statsd.
 * NOC – [PHP] Detailed database and wiki configuration information (see also: Grafana: MySQL dashboard).
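
To make the pipeline concrete, here is a minimal Python sketch of what a navtiming-style consumer does with one beacon. The event fields follow the W3C Navigation Timing API, but the metric names and event shape here are simplified illustrations, not the service's actual schema (the real service consumes events via EventLogging).

    def navtiming_to_statsd(event):
        """Convert one Navigation Timing beacon into statsd timing lines."""
        start = event['navigationStart']
        # Each metric is the delta from navigation start, in milliseconds.
        metrics = {
            'frontend.navtiming.responseStart': event['responseStart'] - start,
            'frontend.navtiming.loadEventEnd': event['loadEventEnd'] - start,
        }
        # statsd plaintext protocol: "<name>:<value>|ms" marks a timing.
        return ['%s:%d|ms' % (name, ms) for name, ms in metrics.items()]

    print(navtiming_to_statsd({
        'navigationStart': 1000,
        'responseStart': 1180,
        'loadEventEnd': 2450,
    }))
    # ['frontend.navtiming.responseStart:180|ms',
    #  'frontend.navtiming.loadEventEnd:1450|ms']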

We also use:
 * Grafana – Dashboard for visualising data from Prometheus and Graphite, publicly viewable at grafana.wikimedia.org.
 * Flame graphs (brendangregg/FlameGraph) – Viewing data from the sampling profiler for production traffic to Wikipedia and other sites, at performance.wikimedia.org/php-profiling.
 * XHGui – Viewing data from XHProf, a function-level hierarchical profiler, used when manually debugging individual requests, at https://performance.wikimedia.org/xhgui.
 * Statsd – Metrics aggregation between instrumented applications and Graphite (see the protocol sketch after this list).
 * Logstash, at logstash.wikimedia.org (NDA restricted).
 * Memkeys – Real-time view of memcached key usage.
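
Statsd's wire format is simple enough to use directly: plaintext datagrams over UDP. A minimal sketch, assuming a placeholder host and the conventional statsd port; production code would use a client library and the real endpoint.

    import socket

    # statsd plaintext protocol: "<name>:<value>|<type>".
    # "ms" marks a timing, "c" a counter. Host and metric names are placeholders.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    addr = ('statsd.example.org', 8125)
    sock.sendto(b'MediaWiki.example.save_timing:1250|ms', addr)
    sock.sendto(b'MediaWiki.example.edit_count:1|c', addr)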

Data storage

 * Prometheus – Storage of metrics and statistics. See also: Prometheus (internal runbook).
 * Kafka – Distributed streaming and storage of events. See also: Kafka (internal runbook).
 * Graphite – Timeseries database, queryable over its render API (see the sketch after this list). See also: Graphite (internal runbook).
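
For ad-hoc analysis, any series stored in Graphite can be fetched as JSON through its standard render API. A minimal sketch, assuming a placeholder host and an illustrative metric path:

    import json
    from urllib.request import urlopen

    # Fetch the last hour of one series as JSON; host and metric path
    # here are illustrative placeholders, not production names.
    url = ('https://graphite.example.org/render'
           '?target=MediaWiki.example.save_timing.median'
           '&from=-1h&format=json')
    series = json.load(urlopen(url))
    for value, timestamp in series[0]['datapoints']:
        # Datapoints are [value, unix_timestamp] pairs; value may be None
        # where no sample was recorded.
        print(timestamp, value)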

Data collection
Maintained by Wikimedia:

 * Navigation Timing (docs | source) – [JS] MediaWiki extension that collects Navigation Timing data.
 * WebPageReplay – Synthetic testing of front-end web performance against a replay proxy.
 * sitespeed.io – Synthetic tests collecting user-journey performance metrics.
 * php-excimer – [C] Low-overhead sampling profiler and interrupt timer for PHP.
 * wikimedia/arc-lamp – [PHP] Collects data from Excimer and sends aggregated, sampled profiles from production requests to Redis. Used for flame graphs (see the sketch after this list).
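
Flame graph tooling in the brendangregg/FlameGraph style consumes "collapsed" stacks: one line per unique stack, frames joined by semicolons, followed by a sample count. A minimal sketch of that aggregation step, with illustrative frame names:

    from collections import Counter

    # Sampled stack traces, outermost frame first, as a profiler might
    # emit them. Frame names here are made up for illustration.
    samples = [
        'main;MediaWiki::run;OutputPage::output',
        'main;MediaWiki::run;OutputPage::output',
        'main;MediaWiki::run;Parser::parse',
    ]

    # Collapsed-stack format consumed by flamegraph.pl: "frame;frame;... count".
    for stack, count in Counter(samples).items():
        print('%s %d' % (stack, count))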

We also use:

 * php-tideways (source) – Profiles any request via the X-Wikimedia-Debug header, for viewing in XHGui (see the sketch below).
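
To get a profile into XHGui, a request is routed through a debug backend with the X-Wikimedia-Debug header set. A hedged sketch using Python's standard library; the backend hostname and exact option syntax are assumptions and should be checked against the X-Wikimedia-Debug documentation (in practice, most people set the header via the browser extension instead):

    import urllib.request

    # Route one request through a debug backend and request profiling so
    # it appears in XHGui. Hostname and header options are assumptions.
    req = urllib.request.Request(
        'https://test.wikipedia.org/wiki/Main_Page',
        headers={'X-Wikimedia-Debug': 'backend=mwdebug1001.eqiad.wmnet; profile'},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.headers.get('Content-Type'))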

Internal workflows

 * Runbooks for operating webperf services
 * Gerrit Code-Review: Performance Team dashboard (how-to: Add Gerrit navigation link)

A big part of our work is devoted to collecting and analyzing site performance data to ensure that we have a holistic and accurate understanding of what users experience when they access Wikimedia sites. A selection of Grafana dashboards we frequently use:
 * Navigation Timing
 * ResourceLoader
 * Save Timing
 * WANCache
 * Edit Stash
 * MySQL aggregate

Milestones

 * 2014: Migrated from Zend PHP to HHVM, which roughly halved backend response times.
 * 2014: Built the Statsv service, which greatly simplifies sending lightweight data to statsd and Graphite from front-end code and apps.
 * 2015: Helped with the HTTPS + HTTP/2 migration.
 * 2015: Made the ResourceLoader top queue load asynchronously.
 * 2015: Implemented optimistic saving in MediaWiki (aka "edit stashing").
 * 2015: Improved cache hit rates for front-end resources.
 * 2015-2016: Implemented DeferredUpdates in MediaWiki (API docs, src), which greatly contributed to bringing the median edit save time below 1 second.
 * 2015: Introduced the WebPageTest service, now used by several teams for synthetic performance testing.
 * 2015: Built and introduced the Arc Lamp service. Its Xenon-powered flame graphs help surface performance issues across the MediaWiki platform.
 * 2015: Held our first team offsite, attending the Velocity conference. Being high-profile enough to speak at that conference became an aspirational goal.
 * 2016: Introduced many Grafana dashboards to track performance, along with workshops and tutorials for other teams.
 * 2017: Implemented a performance metric alert system on top of Grafana.
 * 2017: Migrated MediaWiki and all extensions to jQuery 3.
 * 2017: Published the first of many tech blog posts.
 * 2015-2018: Introduced the Thumbor service and migrated all production traffic to it, rewriting the media thumbnailing layer for MediaWiki.
 * 2018: Began running synthetic tests against a replay proxy (WebPageReplay) to focus measurements on front-end performance.
 * 2018-2019: Migrated from HHVM to PHP 7.
 * 2019: Spoke at major performance conferences, including Velocity's final edition, closing the loop on our 2015 aspiration to speak there.
 * 2019: Published our first research paper.
 * 2020: Created a real-device performance monitoring lab.
 * 2020-2021: Hosted a web performance devroom at FOSDEM.
 * 2020-2021: Trained 30+ staff members on frontend web performance.
 * 2020: Awarded the first Web Perf Hero award.

Presentations and blog posts

 * Performance Team blog
 * Humans can (also) measure your performance at WeLoveSpeed (video), Gilles Dubuc, 2021.
 * Why performance matters at Wikimedia (video, restricted to staff), Gilles Dubuc, 2021.
 * The role of the performance team (video, restricted to staff), Gilles Dubuc, 2021.
 * How to Logstash (video, restricted to staff), Timo Tijhof, 2020.
 * How to make sense of real user performance metrics (RUM) at Velocity Conference Berlin (video), Gilles Dubuc, 2019.
 * Keeping Wikipedia Fast at WeLoveSpeed (video), Peter Hedenskog, 2019.
 * Tech talk "Creating Useful Dashboards with Grafana" (video), Timo Tijhof, 2016.
 * Tech talk "Let's talk about web performance" (video), Peter Hedenskog, 2015.

Contact

 * Phabricator workboard (Issue tracker)
 * Libera Chat IRC: #wikimedia-perf (ircs://irc.libera.chat/wikimedia-perf)