Wikimedia Performance Team

As the Wikimedia Foundation’s Performance Team, we create value for readers and editors by making it possible to retrieve and render content at the speed of thought, from anywhere in the world, on the broadest range of devices and connection profiles.


Team
Follow our progress on quarterly goals, or take a look at our Phabricator workboard.

Values
Outreach. Our team strives to develop a culture of performance first within the Wikimedia movement. We ensure that performance is a prime consideration in technological and product developments across the movement.

Monitoring. By developing better tooling, designing better metrics, and automatically tracking regressions, all in a way that anyone can reuse, we monitor the right metrics and discover issues that can otherwise be hard to detect.

Empower others. We help the organization make better choices regarding performance.

Improvement. Some performance gains require deep expertise and substantial preparatory work before they become possible. We undertake complex projects that can yield significant performance gains in the long run.

Contact

 * Phabricator issue tracker: #Performance-Team (triaged within 7 days)
 * Libera Chat IRC: #wikimedia-perf
 * email: performance-team@wikimedia.org
 * or reach out directly to Larissa Gaulia.

Intake process
Refer to Performance Review. Note that our main review starts after the changes or features are deployed to the Beta Cluster.

Self-service guidelines and tools
Here you can find most of our available tools and guides. Feel free to reach out to us if you can't find what you are looking for.
 * Performance - Diagrams, available tools, guides, and Grafana dashboards.
 * Frontend performance guidelines
 * Backend performance guidelines
 * Further reads - a list of recommended Performance blogs and articles

Public data and open source
Public datasets that may be of interest:

 * Grafana - We run our Grafana installation in public. Dashboards include Navigation Timing and Page drilldown (Synthetic testing).
 * Flame Graphs - Daily and hourly flame graphs from Arc Lamp, detailing how backend time is spent on the MediaWiki PHP servers that power Wikipedia.
 * AS Report - Periodic comparison of backbone connectivity from different Internet service providers, based on anonymised Navigation Timing and CPU benchmark datasets.
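The flame graphs above are built by aggregating sampled stack traces into the "folded" format that flame graph tooling consumes: one line per unique stack, frames joined by semicolons, followed by a sample count. The sketch below illustrates that aggregation step in spirit; the sample stacks and frame names are invented for illustration and do not reflect Arc Lamp's actual pipeline.

```javascript
// Collapse a list of sampled stacks (outermost frame first) into
// folded flame graph lines: "frame1;frame2;frame3 <count>".
function fold(samples) {
  const counts = new Map();
  for (const stack of samples) {
    const key = stack.join(';');
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  // One line per unique stack, sorted for stable output.
  return [...counts.entries()].sort().map(([key, n]) => `${key} ${n}`);
}

// Invented example stacks, standing in for profiler samples.
const samples = [
  ['main', 'MediaWiki::run', 'parse'],
  ['main', 'MediaWiki::run', 'parse'],
  ['main', 'MediaWiki::run', 'save'],
];
for (const line of fold(samples)) {
  console.log(line);
}
// main;MediaWiki::run;parse 2
// main;MediaWiki::run;save 1
```

Flame graph renderers then draw each folded line as a stack of boxes whose width is proportional to its count.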

A selection of open-source software we maintain:

 * NavigationTiming extension - [JS] MediaWiki extension to send performance beacons with Navigation Timing and Paint Timing API metrics.
 * sitespeed.io - Synthetic tests collecting user journey performance metrics.
 * php-excimer - [C] Low-overhead sampling profiler and interrupt timer for PHP.
 * wikimedia/arc-lamp - [PHP] Uses Excimer to collect stack samples in production and aggregates them into flame graphs.
 * ResourceLoader - [PHP] MediaWiki's delivery system for JavaScript, CSS, interface icons, and localisation text.
 * BagOStuff - [PHP] MediaWiki's abstraction layer for object caching.
 * For a full list of components we maintain and operate at Wikimedia in production, refer to Maintainers.
 * For a list of open source packages we publish, refer to Wikimedia Open Source.
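The beacons sent by the NavigationTiming extension carry deltas derived from the browser's standard Navigation Timing entry. The sketch below shows the kind of derivation involved, using field names from the W3C PerformanceNavigationTiming interface; the specific metrics chosen and the hand-made entry object are illustrative, not Wikimedia's actual beacon schema.

```javascript
// Derive a few common metrics from a Navigation Timing Level 2 entry.
// All values are milliseconds relative to navigation start (time 0).
function navigationMetrics(entry) {
  return {
    // Time to first byte of the response.
    ttfb: entry.responseStart,
    // DNS lookup plus TCP connection setup.
    connect: entry.connectEnd - entry.domainLookupStart,
    // Time until the load event finished.
    load: entry.loadEventEnd,
  };
}

// In a browser, the real entry would come from:
//   const [entry] = performance.getEntriesByType('navigation');
// A hand-made object keeps this sketch self-contained.
const entry = {
  domainLookupStart: 5,
  connectEnd: 40,
  responseStart: 120,
  loadEventEnd: 900,
};
console.log(navigationMetrics(entry)); // { ttfb: 120, connect: 35, load: 900 }
```

A beacon would then serialize such a metrics object and send it to a collection endpoint, typically via `navigator.sendBeacon`.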

Milestones

 * 2014: Migrated from Zend PHP to HHVM, which greatly reduced backend response time (2x faster).
 * 2014: Built the Statsv service, which greatly simplifies sending lightweight data to statsd and Graphite from front-end code and apps.
 * 2015: Helped with the HTTPS + HTTP/2 migration (announcement, lessons learned).
 * 2015: Made the ResourceLoader top queue asynchronous.
 * 2015: Implemented optimistic save in MediaWiki (aka "edit stashing").
 * 2015: Improved cache hits for front-end resources.
 * 2015-2016: Implemented DeferredUpdates for MediaWiki (API docs, src), bringing the Save Timing median under a second.
 * 2015: Introduced the WebPageTest service, enabling several teams to do synthetic performance testing.
 * 2015: Implemented the Arc Lamp service, with flame graphs that surface performance issues across the MediaWiki platform.
 * 2015: Held our first team offsite, attending the Velocity conference. Becoming high-profile enough to speak at that conference became an aspirational goal.
 * 2017: Implemented a performance metric alert system on top of Grafana.
 * 2017: Migrated MediaWiki and all extensions to jQuery 3.
 * 2015-2018: Introduced the Thumbor service and migrated all production traffic to it, rewriting the media thumbnailing layer for MediaWiki.
 * 2018: Began running synthetic tests through a replay proxy (WebPageReplay) to focus measurements on front-end performance.
 * 2018-2019: Migrated from HHVM to PHP 7.2.
 * 2019: Joined the World Wide Web Consortium and participated in web standards work via the W3C Web Performance WG (announcement).
 * 2019: Called out at Google I/O by Paul Irish (Chrome DevRel) as "one of the best Performance teams".
 * 2019: Spoke at major performance conferences, including the last-ever Velocity Conference, closing the loop on our 2015 aspiration to speak there.
 * 2019: Published our first research paper, A large-scale study of Wikipedia's quality of experience, with Dario Rossi (background).
 * 2020: Organised and oversaw the implementation of the First Paint metric in WebKit for Apple Safari (announcement).
 * 2020: Created a real mobile device performance testing lab (documentation).
 * 2020-2021: Hosted a web performance devroom at FOSDEM (blogpost, recordings).
 * 2020-2021: Trained 30+ staff members on frontend web performance.
 * 2020: Awarded the first Web Perf Hero award.

Presentations and blog posts

 * Performance Team blog
 * Humans can (also) measure your performance at WeLoveSpeed (video), Gilles Dubuc, 2021.
 * Why performance matters at Wikimedia (video, restricted to staff), Gilles Dubuc, 2021.
 * The role of the performance team (video, restricted to staff), Gilles Dubuc, 2021.
 * How to Logstash (video, restricted to staff), Timo Tijhof, 2020.
 * How to make sense of real user performance metrics (RUM) at Velocity Conference Berlin (video), Gilles Dubuc, 2019.
 * Keeping Wikipedia Fast at WeLoveSpeed (video), Peter Hedenskog, 2019.
 * Tech talk "Creating Useful Dashboards with Grafana" (video), Timo Tijhof, 2016.
 * Tech talk "Let's talk about web performance" (video), Peter Hedenskog, 2015.