Wikimedia Performance Team
As the Wikimedia Foundation’s Performance Team, we want to create value for readers and editors by making it possible to retrieve and render content at the speed of thought, from anywhere in the world, on the broadest range of devices and connection profiles.
Team
Focus
Outreach. Our team strives to develop a performance-first culture in the movement. Through communication, training, and embedding ourselves in the product lifecycle, we want to make performance a prime consideration in technological and product developments across the movement.
Monitoring. By developing better tooling, designing better metrics, and automatically tracking regressions, all in a way that anyone can reuse, we want to monitor the right metrics and discover issues that can otherwise be hard to detect.
Improvement. Some performance gains require a very high level of expertise and complex work to happen before they are possible. We undertake large projects, often on our legacy code base, that can yield important performance gains in the long run.
Knowledge. We are the movement's reference on all things performance, which requires keeping up with rapid changes in technology across our entire stack. In order to disseminate correct information in our outreach, we aim to build the most comprehensive knowledge base about performance.
Current projects
Availability. Although Wikimedia Foundation currently operates five data centers, MediaWiki is only running from one. If you are an editor in Jakarta, Indonesia, content has to travel over 15,000 kilometers to get from our servers to you (or vice versa). To run MediaWiki concurrently from multiple places across the globe, our code needs to be more resilient to failure modes that can occur when different subsystems are geographically remote from one another.
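A back-of-the-envelope calculation makes the cost of that distance concrete. The sketch below is a lower bound, not a measured value: it assumes a signal speed of about 200,000 km/s in optical fibre and ignores routing detours, queueing, and TCP/TLS handshakes.

    # Lower bound on round-trip time imposed by distance alone.
    # Assumptions: ~200,000 km/s signal speed in fibre (about 2/3 of c),
    # straight-line distance, no routing, queueing, or handshake overhead.
    FIBRE_SPEED_KM_PER_S = 200_000

    def min_round_trip_ms(distance_km: float) -> float:
        """Return the minimum round-trip time in milliseconds."""
        one_way_s = distance_km / FIBRE_SPEED_KM_PER_S
        return 2 * one_way_s * 1000

    print(min_round_trip_ms(15_000))  # ~150 ms before the server does any work

A single HTTPS page view typically needs several such round trips (TCP handshake, TLS negotiation, then the request itself), so the distance penalty multiplies quickly; serving from a geographically closer data center removes it at the source.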
Performance testing infrastructure. WebPageTest provides a stable reference for a set of browsers, devices, and connection types from different points in the world. It collects very detailed telemetry that we use to find regressions and pinpoint where problems are coming from. This is in addition to the more basic Navigation Timing metrics we gather from real users in production.
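As a rough illustration of the glue involved, the sketch below polls a finished test on a WebPageTest instance for its JSON result and forwards a single metric to statsd over UDP. The test id, metric name, and statsd host are placeholders, the exact result fields can differ between WebPageTest versions, and this is not the actual runner code.

    # Sketch: fetch a completed WebPageTest result and forward one metric to statsd.
    # The test id, metric name, and statsd host below are placeholders.
    import json
    import socket
    import urllib.request

    WPT_BASE = "https://wpt.wmftest.org"   # WebPageTest instance (assumed)
    TEST_ID = "240101_AB_123"              # hypothetical finished test id

    # Classic WebPageTest installs expose completed results as JSON.
    with urllib.request.urlopen(f"{WPT_BASE}/jsonResult.php?test={TEST_ID}") as resp:
        result = json.load(resp)

    speed_index = result["data"]["median"]["firstView"]["SpeedIndex"]

    # statsd accepts plain-text datagrams of the form "<metric>:<value>|<type>" over UDP.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(f"webpagetest.enwiki.SpeedIndex:{speed_index}|ms".encode(),
                ("statsd.example.org", 8125))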
ResourceLoader. ResourceLoader is the MediaWiki subsystem that is responsible for loading JavaScript and CSS. Whereas much of MediaWiki's code executes only sparingly (in reaction to editors modifying content), ResourceLoader code runs over half a billion times a day on hundreds of millions of devices. Its contribution to how users experience our sites is very large. Our current focus is on improving ResourceLoader's cache efficiency by packaging and delivering JavaScript and CSS code in a way that allows it to be reused across page views without needing to be repeatedly downloaded.
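The cache-efficiency idea can be illustrated with a toy version of content-addressed module URLs. This is a simplified sketch, not ResourceLoader's actual implementation; the module name, source, and URL format below are made up for illustration. If a module's URL embeds a hash of its content, the browser can keep reusing its cached copy until the content actually changes.

    # Toy illustration of content-hash versioning for long-lived caching.
    # Not ResourceLoader's actual code; module name, source, and URL are made up.
    import hashlib

    def module_url(name: str, source: str) -> str:
        """Build a URL whose version only changes when the module content changes."""
        version = hashlib.sha1(source.encode()).hexdigest()[:8]
        return f"/w/load.php?modules={name}&version={version}"

    old = module_url("mediawiki.util", "function a() {}")
    new = module_url("mediawiki.util", "function a() {}")  # unchanged source
    assert old == new  # same URL, so the browser can reuse its cached copy

With a scheme like this, a deployment only invalidates the modules whose content changed; everything else continues to be served from the browser cache.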
Presentations and blog posts
- The Speed of Thought (Blog), regular blog posts from the Performance Team at Wikimedia.
- Tech talk "Let's talk about web performance" (video), Peter Hedenskog, 2015.
- Tech talk "Creating Useful Dashboards with Grafana" (video), Timo Tijhof, 2016.
Dashboards
A big part of our work is devoted to collecting and analyzing site performance data to ensure that we have a holistic and accurate understanding of what users experience when they access Wikimedia sites. You can discover our dashboards by visiting the Wikimedia performance portal. A selection of our dashboards is also provided here.
Tools
Below is an overview of the various applications, tools, and services we use for collecting, processing, and displaying our data.
Data collection
Maintained by Wikimedia:
- wikimedia/arc-lamp - [PHP] Collect data from HHVM Xenon and send aggregated and sampled profiles from production requests to Redis. Used for flame graphs (a minimal aggregation sketch follows this list).
- Navigation Timing (docs | GitHub) - [JS] MediaWiki plugin to collect Navigation Timing data.
- WebPageTest – Synthetic testing of web performance, at wpt.wmftest.org.
- WebPageReplay – Synthetic testing of web performance.
- WebPageTest runner (GitHub) - [JS] Collect data from the WebPageTest API and send it to Statsd or Graphite.
- Jenkins configuration (GitHub) - [YAML] Jenkins job that triggers WebPageTest runs.
- Tendril (GitHub) - [PHP] Real-time MariaDB analytics and performance.
We also use:
- XHProf (HHVM extension) - Profile any request via X-Wikimedia-Debug and view it in XHGui.
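The arc-lamp tool listed above takes sampled stack traces and aggregates them into the "folded" text format that flame-graph tooling consumes: one line per unique stack, frames joined by semicolons, followed by a sample count. The sketch below shows only that aggregation step, with made-up stack samples; the real tool reads HHVM Xenon samples and stores its output via Redis.

    # Sketch: collapse sampled stacks into the "folded" format used by flame graphs.
    # The sample data is made up; arc-lamp itself reads HHVM Xenon samples.
    from collections import Counter

    samples = [
        ["main", "MediaWiki::run", "Parser::parse"],
        ["main", "MediaWiki::run", "Parser::parse"],
        ["main", "MediaWiki::run", "OutputPage::output"],
    ]

    folded = Counter(";".join(stack) for stack in samples)
    for stack, count in folded.items():
        print(stack, count)
    # main;MediaWiki::run;Parser::parse 2
    # main;MediaWiki::run;OutputPage::output 1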
Processing and display
Maintained by Wikimedia:
- performance.wikimedia.org (see | GitHub) - Static website that serves as portal to Flame Graphs, profiling, and other dashboards.
- navtiming (GitHub) – [Python] Process data from Navigation Timing beacons and submit the data to Statsd/Graphite (a minimal sketch follows this list).
- EventLogging – [Python] Platform for schema-based data.
- coal (see | GitHub) - [Python] Custom Graphite writer and Web API. Frontend graphs made with D3.
- PerformanceInspector (docs | GitHub) - [JS] MediaWiki plugin to profile the current page and find potential performance problems.
- Statsv – [Python] Receiver for simple statistics over HTTP for statsd.
- perflogbot (source) - [JS] An IRC bot tracking the behaviour of ResourceLoader in Wikimedia production (#wikimedia-perf-bots).
- Xenon CLI tools - [Python]
- dbtree - Detailed MySQL cluster information (see also: Grafana: MySQL dashboard).
We also use:
- Grafana – Dashboard for visualising data from Prometheus and Graphite, publicly viewable at grafana.wikimedia.org.
- Flame graphs (brendangregg/FlameGraph) – Viewing data from the sampling profiler for production traffic to Wikipedia and other sites, at https://performance.wikimedia.org/xenon.
- XHGui – Viewing data from XHProf, a function-level hierarchical profiler, used when manually debugging individual requests, at https://performance.wikimedia.org/xhgui.
- Statsd – Metrics aggregation between instrumented applications and Graphite.
- Logstash, at logstash.wikimedia.org (NDA restricted).
- Memkeys
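The navtiming processor listed above works by deriving durations from the raw Navigation Timing timestamps in each beacon and emitting them as statsd timers. A minimal sketch of that step follows; the field names, metric names, and statsd host are illustrative, not the real schema.

    # Sketch: turn a Navigation Timing beacon into statsd timer metrics.
    # Field names, metric names, and the statsd host are placeholders.
    import socket

    beacon = {"navigationStart": 0, "responseStart": 180, "loadEventEnd": 950}

    metrics = {
        "frontend.navtiming.responseStart": beacon["responseStart"] - beacon["navigationStart"],
        "frontend.navtiming.loadEventEnd": beacon["loadEventEnd"] - beacon["navigationStart"],
    }

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for name, value in metrics.items():
        sock.sendto(f"{name}:{value}|ms".encode(), ("statsd.example.org", 8125))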
Data storage
- Prometheus – Storage of metrics and statistics. See also: Prometheus (internal runbook).
- Kafka – Distributed streaming and storing of events. See also: Kafka (internal runbook).
- Graphite – Timeseries database. See also Graphite (internal runbook).
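Graphite's plaintext protocol, which several of the tools above ultimately write to, is simple: one "metric value timestamp" line per datapoint, sent to TCP port 2003. A sketch with a placeholder host and metric name:

    # Sketch: write one datapoint to Graphite's plaintext listener (TCP port 2003).
    # Host and metric name are placeholders.
    import socket
    import time

    line = f"performance.example.responseStart 180 {int(time.time())}\n"
    with socket.create_connection(("graphite.example.org", 2003)) as sock:
        sock.sendall(line.encode())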
Milestones
- Migration from Zend PHP to HHVM. This greatly reduced backend response time (2x faster).
- Helped with the HTTPS + HTTP/2 migration.
- Asynchronous ResourceLoader top queue.
- Optimistic save (aka "edit stashing").
- Improve cache hits for front-end resources.
- DeferredUpdates (src). Greatly contributed to bringing the median edit save time below 1 second.
- WebPageTest. Now used by several teams to do synthetic performance testing.
- Xenon/Flame graphs. Surfaces the performance of our entire PHP backend.
- NavigationTiming improvements. We are now able to track real user performance in a more fine-grained fashion.
- Introduction of many Grafana dashboards to track performance (see list above).
- Statsv. Greatly simplifies sending lightweight data to statsd and Graphite from front-end code and apps.
- Helped improve the performance of the new portal page.
- Helped with the image lazy-loading project, which focused on improving the performance of the mobile site.
- Implemented a performance metric alert system on top of Grafana.
- Thumbor. Rewrote the media thumbnailing layer for Wikimedia production.
- Migrated MediaWiki and all extensions to jQuery 3.
Workflow
- Grafana dashboards
- Gerrit Code-Review: Performance Team dashboard (how-to: Add Gerrit navigation link)
Contact
- Phabricator workboard (Issue tracker)
- Freenode IRC: #wikimedia-perf