Wikimedia Performance Team/Web Perf Hero award

The Performance Team has been giving the Web Perf Hero award since mid-2020 to individuals who have gone above and beyond to improve the web performance of Wikimedia projects. It is awarded at most once a quarter and takes the form of a Phabricator badge.

Below are past recipients and why they've been given the Web Perf Hero award. Beyond specific recent projects that led to the award, they've all demonstrated repeated care, focus and discipline around performance.

Amir Sarabadani
Over the past six months, Amir (@Ladsgroup) significantly reduced the processing time and cost of saving edits in MediaWiki. Not just once, but several times! We measure this processing time through Backend Save Timing (docs), which encompasses the time spent on the web server, from the start of the request until the response is complete and flushed to the client.

Amir expanded MediaWiki's ContentHandler component with the ability for content models to opt out of eagerly generating HTML (T285987). On Wikipedia we generate HTML while saving an edit. This is necessary because HTML is central to how wikitext is parsed, and generating it ahead of time speeds up pageviews. On Wikidata, this is not the case: Wikidata entities (example) can be validated and stored without rendering an HTML page. Wikidata is also characterised by having a majority of edits come from bots, and the site receives far fewer pageviews in proportion to its edits (where Wikipedia has ~1000 pageviews per edit, Wikidata has ~10). This does not account for Wikidata edits generally being made in sessions of several micro-edits.
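The shape of this opt-out can be sketched as follows. This is a minimal, hedged illustration of the idea only: the class and method names below are invented for the sketch and are not MediaWiki's real ContentHandler API.

```typescript
// Illustrative sketch of a per-content-model opt-out from eager HTML
// generation. All names here are invented, not the real MediaWiki API.

interface ParserOutput {
  html: string | null;
  links: string[]; // metadata, e.g. external links found in the content
}

abstract class ContentHandler {
  // By default, content models eagerly render HTML on save.
  generateHTMLOnSave(): boolean {
    return true;
  }
  abstract getParserOutput(text: string, withHtml: boolean): ParserOutput;
}

class WikitextHandler extends ContentHandler {
  // Wikipedia-style content: HTML is central to parsing, keep the default.
  getParserOutput(text: string, withHtml: boolean): ParserOutput {
    return { html: withHtml ? `<p>${text}</p>` : null, links: [] };
  }
}

class EntityHandler extends ContentHandler {
  // Wikidata-style content: entities can be validated and stored
  // without rendering an HTML page, so opt out.
  generateHTMLOnSave(): boolean {
    return false;
  }
  getParserOutput(text: string, withHtml: boolean): ParserOutput {
    return { html: withHtml ? `<table>${text}</table>` : null, links: [] };
  }
}

function saveEdit(handler: ContentHandler, text: string): ParserOutput {
  // Only render HTML eagerly when the content model asks for it.
  return handler.getParserOutput(text, handler.generateHTMLOnSave());
}
```

The point of the sketch is that the decision lives with the content model, so each model can make the trade-off that matches its own read/write ratio.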

Amir adopted this new opt-out in the Wikibase extension, which powers Wikidata. This lets Wikidata skip the HTML generation step whenever possible. He also identified and fixed an issue with the SpamBlacklist extension (T288639) that prevented the Wikidata optimisation from working. The spam filter acts on links in the content via parser metadata, but it requested a full ParserOutput object with HTML, rather than metadata only.
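The fix can be sketched in miniature: a link check only needs the metadata half of the parser output, so it should not force the expensive HTML half. The function and field names below are invented for illustration, not the actual SpamBlacklist code.

```typescript
// Sketch of the SpamBlacklist fix: consult link metadata without forcing
// a full HTML render. Names are illustrative, not the real extension code.

type ParserOutput = { html: string | null; externalLinks: string[] };

// Stand-in for the parser; HTML generation is the expensive part.
function parse(text: string, generateHtml: boolean): ParserOutput {
  const externalLinks = text.match(/https?:\/\/\S+/g) || [];
  return { html: generateHtml ? `<p>${text}</p>` : null, externalLinks };
}

function containsBlacklistedLink(text: string, blacklist: string[]): boolean {
  // The fix: request metadata only, so the HTML-skipping optimisation
  // upstream stays effective.
  const output = parse(text, /* generateHtml */ false);
  return output.externalLinks.some(link =>
    blacklist.some(bad => link.includes(bad))
  );
}
```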

Amir's work cut latencies by half. The wbeditentity API went from upwards of 1.5s at the 95th percentile to under 0.7s, and the 75th percentile from 0.6-1.0s down to 0.4-0.5s (Grafana). Internal metrics show where this difference originates: one metric went from 0.5-0.7s down to 0.2-0.3s, and another from 0.2-0.3s to consistently under 0.1s (Grafana).

SD0001
@SD0001 implemented Package files for Gadgets (T198758). This enables gadget maintainers to bundle JSON files, unpacked via. This improves performance by avoiding delays from extra web requests. It also improves security by allowing safe contributions to JSON pages as pure data, with syntax validated on edit. Previously, on Wikimedia wikis for example, admins would need script-editing access for this and rely on copy-paste instructions from another person via the talk page.
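The performance benefit comes from data travelling inside the bundle rather than behind a second round-trip. A minimal sketch of that idea, with invented names and shapes (not the real Gadgets/ResourceLoader interfaces):

```typescript
// Sketch of packaged files: JSON data ships inside the gadget bundle and
// is read synchronously, with no extra web request. Names are invented.

type PackagedFile = { json: unknown } | { script: string };

class GadgetBundle {
  constructor(private files: Map<string, PackagedFile>) {}

  // Analogous to unpacking a bundled JSON file at load time: the data is
  // already here, and its syntax was validated when the page was edited.
  requireJson(name: string): unknown {
    const file = this.files.get(name);
    if (!file || !("json" in file)) {
      throw new Error(`No packaged JSON file named ${name}`);
    }
    return file.json;
  }
}

// Hypothetical usage: a gadget reading its own bundled configuration.
const bundle = new GadgetBundle(
  new Map<string, PackagedFile>([["config.json", { json: { limit: 10 } }]])
);
const config = bundle.requireJson("config.json") as { limit: number };
```

Because the JSON page is pure data, a non-admin contributor can safely edit the configuration without being able to inject executable script.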

SD0001 also introduced  in ResourceLoader, and used it in the startup module to optimise away unneeded module registrations. We just shipped the first adoption of this for Gadgets (T236603). In the future, we'll use this to optimise MediaWiki's own skin modules as well.

Umherirrender
@Umherirrender has initiated and carried out significant improvements to the performance of MediaWiki user preferences (T278650, T58633, and T291748). The impact is felt widely throughout Wikimedia sites: for example, when switching languages via the ULS selector, exploring Beta Features and Gadgets, or switching skins. These are all powered by the MediaWiki "Preferences" component.

The work included implementing support for deferred message parsing in more HTMLForm classes, and applying this to the Echo and Gadgets extensions. This cut API latency by over 50%: from 0.7s to 0.3s at the median, and from 1.2s to 0.5s at the 95th percentile (see graphs at T278650#7130951).

Kunal Mehta
@Kunal's work investigating and fixing performance regressions during the Debian Buster upgrade was critical in understanding and mitigating the impact of that migration. Without his initiative, the issue might have gone unnoticed or underestimated for some time, and been much harder to understand and address.

Giuseppe Lavagetto
@Giuseppe's in-depth blog post about Envoy and PHP, and all the underlying work behind it, shows that he is willing to go the extra mile to improve the performance of our systems.

Nick Ray
Nick's in-depth analysis of the impact of DOM order on performance was excellent, and shows how much work he puts into ensuring that he builds performant features.

Jon Robson
We hereby recognise the excellence of Jon's work converting image lazy loading to use IntersectionObserver, one of many projects he took the initiative to start in order to improve the performance of our sites.