Wikimedia Release Engineering Team/Metrics

This page is to start the conversation around metrics for the Release Engineering and QA team. It is most certainly a draft.

Scenarios per repo

 * probably create a short list of high-priority projects/repos
 * I'm not sure this makes sense. What if one repo has 100,000 LOC and another has 10,000 LOC? One repo has 20 active developers, another has 5 active developers (for some definition of "active")? One repo has a lot of WMF funding, one repo does not? Cmcmahon(WMF) (talk) 14:29, 21 July 2014 (UTC)
 * 1) Every metric needs a story (this will be my mantra) and 2) it can help identify relied-upon-but-under-resourced code. Greg (WMF) (talk) 20:04, 7 August 2014 (UTC)

Browser test failure reasons

 * Percentage due to infrastructural issues
 * Percentage due to actual code breakage
 * Probably requires hand-coding at first
 * How can we automate it? Delegate to the teams (i.e., make it really easy for them)?

Browser test failure length

 * # days since last green build
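A sketch of how this could be computed from build data in the shape returned by the Jenkins JSON API (`/job/<name>/api/json?tree=builds[result,timestamp]`); the job name, field shapes, and sample data here are illustrative assumptions, not the team's actual jobs:

```python
from datetime import datetime, timezone

def days_since_last_green(builds, now=None):
    """Days since the most recent SUCCESS build.

    `builds` is a list of {"result": ..., "timestamp": ms-since-epoch}
    dicts, newest first (the shape Jenkins' JSON API returns).
    Returns None if the job has never been green.
    """
    now = now or datetime.now(timezone.utc)
    for build in builds:
        if build.get("result") == "SUCCESS":
            then = datetime.fromtimestamp(build["timestamp"] / 1000, timezone.utc)
            return (now - then).days
    return None

# Fabricated sample: a red build on top of an older green one.
sample = [
    {"result": "FAILURE", "timestamp": 1_406_000_000_000},
    {"result": "SUCCESS", "timestamp": 1_405_000_000_000},
]
```

The same function works for any job, so one loop over the high-priority job list would produce the whole report.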

Pre-deploy browser test state

 * % of red builds on the last run before the deploy (Wednesday night)

Jenkins build times

 * broken down by type (browser tests, unit tests/linting, etc.)
 * broken down by 'team' (for various values of "team"):
   * Mobile
   * Flow
   * VE
   * MW Core
   * extensions are too much work to do individually, but they are too varied to get useful data out of an aggregate
 * mean time to merge
   * probably the same list of 'teams' as above
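One way the per-team breakdown could work is to map Jenkins job names onto teams by substring. A minimal sketch, where the patterns and job names are assumptions that would need checking against the real Jenkins configuration:

```python
from statistics import mean

# Illustrative mapping from job-name substrings to teams; the real
# job names on the WMF Jenkins instance would need to be verified.
TEAM_PATTERNS = {
    "Mobile": "mobile",
    "Flow": "flow",
    "VE": "visualeditor",
    "MW Core": "mediawiki-core",
}

def mean_build_time_by_team(builds):
    """`builds` is a list of (job_name, duration_seconds) pairs.

    Returns {team: mean duration in seconds} for jobs matching a
    known pattern; unmatched jobs are ignored.
    """
    by_team = {}
    for name, duration in builds:
        for team, pattern in TEAM_PATTERNS.items():
            if pattern in name.lower():
                by_team.setdefault(team, []).append(duration)
                break
    return {team: mean(durations) for team, durations in by_team.items()}
```

The same pairs could be re-grouped by job type instead of team for the first breakdown.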

Phabricator

 * Number of teams migrated to Phabricator vs. number of teams currently using Trello/Mingle

# of backport commits to deploy branches (wmfXX)

 * Initially hand-coded by Greg
 * Next step, possibly (based on the natural categories Greg identifies in the first step): a commit message keyword
 * Future step: track the number of un-tagged backports and report that publicly
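Once a commit-message keyword exists, counting could be automated over the output of something like `git log --format=%s master..origin/wmfXX` (branch name left as in the heading). A sketch, where the `[backport]` tag is a hypothetical convention standing in for whatever categories Greg identifies:

```python
import re

# Hypothetical tag; the real keyword would come from the hand-coding step.
BACKPORT_RE = re.compile(r"\[backport\]", re.IGNORECASE)

def count_backports(commit_messages):
    """Count tagged vs. untagged commits on a deploy branch.

    `commit_messages` is a list of commit subject lines, e.g. from
    `git log --format=%s` over the branch range.
    Returns (tagged, untagged).
    """
    tagged = sum(1 for message in commit_messages if BACKPORT_RE.search(message))
    return tagged, len(commit_messages) - tagged
```

The second number is exactly the "un-tagged backports" figure the future step wants to report, modulo commits that are not backports at all.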

MediaWiki-Vagrant adoption

 * How to measure this? Survey?

Beta Cluster stability

 * basic monitoring of uptime/response time
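The basic monitoring could be as simple as periodically probing a Beta Cluster URL and summarising the samples. A sketch of the summary step, with fabricated probe data (the probe URL and collection mechanism are left open):

```python
def availability_stats(probes):
    """Summarise uptime/response-time probes.

    `probes` is a list of (ok, response_seconds) samples, e.g. one
    per minute from an HTTP check against a Beta Cluster page.
    Returns (uptime_percent, mean_response_seconds_of_successful_probes);
    the mean is None when every probe failed.
    """
    if not probes:
        return 0.0, None
    ok_times = [seconds for success, seconds in probes if success]
    uptime = 100.0 * len(ok_times) / len(probes)
    mean_rt = sum(ok_times) / len(ok_times) if ok_times else None
    return uptime, mean_rt
```

Failed probes are excluded from the response-time mean so that timeouts don't drown out the normal-case latency; whether that is the right choice depends on what the metric's story is.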