Community metrics


How is the MediaWiki / Wikimedia tech community doing? Let's analyze the available data to highlight the contributors and areas setting an example, as well as the bottlenecks and inactive corners requiring our attention.

Your feedback and requests are welcome in Phabricator (project Analytics-Tech-community-metrics). You can also comment on the discussion page and on the Analytics mailing list.

Key performance indicators

Median age of open changesets

"Time from last patchset" in days at "Age of open changesets (monthly snapshots)".

Open changesets waiting for review

"Waiting for review" at "Backlog of open changesets (monthly snapshots)"

New changesets submitted per month

The "submitted" line in "submitted vs. Merged changes vs. Abandoned".

Active Gerrit code review users per month

Uploaders, Reviewers, Committers

Other reports

MediaWiki Core code review

Number of MediaWiki Core changesets waiting for review (CR0 or CR+1).

Age in days of open MediaWiki Core patchsets.

Active users in Phabricator

Monthly active users in Bugzilla (from 2013-02 to 2014-10) and in Phabricator (from 2014-12 to last month).

New accounts in Phabricator

New Phabricator accounts created every month. About 4,003 users registered on Wikimedia Phabricator between its launch in September 2014 and October 2015.

korma.wmflabs.org

korma.wmflabs.org is the Wikimedia Tech community metrics dashboard. It has been under development since June 2013.

The data is refreshed regularly. The data sources include Git and Gerrit repositories, Phabricator's Maniphest (and previously Bugzilla), mediawiki.org, mailing lists, and IRC.

Korma is powered by Metrics Grimoire and Viz Grimoire. You can find the development specific to the Wikimedia tech dashboard on GitHub.

Bugs and feature requests can be reported under the Analytics-Tech-community-metrics project.

In 2016/2017, korma.wmflabs.org will be deprecated and replaced by a Kibana-based dashboard. See below for more information.

Git

  • The source code repositories analyzed are mediawiki/core and all the MediaWiki extensions:
    • FIXME: This is only a portion (albeit a large one) of all the repositories we need to scan. The default is everything at gerrit.wikimedia.org, but let's review every repository before adding it, just in case.
ssh -p 29418 gerrit.wikimedia.org gerrit ls-projects | grep "mediawiki/extensions"

Code review

Korma offers code review data based on a selection of repositories scanned on a daily basis at gerrit.wikimedia.org. Specifically:

  • gerrit_trackers.conf contains a list of repositories retrieved using a command similar to ssh -l <user> -p 29418 gerrit.wikimedia.org gerrit ls-projects. We are planning to maintain this list automatically (phab:T104845).
  • gerrit_trackers_blacklist.conf contains a manually maintained list of blacklisted repositories: those whose activity we don't want to count in our metrics, such as upstream projects, empty or deprecated repositories, and other exceptional cases. Repositories are ignored as soon as they are added to the blacklist.

In addition, if a project is removed from the Gerrit list, the script detects the change and removes the project from Korma's database automatically.
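A minimal sketch of how such an update could work, assuming both .conf files are plain one-repository-per-line lists (the file format and the procedure are assumptions, not the actual phab:T104845 implementation):

 # Hypothetical sketch: regenerate the repository list, dropping blacklisted entries.
 ssh -l <user> -p 29418 gerrit.wikimedia.org gerrit ls-projects > all_projects.txt
 grep -F -x -v -f gerrit_trackers_blacklist.conf all_projects.txt > gerrit_trackers.conf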

Issue tracking

Since the move from Bugzilla to Phabricator in 2014, only basic data is available in the Metrics dashboard. For potential future data, see phab:T28.

mediawiki.org

Mailing lists

IRC

  • The channels to be added to the dashboard still need to be better defined; see phab:T56230.

Contributors

How to update user data

Our goal is to provide a tool allowing users to edit their own data directly (phab:T60585). Meanwhile, users can request updates to their personal data by creating a Phabricator task that includes:

  • real name
  • username(s) and email address(es) used for your contributions
  • current and previous affiliations, with the dates of change of affiliation
  • current location (country)

At the moment we can only process single affiliations (phab:T95238). If you contribute under different affiliations (e.g. Wikimedia Foundation as part of your work, independent in your free time), then we recommend using different usernames and email addresses.

Managing identities

SortingHat is the tool used to manage identities. It helps in the following ways:

  • To centralize all information in one database.
  • To deal with multiple identities: a developer may have several identities depending on the data source they work with. The tool records, for each of a developer's identities, where that information came from.
  • To avoid direct database access: a command-line interface handles these tasks.
  • To manage extra developer attributes: it supports affiliations and other developer attributes such as nationality or bot activity.
  • To manage blacklists: typically used where bots are committing changes, or for overly generic names or email addresses such as "root".

Merging all of the identities into one database can be done in two ways: a detailed process or an incremental one. The detailed process uses extra scripts to parse the information; as it is time-consuming, it is typically used only when the identities database is first created. Later updates of the database typically follow the incremental process.

SortingHat also provides a way to export all of this data, which helps to find other developer identities and merge them through the command line.
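For example, exporting the identities could look like the line below; the export flags are an assumption, so check sortinghat export --help for the exact syntax:

 sortinghat export --identities identities.json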

The exported JSON files all follow the same structure:

"<uuid-hash>":{

 "enrollments":[
   {
     "end": "<final date of this enrollment>",
     "organization": "<Organization>",
     "start": "<initial date of this enrollment>",
     "uuid": "<uuid-hash>"
   },
   {
     ...
   }
 ],
 "identities": [
   {
     "email": "<email>",
     "id": "<hash of this identity>",
     "name": "<name>",
     "source": "<where this identity comes from: e.g.: Wikimedia:mls>",
     "username": "<username>",
     "uuid": "<uuid-hash where this identity belongs to>"
   },
   {
     ...
   }

}

As an example, if an identity needs to be merged with another one, the sortinghat merge command is used, and the original "<uuid-hash>.identities.id" is merged into the specified "<uuid-hash>".

The most useful SortingHat commands for dealing with identities are the following (see the example invocations after the list):

  • sortinghat merge: to merge unique identities
  • sortinghat affiliate: to affiliate an identity to some organization
  • sortinghat show: to show information about an identity
  • sortinghat profile: to show profile information of that unique identity
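Illustrative invocations, using placeholders in the document's <uuid-hash> style; the exact argument order is an assumption, so check sortinghat --help for the authoritative syntax:

 sortinghat show <uuid-hash>              # show information about a unique identity
 sortinghat merge <from-uuid> <to-uuid>   # merge the first unique identity into the second
 sortinghat affiliate                     # assign identities to their organizations
 sortinghat profile <uuid-hash>           # show the profile of a unique identity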

User pages are linked from the contributor names in the top tables of each data source section.

In addition to their identities, extra information is stored per contributor:

  • Whether the contributor is a bot
  • The contributor's country
  • A canonical uuid (hash) identifying the contributor
  • A canonical name and email address identifying the contributor

More information about SortingHat usage is available on its README page.

Bots

SortingHat also keeps track of which identities correspond to bots, using the "is_bot" field in the "profiles" table. If the field is set to 1, the identity is considered a bot. Currently, editing the database directly is the only way to tag an identity as a bot.
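For example, tagging an identity as a bot could look like the statement below; the "profiles" table and "is_bot" field come from the text above, while the rest of the schema, including the uuid column, is an assumption:

 -- Mark the identity with the given uuid as a bot (is_bot = 1).
 UPDATE profiles SET is_bot = 1 WHERE uuid = '<uuid-hash>';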

wikimedia.biterg.io

This section is a work in progress.

In 2016/2017, korma.wmflabs.org will be deprecated and replaced by a more powerful and flexible site based on Kibana dashboards and Elasticsearch.

The database provides indexes whose fields are used in panels, in widgets, and for searches.

Screenshot of a wikimedia.biterg.io dashboard

The top bar lists Dashboards (also called Panels); by default, the Overview is selected. Each dashboard offers numerous widgets, plus a result list at the bottom of the page (commits in Git, emails in mailing lists, etc.).

The interactive widgets display the actual data. In some panels you can click displayed items to get more specific information about them, and some panels also allow downloading and exporting the displayed data as CSV or JSON.

You can share URLs of dashboards with applied filters by selecting the Share icon to the right of the Advanced filter field.

Applying filters

In the right corner of the top bar, the Time filter allows adjusting the time span of all the data being displayed in the widgets.

Some widgets allow creating Filters: the mouse pointer turns into a plus symbol when hovering over a listed panel item, and clicking the item applies an additional filter for that item. When a filter is created in Kibana, it is displayed in green below the Advanced filter text field and applied to the view. In the screenshot above, 'Bots' and 'Empty commits' are excluded from the data displayed in the panels.

When hovering over a filter, you can enable/disable it, pin/unpin it (a pinned filter is still applied when you open the page again), invert it (e.g. to list all companies except one), remove it, or edit it (e.g. to change the organization name). The "Actions" menu to the right of the filters offers the same actions applied to all filters at once. For more information, see Discover Filters.

The Advanced filter text field allows searching for text in any items (commit messages, user names, repository names, etc.). It queries the subset of results provided by the time filter and any filters already applied. By default, free text in any database column is matched (*; entering this also resets the search). The query syntax is based on the Lucene query syntax; also see Kibana Queries and Filters for more information.

Using the advanced filter, you can also prefix searches with the names of database columns to match a phrase (like project:"foo,bar" OR author:bot). TODO: Currently there is no public list of database columns, and no auto-complete suggestions are offered (this could be worked around by creating a panel that lists the names of the available columns, via "Discover"). To perform advanced search queries, you need to know the names of the available indexes and their fields.

Some more notes on advanced filters:

  • The type of field (string, number, date, etc.) influences the query syntax
  • Queries are case sensitive
  • You can only create queries that use fields within the index used by a panel; otherwise the search will return "No results found".
  • Fields not available in an index default to -1 for numbers and na for strings

TODO: In the future, list some query examples for advanced filters here.
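Until then, a few illustrative queries following the Lucene syntax linked above (all field names are assumptions and may not match the actual indexes):

 author_name:"John Doe"                      (exact phrase match on a string field)
 project:mediawiki AND NOT author_bot:true   (boolean operators; queries are case sensitive)
 lines_changed:[100 TO *]                    (range query on a numeric field)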

Source code

Code is available at https://github.com/grimoirelab. Most code is written in Python. The existing repositories are:

  • panels: Numerous JSON files, containing all of the panels currently available in the current architecture.
  • perceval: Data retrieval platform which creates JSON files (see the example after this list). perceval/backends contains the available backends. Data is stored in Elasticsearch.
  • arthur: Commander tool to run perceval and set up the panels.
  • kibiter: A fork that contains changes until they get merged into the upstream Kibana code base.
  • GrimoireELK: An incubator for new ideas.
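As a hedged example of perceval in action, the invocation below fetches the commit history of a repository as JSON documents, one per commit (assuming perceval is installed and the repository URL is reachable; flags may differ between versions):

 perceval git https://gerrit.wikimedia.org/r/mediawiki/core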

Further links

If you would like to see specific customizations, please file a request in Wikimedia Phabricator including a user story.

Other data sources and tools

Git

Gerrit

Phabricator

  • "Phabricator monthly statistics" emails on the wikitech-l mailing list - see its archives.


mediawiki.org

Mailman

Team

Quim Gil and Andre Klapper from the Wikimedia Engineering Community team coordinate the Metrics Dashboard project, which is implemented by Bitergia as contractors.

The Bitergia team working on the MediaWiki dashboard consists of Daniel Izquierdo, Luis Cañas, and Jesus Gonzalez Barahona, with Alvaro del Castillo as project manager.

The ownership of this project might get transferred to the Wikimedia Analytics team at some point.

See also