Developer Satisfaction

Background

During FY2019 annual planning, Developer Satisfaction was identified as a target outcome for the Wikimedia Technology department's Developer Productivity program.

In our first attempt at measuring progress towards this target, the Release Engineering Team conducted a survey collecting feedback from the Wikimedia developer community about their overall satisfaction with various systems and tools. We followed up with a round of interviews focusing on the local development environment. This page summarizes that feedback as numbers and as the broad themes we were able to identify.

Since the privacy of respondents is important to us, we will not publish the raw responses. Instead, we roughly paraphrase the most common complaints, suggestions, and comments, along with some statistics and other observations.

What we asked (Survey)

Respondents were asked to rate their satisfaction with several broad areas of our developer tooling and infrastructure. Each area was rated on a scale from 1 (very dissatisfied) to 5 (very satisfied). This was followed by several free-form questions that solicited feedback in the respondent's own words. The satisfaction ratings have been summarized by taking the average of all responses in each category.
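
For illustration, here is a minimal sketch of that aggregation in Python. The raw responses are not published, so the ratings below are made-up placeholders; only the method (a plain mean of all responses per category) reflects what was done.

  from statistics import mean

  # Hypothetical 1-5 ratings per category; the real response data is not published.
  ratings_by_category = {
      "Code Review":       [3, 2, 4, 3],
      "Local Development": [4, 3, 4, 4],
  }

  # Average of all responses in each category, as reported in the bar graph.
  category_averages = {
      category: round(mean(ratings), 2)
      for category, ratings in ratings_by_category.items()
  }
  print(category_averages)  # {'Code Review': 3.0, 'Local Development': 3.75}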

Who responded (Survey)

In total, we received 58 responses. Of those, 47 came from Wikimedia staff and contractors, 10 from volunteers, and just 1 from a third-party contributor. In the future we should make an effort to reach more volunteers and third-party developers. For this first survey, keep in mind that the data are heavily biased towards the opinions of staff and contractors.

What we asked (Interview)

Respondents were asked some general questions about their local environment setup, and the interview then progressed to a conversation and show-and-tell about their local environment. Interviewers recorded notes.

Who responded (Interview)

In total, there were 19 interviews. Of those, 16 were with Wikimedia staff and contractors, 2 were with volunteers, and just 1 was with a third-party contributor.

There were 9 Mac users, 9 Linux users, 2 Windows users, and 1 other user (users may use multiple systems).

Aggregate Scores (Survey)

Bar graph summarizing the FY2019 Developer Satisfaction Survey results. Satisfaction ratings: average of all responses.

Analysis (Survey)

Most of the responses could be summarized as simply "satisfied", with most categories averaging near 4. Below we discuss notable exceptions to this generalization and other observations that can be gleaned from the available data. One thing stands out immediately when looking at the average scores: Code Review is the lowest score by quite a margin. At 3.0, it sits well below the next lowest score, Local Development at 3.7, and since it was not possible to respond with zero, the lowest possible score in any category is effectively 1. The picture is even worse if we look at just the responses we received from volunteers: that subgroup gave Code Review a very disappointing average rating of 2.75, while Wikimedia staff and contractors averaged 3.42.
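
The same averaging can be broken down by respondent group. The sketch below is illustrative only: the individual ratings are placeholders (chosen to land near the reported volunteer average), since the raw responses are not published.

  from collections import defaultdict
  from statistics import mean

  # (group, rating) pairs for a single category; placeholder data only.
  code_review_responses = [
      ("Volunteer", 3), ("Volunteer", 2), ("Volunteer", 3), ("Volunteer", 3),
      ("Staff/Contractor", 4), ("Staff/Contractor", 3), ("Staff/Contractor", 3),
  ]

  # Group the ratings by respondent type, then average each group.
  by_group = defaultdict(list)
  for group, rating in code_review_responses:
      by_group[group].append(rating)

  for group, ratings in sorted(by_group.items()):
      print(group, round(mean(ratings), 2))
  # Staff/Contractor 3.33
  # Volunteer 2.75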

Qualitative Analysis of Respondent Comments (Survey)

We attempted to divide the content of respondent comments into several categories to identify common themes and areas for improvement. For each theme, we created a diagram of the comments and used it to come up with "How can we..." questions that will help kick off future investigation and brainstorming for improvements.

Bubble graph of FY2019 staff and volunteer comment categories.

Categories

  • Automated Testing
  • Collaborating
  • Debugging
  • Deploying
  • Developing
  • Finding Things
  • Gaining Knowledge
    The Gaining Knowledge analysis produced the following questions:
    How can we...
    • help developers discover code maintainers/reviewers?
    • enable developers to configure alerts?
    • ensure the right person sees alerts?
    • make logs easily discoverable to developers?
    • help developers use logs for analysis?
    • help people discover and access documentation?
    • ensure documentation is always relevant?
    • ensure documentation is complete?
    • share information about the production environment?
    • present information from reports in a useful way?
    • develop and share the deployment process?
  • Getting Reviews
  • UI Testing
    UI Testing Comments Diagram

    The UI Testing analysis produced three questions:
    How can we...
    • do a better job of catching bugs before they hit production?
    • enable quick and easy production-like testing environments for developers?
    • share information about the staging environment?


Qualitative analysis of interview notes

Responses, in the form of interview notes, were coded into problems, which were then grouped into categories in an attempt to reveal common issues. Color-coded mind maps were created to aid analysis.

Analysis produced the following "How can we..." questions:

  • make documentation easier to read?
  • make documentation easier to access?
  • define clear standards for users?
  • make the local development environment easier to set up?
  • make the local development environment easy to configure and use?
  • make environments easy to share?
  • help non-developers have an environment?
  • decrease resource usage?
  • speed up setup, load, and run times?
  • avoid discouraging contributors?
  • achieve more parity with production?