Team Practices Group/Health check survey

The Team Practices Group (TPG) was dissolved in 2017.

Status

As of 2016, Team Health Checks are not being conducted regularly.

Survey goal

The team health check survey is intended to measure qualitative aspects of working on a WMF engineering team. The voluntary survey can be used to inform targeted areas of improvement for individual teams and to measure improvements in team health over time. The survey should be completed collaboratively by an entire team rather than by individual team members.

What is Team Health?

Team health is a qualitative measure based on the team's assessment of 11 focus areas (‘Quality’ and ‘Fun’ are two of the areas, for example) that were chosen as health indicators for WMF teams.

Survey focus areas

These 'focus areas' are qualities/aspects of WMF engineering teams that, when taken collectively, can serve as a barometer of a team's overall health. This list should be kept to 10-12 high-level focus areas universally applicable across WMF engineering teams.

Each focus area below is described with an 'awesome' example, a 'crappy' example, and its related influencers/indicators.

Delivery
  Awesome example: Releasing the end product is simple, safe, painless, and mostly automated where appropriate.
  Crappy example: Releasing the end product is risky and painful, involves lots of manual work, and/or takes forever.
  Influencers/indicators: Rhythm, Process

Communication
  Awesome example: Our internal and external communications are clear, proactively include all affected stakeholders, and use channels appropriate for the message.
  Crappy example: Our internal and external communications are sparse, do not reach all stakeholders, and lead to confusion, frustration, and surprises.
  Influencers/indicators: Frequency, Transparency, Cross-team collaboration, Process

Quality
  Awesome example: We deliver high quality stuff! We are proud of what we create.
  Crappy example: Our output is a complete mess. For code, technical debt is increasing and never gets paid down.
  Influencers/indicators: Rework, Defects, Embarrassment, Clarity of requirements, Elegance

Value
  Awesome example: We deliver the right solution for our stakeholders and make them really happy.
  Crappy example: We deliver unwanted outputs. Our stakeholders are not amused.
  Influencers/indicators: UX, Data-driven, User research, Users/customers focus, Adaptability

Pace
  Awesome example: We get stuff done really quickly without feeling rushed. We operate at a sane and sustainable pace.
  Crappy example: We just can't get a steady flow. Either we are stuck/interrupted, or we are pushing too hard/too fast.
  Influencers/indicators: Cross-team dependencies, Interruptions, Having to work overtime, Rhythm

Mission and goals
  Awesome example: We understand the mission and goals of the organization as well as those of our team, and our work is clearly in service of them. We know exactly why we are here, and we are really excited about it.
  Crappy example: We have no idea why we are here; there is no high-level picture or focus. Our so-called mission is completely unclear and uninspiring. We have no clear goals, or we don't understand how our existing goals relate to the goals of the organization.
  Influencers/indicators: Data-driven, Goals, Retrospectives

Fun
  Awesome example: We love going to work and have great fun working together.
  Crappy example: We are bored, angry, and feel isolated from one another. Going to work feels like a chore.
  Influencers/indicators: Trust

Learning
  Awesome example: We are learning lots of interesting stuff all the time. We have time and space to learn things on our own. We take time to learn about ourselves as a team, our process, products, users, stakeholders, etc.
  Crappy example: We never have time to learn anything, either individually or as a team.
  Influencers/indicators: Data-driven, Innovation, Improvement, Reflection

Support
  Awesome example: We always get great support and help when we ask for it from other teams, managers, directors, c-levels, etc.
  Crappy example: We keep getting stuck because we can't get the support and help that we ask for.
  Influencers/indicators: Cross-team collaboration

Destiny
  Awesome example: We are in control of our own destiny! We decide how to organize ourselves and how to get our work done.
  Crappy example: We are just pawns in a game of chess, with no influence over what we create or how we create it.
  Influencers/indicators: Trust

Community involvement
  Awesome example: We collaborate with and receive high-quality and relevant contributions from our community. We're active participants in the creation of work on which we rely.
  Crappy example: We receive low-quality and irrelevant contributions from volunteers. We have bad relationships with other projects and our community.
  Influencers/indicators: Open to contributions, Open source citizenry

Previous surveys

Influencers/Indicators

These are important factors that influence and/or provide an indication of awesomeness/crappiness for various higher-level focus areas. Many if not all of these came up while brainstorming the high-level focus areas for the survey, and are included here to help frame thinking about certain focus areas.

Each influencer/indicator below is described with an 'awesome' example and a 'crappy' example.

Transparency
  Awesome example: We know where to find out what's going on at any given time, and we know where to go to get or share information.
  Crappy example: We have no idea what's going on from day to day and feel constantly out of the loop.

Cross-team collaboration
  Awesome example: We understand what other teams are working on, and it's easy to collaborate with them when we need to.
  Crappy example: We have no idea what other teams are working on. When we need something from another team, we don't know when (or if) they'll be able to deliver what we need.

Process
  Awesome example: Our way of working is perfect for us and keeps our collaborators informed.
  Crappy example: Our way of working sucks and leads to miscommunications.

Rework
  Awesome example: We get everything right the first time.
  Crappy example: We find that our efforts have been wasted and then have to re-do what we already spent time and resources on.

Clarity of requirements
  Awesome example: We have a shared understanding of stakeholder requirements. We know what they are asking for and why. We are clear on how to give them what they want.
  Crappy example: Stakeholders think that what we deliver is out of sync with what they asked for.

Elegance
  Awesome example: What we deliver is clean, easy to understand, and uncluttered.
  Crappy example: What we deliver is complicated and incomprehensible.

UX
  Awesome example: The features we build are beautiful and intuitive to use. They invite new participation and engagement from our users.
  Crappy example: The features we build are ugly and nobody knows how to use them. New users are running away screaming and crying.

Data-driven
  Awesome example: We make smart decisions in how we operate and what we build based on verifiable data.
  Crappy example: We make arbitrary decisions in how we operate and what we build based on hunches, gut feelings, and snake oil.

User research
  Awesome example: We listen to our users and deliver that which delights them.
  Crappy example: We don't listen to our users and deliver what we want regardless of what they want.

Users/customers focus
  Awesome example: We have a solid grasp on who our users/customers are. We understand what their needs are and what motivates them. We build features that satisfy them.
  Crappy example: We don't know who our users/customers are, nor do we know what they need or what motivates them. We have no idea if what we build is satisfying anyone.

Adaptability
  Awesome example: If requirements change, we can shift focus easily without much disruption.
  Crappy example: If requirements change, we need to scrap everything we've done and start all over.

Cross-team dependencies
  Awesome example: We let other teams know what we need from them early, and they finish their part before we need it.
  Crappy example: Other teams are unaware of what we need from them. They don't deliver what we need at a time when it is most valuable to us.

Rhythm
  Awesome example: Our development process has a regular cadence. We know when a development cycle begins and ends, and what will be happening next at any point in the cycle.
  Crappy example: We do things on an ad hoc basis and are always scrambling to deliver what we promised.

Goals
  Awesome example: We intimately understand what we are trying to achieve at the quarterly and annual levels. We calibrate everything we work on against these goals.
  Crappy example: We have no unified understanding of what we are trying to achieve at the quarterly and/or annual levels. Or if we do, we do not work on things that are related to achieving those goals.

Roles
  Awesome example: Responsibilities such as making product decisions and delivering products are crystal clear.
  Crappy example: Who are we and what are we doing?

Trust
  Awesome example: We trust that members of our team are doing their best and will ask for help or provide help when there is a problem.
  Crappy example: We don't feel that everyone is pulling their weight and are suspicious of each others' motives. Problems are not addressed directly or respectfully.

Innovation
  Awesome example: We are doing unique or cutting-edge work!
  Crappy example: We're not doing anything noteworthy, just maintaining the status quo.

Improvement
  Awesome example: We target areas that we want to improve and actively work on them.
  Crappy example: We complain a lot but never do anything about it.

Retrospectives
  Awesome example: We regularly take time to reflect as a team on how well we are functioning as a team. We celebrate victories, and find ways to correct what is not working for us.
  Crappy example: We never take time to look back on what we have done or how we could improve how we work together. We never learn from mistakes.

Open to contributions
  Awesome example: Our plans are public and we seek feedback actively. Our backlogs include tasks for volunteers. External code contributions are reviewed damn fast.
  Crappy example: In practice, only WMF colleagues follow and influence our plans. Newcomers have a hard time finding appropriate tasks to work on. We never seem to have time to review external contributions.

Open source citizenry
  Awesome example: We collaborate with and receive high-quality and relevant contributions from our open source software community. We're active participants in the development of software on which we rely.
  Crappy example: We receive low-quality and irrelevant contributions from volunteers. We have bad relationships with other open source projects and our community.

Timeline to launch

  • Post draft documents to mediawiki.org by 11 August 2014 (Done)
  • Finalize documents for launch by 31 August 2014 (Done)
  • Begin collecting data as early as 1 September 2014 to set baselines at the end of Q1 FY2014-2015, which will help inform team improvement efforts through Q2 (Done)
  • Inspect and adapt the process quarterly thereafter (Done)

Implementation details

Surveys were facilitated on a quarterly basis until FY1516 Q1, when the Team Practices Group put them on hold. As of FY1617 Q2, the surveys are available by request. The Team Practices Group can facilitate them if available, or it can coach teams in how to facilitate the surveys themselves.

Artifact: Invitation Template

Each area of the survey will be evaluated by teams on a 5-point scale of 'crappy', 'crappy/mediocre', 'mediocre', 'awesome/mediocre', or 'awesome'. If there are areas on the survey a team doesn't care about, they can be skipped. Teams can add areas that they feel are missing from the survey. The survey will evolve with the collective needs/perspectives of the teams through quarterly evaluation by Team Practices Group members.
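
To make quarter-over-quarter comparison concrete, a team could map the 5-point labels to numbers and diff the results between surveys. The following is a minimal, hypothetical sketch in Python; the numeric mapping, the example focus areas, and the summarize helper are assumptions for illustration only, not part of any TPG tooling.

# Hypothetical sketch: summarizing one team's consensus ratings per quarter.
# The 1-5 mapping of the scale and all names below are illustrative assumptions.
SCALE = {
    "crappy": 1,
    "crappy/mediocre": 2,
    "mediocre": 3,
    "awesome/mediocre": 4,
    "awesome": 5,
}

def summarize(ratings: dict) -> dict:
    """Convert consensus ratings (focus area -> label) to numeric scores.

    Focus areas a team chose to skip are simply absent from the input.
    """
    return {area: SCALE[label] for area, label in ratings.items()}

# Example: two quarters of (fictional) consensus ratings for one team.
q1 = summarize({"Delivery": "awesome", "Quality": "mediocre", "Fun": "awesome/mediocre"})
q2 = summarize({"Delivery": "awesome", "Quality": "awesome/mediocre", "Fun": "awesome"})

# Quarter-over-quarter change per focus area, to spot improvement or decline.
delta = {area: q2[area] - q1[area] for area in q1}
print(delta)  # {'Delivery': 0, 'Quality': 1, 'Fun': 1}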

One such adaptation is the option, available as of FY1617 Q2, to conduct the survey with blind votes in the focus areas. The facilitator reveals the distribution of how the team rated its performance in a given focus area. The team then has the opportunity to discuss why members voted the way they did. The facilitator then discards the first vote and asks the team to vote blindly again, to see whether the discussion changed the distribution. The rationale for this option is to give insight into the distribution of opinions across a team and to create a safe space for dissenting opinions by mitigating the effects of anchoring and groupthink.
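
As a purely illustrative sketch of that two-round comparison (the vote data and variable names below are invented, not TPG tooling), the before/after distributions for a single focus area could be tallied like this:

from collections import Counter

# Hypothetical blind votes for one focus area, before and after discussion.
round_1 = ["mediocre", "awesome/mediocre", "mediocre", "crappy/mediocre", "mediocre"]
round_2 = ["awesome/mediocre", "awesome/mediocre", "mediocre", "mediocre", "awesome/mediocre"]

before = Counter(round_1)
after = Counter(round_2)

# A non-zero shift for any label suggests the discussion moved some opinions.
shift = {label: after[label] - before[label] for label in sorted(set(round_1) | set(round_2))}
print(shift)  # {'awesome/mediocre': 2, 'crappy/mediocre': -1, 'mediocre': -1}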

See also: Guidance for facilitators

Open questions

See talk page

References

A major influence in developing this survey was Spotify's similar program: https://labs.spotify.com/2014/09/16/squad-health-check-model/