Talk:Team Practices Group/Health check survey

Open questions

 * Where should the data be stored? In a simple spreadsheet?
 * Talk to analytics about this [Arthur]
 * How will we generate and publish visualizations of the data?
 * What do we need to do to automate this process?
 * Are there impediments to or strong reasons against making the data publicly visible?
 * Do we do this with every engineering team at the WMF from the beginning?
 * If so, should the TPG have responsibility/accountability for teams we don't work closely with (e.g. through scrum mastering or via structured workshops)?
 * Should this intersect at all with annual performance reviews and if so how?
 * Should we translate the crappy → awesome scale into numerical values for quantitative measurement (e.g. crappy = 1, awesome = 3), or is it better to use larger numbers, a logarithmic scale, etc.?
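On the question of translating the scale into numbers, here is a minimal sketch of what a numeric mapping and a per-area average could look like. The labels, weights, and focus-area names below are illustrative assumptions, not a decided scheme:

```python
# Hypothetical mapping of scale labels to numbers for quantitative trend
# tracking; the 1/2/3 weights are one option from the open question above.
SCALE = {"crappy": 1, "meh": 2, "awesome": 3}

def average_score(responses):
    """Average the numeric values of a list of label responses."""
    values = [SCALE[r] for r in responses]
    return sum(values) / len(values)

# Assumed example data: responses grouped by focus area.
team_responses = {
    "Teamwork": ["awesome", "meh", "awesome"],
    "Mission": ["crappy", "meh", "meh"],
}
for area, responses in team_responses.items():
    print(area, round(average_score(responses), 2))
```

Whatever weights are chosen, storing the raw labels and deriving numbers later keeps the option open to change the mapping (e.g. to a logarithmic one) without re-surveying.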

A few thoughts and suggestions
Firstly, I think this is a great start. I like that you're trying to cover the range of topics from very process-oriented to the human side.

Here are a few suggestions based on experience conducting similar kinds of surveys:


 * the 3-point scale is appealing in terms of its simplicity, but I suggest a 5-point scale. The additional nuance is useful because one of the objectives of this kind of survey is to spot trends early and then to scrutinize and intervene, if necessary. Having midpoints between "awesome" and "meh" and between "meh" and "crappy" helps spot those trends early.
 * +1 Manybubbles (talk) 17:54, 12 August 2014 (UTC)


 * I've previously always framed the questions as "On a scale of 1-5 (5 being highest), how strongly do you agree with the following statements". It's good to have a mix of statements formulated in terms of strong agreement being good and strong agreement being bad (to avoid various cognitive biases).
 * +1 on the "how strongly do you agree" language. -1 on switching whether agreement is good or bad.  That tends to frustrate me while taking the survey. Manybubbles (talk) 17:54, 12 August 2014 (UTC)


 * I suggest doing this monthly rather than just quarterly - again, it's about spotting trends before it's too late.


 * I suggest some additional questions on the human side. e.g. (again, statements rated in terms of the degree of agreement):
 * "I feel challenged in my work currently"
 * "I am experiencing discord with one or more of my team members"
 * "My coworkers are acting in a way consistent with Wikimedia values and culture"
 * "Overall, morale on my team is good"
 * "I am frustrated currently"
 * "We have enough people to do what is expected"
 * "I feel proud to work on this project"
 * Why not stick these questions in the table? Manybubbles (talk) 17:54, 12 August 2014 (UTC)


 * Some of these questions are funky to ask about the team as a whole. The open source citizenry one, for example, would get odd answers.  We're active and welcomed with some upstreams and unwelcome with others.  Might make sense to ask the question in terms of min and max for all the projects we deal with.  Like "For the open source project for which we're the best citizens we're actually good citizens." and "For the open source project for which we're the worst citizens we're still good citizens." Manybubbles (talk) 17:54, 12 August 2014 (UTC)

Release Planning (as a focus area)
I think this should be a focus area for teams too. It takes some maturity to have a backlog pointed (estimated) and a subset of stories prioritized and slotted into future sprints in order to declare a release date for a major update to a product. Teams could measure their effectiveness at delivering major releases by measuring [actual/expected number of sprints], [actual/expected number of points] and/or [actual/expected number of features]. KLeduc (WMF) (talk) 17:02, 12 August 2014 (UTC)
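A minimal sketch of the actual/expected ratios suggested above; the metric names and sample numbers are illustrative assumptions:

```python
# Hypothetical release-effectiveness ratios. For sprints or points consumed,
# a ratio > 1.0 means the release overran the plan; for features delivered,
# a ratio < 1.0 means the release under-delivered.
def effectiveness(actual, expected):
    """Ratio of actual to expected for one release metric."""
    return actual / expected

# Assumed example data: (actual, expected) pairs for one major release.
release = {"sprints": (7, 6), "points": (130, 120), "features": (9, 10)}
for metric, (actual, expected) in release.items():
    print(metric, round(effectiveness(actual, expected), 2))
```

Tracking these ratios across several releases (rather than judging a single one) would fit the trend-spotting goal discussed above.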

Link to Spotify survey
Can some wonderful helpful person link to the Spotify survey? Manybubbles (talk) 18:05, 12 August 2014 (UTC)