Team Practices Group/Best Practices Handbook

=Purpose=
This page documents TPG's recommendations for how to execute various common activities within the WMF environment. They are based on feedback from WMF staff and stakeholders, on best practices in software development, and on TPG members' experience. They are intended to serve as a starting point for Wikimedians.

=Daily Standup=
An effective daily standup is focused on information sharing, not problem-solving. It can therefore be most effective to let each person in the standup speak before anyone asks questions or offers comments. Alternatively, teams may allow clarifying questions; in this case, teams (and ScrumMasters) should pause any ongoing conversation and ask team members to continue with the standup, holding detailed discussion until afterward. After everyone has spoken, the standup is over and the team may decide who should remain for further discussion.

==With extreme geographical split==
If there is no convenient time for all team members to participate in one standup, options include: [WORK IN PROGRESS]
* Hold two standups at different times, and either:
** have one person attend both and carry information from the first to the second, or
** have the first standup send a brief report to the second.
* Rotate the time of the standup so that everybody takes a turn meeting at a locally inconvenient time.

=Scrum Charts=
Several charts provide essential visibility into the performance of Scrum teams. Charts provide a relatively objective, repeatable, quantifiable understanding of what the team is producing and how. They can also help isolate problem areas in the process. However, charts are strongly limited in their explanatory power, and should be used only as one tool among several for understanding and changing process. At a minimum, Scrum teams should maintain current Sprint Burnup, Product Burnup, and Velocity charts. Cycle Analysis and Bug Trend charts are also recommended.

==Use and Limitations of Charts and Data==

* Activities in software development are inherently unpredictable and hard to measure; uncertainty of 50% or more is typical. Take this into account when interpreting charts by showing error bars and uncertainty ranges, and by remembering that reality may diverge considerably from the data.
* Measuring something creates an incentive to change it, and that incentive may lead people to change the measurement itself, in ways that don't benefit the thing being measured. This can be intentional, but it can also happen through bias and wishful thinking. Effective use of numerical measurements in judging people's work requires mutual trust that the measurements will not be tampered with, misused, or abused. For example, comparing the raw velocity of two different Scrum teams may incentivize team members to inflate their estimates.
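To illustrate the caution above about uncertainty, here is a minimal sketch (all numbers invented) of how a velocity-based forecast might report a range instead of a single figure:

```python
import math

# Hypothetical data: story points finished in each recent sprint,
# and the points remaining in the product backlog.
completed_per_sprint = [13, 8, 21, 11, 16]
remaining_points = 120

# Average velocity over the recorded sprints.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# Per the caution above, uncertainty of 50% or more is typical,
# so report a band rather than a point estimate.
uncertainty = 0.5
low, high = velocity * (1 - uncertainty), velocity * (1 + uncertainty)

best_case = math.ceil(remaining_points / high)   # sprints if the team runs fast
worst_case = math.ceil(remaining_points / low)   # sprints if the team runs slow

print(f"velocity ~ {velocity:.1f} points/sprint (range {low:.1f}-{high:.1f})")
print(f"forecast: {best_case}-{worst_case} sprints to burn up {remaining_points} points")
```

With these made-up numbers, the "forecast" spans a factor of three, which is exactly why a single-number burnup projection can mislead.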

=Software Work Breakdown in Scrum=

==Good stories==
(Need to define/clarify "story" here, and/or link to TPG glossary. Does TPG want to declare "stories" a best practice?)

A good story in Scrum is small and meets the INVEST criteria (Independent, Negotiable, Valuable, Estimable, Small, Testable).

(Can we agree on best practice for ideal story size? One expert suggested 10-20% of your velocity.)

Use the Story Splitting Flowchart to break large stories into a set of smaller stories.

==The basic Catch-22: how long will it take vs. what do you want?==
In project estimation there is a basic contradiction: the customer wants to know how long the work will take before committing to a scope of work, while the engineers need a defined scope of work to produce a forecast. One way to break through this is to do a quick, bulk estimation of the broadest scope of work, and then cut from there.
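The "estimate the broadest scope, then cut" move can be sketched with some simple arithmetic (all estimates, velocities, and area names below are invented):

```python
# Bulk estimates from a quick pass over the broadest scope of work,
# in story points per rough functional area (hypothetical values).
bulk_estimates = {"Accounts": 40, "Search": 100, "Reporting": 60, "Admin": 20}

velocity = 15        # assumed points per sprint
target_sprints = 10  # the timeline the customer wants

total_points = sum(bulk_estimates.values())
capacity = velocity * target_sprints
overage = max(0, total_points - capacity)

print(f"{total_points} points estimated, {capacity} fit the timeline: "
      f"cut or defer {overage} points")
```

The point is not precision (the bulk estimates carry large uncertainty) but that the conversation with the customer can now be about *which* points to cut, rather than a bare "how long will it take?"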

==Working with big backlogs==
The fundamental problem with using stories for work breakdown is that stories must be small enough to implement, but for any significant software project, stories that small will number in the hundreds or thousands, which makes activities like prioritizing, or even just reading the backlog, very time-consuming. There are two main approaches: rolling up into Epics, and grouping by Features. It is also possible to use a hybrid approach, where a project has some Epics but also has some areas where large numbers of stories have been broken down and grouped by Feature. (N.b. the usage of "Epic" here closely conforms to common Scrum practice; the usage of "Feature" may be more idiosyncratic, but it is not clear there is any consensus on a useful definition of Feature in Scrum.)

(What about grouping by priority/timeline? I was on a project that had thousands of small stories, split into about 5 buckets ranging from "this version" and "next version" through "probably never".)

==Identifying Sagas and Epics and bulk-estimating them==
In the first approach, the team works with some extra-big Stories (Epics) that are later broken down into more useful Stories via progressive elaboration, slowly enough that finished Stories disappear at a similar rate, leaving the total number of open Stories and Epics manageable (e.g., under 50 or under 100). An Epic could simply be any Story estimated at 40 or more (or 100 or more) points, or Epics could be a distinct type of entity. Sagas can be used to provide a third level of scaling. This approach is simpler and faster.

(This is described as a paper process; it can also be done in a spreadsheet. Ideas on how to implement it in Phabricator are pending.) The exercise below will produce a list of Sagas that should comprise all the work necessary to produce the key results. A rate of 20 to 40 cards per hour for one lead developer doing estimation, or more for a group of developers, seems to be a reasonable balance between speed and quality.
# Identify the key results desired for the next major milestone.
# Prepare the Breakdown board:
#* Make a column for each functional area/team role for the software. This will vary depending on the focus of the team and the software. Each column should contain work that is relatively independent of other columns, either because different people do it or because it has few dependencies on other columns. Examples: Graphic Design, CSS/HTML, server-side functions, browser functions, Database, Performance, Automated Testing, Packaging, Documentation. The purpose of columns in this exercise is to spread the cards out on the wall so they aren't in one big lump; the columns may not be needed once all Sagas are assigned owners.
#* Make rows based on the level-of-effort buckets the team uses: typically either T-shirt sizes (S/M/L/XL) or Planning Poker values (1, 2, 3, 5, 8, etc., up to 100).
# For each key result, identify the work necessary in each column to reach the result, and write each piece of work on a separate card. This may happen in a group during a meeting, or the Product Owner (possibly with one or more developers) may prepare the cards in advance. Balance these guidelines:
#* Each distinct piece of work that can be completed independently of other work should get a card. Consider stubbing and other methods of limiting dependencies during development.
#* Work that must be done by different people or teams should generally get different cards.
#* Have as few cards as possible.
#* Each card should have enough information that someone on the same team who didn't write it can pick it up and understand it, possibly with a few clarifying questions.
#* To be useful, the total number of cards should be around 50 or fewer.
# From the cards in the most common column, identify one that seems to be "medium" effort, and put it on the wall in the appropriate column and row.
# Put all of the other cards up in level-of-effort rows relative to the first card. Some techniques for doing this:
#* The Lead Dev puts the cards up one at a time, asking questions of the Product Owner. This method is faster, but requires that the Lead Dev be able to produce estimates consistent with what the whole team would estimate. It also involves the least discussion, which may be good or bad.
#* Work in shifts:
#*# Team members claim different sections of the wall (different columns) in groups of two or three.
#*# Each small group puts up all cards for its columns.
#*# Small groups switch to different areas, review all cards on the wall, and note which ones they think are misplaced.
#*# The small groups gather back into the larger group to discuss disagreements and reach consensus.
#*# Repeat until all small groups have reviewed all Sagas.
# If a card is unclear, the Product Owner should make a new card or cards that address the questions.
# If a card is judged too big, the Product Owner should split it into two or more cards.
# Anyone can create a new card to capture work that is not included in any of the other cards.
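The row-placement step above amounts to snapping a rough relative-effort guess to the nearest level-of-effort row. A minimal sketch, assuming Planning Poker rows (the row values extend the "1, 2, 3, 5, 8, etc., to 100" sequence in the usual way; the card names and raw efforts are invented):

```python
# Planning Poker rows of the breakdown board (assumed sequence).
POKER_ROWS = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_row(raw_effort):
    """Place a card in the row whose value is closest to the rough guess."""
    return min(POKER_ROWS, key=lambda row: abs(row - raw_effort))

# Hypothetical cards with rough relative-effort guesses.
cards = {"DB schema": 4.2, "CSS cleanup": 1.4, "Search backend": 28.0}

board = {name: nearest_row(effort) for name, effort in cards.items()}
print(board)  # each card is assigned to the closest row value
```

In practice the "raw guess" lives in people's heads rather than in a number, but the snapping behavior is the same: cards land in discrete rows relative to the anchor "medium" card.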

==Identifying Stories grouped by Features and bulk-estimating them==
In the second approach, the team breaks out as many Stories as possible, each as small as possible; hundreds of Stories is typical. The team also identifies a list of fairly high-level Features, perhaps a dozen or two, and assigns every Story to a Feature. Planning and forecasting are then done by grouping and filtering by Feature. This approach is more accurate, but more time-consuming, both initially and throughout the life of the project.
# Identify the key results desired for the next major milestone.
# Prepare the Breakdown board:
#* Make a column for each major Feature of the software. This will vary depending on the focus of the team and the software. Each column should contain work that is relatively independent of other columns, either because different people do it or because it has few dependencies on other columns. Examples will be particular to each program; for a real estate website, they might be "Rental Listings", "Maps", "Geocoding Input", "Accounts", "Payment", "Shopping Cart", "Matchmaking". Horizontal functions such as documentation and security may be mixed in or may have their own columns (e.g., "Help", "Loading Time", "Security"). The columns identify the Feature for each Story, which becomes a permanent property of the Story.
#* Make rows based on the level-of-effort buckets the team uses; generally use Planning Poker values (1, 2, 3, 5, 8, etc., up to 100) rather than T-shirt sizes.
# For each key result, identify the work necessary in each column to reach the result, and write each piece of work on a separate card. This may happen in a group during a meeting, or the Product Owner (possibly with one or more developers) may prepare the cards in advance. Balance these guidelines:
#* Each distinct piece of work that can be completed independently of other work should get a card. Consider stubbing and other methods of limiting dependencies during development.
#* Each card should be a piece of work small enough to meet the team's Scrum standards (for example, "doable by 2 people in 2 weeks or less").
#* Work that must be done by different people or teams should generally get different cards.
#* Each card should have enough information that someone on the same team who didn't write it can pick it up and understand it, possibly with a few clarifying questions.
#* If a card belongs to two Features, try to break it down further; however, some cards may always belong to two Features.
#* This approach may produce hundreds of cards or more.
# From the cards in the most common column, identify one that seems to be "medium" effort, and put it on the wall in the appropriate column (Feature) and row (level of effort).
# Put all of the other cards up in level-of-effort rows relative to the first card. Some techniques for doing this:
#* The Lead Dev puts the cards up one at a time, asking questions of the Product Owner. This method is faster, but requires that the Lead Dev be able to produce estimates consistent with what the whole team would estimate. It also involves the least discussion, which may be good or bad.
#* Work in shifts:
#*# Team members claim different sections of the wall (different columns) in groups of two or three.
#*# Each small group puts up all cards for its columns.
#*# Small groups switch to different areas, review all cards on the wall, and note which ones they think are misplaced.
#*# The small groups gather back into the larger group to discuss disagreements and reach consensus.
#*# Repeat until all small groups have reviewed all Stories.
# If a card is unclear, the Product Owner should make a new card or cards that address the questions.
# If a card is judged too big, the Product Owner should split it into two or more cards.
# Anyone can create a new card to capture work that is not included in any of the other cards.