Team Practices Group/Strategy process FY2016/TPG Strategy - What We Learned

Good - learned about ourselves:

 * Existing customers really appreciate us
 * We can challenge ourselves to explore ways to help teams currently not served by TPG
 * We explored a variety of alternative ways of working
 * Bumblebeeing and swarming became part of our vocabulary and way of collaborating
 * The TPGer who already works outside of product and audience verticals had an opportunity to emphasize that we already operate outside of product, and to raise awareness of the unique pain points of horizontal integration between verticals

Could have gone better - learned about ourselves:

 * Writing hypotheses is central to the process and is not a skill that all of us had when we started.
 * Some team members felt that the rules of the process were not modified appropriately to fit our context.
 * Could have explored more frameworks before settling on P2W
 * We tend to be slow coming to consensus and have trouble making final decisions
 * We discovered late that our strategic problem was being interpreted differently by different members - we learned that shared understanding of meaning is important.
 * Some team members preferred strict fidelity to the process while others wanted a more adaptive approach, which created tension.
 * We didn’t all agree on how the guerrilla tests (asking stakeholders questions) should be run. Some felt we should ask a specific question without context, to try to get an honest answer to that exact question. Others wanted to give the full context and have a more open-ended discussion. The former fit the P2W binary test model better, but the latter is generally more compatible with TPG’s style.

What we learned about Playing-to-Win

 * We spread it out over months, when the initial part should (according to the P2W guidelines) be done in a week. It can be challenging to implement this process in a timely fashion amid busy schedules, and the out-of-the-box timeline guidance in the P2W facilitator’s guide may not be accurate!
 * P2W asks you to run tests with binary outcomes, which seemed difficult. When asking people if they would accept X, it felt arbitrary to require any specific percentage for the test to pass. Should the test pass if 50% accepted X, or would 90% be more appropriate? In some cases we didn’t have a binary criterion, so we had to subjectively decide whether or not a test had passed.
 * The P2W process itself is inherently waterfall-like. The team gets together, identifies its problem, comes up with possible solutions, and creates detailed tests for each possibility. All of that is done in a room, without any external feedback. Once the tests are underway, the process doesn’t seem to allow tests to be changed, or more importantly, possibilities to be added, removed, or overhauled based on what the early tests revealed.
 * Ability to write tests and hypotheses is important to the process.
 * The terminology was not very compatible with Foundation work and values
 * See Maggie’s re-implementation for the WMF strategy process for an alternative
 * In some cases, choosing a meaningful standard of proof was difficult or impossible, and led to arbitrary selections for some tests
 * The initial format of the slides was needlessly complex; it was better after we streamlined the templates
 * It led to good discussions within the team about what we do and why
 * It led to good discussions with stakeholders about what we do and why
 * The process may be better suited to industries where testing would offer definitive quantitative results (vs. qualitative ones)
 * Felt like a heavier process than what was used for the WMF 2016 strategy process
 * We did not actually pilot anything, and instead relied on “guerrilla tests”
 * Example: asking “Do you want chocolate cake for dessert? Would you be willing to try chocolate cake made with mashed potatoes rather than flour?” (NO!) vs. actually serving potato chocolate cake to elicit a response
 * Having some structure/framework was better than none
 * Coming up with the list of strategic possibilities was unsatisfying for some team members. We might not have thought of some good ones.
 * It wasn't clear where in the process we should adjust possibilities based on what we were learning: as we tested or after we completed the process (we chose the latter).
 * It wasn’t clear when in the process we could reject a possibility simply because nobody on the team would enjoy actually working within it
 * The process relies on having a clear and unambiguous problem statement, which was challenging for us
 * Halfway through, we substantially changed our problem statement, and had to adjust all the possibilities to align with the new version
 * At times it felt risky to undertake the process in an unstable organizational environment. The ground was regularly shifting beneath us, and with so much change and uncertainty we couldn’t have high confidence that our outcomes would still be relevant by the end; any actions or changes would be taking place on an evolving playing field.