Team Practices Group/Light engagement survey pilot 2 results

Summary
In September 2016, the Team Practices Group ran a second pilot of a new Light engagement survey. This document presents what we learned from the pilot.

Round 2 of the pilot was very successful. The changes based on the first pilot worked well and need only minimal additional revisions. We experimented with a non-batch approach and have made it a standard part of the process. We encountered challenges with Google Forms, and will work around those with documentation. We discovered that a separate form for “ongoing” engagements would be helpful, so we created and tested a variant of the survey.

Changes compared to the first pilot round

 * Switched from sending surveys out in a batch to sending them individually (see below).
 * The first pilot included specific rating questions for each of the possible skills used. With the second pilot, we just had a checkbox for each possible skill, indicating whether it had been used.
 * Minor wording tweaks.

Non-batch approach

 * Rather than distribute these surveys in batches (as we do with our embedded CSAT), Grace strongly advocated that we send individual surveys at times that make sense for each engagement. For engagements that have an endpoint, that would ideally be right after the engagement ends.
 * One pilot survey was sent out after an event, but before some follow-up meetings. With hindsight, it would have been better to have waited until after the follow-up meetings.
 * A consequence of eliminating batches is that we need a separate form for each engagement. This simplifies the form by avoiding the “Which engagement” dropdown, but makes the back-end slightly more complicated. We should revisit this after several months.
 * For each engagement, the involved TPGer(s) will be responsible for generating an instance of the survey, sending out the email with the link, and analyzing the results. The process will be entirely decentralized.

Survey design

 * The questions in the second pilot were modified and streamlined substantially, based on what we learned in the first pilot round.
 * We considered adding a question about how the respondent viewed the cost/benefit of this engagement. We ended up deciding that the participant wouldn’t have enough knowledge to judge the cost, and we don’t yet have a way to quantify the value.
 * One-off and short-term engagements vs. ongoing/long-term:
   * The initial form was oriented toward engagements with an endpoint.
   * When we were about to send a survey to customers in an ongoing engagement, we (Arthur) ended up forking the form (in consultation with Design Research folks) to create a separate variant.
   * Both versions will be available to TPGers, so they can select whichever applies.
   * We need to evaluate whether any of the wording changes from the fork should be brought back into the original survey (T146822).
 * In addition to surveying our customers, we also set up a simple process to solicit feedback from the TPGer(s) about how they felt the engagement went. To avoid bias, TPGers are encouraged to write down their thoughts before viewing any results from the corresponding survey, or the thoughts of other TPGers who were involved with that engagement.

Frustrations with Google Forms

 * The Google Forms UI is often awkward or confusing, and the documentation is sometimes inadequate. We will need to compensate by providing clear step-by-step instructions.
 * We sent out some surveys using the URL provided by the “get pre-filled link” option. That sounded reasonable, but is actually an example of a counter-intuitive feature: we didn’t test the link thoroughly, and it didn’t work. We had to provide updated links to some recipients.