Review action items from previous retrospective:
- A/B test process concerns [Owner: Oliver]
- This seems much better now, although partly because Oliver was acting PO recently
- Still too much discussion in IRC and not enough in phab/email
- Elasticsearch 1.7.1 upgrade was not as fast as we expected: it will require time to investigate the cause; Chase proposed to have a look. ++++++ 6 [Owner: Erik]
- This is not a high-priority item for us right now.
- Maps sometimes feels like a foster child to the "search" team (only recently we started using the "discovery" word instead of "search") ++++++ 6 [Owner: Tomasz]
- Tomasz has moved forward with adding KPI's to dashboard, renaming IRC channel, renaming mailing lists
- Proposal: If it didn't happen on the mailing list, it didn't happen. Let's stick to that. ++++ 4 [Owner: None/Kevin]
- We are getting better, but it's an ongoing process. We need to keep ourselves and each other honest on this.
We will experiment with a different retrospective format
- Take time for everyone to type in what went well/poorly (5 min)
- Vote on which topics to discuss further (5 min)
- Discuss! (25 min)
- Retro the retro (5 min)
What went well?
- A/B test cadence continues to go well
- User satisfaction stuff finally got off the ground and Erik's support around it has been hella-awesome ++
- Trey's research around language detection is really cool to see and highly informative ++++
- Soft launches of Maps and WDQS went well; we effectively became comms
- Interviewing for ops has kicked off and has some prospective candidates
- New meeting schedule better for eastern team members (?) ++
- Had our first weekly UX planning meeting which went well
What could have gone better?
- Analysis doesn't feel tremendously supported by Maps (and vice versa) +++
- Analysis team would appreciate if we were involved in data-related problems/tasks earlier (e.g. designs of schemas) and not pulled in at the last minute to analyze data we had no hand in collecting +++
- Data doesn't seem accurate/reliable (took a while to realize/choose to act?) +++
- Would like better coordination on launches, e.g. WDQS, with Comms & others +
- A/B test pre-launch still happens too much in IRC ++
What didn't fit into either of those buckets?
- Transition to "Discovery" is slow and ongoing
- Need clarity/decision making around what we're doing with further hiring +
- Phabricator continues to work, but continues to have frustrating aspects +++
- Confusion because the Reading Department’s Q2 goals seem to all be related to content discovery (e.g. reading suggestions on desktop / mobile web, hovercards on mobile web, etc.) +
- Confusion around inter-team dependencies and requirements, also centred on Reading ++
Analysis doesn't feel tremendously supported by Maps (and vice versa)
Analysis team would appreciate if we were involved in data-related problems/tasks earlier (e.g. designs of schemas) and not pulled in at the last minute to analyze data we had no hand in collecting +++
- Specific example was Maps: the schema was implemented differently from what Analysis thought was agreed to, and didn’t collect what was needed for KPI’s. The “K” in KPI is key, so if it’s key, we should measure it.
- Early planning (of metrics) seemed good, but problems happened near the implementation step.
- This was the first Maps-Analysis collaboration, so some problems were expected.
- Maps wasn’t used to having analytics, so did some hand-made stuff.
- Max felt like he and Yuri weren’t included in the just-pre-implementation meetings. (There was a meeting with just Dan/Tomasz/Analysis, but it didn’t alter the KPI’s.)
- Maps feels that KPI changes were not communicated to them.
- Kevin wasn’t invited to some of the meetings.
- Transition from Dan to Tomasz contributed: the new Maps team had to learn how to interact with other teams.
- Having 2 full-time Product Managers in the future should make this a bit smoother.
- It is important that key stakeholders don’t just attend, but fully participate.
Big action item: Make sure stakeholders are involved in relevant meetings. Add Kevin to all of these meetings (when possible).
Data doesn't seem accurate/reliable (took a while to realize/choose to act?) +++
- Only noticed it last week, so we are reacting quickly (but we don’t know how long the fix will take).
- Has already messed up our current “big test”, so fixing it now won’t help that test. Previous tests were fine.
- All of event logging has had the same problem as of last Thursday.
- Stuff like this will happen, but we need to understand what happened and what caused it.
- We “intentionally stumbled across” the problem: Mikhail was looking to see if the data made sense, and found it. This is probably affecting other foundation users.
- Our standard process requires a quick check 24 hours after a test starts.
- Looks like an analytics engineering bug, so we should try to nudge them toward automated tests.
- Overall, this was actually a success story for us (although it involved pain). Our standard process quickly found a recently-injected org-wide logging problem.
Phabricator continues to work, but continues to have frustrating aspects +++
- As is true of all ticket systems throughout history.
- “It’s badly designed” (from a UX standpoint): small target areas, poor nav model, no live updating.
- Are we looking for action items here, or is this a cathartic session?
- Other teams previously reported issues as blockers, but they were rejected.
- TPG is currently moving forward to try to identify a few top-priority issues to take upstream, so get any nominations to Kevin TODAY or tomorrow.
- Dan: live updating is the biggest problem that can be quickly summarized.
Retro the retro
- “Worked pretty well for me”
- Should fill in etherpad before the meeting, to save time
- Send reminder Friday morning
- Should we fill in additional commentary?
- Liked starting with action items (better than rushing at the end)
- Need to be better about taking detailed discussions offline
- Need to manage time better during the retrospective
- Feels like a safe space for tough team discussions. Honest. Makes team better as a whole.
Action Items (pulled out after the meeting ended)
- Need clarity/decision making around what we're doing with further hiring [OWNER: Wes]
- Confusion around inter-team dependencies and requirements, also centred on Reading [OWNER: Dan]
- (Map-centric) Make sure stakeholders (and Kevin) are involved in relevant meetings. [OWNER: Tomasz]
- Need to manage time better during the retrospective [OWNER: Kevin]