Wikimedia Discovery/Meetings/Discovery retrospective 2017-08-30
Previous retrospective: https://www.mediawiki.org/wiki/Wikimedia_Discovery/Meetings/Discovery_retrospective_2017-07-13
Retrospective Working Agreements
- Assume good faith
- Roles are not boundaries, don’t feel restricted by your role
- Take a Discovery-inspired holistic, cross-team approach to problems and tasks
Previous action items
- MP: Follow up on parallel AB tests and debugging/QA (T171904)
What has happened since the last retrospective?
- Hackathon, Wikimania and Structured Data on Commons offsite in Montreal
- MVP Run of Search Relevance Survey
- Solved latency issues on older Elasticsearch servers
- First A/B test of MLR
- Explore Similar A/B test
- Couple blog posts (Mikhail and Trey)
- Mikhail learning Puppet
- Guillaume started to get a deeper look at WDQS
- Analysis analysis failed for two language plugins, Japanese and Vietnamese, but good feedback was provided to the Vietnamese plugin developer
- Wikidata entity search with Elastic (code complete & test set up)
- Archive search has been enabled
- Language Analysis Analysis tools released
- Map stuff getting done by volunteer(s), esp. TheDJ!
- Thermal paste finally got applied to overheating servers (but it did not fix the latency issues :) (I still think it helped resolve the maxed-out CPU issues faster than if it was heat-throttled :P) (I do too!)
- Throttling was enabled on Wikidata Query Service
- Derivative feature support in LTR plugin
- Improved documentation for LTR plugin
- Categories to RDF work started
- Stas has great ideas about categories, but he lost me (gehel) along the way...
- Sister project snippets are now in a defined order on enwiki
- Mikhail helped with namespace questions from WMDE
- Realized our A/B testing for search has some bugs with not always triggering the “treatment”
- Chelsy finished A/B test report generator
- Found out our manual delete indexing is completely wrong; we're figuring out the fix (but due to layers of additional checks, users at least don’t see deleted pages in search)
What's working well?
- Gehel helping out with Java review in David’s absence (and having a lot of fun in the process!)
- Gehel and Paul working together on maps (not search stuff, but important to note Mr G’s work)
- Discovery is not dead yet! +!! (required link: https://www.youtube.com/watch?v=dGFXGwHsD_A)
- The team formerly known as Discovery is still working together reasonably effectively
- Spark project is really fun!
- Our initial MLR models appear to perform at least as well as, if not better than, the previous ranking implementation. Needs more analysis to be sure.
- Spending more time reading / writing Java code is fun (and probably productive for the team)
- The technical discussions we have are great (Java concurrency comes to mind)
- Trey’s work on the schwag for Hackathon and Wikimania!
- With Gehel’s help, our logging is becoming much more organized and useful (and Gehel still has quite a few ideas of things to do there!)
- Chatting about the search ranking and testing that we’re doing is fascinating (mathz r hard) +1
- Talking to editors at Wikimania/Hackathon was great (learned a lot of interesting stuff about redirect creation/curation)
- Talking to lots of people at Wikimania was interesting and enlightening
- Found out that a lot of folks would love to have more of an impact within Wikimedia projects and that their stuff is simply pretty cool
- Happy that David and Gehel were able to take their well deserved long vacations and we didn’t fall apart while they were gone :)
What's not working so well?
- Maps is taking a disproportionate amount of energy (if not time) compared to the progress
- Still adjusting to re-org fallout, some uncertainty about future work and need for better clarified goals
- People going on vacation did stall some tasks (though taking vacation is good!)
- Infrastructure problems (elasticsearch latency, job queue size explosions) are a big distraction from the goals, but necessary to resolve.
- Nobody knows how deletes in elasticsearch are really supposed to work (at least when indexing manually)
- People outside Discovery still attach a stigma to it. The way we approach things is different from other teams: driven by testing, and a little slower and more methodical, which is not how others do things. It’s been working for us, but hearing that we’re oddballs doing things differently is sad.
- We have a bottom-up mentality, while other teams have a top-down mentality.
- We have been told we have too much detail in the dashboards, but that’s what we do. That’s how we track the things we do. When we make a change we need to be able to look back and compare whether it was better or worse.
- Many of the things we’d planned for the next six months are changing
- It feels like a lot of use cases are being considered within the reading vertical outside of our team, seems like they are limiting search as a function on Wikipedia
- Since we didn’t have firm KPIs and measurements to reach (we make things better in smaller increments revealed by ongoing data collection), we’ve been told we don’t know whether we’re being effective, which is super frustrating
- So, less numbers on dashboards, and more KPIs?
- Seems the work we’ve been doing on the front-end is coming to a close
- “the way we work is very important to me and why I came to join the foundation. If that goes away, I’ll be really sad about it”
- One thing that’s different is the cross-role composition of the team, with all the different capabilities represented, whereas Audiences relies more on the Ops team, because that’s the way they work, and that’s hard to reconcile
- Cross-team coordination is really hard, and with embedded resources it’s easier and faster to meet priorities
- Having someone from the respective areas explain why things are important or not would be helpful, exposure to more upper management thinking
- We’ve been given a lot of direction on what not to do in audiences, but not so much on what to actually do instead