Talk:Wikimedia Apps/Team/Android/Release process

iOS Release Process
See https://etherpad.wikimedia.org/p/iosappcadencenojoke for notes from the 1-April-2015 meeting.

As discussed during the 1-April-2015 meeting, new regression tests should be added to the Google Sheet.

Kristen
I love that we're starting to formalize this.

However, reading between the lines of the proposal as written, I hear "we have little to no confidence in our code and not much trust in our team roles". My tl;dr is: 1.) can we focus on getting more of the "safety" and confidence pieces covered earlier in the process/during the sprint (earlier testing, better test coverage, signoff) so that the beta testing becomes more of a formality rather than a major undertaking and 2.) could we scale down the voting and escalation pieces to streamline things and hand off more of the testing pieces to QA? KLans (WMF) (talk) 18:28, 1 April 2015 (UTC)


 * I'm going to remove the voting concept. We're planning to stage gate all feature cards through separate columns, which in a way means anyone can block. But in practice the Product Manager can overrule. --ABaso (WMF) (talk) 19:37, 1 April 2015 (UTC)

Cards
Effective mid-sprint 54 on iOS, Phabricator cards on iOS will go through Code Review, QA Signoff, and Design Signoff before they go to Ready for Review (Product Manager signoff). We'll tweak the process as necessary. Important points about this:


 * The Description field should contain a link to the design card. The design card should contain mocks, expected user interaction behavior, and any finer points that may escape the attention of engineers (e.g., gradients, font faces, transition durations, etc.).
 * The spec should be sent to mobile-tech one week before a sprint starts to allow for feedback
 * The Description field should contain sufficient information for QA to know what to test
 * QA must test on iOS 6, iOS 7, and iOS 8 on a phone form factor, and must test on at least one version of iOS for the tablet form factor. Recommendation: test daily.
 * Alpha builds will be available each weekday morning, so QA should be able to review cards in the QA column at that time
 * Design to sync with QA twice per week on any cards in its column for sign off. One approach: test alpha Wednesday and Friday of week 1, Tuesday and Thursday of week 2 for all cards in the Design column.
 * The developer working the card should shepherd it through the columns (e.g., prompt QA or Design if necessary)
 * Work product may be rejected and moved back to To Do, with a Comment explaining why, if the implementation did not match the specification. Otherwise, new Tasks should be filed if follow-on work is required, and the work will be prioritized for later implementation (potentially in the current sprint, but potentially in a later sprint).
 * If sign off is not required from a particular stakeholder (e.g., Design not needed for data layer testing), the Description field should indicate as much, and the card should be dragged to the appropriate column.
 * The engineer should indicate a risk rating and low-level technical testing steps as appropriate to the technical depth of the card.
 * Features that we know can't be used by the end of a sprint (e.g., a multi-sprint epic) should be feature flagged early on.
 * Regression tests pertaining to cards in the current sprint identified during work should be added to the Template tab of the iOS Test Cases
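The feature-flagging point above could be implemented in several ways; here is one minimal sketch in Swift. The type and flag names are purely illustrative and are not taken from the actual Wikipedia iOS codebase.

```swift
// Hypothetical feature-flag registry. Flags default to off, so
// unfinished multi-sprint work ships "dark" in release builds and
// is only switched on for alpha/beta testing.
struct FeatureFlags {
    private var flags: [String: Bool] = [:]

    mutating func enable(_ name: String) {
        flags[name] = true
    }

    func isEnabled(_ name: String) -> Bool {
        // Unknown flags are treated as disabled.
        return flags[name] ?? false
    }
}

var flags = FeatureFlags()
// An unfinished epic stays hidden by default...
assert(!flags.isEnabled("newReadingLists"))
// ...and is enabled explicitly in test builds.
flags.enable("newReadingLists")
assert(flags.isEnabled("newReadingLists"))
```

Defaulting unknown flags to off means a card that misses the sprint cutoff cannot accidentally surface in the release candidate.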

Release cards
A Release column (or separate board) should be added for each sprint with the following cards; as updates arrive, comments should be added in Phabricator and the Releases page updated. Cards can be moved to Done when done.


 * Regression testing
 * App Store submission
 * Comms
 * Released
 * Aborted release

Week 1 of iteration

 * Monday morning: Tech Lead engages the Specialists Guild, informing them that Wikipedia Mobile will be released via TestFlight the following day and that their testing results should be returned no later than noon on Wednesday.


 * Monday afternoon: 3 PM San Francisco bugfix code cutoff


 * Tuesday morning, 9:30 AM San Francisco time: whatever's merged into the release branch goes into both Wikipedia Beta for TestFlight *external* testing and Wikipedia Mobile (stable) for TestFlight internal upgrade smoketesting as the release candidate. The corollary is that engineers need to be judicious about not polluting the code; there are many ways to deal with this.


 * Product Manager, lead designer, lead QA, and lead iOS engineer do final verification for crashes and serious regressions, flagging blockers. This is a precaution: cards that tested fine in isolation may introduce problems in concert. Team swarms on issues if necessary.


 * Thursday morning: Product Manager gives final go/no-go.


 * If by Thursday afternoon at 3 PM San Francisco time no serious problems have been identified (iOS engineering to report all crashes and user-reported feedback), we then submit the release candidate to Apple.

Week 2 of iteration
If we didn't submit for formal App Store review in week 1, then in week 2 we follow the same process as week 1, and we must submit a release for formal App Store review on Thursday of week 2 of the iteration.

If we're for some reason still blocked by Thursday week 2 of the iteration, we notify Directors of Engineering, Product, Design, and Platform, plus the Manager of Release Engineering.

If we submitted for App Store review in week 1 without issue, then at 9:30 AM San Francisco time on Tuesday of week 2, a TestFlight build of Wikipedia Beta and Wikipedia Mobile (stable) is released from whatever is at the tip of master.

Feature cutoff
Feature cutoff for week 2 of a sprint is Thursday afternoon at 3 PM San Francisco time, which allows Friday to be a day for experiments and any "last minute" bugfixes.

More process
During the current sprint, Corey and Brian are working on systematizing the build process, and there will definitely be a transition period as things become more automated. Additionally, Brian has noted the need to set firmer release criteria, and we're all aware of the need for other automated quality criteria. But these are the broad strokes of what I think could work for the basic process.

Open questions

 * Branching strategy to handle rejected cards