Talk:Git/Conversion/issues

Topics for Chad's email to wikitech-l this week
Topics Chad will need to address:


 * how are we gonna handle the translators' work?
 * Chad has talked with Siebrand about this very roughly. We'd like to automate this process as much as possible--it currently takes about 20mins/day for Raymond to do and that's a huge timesink. Chad to follow up.  This has to be ready before ANY migration of core to git starts.
 * what about gerrit? when will it be not a UX nightmare?
 * File bugs upstream! We're looking at how open their community is to non-Google contribution. The backend is *really solid* and if we've got annoying UI bugs hopefully we can push on getting them fixed.
 * So, you're saying that we are going to simply use gerrit as-is, even though its usability sucks? Why not make it bearable before we switch and we have to use it?  And why not use Phabricator?
 * I'm not convinced it's a "usability nightmare." It's just a matter of learning a new tool that we're unfamiliar with. Yes, there are some rough edges, but it's largely good. Plus, Gerrit is being used by a lot of places outside of Android and seems to enjoy wide support. And with git-review, it lowers some of the bars w.r.t. gerrit-isms when committing. I wish someone had brought up Phabricator about 6-9 months ago, rather than in the last ~month when we've already started making plans with gerrit. Other than the UI being sub-par, I've yet to see any substantive arguments against using gerrit--it supports the commit model we want, has built-in LDAP support already, and has a robust permissions system that applies per-project.
 * big problems with Phabricator: LDAP support and per-project permissions! The latter needs investigating
 * Chad says none of the gerrit-bugs-to-fix are blockers.
 * Chad asks for specifics: what do people hate about gerrit?
 * TODO ask for specifics in the emails
 * RobLa points out: git means our repo will be a lot more portable. Let's start with gerrit now; migrating to Phabricator later remains an option.


 * TODO: put in the roadmap that, 3 months from the switch, we will seriously revisit our tools, including gerrit, git-review, and other tools we are now using.


 * who will be able to commit to what?
 * Everyone will be able to push to everything. No distinction between core and extensions! Since code doesn't automatically make it into master until pulled, there's no danger of bad commits unless someone pulls irresponsibly.
 * OK, so, who will be able to PULL for what?
 * This is like how ops does it. Ops, core MediaWiki, and deployed extensions follow a "gated trunk" model - changes need review before merge. For any other extension, you get the option of doing a straight push ("post-commit review").
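The two models above map onto two different push targets in gerrit. A rough sketch, using a plain local bare repository as a stand-in for the gerrit remote (on a real gerrit server, the `refs/for/*` push creates a pending change rather than a plain ref; all repo names here are made up):

```shell
# Stand-in for the gerrit server: a plain bare repository.
server=$(mktemp -d)/demo-gerrit.git
git init --bare "$server"

work=$(mktemp -d)/demo-work
git clone "$server" "$work"
cd "$work"
git config user.email "dev@example.org"
git config user.name "Demo Dev"

echo "change" > file.txt
git add file.txt
git commit -q -m "Example change"

# Gated-trunk model: push into the refs/for/* namespace. Real gerrit
# turns this into a pending change that must be reviewed before merge.
git push origin HEAD:refs/for/master

# Straight-push model ("post-commit review"): push directly to master.
git push origin HEAD:master
```

git-review wraps the first push (and the Change-Id bookkeeping) behind a single `git review` command, which is why it lowers the gerrit-isms bar mentioned above.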


 * who will have the ability to do code review?
 * Ops, insofar as they have code review rights globally. In practice I doubt they'll review much MediaWiki stuff.
 * "Code Review group" - This would be based on our current crop of code reviewers. Remember that gerrit requires a "+2" (please let's not call it that? argh) superapprove on CR to merge, so we can give "+1" rights to all current committers, sort of like "signoffs" in our current setup.   Eventually anyone who wants can get a gerrit account to comment, and, if they want, to +1.
 * TODO: point this out in the workflow document.
 * "Current crop of code reviewers" -- so, everyone who currently has core commit access? An alternative: start with the people who currently have deploy access.
 * Yeah, starting with the deployment group would be good. Especially for the "WMF" branch (see below)
 * who are these people? People who can deploy - https://gerrit.wikimedia.org/r/gitweb?p=operations/puppet.git;a=blob;f=manifests/admins.pp;h=2080ad4588963dc512543978936ac5367c8d1efd;hb=HEAD and Ctrl+F "admins::mortals" + "admins::roots"
 * https://gerrit.wikimedia.org/r/#admin,group,11 will be the easier list to follow once we get groups set up.
 * eventually, access to the WMF branch would remain limited to people with deploy access, but a broader group would be able to review changes into master
 * Deployers will automatically be part of this group for the WMF branch. Maybe we will also add people with n commits or SVN status changes.
 * When will we figure out the final group (whom to add to the deployers?) + procedure for adding and removing people? March 7th.
 * TODO: create a ramp to get more people in the "review stuff into master" group. There will be more incentive to be in it now!
 * Extension owners will be able to +2 their extensions (plus the above CR group and ops), unless they choose the "I don't want review, just let me push" workflow (which can be customized per-extension). We could define groups to make this easier for batches of extensions (eg: SMW developers?) ... For any given extension, we will honor the wishes of the person(s) listed as the main author on the extension's mediawiki.org page.
 * migration: first the batch of WMF deployed ones, then alphabetically through the list. Chad will give each of them The Choice.


 * how do I get training on using the new tools?
 * Guides will need updating -- could plan a workshop at WM2012 to hit a wide audience + be filmed?
 * Wikimania is in July. Aren't we talking about moving MediaWiki core + extensions at least four months before that?
 * Well the guides will be updated well before Wikimania. But I don't see any other time to do training until then unless someone else is volunteering.
 * We need to do a video training before we make the big move.
 * Set up a test repo.
 * Tell 3 people to submit patches simultaneously.
 * Walk them through that, then walk them through reviewing it
 * Screenshare it with G+!
 * (Monday the 27th of Feb.) is the first time Chad is avail. Ryan Lane & Roan are other candidates.
 * Who is going to update the code review guides, and when?
 * Ongoing [Chad mostly].... Guillaume!
 * this will happen before the migration of core.
 * this will include docs on how current SVN committers can link their LDAP accounts, get passwords put on them, and thus get gerrit accounts ... the current docs on labs need improvement


 * what deadlines do I need to know?
 * Everyone: RIGHT NOW stop creating new extension (directories)
 * Ops - NONE
 * random noobs - general cutover dates
 * Core developers - training dates? & cutover dates
 * Extensions developers - training dates? & cutover
 * Other-stuff developers - "after core & extensions" -- tentatively July 1st, 2013, when SVN goes read-only

 * I run a project that's currently hosted at svn.wikimedia.org -- how do I convert to git?
 * Ask Chad and he'll help you.
 * Prove that you're the person who runs it :-) or that you have community consensus
 * could have a convention where you just check a file with this info into your part of SVN
 * Tell Chad what paths in trunk/branches/tags you want
 * Tell Chad what you want the project name in gerrit to be
 * Tell Chad what commit model you want
 * Push-for-review (with who has review powers)
 * Straight push
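If the check-a-file-into-SVN convention happens, the file might look something like this (purely a hypothetical sketch -- neither the filename nor the fields have been decided):

```
# GIT-MIGRATION (hypothetical filename, checked into your project's SVN root)
project-name:  analytics/reportcard        # desired project name in gerrit
svn-paths:     trunk/reportcard, branches/reportcard-*, tags/reportcard-*
commit-model:  push-for-review             # or: straight-push
reviewers:     userA, userB                # only needed for push-for-review
contact:       you@example.org
```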


 * WMF branch strategy
 * Maintain a "WMF" branch of mediawiki/core.git. Use submodules for deployed extensions, can pull from master as regularly as we want for deployments.
 * See below


 * Please make it so Bugzilla patches automatically turn into git pull requests
 * some unknown future. Rusty Burchfield is working on it.
 * depends on gerrit vs phabricator vs other

TODO: tell people to stop creating new extension directories NOW. Rob: tell Features!

An ops dependency: ability for anyone to sign up for a gerrit account!
 * Ryan Lane
 * This needs to be scheduled by launch!

Chad will be unavailable March 10th-19th
 * Antoine to backup
 * TODO: Chad to do info transfer starting NOW

TODO: after 1.19, Aaron S to work on fenari git stuff
 * This blocks 1.20 deployment/release

Rough roadmap:
 * Feb: 1.19 deploy
 * Mar: git change (and 1.19 release)
 * April: 1.20 deploy

How do we avoid a post-review slush? Either be really stringent about committing to SVN (i.e. continue the slush), or try to reduce the time it takes to get from SVN to git. The latter will be much easier - if I continue a dump I've already done before, it can run incrementally and get into gerrit faster.

Let's do a combo -- iterate on getting people into git ASAP, but still encourage people to consider SVN indefinitely slushed. If you're going to do large things, we prefer you start doing them in git.
 * The sooner we get git up and running so we can say with a straight face "yes use git" the better

biggest hangup: trunk & extensions & figuring out our deployment strategy

Let's have a WMF branch of the core software that never falls too far behind master -- extensions we deploy can be added as submodules within that branch. Submodules can point at different revisions across branches. So, part of the deployment process is: update the submodule revision in that branch. We may need some tools to work with this
 * if you are deploying part of core, grab from master
 * if ext, grab the ref the submodule is pointing to
 * reduces time to deploy, but gives us a branch apart from master that we can be picky about, and a gated version of extensions
 * you might not be able to nest submodules.
 * TODO: get Aaron involved to see what will be involved re changing procedure on fenari
 * TODO:   re generating branches & submodules -- Chad to add branch, start pulling in submodules we want
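A minimal sketch of what the submodule bookkeeping above could look like. Local throwaway repos stand in for mediawiki/core.git and one deployed extension; all names are illustrative, and `protocol.file.allow` is only needed because the demo uses local file:// repos:

```shell
# Throwaway upstream repos standing in for mediawiki/core and one extension.
base=$(mktemp -d)
git init --bare "$base/core.git"
git init --bare "$base/SomeExtension.git"

# Give the extension an initial commit so there is a revision to pin.
git clone "$base/SomeExtension.git" "$base/ext-work"
cd "$base/ext-work"
git config user.email "dev@example.org"
git config user.name "Demo"
echo "v1" > SomeExtension.php
git add SomeExtension.php
git commit -q -m "Initial extension code"
git push -q origin HEAD:master
git -C "$base/SomeExtension.git" symbolic-ref HEAD refs/heads/master

# In core, create the WMF deployment branch and pin the extension
# at a specific revision via a submodule.
git clone "$base/core.git" "$base/core-work"
cd "$base/core-work"
git config user.email "dev@example.org"
git config user.name "Demo"
echo "core" > index.php
git add index.php
git commit -q -m "Initial core code"
git checkout -b wmf-deploy

# protocol.file.allow is only needed because these are local file:// repos.
git -c protocol.file.allow=always \
    submodule add "$base/SomeExtension.git" extensions/SomeExtension
git commit -q -m "Pin SomeExtension for deployment"
# "Deploying" a newer extension revision later means: git submodule update
# --remote, git add extensions/SomeExtension, and another pointer commit.
```

Because the submodule pointer lives only on the wmf-deploy branch, master stays submodule-free and the deployed revision of each extension is gated by whoever can commit to that branch.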

Facilitating patches from Bugzilla
How is this supposed to work? Not every attachment is a patch, not every patch's destination is fully qualified, not every patch is based on a (parsable) commit — I think this project would come close to natural language processing.

Rather than trying to wed old workflow and new tools, why not make use of the inherent design features of git/Gerrit? Any user who is able to produce a patch that can be automagically processed can also just apply for a Gerrit account and push the patch for review himself. Any user who is unable to do so is also unlikely to produce a working patch in the first place. --Tim Landscheidt 22:08, 14 April 2012 (UTC)
 * Over time I do expect that the quantity of patches put into Bugzilla will decline, as we encourage developers in our community to get Git/Gerrit accounts. But there will always be some inflow, and we still have around 200 existing patches in Bugzilla waiting to be processed.  And it's not actually that hard to at least run all the existing patches against trunk and see whether they work.
 * Scores of contributors have already given us gifts -- patches. Instead of saying "your gift came in the wrong wrapping paper, so we won't accept it till you rewrap it," which is not a very hospitable thing to say, I'd prefer for us to at least automatically run the existing patches through Jenkins and through Rusty's tool, and say to those whose patches do not apply, "could you please rebase against current trunk."  I am also already encouraging every contributor who gives us a patch via Bugzilla attachment to sign up for a Developer access account to push any patches into Git.  Can I ask you to aid me in this?
 * Many of our experienced developers are having trouble with our Git and Gerrit setup, and that's orthogonal to their competence as developers. And there are other reasons, tied to core values, why it's better to be kind and inclusive in working with patches from newbies.  The patches in Bugzilla have been second- or third-class citizens for a very long time, and with a little automation, most of them needn't be. Sumana Harihareswara, Wikimedia Foundation Volunteer Development Coordinator (talk) 22:54, 16 April 2012 (UTC)
 * First of all, I'd like to point out that I didn't suggest rejecting patches to Bugzilla. Furthermore, I think your analysis is fundamentally incorrect. Those 200 patches aren't there because there isn't an automatic import to git/Gerrit, and they weren't there because there wasn't an automatic import to Subversion. They weren't ignored because they were lacking "patch" or "need-review" keywords. They are there because no one cared for them.
 * In almost no case is the issue whether a patch runs through Jenkins; the standards MediaWiki developers hold high are in part not even codified, which explains the frustration of many volunteer contributors who run into walls they couldn't see. Automatically asking them to rebase their patches (which I understand from the spreadsheet would happen, or has happened, in 112 of 162 cases) will put work on their shoulders without even the prospect of having them applied afterwards, and leave them with the feeling that the time they invested in the rebase had no value for the MediaWiki developers. And saying "please" won't make it better.
 * What is needed is human interaction with someone competent: a (human) developer has to examine the patch and discuss with the contributor any issues that he sees, which might even mean that a patch isn't and won't be applicable because it doesn't fit into MediaWiki's design.
 * I don't see how this is helped if this process involves two systems instead of one, and even more developers' time is redirected from the actual review to setting up and maintaining such a fragile bridge, especially if only 200 patches have to pass it and many of the experienced developers already have trouble mastering git/Gerrit. --Tim Landscheidt 00:31, 17 April 2012 (UTC)
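Whatever one thinks of its value, the "does this patch still apply to trunk" triage discussed above is mechanically cheap. A sketch of the check (the repository contents and patch here are synthetic; a real pipeline would first download the attachment from Bugzilla):

```shell
# Synthetic repository standing in for a MediaWiki trunk checkout.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "dev@example.org"
git config user.name "Demo"
mkdir includes
printf 'line one\nline two\n' > includes/Foo.php
git add includes/Foo.php
git commit -q -m "Baseline"

# A contributor's patch (in reality, fetched from a Bugzilla attachment).
patchfile=$(mktemp)
cat > "$patchfile" <<'EOF'
--- a/includes/Foo.php
+++ b/includes/Foo.php
@@ -1,2 +1,2 @@
 line one
-line two
+line two, fixed
EOF

# Dry run: does the patch still apply cleanly against current trunk?
if git apply --check "$patchfile"; then
    echo "patch still applies"
else
    echo "needs rebase: ask the contributor to update it"
fi
```

`git apply --check` only reports applicability; it says nothing about whether the patch is *wanted*, which is Tim's point about needing human review either way.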