Development process improvement/2010 Q3 assessment

This is an attempt to assess where WMF Engineering is today.

Development Input: Feature/improvement identification, specification, and prioritization
This section covers:

 * How engineering identifies stakeholders and gathers their input about priorities
 * How engineering identifies the work that needs to be done
 * How engineering prioritizes that work

These answers depend on the type of feature being developed.

Operations-driven work
Example: security vulnerabilities, performance enhancements

Because development and operations are still pretty closely intertwined, input gathering in this category is still some combination of firefighting and tackling the problems that cause the most headaches for the devs themselves (scratching an itch). Prioritization is mostly ad hoc, with firedrills taking the highest priority, easy stuff being knocked off the list without much ceremony, and profiling + community discussion being used to prioritize the other stuff.

Community-driven work
Currently, this is done in a very ad hoc way. Commit access is given in a fashion that's pretty liberal by open source standards (submit a couple of patches, show reasonably good faith, and you're in as long as you color within the lines). Smaller work can be checked directly into trunk and reviewed after the code has already been committed. Larger work is frequently developed on separate branches, with Tim Starling in charge of the merge queue and lots of stakeholders pestering Tim about what is next in line.

It must be noted that the Wikimedia community is composed of very different groups of people. Participants in Wikimedia wikis are the biggest part of the community; MediaWiki developers are also part of the community, but they form a very small part of it. It takes a really motivated developer to build an entirely new feature and get it mature enough to be deployed on Wikimedia projects. Even then, it is not certain that the feature actually meets the needs of users. Developers often create software whose behavior reflects the implementation model rather than the users' mental model.

The current development process is one typical of organic open source. Product management and usability experts do not get a special place in the community, and, as people who don't typically contribute code, they don't have as much of a voice as they feel they should. Furthermore, lines of communication between disciplines are poor. There is no roadmap and there are no target versions for bugs and features. Specifications are often thought to be superfluous, unless the topic interests more than one developer and there is some disagreement regarding the best implementation to choose. The community of developers recently created a "Requests for comment" process for such situations. Prioritization happens on an individual basis, as a balance of "borked things that must be fixed" and "things a developer finds interesting".

Grant-driven work
Example: usability work

 * Starts with the grant writer, probably targeting a specific foundation
 * The foundation providing the grant iterates on the grant language (how much negotiation occurs in this phase?)
 * Resources are assigned + hired if necessary
 * Project/product manager develops project plan to fulfill milestones/goals identified in the grant, and develops detailed specifications to match grant proposal
 * Because of the funded nature of this work, the work gets priority, though possibly through contract developers

The major issue with grant-driven work is the lack of flexibility. In non-grant-driven work, it is often desirable (and possible) to reassess the project plan, goals & resources en route. When the plan, goals & resources are frozen by the agreed grant proposal, it is almost impossible to course-correct, even though doing so is at least as desirable as in non-grant-driven work.

Contrary to what one might expect, this is particularly problematic when the grant proposal isn't well defined. One might think it's easy to meet the goals of a vague proposal; however, the vaguer the proposal and the less defined the scope, the greater the risk of miscalculating the schedule and the necessary resources.

Lastly, there are obviously major differences depending on the grant: its amount, its stakeholders, its timeframe, etc.

Non-grant WMF-driven work
Examples: fundraising code, single sign-on, Flagged Revs

WMF staff interested in a feature generally lobby one of the decision makers (formerly Brion, now Erik and/or Danese) to implement it. If the feature is critical to the functioning of the Foundation (e.g. fundraising code), a team is pulled together from the resources not tied up in the above categories. If the feature/improvement is necessary for under-the-hood improvement of the platform, it probably happens without a lot of ceremony or WMF-wide communication. With the commitments from the work in other categories, there aren't a lot of spare cycles for this type of work, so contractors are brought in to accelerate the pace and predictability of development (e.g. Four Kitchens developers for fundraising code, William Pietri and Rob Lanphier for project management of Flagged Revs).

WMF-driven contract work (outside firms w/outside project management)
A traditional contractor relationship, used where there is a need for a software-related task that can be isolated into a single work unit. These relationships may involve project management of pieces such as resource allocation that is outside the control of WMF. Typically the task is either something that the WMF is not capable of doing due to a lack of skill sets, or work that is basic and can be done faster by another party.


 * Examples:
   * Four Kitchens (fundraising)
   * Calcey QA (browser testing)

Dev Support/Biz Dev type work
Currently driven by Mobile and Offline work. These are contractual obligations to provide technical support and development for our business partners, typically involving content re-use through either our XML snapshots or our API. Meetings are generally held weekly, where the WMF is updated on new initiatives that the business partner is undertaking. It is up to the WMF to inform partners of any operations changes that might impact their services. These partners are usually chosen to further WMF's reach through interesting and innovative technology projects.

WMF-driven dev staff + extended staff work

 * Examples:
   * Flagged Revs
   * Liquid Threads

Development Pipeline: scheduling, reporting, testing, and assessing
This section covers:
 * How estimates are arrived at
 * How progress against estimates is reported
 * How testing occurs
 * How success and failure are assessed, and how organizational learning occurs

Operations-driven work
Operations-driven work happens on a very ad hoc basis, with much happening in the volunteer community (e.g. Domas). Security vulnerabilities usually result in a scramble to address quickly without much (any?) formal process associated with completing the work. Performance work has some more rigor associated with it, with profiling information on noc.wikimedia.org to assess the success or failure of a given change.

Community-driven work
This is also very ad hoc, by the very nature of community-driven development. Between 2006 and 2009, MediaWiki releases were attempted quarterly, though in reality only 2-3 releases happened each year. The idea behind such frequent releases was to remove the tension of needing to get a particular feature into the next release, because next+1 was only a few months away and not that long to wait. We have yet to settle into a new cadence with the many changes in staffing that have occurred since the 1.15 release in June 2009.

Grant-driven work
For grant-driven work, the process tends to be much more buttoned-down.

For the Wikipedia Usability project, an inventory of tasks is collected by the project manager into a spreadsheet. Tasks are ordered, estimates are made, and triage occurs to keep the schedule within rational bounds. Testing appropriate to the project has occurred; e.g. a lot of the work on Vector has been a combination of usability testing and a conservative, opt-in roll-out to *.wikipedia.org. Additionally, frequent updates have occurred at prototype.wikimedia.org. The usability testing has been particularly fruitful, driving further iterative improvement of the user interface (which is fed back into the scheduling process).

Again, this depends on the project.

Non-grant WMF-driven work
There has been a lot of experimentation in this area. Flagged Revs has been managed via tasks entered into an external instance of Pivotal Tracker, with the testing occurring on a public instance. Fundraising code: ???.

Tools
Here's a quick rundown on the tools in use by WMF as of 2010-05-13:

 * Bug database:
   * Current bug DB: Bugzilla
   * Current favorite: Unknown. See Tracker/PM tool and
 * Bug triage meetings:
   * Current frequency: none
   * Preferred frequency: weekly?
 * Scheduling:
   * Current tool: mishmash of spreadsheets and misc
   * Current favorite: something else
 * Wikis:
   * Currently: various. Meta-wiki is historically the coordination wiki for Wikimedia projects, and used to contain MediaWiki documentation. That documentation was moved to mediawiki.org, which now also serves as the platform for Code review and sometimes as a coordination platform for developers. Project-specific wikis, such as the Usability wiki, are sometimes used, but they are not necessary, as their content really belongs on Meta or mediawiki.org.
   * Preferred: didn't discuss changes
 * Communications:
   * Techblog (public, read-only)
   * IRC (many channels with different access levels)
   * Mailing lists (many with different access levels)
 * Unit/integration testing:
   * Currently: a little bit of Selenium
   * Preferred: a lot more Selenium
 * Deployment:
   * Currently: Buffalo Bill! Pew pew pew!
   * Preferred: Something not like that; a documented release process.