User:AKlapper (WMF)/Sandbox

T101686, T78768 - Reduce CR queues and time to review; prioritizing CR of patches submitted by volunteers

 * This section incorporates random literature. TODO: It does not yet incorporate all comments from T114419.
 * General information: For Code Review, Wikimedia will migrate from Gerrit to Differential (cf. T114320).


 * Dimensions of potential influential factors and potential actions:
 * 3 aspects: social (soc), technical (tech), organizational (org).
 * 2 roles: contributor, reviewer.
 * 3 factors: Contributor Onboarding (onb) (set up dev env, find right place in code, docs and size and findability, finding right feedback place); Patch-Acceptance/Positivity-Likeliness (accept), Patch-Time-to-review/merge (time2rev).

Potential influential factors

 * 1) [ soc | time2rev ] Due to poor quality of contributors' patches, reviewers spend time on reviewing many revisions/iterations before a successful merge. This may lead reviewers to ignore patches instead of repeatedly reviewing and voting CR-1.
 * 2) [ soc | time2rev/accept ] Due to unhelpful reviewer comments, contributors spend time on creating many revisions/iterations before successful merge.
 * 3) [ org/soc | time2rev ] Prioritization / weak review culture: more pressure to write new code than to review patches contributed?
 * 4) [ soc | time2rev ] Lack of sync between developer teams: team A stuck because team B doesn't review their patches?
 * 5) [ org | time2rev/accept ] Reviewer workload; too much on the plate already
 * 6) [ org | time2rev/accept ] Not enough skillful or available reviewers in general?
 * 7) [ org | accept ] Not enough reviewers with CR+2 rights to actually merge?
 * 8) [ org/soc | time2rev/accept ] Lack of repository owners / maintainers, or under-resourced or unclear responsibilities (everybody expecting another person to review) -- cf. T115852, T1287 for MediaWiki core. "Changes failing to capture a reviewer's interest remain unreviewed" due to self-selecting process of reviewers, or everybody expects another person in the team to review. "when everyone is responsible for something, nobody is responsible"
 * 9) [ org/soc | time2rev ] Hard for new contributors to identify and add good reviewers
 * 10) [ org | time2rev ] Outdated documentation for contributors how to find a reviewer/maintainer in order to ping
 * 11) [ soc/tech | time2rev ] Hard for a potential contributor to realize that a repository is unmaintained
 * 12) [ soc/tech | time2rev ] Too long a time to review a contributor's first patch, but crucial for keeping them engaged
 * 13) [ org | time2rev/accept ] Changesets are rarely picked up by other developers
 * 14) [ tech | accept ] Hard to find existing "related" patches in a certain code area when working on your own patch in that area. Hence more potential rebase/merge conflicts?
 * 15) [ soc | time2rev ] Potential lack of confident reviewers

Potential actions

 * 1) Document for contributors:
 * 2) Patches should be "small, independent, and complete".
 * 3) When it comes to changesets, "[I]f there are more files to review [in your patch], then a thorough review takes more time and effort" and "review effectiveness decreases with the number of files in the change set."
 * 4) When it comes to changesets, small patches (max 4 lines changed) "have a higher chance to be accepted than average, while large patches are less likely to be accepted" (probability) but "one cannot determine that the patch size has a significant influence on the time until a patch is accepted" (time). Small, independent, complete patches are more likely to be accepted.
 * 5) Patch Size: "Review time [is] weakly correlated to the patch size" but "Smaller patches undergo fewer rounds of revisions"
 * 6) Reasons for rejecting a patch (not all are equally decisive; "less decisive reasons are usually easier to judge" when it comes to costs explaining rejections):
 * 7) Problematic implementation or solution: Compilation errors; Test failures; Incomplete fix; Introducing new bugs; Wrong direction; Suboptimal solution (works but there is a simpler or more efficient way); Solution too aggressive for end users; Performance; Security
 * 8) Difficult to read or maintain: Including unnecessary changes (to be split into a separate patch); Violating coding style guidelines; Bad naming (e.g. variable names); Patch size too large (but rarely matters as it's ambiguous - if necessary it's not a problem); Missing docs; Inconsistent or misleading docs; No accompanying test cases (❌ TODO: How much is this a no-go in WM? In which cases do we require unit tests? Should be more deterministic?); Integration conflicts with existing code; Duplication; Misuse of API; Risky changes to internal APIs; Not well isolated
 * 9) Deviating from the project focus or scope: Idea behind is not of core interest; irrelevant or obsolete
 * 10) Affecting the development schedule / timing: Freeze; low urgency; Too late
 * 11) Lack of communication or trust: Unresponsive patch authors; no discussion prior to patch submission; patch authors' expertise and reputation
 * 12) cf. Upstream Phabricator reasons why patches can get rejected
 * 13) There is a mismatch of judgement: Patch reviewers consistently consider test failures, incomplete fix, introducing new bugs, suboptimal solution, inconsistent docs way more decisive for rejecting than authors.
 * 14) Propose guidelines for writing acceptable patches:
 * 15) Authors should make sure that patch is in scope and relevant before writing patch
 * 16) Authors should be careful not to introduce new bugs rather than focusing only on the target change
 * 17) Authors should not only care if the patch works well but also whether it's an optimal solution
 * 18) Authors should not include unnecessary changes and should check that corner cases are covered
 * 19) Authors should update or create related documentation --- ❌: cf. Development policy
 * 20) Patch Writer Experience is relevant: Be patient and grow. "more experienced patch writers receive faster responses" plus more positive ones. Contributors' very first patch is likely to get positive feedback in WebKit; for their 3rd-6th patch it is harder.
 * 21) Document for reviewers:
 * 22) Reviewers' CR comments considered useful by contributors: identifying functional issues; identifying corner cases potentially not covered; suggestions for APIs/designs/code conventions to follow.
 * 23) Reviewers' CR comments considered somewhat useful by contributors: coding guidelines; identifying alternative implementations or refactoring
 * 24) Reviewers' CR comments considered not useful by contributors: praising code segments; asking questions only to understand the implementation; pointing out future issues not related to the specific code (these should be filed as tasks instead).
 * 26) TODO: Document how to use -1: "Some people tend to use it in an "I don't like this but go ahead and merge if you disagree" sense which usually does not come across well. OTOH just leaving a comment makes it very hard to keep track - I have been asked in the past to -1 if I don't like something but don't consider it a big deal, because that way it shows up in Gerrit as something that needs more work."
 * 27) Stakeholders with different areas of expertise may need to split the review of a larger patch, each reviewing the aspects they know best.
 * 28) TODO: T115850: Should there be a guideline for reviewers to mark controversial patches (when it comes to project direction) as CR-2? If yes, have that guideline also link to the item "discussion prior to patch submission" under "Document for contributors".
 * 29) Introduce and foster routine / habit across developers to spend a certain amount of time each day for reviewing patches (or part of standup), and team peer review on complex patches ~ Team Practices Group?
 * 30) TODO: "a prominent indicator of whether or not you've pushed more changesets than you've reviewed" ?
 * 31) Blocked-on-* projects in Phabricator?
 * 32) Reviewer's Queue Length: "the shorter the queue, the more likely the reviewer is to do a thorough review and respond quickly"; the longer the queue, the longer the review is likely to take, but with a "better chance of getting in" (due to a sloppier review?).
 * 33) TODO: Tool support to propose reviewers, or to display how many unreviewed patches a reviewer already has so the author can choose other reviewers. Proposing reviewers for patches requires good knowledge of community members, as it otherwise creates noise.
 * 34) TODO: Potentially document that "two reviewers find an optimal number of defects - the cost of adding more reviewers isn't justified [...]"
 * 35) Capacity building: Recognize contributors (supported by korma stats?) and "somehow" encourage them to become habitual and trusted reviewers; actively nominate to become maintainers
 * 36) Reviewers who have prior experience give more useful comments as they have more knowledge about design constraints and implementation.
 * 37) TODO: Review current CR+2 handout practice: Review documentation at Gerrit/+2. Discuss handing out code review rights to more volunteers? Related: Trust levels.
 * 38) TODO: Recognize people not executing their CR+2 rights anymore and outreach to active CR+1 users based on korma stats? Check total number of volunteers/staff and their ratio.
 * 39) Need for better tools to identify unmaintained areas within a codebase, or codebases with unclear maintenance responsibilities. Clarify within teams or Engineering/Product if applicable.
 * 40) TODO: Define a role to "Assign reviews that nobody selects." (There might be (old) code areas that only one or zero developers understand.)
 * 41) TODO: (vague:) Have a culture of documenting and sharing knowledge.
 * 42) ❌ TODO: Far future: Automatic reviewer suggestion systems?
 * 43) ❌ TODO: Improve outreaching to volunteers for stuff that Eng/Prod feels not responsible for by pointing to Gerrit/Project_ownership; ❌
 * 44) TODO: more centralized statistics and active communication which codebases need maintainers?
 * 45) ❌ TODO: Monthly "Project in need of a maintainer" campaign?
 * 46) Related example: T121889: Reading team to discuss how they manage extensions
 * 47) Technical: "choice of reviewers plays an important role on reviewing time. More active reviewers provide faster responses" but "no correlation between the amount of reviewed patches on the reviewer positivity". TODO: Allow reviewers to be notified of patches in their areas of interest; this exists via https://www.mediawiki.org/wiki/Gerrit/watched_projects but is limited. TODO: Check the "owners" tool in Phabricator "for assigning reviewers based on file ownership"
 * 48) ❌ TODO: Have more automated updating of the outdated and manually maintained Developers/Maintainers page. ❌ TODO: Have publicly documented (and findable) Team/Maintainer <-> Codebase/Repository relations.
 * 49) Need for better technical implementation to display information; allow contributor to act
 * 50) Technical: Tool to allow finding / explicitly marking first contributions. korma.wmflabs.org to list recent first contributions and their time to review. Someone responsible to ping, follow up, and (with organizational knowledge) to add potential reviewers to such first patches.
 * 51) Social: Set up and document a multi-phase, structured patch review process for reviewers: Three steps proposed by Sarah Sharp for maintainers / reviewers, quoting:
 * 52) Is the idea behind the contribution sound? / Do we want this? Yes, no. If the contribution isn’t useful or it’s a bad idea, it isn’t worth reviewing further. Or “Thanks for this contribution!  I like the concept of this patch, but I don’t have time to thoroughly review it right now.  Ping me if I haven’t reviewed it in a week.” The absolute worst thing you can do during phase one is be completely silent.
 * 53) Is the contribution architected correctly? Squash the nit-picky, perfectionist part of yourself that wants to comment on every single grammar mistake or code style issue.  Instead, only include a sentence or two with a pointer to coding style documentation, or any tools they will need to run their contribution through.
 * 54) Is the contribution polished? Get to comment on the meta (non-code) parts of the contribution.  Correct any spelling or grammar mistakes, suggest clearer wording for comments, and ask for any updated documentation for the code
 * 55) Service Level Agreement (SLA) of WMF staff on time to review for CR? T113707 (not specific to "first patch" though)
 * 56) ❌ TODO: Requires clear (single person, and/or a gerrit-wrangler who needs institutional knowledge and has people in each team to delegate to?) deterministic responsibilities who owns what -- how do we decide who the owner is? Bus factor? Brian proposes "the review-owner person should do one of these actions in no more than 1 week time since the patch is submitted."
 * 57) TODO: A query (korma?) of patches without review within a week, automated message to wikitech-l or so?
 * 58) but "Difficult to enforce that volunteers must follow some deadline."
 * 59) Document best practices to amend a change written by another contributor if you are interested in bringing the patch forward: T121751
 * 60) Differential offers "Recent Similar Open Revisions". Gerrit might have such a feature in a newer version.
 * 61) Reviewers: ❌ TODO: "we recommend including inexperienced reviewers so that they can gain the knowledge and experiences required to provide useful comments to change authors"
 * 62) Reviewers: Organizational: Vague: "Project management can also identify weak reviewers and take necessary steps to help them become efficient." - but how to actually identify that?
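The weekly "patches without review" query in item 57 could be prototyped against the Gerrit REST API. A minimal sketch, assuming anonymous read access to the changes endpoint and a filter like `status:open -is:reviewed age:1week` (these are documented Gerrit search operators, but the exact filter to use is an open question; the mailing-list posting step is omitted):

```python
import json
from datetime import datetime, timedelta, timezone

# Gerrit prefixes JSON responses with ")]}'" to prevent XSSI;
# it must be stripped before parsing.
XSSI_PREFIX = ")]}'"

# A fetch would look roughly like (assumption, not a tested URL):
#   https://gerrit.wikimedia.org/r/changes/?q=status:open+-is:reviewed+age:1week

def parse_gerrit_response(raw):
    """Strip Gerrit's XSSI prefix and parse the JSON body."""
    if raw.startswith(XSSI_PREFIX):
        raw = raw[len(XSSI_PREFIX):]
    return json.loads(raw)

def unreviewed_for_a_week(changes, now=None):
    """Return subjects of changes last updated more than 7 days ago,
    e.g. to compile a weekly digest for wikitech-l."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=7)
    stale = []
    for change in changes:
        # Gerrit timestamps look like "2016-01-04 11:14:23.000000000";
        # keep only the first 19 characters ("YYYY-MM-DD HH:MM:SS").
        updated = datetime.strptime(
            change["updated"][:19], "%Y-%m-%d %H:%M:%S"
        ).replace(tzinfo=timezone.utc)
        if updated < cutoff:
            stale.append(change["subject"])
    return stale
```

The `age:` operator already filters server-side; the local cutoff check doubles as a safeguard and keeps the function testable offline.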

Misc stuff to merge into better sections

 * Agree and document for reviewers: "I also don't understand why 'Can Merge: No' is a reason to assign -1 code review"
 * Followup fixing culture? "after the castle has been conquered and the change is in, it is very difficult to revert it or to get original developers to help fix some broken aspect of a merged change"
 * Agree on and document testing responsibility: "making clear who is responsible for testing. I often refrain from merging simple patches because I feel I should not merge code without testing it, but then never get around to do that as it might require setting up a whole new test environment and often figuring out exactly how to test. As I understand Differential will be an improvement there as it requires patch authors to fill out a test plan. "
 * Roles for people, like Reviewers? TODO: Clarify more which problem this is supposed to solve.
 * "among the factors we studied, non-technical (organizational and personal) ones are better predictors" (meaning: possible factors that might affect the outcome and interval of the code review process) "compared to traditional metrics such as patch size or component, and bug priority."
 * Priority: significant correlation between priority in issue tracker and positivity
 * Component: "no relation between positivity and the component factor"
 * Organization: "both Apple and Google are more positive about their own patches than 'foreign' patches" to WebKit
 * CR-1/CR-2 gets lost when a Gerrit reviewer removes themselves (example); ❌ TODO: Check if same problem in Differential?

Reviewers:
 * "[R]eviewers from different teams gave slightly more useful comments than reviewers from the same team [...] however, the magnitude of the difference is quite small"
 * Patch acceptance: Developer experience, patch maturity; Review time impacted by submission time, number of code areas affected, number of suggested reviewers, developer experience.

Related documentation pages

 * TODO: Find more; Review their content; split by contributor and reviewer?
 * https://www.mediawiki.org/wiki/Manual:Coding_conventions for contributors
 * https://www.mediawiki.org/wiki/Gerrit/Code_review for reviewers
 * https://www.mediawiki.org/wiki/Gerrit/Code_review/Getting_reviews for contributors (too long?)
 * https://meta.wikimedia.org/wiki/Grants:IdeaLab/Making_Gerrit_access_easier_for_developers_new_to_MediaWiki

T113706 - Cut-off date to abandon patches in CR

 * TODO: Clarify first which problem this potential solution should target
 * TODO: Nothing found in literature. Andre blogged: https://blogs.gnome.org/aklapper/2015/12/01/volunteer-contributions/
 * Very similar to a lowest priority vs WONTFIX/declined discussion in Maniphest/Bugzilla?

Stuff we know

 * "If a project does not make a good first impression, newcomers may wait a long time before giving it a second chance."
 * 3 different profiles of newcomers: "Newcomers can be novice developers who are starting their career, people who are experienced developers from industry but are not used to OSS projects, or people who are migrating from other OSS projects."
 * Entry barriers are not always bad: Some lead to improved contributions in the long run.
 * "Successful FOSS projects grow their communities outward to drive contribution to the core project. To build that community, a project needs to develop three onramps for software users, developers, and contributors, and ultimately commercial contributors."
 * "Sometimes, a contribution barrier cannot be lowered, but instead, the FLOSS project may shift the contribution barrier to the inside of the community's onion model."
 * Dialogs shown to first time contributors trigger additional contributions.
 * T73357: Gerrit: Add Welcomer bot to Gerrit - ❌ TODO: Does something similar exist for Differential?
 * AFAIK no study exists to compare successful and unsuccessful contributors specifically.
 * Annoying little bugs: Labuschagne and Holmes looked at Mozilla's 'Good First Bugs' (GFB) and 'Mentored Bugs' (MeB; something that Wikimedia does not have) programs taking quantitative data from Bugzilla and a qualitative survey with 11 developers with at least 10 contributions who started in one of these programs.
 * "newcomers who make 10 contributions take an average of 203 days to complete this work"
 * Interviewees appreciated mentorship, list a variety of starting points (Bugsahoy, Bugzilla search, IRC, Codefirefox.com), and underestimated the required effort to contribute compared to size and impact of the contribution. TODO: Advertise https://www.whatcanidoforwikimedia.org/ (T91633) more?
 * "The likelihood that a developer's first contribution is successful" is 86% for MeB&GFB, 80% for MeB, 67% for GFB, 73% for not in any program, hence "The dropout rate for program participants is much higher than for developers that did not start in a program" and "the data suggests that while developers in onboarding programs are more likely to succeed with their first attempts they are generally less likely to become long-term contributors". Possible reasons: "attract individuals who would not otherwise have attempted to make a contribution at all" who are "less equipped to transition into long-term contributors". "downplaying the difficulty of tackling a GFB may in fact increase the chances of failure since the expectation may be that these bugs should be 'easy'".
 * Note that "it is unclear whether these successful contributors would have started contributing without these programs, this paper provides quantitative evidence that these programs alone do not automatically improve the odds of a new developer becoming a long-term contributor" but newcomers might potentially "continue to be actively involved in the project in other ways, such as reporting bugs" (which was not measured in this paper).

Barriers faced by FOSS newcomers
TODO: Incorporate appropriate CR items into other section, leave "stuff happening before starting to write the first patch" here.

Steinmacher et al. went through 291 studies and considered 20 relevant. The majority relies on quantitative past data instead of qualitative studies. Via grounded evidence they analyzed 15 newcomer contribution barriers in 5 categories with 3 origins (Newcomers, Community, Product). The following items and quotes are from Steinmacher et al. if not noted differently.


 * 1) Documentation
 * 2) Outdated docs. Not indicative for FOSS projects as no specific studies. Creates uncertainty whether docs can be trusted. Potential waste of time on already existing features. ❌ TODO: "make newcomers aware of the status of the documents": Encourage using mw:Template:Outdated?
 * 3) Too much documentation. Supported by two experimental studies. Overwhelming information overload. ❌ TODO: "The projects need to provide easy ways to find and navigate the information provided by the projects, linking different sources of information and enabling the recommendation of relevant parts of the group memory" for the newcomer's task.
 * 4) Unclear code comments. Rather irrelevant.
 * 5) Social interaction.
 * 6) Lack of social interaction with project members. Newcomer's social network; social status and need to build an identity. All 7 studies "show a correlation between the centrality of newcomers' social relationships and newcomers' successful permanence of a contributor. However, there is no clear evidence of the causal relationship between social network centrality and newcomer success."
 * 7) Receiving an improper answer. "Three studies brought evidence of the negative impact of the content of answers received by newcomers" when it comes to answering "politely or positively". "[N]ewcomers demand attention and friendly hands to start contributing".
 * 8) Not receiving a (timely) answer. Feeling demotivated or unimportant. (Studies with contradictory results, however:) "absence of responses, improper answers, and not receiving recognition [...] can lead to newcomer dropout." "[...] could nominate people with social skills to receive newcomers in the communication channels.", "avoiding the use of project specific terms and jargons", "need to receive proper directions in a positive way". Potentially "make newcomers aware of the average time to receive answers [...] to help manage their expectations."
 * 9) Newcomers' previous knowledge
 * 10) Lack of domain expertise. Not indicative. People who contribute are nearly always users. No specific study exists about domain expertise though!
 * 11) Lack of technical expertise. Practical hands-on experience strongly associated with continued contribution; knowledge received via academic education not significant for successful contributing. A contributor can benefit from presenting skills and demonstrating expertise by sending mail or patches to influence the perception of the community. "[B]efore contributing to a project, newcomers must, for example, verify whether their skills match with the skills needed". "Newcomers that showed proactivity [...] were better received by the community" and "the content of a message influences the reception of a newcomer". "[S]ocial and political behavior was important for newcomers to become long-term contributors or to be accepted [...] as members."
 * 12) Lack of knowledge of project practices. Not indicative, but "making it clear what is expected from them and what the process and practices are that must be followed to contribute" by having a "more informative and less technical initial environment" which looks "less geeky/daunting for newcomers". --- TODO: Could this be turned into an actionable task? Or not?
 * 13) Onboarding: Finding a way to start
 * 14) Annoying little bugs: Difficulty to find an appropriate first task. "the community wants the newcomers to pick the task themselves; however, newcomers have no clue of how to do this." Capiluppi and Michlmayr report that it is easier for new contributors to work on newer codebases than on older codebases. TODO: Again, do not list MediaWiki core as the first item on Annoying little bugs?
 * 15) Difficulty to find a mentor. Not indicative for FOSS projects as no specific studies. Not common in FOSS to offer formal mentorship. ❌ TODO: Document the fact that there is no formal mentorship (except for ) in Docs for Contributors?
 * 16) Technical hurdles (in general rather poorly studied)
 * 17) Onboarding: Setting up a local development environment. Getting stuck setting it up due to disproportionate configuration effort, reuse which creates dependencies and complicates building, platform diversity due to build configurations, constant changes in the build process requiring developers who once mastered it to constantly keep up, prior experience, (not applicable:) interpreted languages, nobody in charge of simplifying the build configuration. "Make it easy to contribute by making the software easy to configure, build, and test to a known state. The more time you save outside developers that might be interested in contributing, the more time they have to work on the contribution they want to make, rather than losing time and possibly interest in trying to get past building the software." Hence provide a VM like Vagrant. ✅
 * 18) Code complexity. (Too hard to fix in this scope; action item already covered before.) "code complexity negatively influenced newcomers' decision to contribute". Hence "directing newcomers to peripheral modules" is better. TODO: Again, do not list MediaWiki core as first item on Annoying little bugs? Furthermore, fear of introducing new issues and feeling embarrassed, introducing platform specific bugs (e.g. different database backends), only superficially testing and creating unpredictable side effects can be reduced by providing easy access to unit tests via CI (Jenkins). ✅
 * 19) Software Architecture complexity. Providing visual information (e.g. class diagrams) reduces newcomers' challenges. Some people prefer more "visual learning styles". ❌ TODO: Check if we can provide better visual information in technical documentation?

Hospitality and tone

 * cf. https://en.wikisource.org/wiki/Hospitality,_Jerks,_and_What_I_Learned
 * "the level of politeness in the communication process among developers does have an effect on the time required to fix issues and, in the majority of the analysed projects, it has a positive correlation with attractiveness of the project to both active and potential developers. The more polite developers were, the less time it took to fix an issue, and, in the majority of the analysed cases, the more the developers wanted to be part of project, the more they were willing to continue working on the project over time."; "Issue fixing time for polite issues is faster than issue fixing time for impolite issues for 10 out of 14 analysed projects."; "Findings. In the majority of projects Magnet and Sticky are positively correlated with Politeness."; ”Politeness is the practical application of good manners or etiquette. It is a culturally defined phenomenon and therefore what is considered polite in one culture can sometimes be quite rude or simply eccentric in another cultural context. The goal of politeness is to make all of the parties relaxed and comfortable with one another.”
 * "Would you be interested in contributing a fix and a test case for this as well?" style instead of "this isn't the forum to clarify support requests"?; "The latest changeset isn't applying cleanly to git master anymore - could you resubmit it please?"
 * Cultural differences: Gupta et al. tested four strategies (Brown and Levinson's theory): Direct ("Do X", "You could do X"), Approval ("Would it be possible/Could you do X"), Autonomy ("I'm wondering whether it would be possible if"/"Could you possibly do X"), Indirect ("X is not done yet", "Someone should do X") on 26 people (11 British, 15 Indian, mixed gender, 20-30y) asked to rate them on a scale from overpolite to excessively rude. While Brown and Levinson posit that the indirect strategy should be the politest form, to Indian people it sometimes sounds "like a complaint or sarcasm" as "English and Indian native speakers of English have different perceptions of politeness". Still "utterances to strangers need to be much more polite than those to friends".

T78639 - Addressing the long tail of low priority tasks in active projects
Saha et al. looked at 7 FOSS projects using Bugzilla and tried to find out why 125 manually analyzed bug reports open for more than a year take so long to get fixed. (Andre challenges some smaller assumptions but in general the authors have a clue, e.g. that triaging does not always and immediately take place, that severity and priority fields often keep their default values etc). Reasons are diverse:
 * Hard to understand and to find the right place where to fix in the code
 * Uncertain how to fix due to missing the bigger picture or related technical debt
 * Hard to fix due to required complex solution or architecture
 * Risky to fix with regard to release schedule
 * Incomplete fix due to missing corner cases
 * Importance not realized until duplicates were reported or someone comments, then activity follows
 * Hard to reproduce, e.g. steps to reproduce missing
 * Scheduling: Developer's workload and personal to-do lists
 * Unaware of corresponding task for a code change fix, hence not closed
 * Infrequent use case: Destructive bug but in a not very common area
 * Others, e.g. blocked by other tasks; developer vacations etc
 * No specific reasons / as-usual delay: limited resources etc, "there are many bug-fixes that got delayed without any specific reason".
 * Added by Andre: Misclassification in wrong basket/project hence not on the screen of devs?
 * Solution for more important tasks: Careful prioritization, predicting severity, change effort, and change impact. Added by Andre: Also by just saying "No, until you do it yourself"?
 * Misc: "40% of long-lived bug fixes involved only a few changes in only one file."
 * ❌ TODO: Expectation setting, again - maybe "The developers have a lot to do but if you feel strongly about this particular issue, please consider a code contribution which the developers would be happy to review."
 * ❌ TODO: How to more realistically set more tasks to Priority=low?
 * ❌ TODO: How to explain? See T87411: "Add help link to explain meaning of priority levels"

Bounties / Crowdfunding

 * cf. T88265 -- ✅: Relevant parts merged into comment there


 * "All bounty programs are characterized by a winner-take-all incentive structure. [...] a developer must consider if they should spend a considerable amount of time working for an uncertain prize or not enter." (page 176; cf. game theory).
 * Advantages from Company/corporation perspective: Can reduce development costs; can lead to greater number of alternative implementation designs due to competition/rivalry; can create broader interest in product; might change priorities for developer community. (pages 174--179)
 * Disadvantages from developer perspective: Risk of not making it due to getting paid only in case of success; hard for developer to evaluate difficulty of task and quality of documentation (pages 174--179)
 * Disadvantages from Company/corporation perspective: Can be hard to define the amount of money - if the amount is too low it might not attract qualified developers; potentially attracting students who wish to learn; potentially attracting developers in countries where salaries are low (cf. GSoC?); in FOSS it could be work that someone else might have done for free at some point (cf. free rider problem); time spent to judge a contribution is a cost (though this also applies to non-bounty reviews) (pages 174--179); Target group: "Typically, bounties have been won by a few people who have worked on projects for a long time." (page 177)
 * Risks for project perspective: might influence direction that project is heading; short-term orientation may create direction that is not scalable and creates maintenance costs (but also a non-bounty issue?) hence long-term orientation required; potential rivalry with volunteer developers and potential withdrawing of them; potential bypassing of maintainers if bounty organizers decide which tasks are pushed/prioritized instead of actual maintainers/community; attracting developers which do not know the direction and context of project. (page 177--178)
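The winner-take-all point above (page 176) can be sketched as a simple expected-value check. The `should_enter` helper and all numbers below are hypothetical illustrations, not taken from the cited source:

```python
def should_enter(p_win, bounty, hours, hourly_rate):
    """Risk-neutral rule of thumb for a winner-take-all bounty:
    enter only if the expected payoff beats the opportunity cost
    of the time invested."""
    return p_win * bounty > hours * hourly_rate

# Hypothetical: a $2000 bounty estimated at 40 hours of work,
# against an opportunity cost of $25/hour.
print(should_enter(0.30, 2000, 40, 25))  # expected $600 vs. $1000 cost -> False
print(should_enter(0.60, 2000, 40, 25))  # expected $1200 vs. $1000 cost -> True
```

This is why, as noted above, an amount that is too low attracts few qualified developers: the expected prize rarely covers their opportunity cost.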

Other related tasks

 * T114320, T114311 - CR migration to Differential
 * T114419 - Make CR not suck, especially for volunteers
 * T89907 - MW developer community governance model
 * T102920 - Unmaintained/inactive repos
 * T115659 - Collaboration on prioritization

Misc

 * Large single-vendor-governed FOSS projects tend to be controversial; non-profit foundation governance seems to be more successful
 * "Solid Engineering Practices + Strong Community Governance + Clear IP Management enables Growth."
 * Magnetism and Stickiness: A project is Magnetic (magnetism) if it attracts new developers over time. A project is Sticky (stickiness) if it keeps its developers over time.
 * Magnetism is the proportion of new active developers during the observed time interval, in our example 2/10 (dev 6 and dev 7 were active in 2011 but not in 2010).
 * Stickiness is the proportion of active developers who were also active during the next time interval, in our example 3/7 (dev 1, dev 2, and dev 3 were active in both 2011 and 2012).
 * Covered by korma's Demographics page
 * Time to merge
 * OpenStack: The number of active core reviewers is growing; 50% of the changesets that landed in master were merged within less than 3 days of the review process being opened.
 * Linux kernel: 33% of patches make it into the kernel within 3-6 months; factors are submission time, affected subsystems, and the number of requested reviewers
 * Who works on what and how to find out and contact (WMF) teams?
 * Key Wikimedia software projects is extremely outdated when it comes to areas, teams, projects, and links. Transclude from each team page (if existing), at least for WMF stuff? A more automatic approach is needed, e.g. a list of deployed extensions plus adding/removing mw:Category:Extensions used on Wikimedia on extension homepages on mediawiki.org
 * T115853 - Every wiki page of a WMF engineering and product team should have a "Contact" section
 * Teams to consistently (structure!) document and update their codebase responsibilities on-wiki, like Reading in https://lists.wikimedia.org/pipermail/mobile-l/2015-November/009926.html ?
 * Teams to consistently (structure!) document and update their codebase responsibilities in Phabricator, listing their "sub"projects in Team project descriptions?
 * Teams to update mediawiki.org extension homepages?
 * Wikimedia Engineering and Staff to link to corresponding team wiki pages
 * Apply structure of each section of Wikimedia Product also on team wiki pages? Proposed in mw:Topic:Su7ud4jz6a4338wx
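The magnetism/stickiness definitions above can be sketched in code. The developer sets below are constructed to reproduce the 2/10 and 3/7 example values; the denominator used for magnetism (all developers observed overall) is an assumption chosen to match the 2/10 example:

```python
from fractions import Fraction

def magnetism(active_now, active_prev, all_devs):
    """New developers this interval (active now but not before),
    over all developers observed (assumed denominator)."""
    new = active_now - active_prev
    return Fraction(len(new), len(all_devs))

def stickiness(active_now, active_next):
    """Developers active now who stayed active in the next interval."""
    retained = active_now & active_next
    return Fraction(len(retained), len(active_now))

# Illustrative data matching the example figures above:
all_devs = {f"dev{i}" for i in range(1, 11)}   # 10 developers observed overall
active_2010 = {"dev1", "dev2", "dev3", "dev4", "dev5"}
active_2011 = active_2010 | {"dev6", "dev7"}   # dev6 and dev7 are new in 2011
active_2012 = {"dev1", "dev2", "dev3", "dev8"} # only dev1-dev3 stayed on from 2011

print(magnetism(active_2011, active_2010, all_devs))  # 1/5 (= 2/10)
print(stickiness(active_2011, active_2012))           # 3/7
```

Both values are per-interval ratios, so they can be computed for each year and plotted over time, much like korma's Demographics page mentioned above.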

Active Gerrit code review users per month
Uploaders, Reviewers, Committers

{ "version":2, "width": 800, "height": 350, "padding": {"top": 10, "left": 30, "bottom": 30, "right": 30}, "data": [ {     "name": "table", "values": [ {"id": "uploaders", "date": "Sep 2014", "amount": 204}, {"id": "uploaders", "date": "Oct 2014", "amount": 201}, {"id": "uploaders", "date": "Nov 2014", "amount": 187}, {"id": "uploaders", "date": "Dec 2014", "amount": 208}, {"id": "uploaders", "date": "Jan 2015", "amount": 208}, {"id": "uploaders", "date": "Feb 2015", "amount": 210}, {"id": "uploaders", "date": "Mar 2015", "amount": 217}, {"id": "uploaders", "date": "Apr 2015", "amount": 205}, {"id": "uploaders", "date": "May 2015", "amount": 212}, {"id": "uploaders", "date": "Jun 2015", "amount": 209}, {"id": "uploaders", "date": "Jul 2015", "amount": 194}, {"id": "uploaders", "date": "Aug 2015", "amount": 193}, {"id": "uploaders", "date": "Sep 2015", "amount": 205}, {"id": "reviewers", "date": "Sep 2014", "amount": 181}, {"id": "reviewers", "date": "Oct 2014", "amount": 176}, {"id": "reviewers", "date": "Nov 2014", "amount": 170}, {"id": "reviewers", "date": "Dec 2014", "amount": 170}, {"id": "reviewers", "date": "Jan 2015", "amount": 189}, {"id": "reviewers", "date": "Feb 2015", "amount": 192}, {"id": "reviewers", "date": "Mar 2015", "amount": 185}, {"id": "reviewers", "date": "Apr 2015", "amount": 187}, {"id": "reviewers", "date": "May 2015", "amount": 183}, {"id": "reviewers", "date": "Jun 2015", "amount": 188}, {"id": "reviewers", "date": "Jul 2015", "amount": 200}, {"id": "reviewers", "date": "Aug 2015", "amount": 189}, {"id": "reviewers", "date": "Sep 2015", "amount": 189}, {"id": "committers", "date": "Sep 2014", "amount": 113}, {"id": "committers", "date": "Oct 2014", "amount": 123}, {"id": "committers", "date": "Nov 2014", "amount": 119}, {"id": "committers", "date": "Dec 2014", "amount": 112}, {"id": "committers", "date": "Jan 2015", "amount": 119}, {"id": "committers", "date": "Feb 2015", "amount": 127}, {"id": "committers", "date": "Mar 2015", 
"amount": 122}, {"id": "committers", "date": "Apr 2015", "amount": 122}, {"id": "committers", "date": "May 2015", "amount": 125}, {"id": "committers", "date": "Jun 2015", "amount": 132}, {"id": "committers", "date": "Jul 2015", "amount": 132}, {"id": "committers", "date": "Aug 2015", "amount": 126}, {"id": "committers", "date": "Sep 2015", "amount": 128} ]   }  ],  "scales": [ {     "name": "x", "type": "ordinal", "range": "width", "domain": { "data": "table", "field": "date" }   },    {      "name": "y", "type": "linear", "range": "height", "nice": true, "domain": { "data": "table", "field": "amount" }   },    {      "name": "color", "type": "ordinal", "range": "category10" } ],  "axes": [ {     "type": "x", "scale": "x", "field": "date", "tickSizeEnd": 0 },   {      "type": "y", "scale": "y", "field": "amount" } ],  "marks": [ {     "type": "group", "from": { "data": "table", "transform": [ {           "type": "facet", "groupby": [ "id" ]         }        ]      },      "marks": [ {         "type": "line", "properties": { "enter": { "x": { "scale": "x", "field": "date" },             "y": { "scale": "y", "field": "amount" },             "stroke": { "scale": "color", "field": "id" },             "strokeWidth": { "value": 2 }           }          }        },        {          "type": "text", "from": { "transform": [ {               "type": "filter" }           ]          },          "properties": { "enter": { "x": { "scale": "x", "field": "date", "offset": 2 },             "y": { "scale": "y", "field": "amount" },             "fill": { "scale": "color", "field": "id" },             "text": { "field": "id" },             "baseline": { "value": "middle" }           }          }        }      ]    }  ] }