Google Code-in/Lessons learned

This page lists feedback and ideas for potential improvement.

2018

 * Andre: Passing contest information to mentors via email: After talking with Martin: So far we've put mentors' individual mail addresses into BCC when sending out email announcements. Admins should probably instead set up a mailing list for mentors, subscribe the mentors' email addresses as list members, and remove those addresses again after GCI has finished. The corresponding list archive might also solve the problem of forwarding previous emails to mentors who joined GCI after those emails had already been sent out.
 * Platonides mentioned in T200540 that we should explicitly ask students for licensing their design work.
 * For SVG design tasks we should likely also explain how to embed fonts. See for example https://phabricator.wikimedia.org/T206249#4774025 --AKlapper (WMF) (talk) 14:40, 26 November 2018 (UTC)
 * There seem to be applications which convert a bitmap image to a (not very beautiful-looking) SVG image (all nodes on the same layer, no structure at all). I don't think we want that. --AKlapper (WMF) (talk) 14:06, 5 December 2018 (UTC)
 * If we repeat the "Learn how to use Gerrit for code review, by submitting a patch" task (which asks students to use https://gerrit.git.wmflabs.org/) in GCI 2019, we should request a GCI repository in the main Gerrit instance; the Labs one is unreliable. It would also allow students to do configuration just once (although it can be useful to do it twice maybe :)) --Martin Urbanec (talk) 10:19, 29 November 2018 (UTC) EDIT: There's a "sandbox" repository --Martin Urbanec (talk) 10:28, 29 November 2018 (UTC)
 * We should have slightly stricter rules about students working on/submitting multiple tasks at a time --Shreyas Minocha (talk)
 * Could you please elaborate? Is "we" Google or is it Wikimedia? How to be stricter exactly? Google's current contest rules have section 4.2 for this topic. --AKlapper (WMF) (talk) 15:36, 3 December 2018 (UTC)
 * I originally meant Wikimedia, but I now realise that something should be done at the Google level. Perhaps a rule that says "a task found to have been worked on simultaneously will not be accepted". Feel free to get rid of my point if you feel that we can't/shouldn't do anything at the org level. --Shreyasminocha (talk)
 * So you're proposing to make section 4.2 more explicit? I'm happy to forward that feedback once Google asks for feedback in a few weeks! :) Full disclosure: There recently was a discussion called "Doing more than one task at once" on Google's (non-public) GCI Mentors mailing list. My understanding of the outcome of that discussion was that it is the student's risk to work on a task that the student has not claimed yet, as another student could claim that task in the meantime. However I'm not sure if that covers all your concerns. --AKlapper (WMF) (talk) 17:53, 3 December 2018 (UTC)
 * Hm, yeah, that pretty much sums up my concerns. Personally, I don't see harm in a student working on a task that has potentially infinite instances, since that isn't stealing anyone else's chance to try them out, although I can understand why that is a concern for some. I wonder if the discussion led to something actionable (yet). --Shreyasminocha (talk)
 * For those types of tasks, while the task instances are infinite, the mentors' attention is not. It's more of a rate-limiter than a matter of task claims. But the discussion wasn't quite clear in this context. Ebe123 (talk) 16:17, 11 December 2018 (UTC)
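
The bitmap-in-SVG and missing-fonts problems above could be caught mechanically before review. Below is a minimal sketch of such a check; the heuristic and the function name are my own illustration (not an existing review tool), though the element names checked are standard SVG:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def svg_warnings(svg_text):
    """Return review warnings for two common problems in submitted SVG
    files: embedded bitmap images, and text without embedded fonts."""
    root = ET.fromstring(svg_text)
    warnings = []
    # A traced or merely wrapped bitmap usually shows up as an <image> element.
    if root.findall(f".//{{{SVG_NS}}}image"):
        warnings.append("contains embedded bitmap <image> element(s)")
    # Text that relies on fonts should embed them via @font-face in a <style>.
    has_text = bool(root.findall(f".//{{{SVG_NS}}}text"))
    styles = "".join(e.text or "" for e in root.findall(f".//{{{SVG_NS}}}style"))
    if has_text and "@font-face" not in styles:
        warnings.append("uses <text> but embeds no @font-face fonts")
    return warnings
```

This only flags files for a human look; a clean result does not guarantee a well-structured SVG (it cannot detect "all nodes on the same layer", for instance).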


 * Importing Phab task descriptions into the GCI website: Phab has its own markup, but the GCI website does support Markdown. Make the Mentor instructions recommend Markdown-compatible syntax (e.g. … instead of …) to save time? --AKlapper (WMF) (talk) 11:57, 4 December 2018 (UTC)
 * Instance comments should be markup-enabled so that a clear distinction can be made between text and code (and to allow emphasis and the like). This could add clarity to the comments I leave. Ebe123 (talk) 16:17, 11 December 2018 (UTC)
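
For the description-import point above, part of the conversion could even be scripted. The Remarkup forms handled below (`= Title =` headers, `[[ url | label ]]` links) are real Phabricator syntax, but the script itself is only an illustrative sketch, not an existing or complete converter:

```python
import re

def remarkup_to_markdown(text):
    """Convert a few common Phabricator Remarkup constructs to Markdown.
    Rough sketch covering headers and links only; constructs that Remarkup
    and Markdown share (e.g. **bold**, `monospace`) are left untouched."""
    # Remarkup headers: "= Title =" / "== Title ==" -> "# Title" / "## Title"
    text = re.sub(
        r"^(=+)\s*(.*?)\s*=*\s*$",
        lambda m: "#" * len(m.group(1)) + " " + m.group(2),
        text,
        flags=re.MULTILINE,
    )
    # Remarkup links: "[[ url | label ]]" -> "[label](url)"
    text = re.sub(r"\[\[\s*([^|\]]+?)\s*\|\s*(.+?)\s*\]\]", r"[\2](\1)", text)
    return text
```

Writing task descriptions in the shared subset of both syntaxes in the first place would make even this step unnecessary.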

2017
Summarizing from T181738:


 * Admin aspects:
 * Nikerabbit: Multiple times I wondered, and was almost going to ask some admin, whether there were enough tasks or whether I should put effort into creating more. Some more insight into that status would be nice. Filed as T200777.
 * AKlapper: Find some way to make sure mentors have "enough" tasks published, so mentors don't have to ping "please publish more tasks of mine; I'm running out". Filed as T200777.
 * AKlapper: Potentially unclear responsibilities among admins about who publishes tasks?
 * RexxS: Have the online conference call for new mentors (T178483) a few weeks earlier
 * Legoktm: a few cases where mentors were just added on the GCI website, and not mentioned in Phabricator, which got a little confusing to keep track of.
 * Mentors approving tasks:
 * Legoktm: Sometimes mentors (with good intentions of course) would prematurely approve tasks that weren't merged / might have had problems. At least for some tasks, I was thinking about having a "primary mentor" which would be the expected person to sign off on tasks, with other mentors still being able to "needs more work". I'm not sure how to balance that with not being a bottleneck for students. ✅
 * AKlapper: As anyone who is a mentor can add themselves as a mentor to any task of that org, it can happen that someone adds themselves as a mentor without fully understanding the task they are supposed to mentor. We might want to make it clearer that mentoring means you must understand the code base well enough to review the patch, so we don't end up with people who want to help with the best intentions but cannot. ✅
 * Legoktm: Like every year, we need a better solution for getting students added to the jenkins whitelist. Maybe we can have a GCI task itself like "once you've had two merged patches in Gerrit, submit a patch to add yourself to the jenkins whitelist". ✅ by creating T200778
 * Stock answers/canned responses in tasks / post-contest retention:
 * AKlapper: Maybe sync what to reply in beginner tasks like "Get on IRC" to be motivational? Like "Thanks a lot! We hope that IRC was an interesting experience and hope to see you around. Good luck with your next tasks!"
 * AKlapper: Maybe sync what to reply in last tasks before contest ends, to improve retention. Filed as T200779
 * RexxS: It's worth asking all mentors to encourage students to continue their engagement with Wikimedia after the event closes. Perhaps having a page on meta specifically targeting GCI students with leads, contacts, suggestions for how they can continue, etc. might be useful as well? Filed as T200779
 * jayvdb: it would be good to add our GCI students as GSoC project co-mentors for projects they were competent in. As co-mentors, GCI students can add a lot of value, especially at the beginning of the GSoC project, where they can ensure the GSoC students are following coding and testing guidelines and that code is regularly getting merged. The GCI students learn a lot of project management skills this way, and it also helps prepare them for proposing and completing their own GSoC projects in a year or two. All GCI orgs I've talked with try this to some degree. Filed as T200781
 * jayvdb: we should be helping the GCI students get connected with their local chapter/usergroup where possible, in order to engage with coding projects that their chapter is working on, or wanting, and which may have a higher local impact. The high social impact is something Wikimedia GCI students often indicate made them want to participate with Wikimedia instead of some other org. Local engagement keeps up student momentum after GCI, and helps them understand how their developer skills can be used to help the editing community. Filed as T200779


 * Docs for mentors / better task descriptions:
 * AKlapper: Quite a few mentors marked tasks as "Beginner" and I often did not understand why, and reverted (as an org admin, before publishing those tasks), as marking a task "Beginner" deliberately limits the number of students who can claim it. ✅
 * AKlapper: Graphic tasks like "Propose a design for stickers" must make it clear that when reusing existing work, students must provide sources (URL) and understand licenses and link to the license (URL). ✅
 * AKlapper: I had a task requesting a logo proposal in SVG format. Many students embedded a bitmap image in an otherwise empty SVG file. That was not my intention. :P ✅
 * Dedicated communication channel / student involvement (filed as T200782):
 * Bawolff: My personal opinion is that #wikimedia-dev was not the most appropriate channel and that #wikimedia-tech should have been used. I would be opposed to a specific wikimedia-gci channel. / I think taking over a non-bot infested channel (either #wikimedia-tech or #mediawiki) would help with this too.
 * divadsn: Given the constraints of IRC, "I still think that the communication during GCI should be done on another platform like Telegram".
 * jayvdb: I think a dedicated channel for GCI is preferred. It can be used as a congregating area for students and mentors alike, and students are then referred to other rooms depending on the tasks they are working on. Most other orgs do this. The students inevitably also hang out in those other 'work' rooms, and grow more comfortable chatting in them, but can have idle chatter in the GCI room from day 0 because it is 'their' primary work room.
 * RexxS: I was a little disappointed that I only interacted as a mentor with a few other mentors (thank you particularly, Derick). Having a sense of working as a team is something I'd like to see developed among the mentors. Do we need a regular comms channel? make use of a separate mailing list? have regular conference calls?
 * jayvdb in T181738: Zulip integration uses the GCI web-hooks, which we could extend to ping mentors and admins, or we could implement our own web-hook receiver in one of the IRC bots.

2016

 * Andre would have also loved to see tasks like T154198 to T154201, but they did not even get marked as easy. cf. T149564
 * Marking a task on the GCI site as "ready" did not always work. Having to ping admins to publish a task on the GCI site was cumbersome. We need a better workflow.
 * Set up a mailing list or IRC channel to contact org admins specifically, instead of random private pings on IRC and emails, so that mentor questions get answered and tasks get published in a timely manner, and to gather feedback at the end?
 * We had no "concept how to offer harder tasks at the end"; cf. learning curve. It was pure luck that we got mentors (like TTO) for harder tasks at the end.
 * We had way more mentors this year at the beginning, but this did not necessarily result in more tasks :(
 * Hangout session for mentors at the beginning worked well. (Srishti's idea) T150636
 * We need better examples for really well written tasks; e.g. jayvdb pointed to https://github.com/coala/coala/wiki/Google-Code-In-Task-Use-coala
 * In 2016, we went 16 times over the 36h review/feedback deadline which is way more than in previous years (however never longer than 44h; I've heard stories of 72h from other orgs).
 * Some input from TTO (not always exact quotes):
 * a lot of the tasks that have been offered have been almost too trivial even for GCI... Maybe before GCI 2017 we (as Wikimedia) should discuss what standard the GCI tasks should aim to be. Obviously there will always be a mix of easier and harder tasks, but tasks that require changing one or two lines of code don't seem suitable as standalone non-beginner tasks.
 * A recurring issue in our participation in GCI is availability of org admins. Last year I pointed out that all the org admins were in Europe, creating difficulties for mentors and students in the US and Asia. This year, the situation was a lot better, as mentors could now edit and accept tasks they weren't mentoring. But it still proved challenging to get new tasks published; waits of more than 24 hours, during periods when we were short on tasks, were often experienced. [...]
 * Provide clear list of org admins and ways to contact them on MediaWiki.org. It would be useful to place a separate table on the mentors subpage with this info.
 * The biggest issue for me, though, was the statement on the mentors subpage that "Tasks are supposed to take 2-3 hours to an experienced contributor." A number of the tasks created by other mentors were trivial, and would have taken the best part of 5 or 10 minutes for an experienced contributor to fix. Most of the students I worked with felt that they weren't learning anything from these tasks, and they were clamoring for some more juicy tasks to sink their teeth into. The fact that one student completed tasks at a rate of more than one per day on average is another sign that some tasks were too simple or involved too little work. I think org admins might need to take a bit of a harder line on task difficulty next time around, to ensure Wikimedia's GCI effort stays high-quality and more students remain engaged right through to the end. For example, a task could have required the student to remove a deprecated method in three extensions instead of one, or six extensions instead of three, etc.
 * jayvdb in https://codein.withgoogle.com/tasks/5785685814411264/ says "Set up a Vagrant instance (your development environment) - see instructions."; it is tagged "vagrant". Rather than change that task, maybe retitle it to be Vagrant-specific, and create other tasks about installing via Docker, installing from the package manager, etc., and have the participant report any problems. In each of these "set up a [dev|prod] environment" tasks, we should say something like "if you run into an unreported bug in the process, you should abandon this task and do [this task](link to create a bug) instead, and then maybe restart this task later."
 * AFAIK Dan / Analytics wanted an earlier heads-up for GCI (at least one month before) to have time to prepare tasks.
 * Consider an optional Hangout session after GCI has finished, to gather more feedback and lessons learned?
 * Work already performed by students cannot retroactively become a GCI task (GCI contest rules, section 4.2). However before GCI starts we plan potential tasks in public so there is a bit of a conflict here we cannot easily avoid (students could investigate tasks before claiming them but it is their risk if someone else is faster to claim the GCI task once available).
 * MovingBlocks' 'Introduce yourself to the community on IRC' tasks have a "Definition of Done" and "Suggested next task".

2015

 * We had tasks about extensions that do not list their software license (and how to add it). However, students often did not check all files, and the tasks turned out to require more investigation than initially thought.
 * We edited some on-wiki developer documentation, but results were mixed, as students often performed (more or less acceptable) copy&paste on each page without actually understanding what problem their wiki edits were solving.
 * The CI whitelist caused confusion (and wasted time, as a student had to wait half a day due to timezone differences just so someone else could say "recheck"). We either need more documentation on how to run tests locally, or ideally should add all students to the whitelist at the beginning. Without being whitelisted, only a subset of the tests is run by Jenkins, and that can cause confusion. cf. https://gerrit.wikimedia.org/r/#/c/261322/
 * Vagrant might need better documentation for the Windows platform?
 * How to encourage having more tasks with more than one mentor?
 * Better workflow for mentors creating tasks to tell admins they are ready to publish? Or is "Either add a "[READY]" prefix to the task summary to let org admins know, or contact them explicitly." sufficient?
 * Part of GCI is "getting work/tasks done", part is "recruiting new contributors / community members"
 * We lack a "pipeline" for directing students to further opportunities (try GSoC if you are 17 years old? Point to local hackathons?) / "GCI is missing a follow-up program for the best contributors. (Or maybe the MediaWiki/Wikimedia technical community is missing a non-time-limited mentorship system?)" Every year there are GCI participants with a "hire on sight" competence level and it would be great to keep them engaged.
 * Students sometimes still have no way to gauge the complexity / required skill level of a task, and making a task description perfect can be the enemy of good.