Reading/Web/Phabricator

The Web Team uses Phabricator for project, workflow and task management. This page documents how we use it and which boards we work with.

Projects/Boards

Workflow

We organize our work by classifying tasks into projects and setting each task's priority (harnessing the power of Phabricator search; see the sketch after the priority list below). As a secondary method we use board columns to sub-classify tasks, but only on the biggest boards.

Tasks that need the team's attention should be set to the "Needs Triage" priority. The team will then have a look, chime in, and assign a priority based on several factors (urgency, workload, need for discussion, etc.). Priority should be set by people intending to work on the task or by product owners prioritizing it for engineers to work on.

  • Unbreak now! tasks will be moved to the current sprint and worked on as soon as engineers are available. This priority also pings other teams (notably Release Engineering) to let them know that the bug may impact a deploy. If the task is not time-sensitive, use "High".
  • High priority tasks will be reviewed when planning upcoming sprints and will likely be tackled soon.
  • Medium priority tasks will be reviewed when the backlog of "High" priority tasks is low, and will be promoted to "High" if we're planning to work on them next.
  • Low priority tasks likely need to be discussed or are not a priority for the team. They may be re-prioritised during quiet periods, or are far from entering the team's workload in their current state.
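As an illustration of using Phabricator search for this classification, here is a minimal sketch (not the team's actual tooling) that pulls open tasks for a project at a given priority through the Conduit API. The project slug, the numeric priority values and the API token are assumptions; check them against the install you are querying.

  # Minimal sketch (assumptions: project slug, priority values, token) of querying
  # tasks by project and priority via Phabricator's Conduit maniphest.search endpoint.
  import requests

  PHAB_API = "https://phabricator.wikimedia.org/api/maniphest.search"
  API_TOKEN = "api-xxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder token

  def open_tasks(project_slug, priorities):
      """Return open tasks tagged with project_slug whose priority is in priorities."""
      params = {
          "api.token": API_TOKEN,
          "constraints[statuses][0]": "open",
          "constraints[projects][0]": project_slug,
      }
      # Conduit expects numeric priority values (e.g. 90 is "Needs Triage" on a default install).
      for i, priority in enumerate(priorities):
          params["constraints[priorities][%d]" % i] = priority
      response = requests.post(PHAB_API, data=params, timeout=30)
      response.raise_for_status()
      return response.json()["result"]["data"]

  # e.g. everything still waiting for triage on the team's backlog project (slug is an assumption)
  for task in open_tasks("reading-web-backlog", [90]):
      print("T%s: %s" % (task["id"], task["fields"]["name"]))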

High-level overview

Inside our backlog there is an "Epics/Goals" column. Epics have subtasks that live in either the backlog or the tracking column, providing easy access to related work. As we work on tasks they are pulled into the sprint board with a view to completing them within a 2-week period.

Backlog columns

The backlog is where we store work that we plan to work on in the future. It also contains a tracking column for work that impacts the team, or is the responsibility of the team, but that we do not plan to work on in the near future. Tasks that are marked low priority are hidden from view; we aim to review low-priority tasks on a regular basis. The tasks in the backlog may or may not have been evaluated. If a task has been evaluated and is ready to go, it will have a point estimate on it.

Incoming

Tasks are swept in via the Web dashboard. The dashboard should be manageable by all team members of #Reading-Web-Backlog. Click manage panel on any panel to edit it (and if you can't see the manage option, ask someone on the team for that privilege!). Click customization to see the complex query that powers the dashboard. This query is dictated by Reading/Component responsibility#Extensions.

For triaging we use the backlog. Tasks that come from outside our workflow are picked up as part of a shared chore list. The backlog by default only shows tasks that are priority Medium or higher.

"Incoming" is the default destination for new tasks. Tasks are expected to move from this column in the following manner:

  • Anything that needs product owner input is moved to "Needs Prioritization (Product)", with a comment explaining why, for the product owner to discuss and prioritise. The product owner should move tasks that need analysis or further work onto the sprint board within a 2-week period.
  • Anything that is purely technical (i.e. no or minimal input needed from product or design) should be moved to "Needs Prioritization (Tech)".

Epics/Goals

High-level tasks that represent the team's goals for the fiscal year. An epic is a big chunk of work that encompasses many smaller chunks of work, and often takes more than a couple of dev cycles (2 weeks) to finish. It might consist of multiple steps and/or have multiple subtasks. For example, an epic might involve running an A/B test, with subtasks such as "Start the A/B test", "End the A/B test" and "Analyse the A/B test results". Subtasking is encouraged because of the two-week guideline (see below). Generally, we should have at most 6 epics in flight at any given time because of the overhead of context switching.

Not ready to estimate

Tasks that require investigation or other pre-work before the team can evaluate them.

Backlog

Tasks that are appropriate for the team to work on, though they may not be done within the current fiscal year.

Current Fiscal Year

Tasks that are not yet ready to go into a sprint but should be completed before the end of the fiscal year.

Milestones for the current and next two sprints

Milestones open a separate board for each sprint. Generally, unless the task is an epic, the expectation is that adding a task to the current sprint means it will be resolved within a 2-week window. Sprint boards should be archived after 2 weeks.

Analyst Consultation

Work that requires input or action from the team's supporting data analyst. These move onto the sprint board at the discretion of the product manager and analyst.

Community Consultation

Work that requires input or action from the team's supporting community relations specialist. These move onto the sprint board at the discretion of the product manager and CRS.

Needs Prioritization (Tech)

Workboard (milestone)

Work that the Web tech lead is responsible for scheduling. Tasks here will be processed and tagged from a tech / engineering perspective.

Things that need significant product or design input should be added to #Readers-Web-Backlog.

The inbox is monitored, there is a prioritization process, and the board is primarily owned by @Jdlrobson.

Columns in the workboard identify common areas of focus. The existence of a column does not promise that its tasks will be worked on; however, columns on the left are considered higher priority than columns on the right.

Tracking

Used for tasks that are of interest to the Web team but are not being worked on by them. Tasks in tracking may move into the prioritisation pipeline at any time based on team/community needs. Note that the tech lead might use a user board tag, e.g. User-Jdlrobson, to track work relating to upcoming epics/projects.

Sprint columns

The sprint board is a milestone of the backlog. It has its own columns and process. The sprint board is the single source of truth for what the team is currently working on.

Tasks that enter the sprint board are expected to move along the board to the right-hand side quickly and then get resolved (closed). The sprint board is not the place for long-term work, or for things which do not concern at least 3 members of the team.

For instance, if a developer is working on a side project that makes an alarm go off every time a wiki page gets edited, but she's not seeking review, design input, QA or analysis from any of her colleagues, that does not belong on the board. If a designer is working on some mocks to prepare for the next quarter, but only requires input sporadically from a developer and product person, that also does not go on the board.

Any time a task is moved onto the sprint board, a detailed comment should accompany the move.

Incoming

Tasks here either need estimation, or have an estimate and will move into the "Ready for Development" column during the sprint kickoff.

Ready for Development

Fully evaluated and estimated tasks that people can pick up and work on. Generally tasks are picked from the top, but depending on estimates and time available, it is fine to pick other, smaller tasks. Tasks appear here from "Incoming" (see above).

Blocked on Others

Progress on these tasks is blocked on someone from another team. This is flagged during standups, during which work on the task may be reconsidered/re-evaluated. Tasks that remain blocked for long periods should be removed from the sprint board, because of the two-week guideline.

Doing

Tasks here are being worked on but are not yet at a stage where others can participate. The assignee is the point person for the task; they may pull in others as needed, but that responsibility sits with them.

Design Review

Many tasks need input before we can even consider merging them and sending them to our users. This checkpoint allows a designer to have their say on in-flight work. Note that in some cases, tasks will move between "Code Review" and this column depending on the work.

Code Review

Tasks here need feedback from others. Usually this relates to code, but it can also relate to asking for feedback on a write-up or a spike. The person assigned to the task is the person who does the review.

Needs More Work

Tasks here require further development before moving on. Often tasks that fail to pass "Code Review" or "Needs QA" will move back to this column for code fixes.

QA

Tasks move here to ensure that code is bug-free. Tasks must include steps for testing, as well as where the code should be tested, if applicable. Once a task has passed testing it should be moved to the Ready for Signoff column.

Ready for Signoff

When a task moves here, someone (often the product owner) signs off that everything that was asked for has been done and that a certain level of quality has been achieved. The signer-off will scrutinize the task and check that it matches the work done. New tasks can be created on the back of this work if necessary. Some tasks have signoff steps which must be followed before resolving the task.

Signoff Criteria

The following guidelines apply for the successful completion of a single task:

  1. Review QA & QA screenshots - does everything look okay? Is there something odd/unexpected?
  2. Test on at least one browser and device.
  3. Review the description - make sure all the boxes are checked off in the acceptance criteria, signoff criteria, etc.
  4. Check for subtasks - if there are open subtasks, can we move them to a different task, or are they a blocker for signoff?
  5. Check if follow-up tasks need to be created and create them if necessary.
  6. If no testing in production is needed, the task can be closed as resolved; otherwise it should be moved to "QA in Prod".

QA in Prod

These tasks must be tested in a production environment. Once a task has passed production testing, it can be closed as Resolved.

Evaluating tasks

Evaluation should take at most 1 hour. Any tasks that are ready for estimation should have an estimation scheduled or performed if necessary. Tasks are arranged by priority, as in all other columns, with the top card being the most important.

Evaluation can:

  • deem a task not ready to be worked on: spikes should be created to make it ready, and the task should move back to "Not ready to estimate"
  • deem a task unimportant: the task should be dropped in priority and put in "Backlog", e.g. "This bug only impacts 1 person! Why is this important?" or "This task is huge - in fact it's an epic. It needs to be broken down."
  • in the case of an incredibly easy task, result in a patch to fix it; flag in standup that it still needs an estimation
  • make a task estimable: once the task has an estimate on it, it can be moved to "Incoming" on a sprint board

Things to think about when evaluating

Spend at most one hour looking at a task.

  • If it's a bug, can you reproduce it? Can you explain to somebody else how to reproduce it? If it's not reproducible, move it to "Needs QA" on the sprint board.
  • Is the task clear enough that you could explain it to somebody else? I.e. what context was missing when you started looking at it?
  • Will it take more than an hour? If so, should it be, or does it need, a spike?
  • If it is a bug, is the source of the bug known? Can you isolate it to a particular browser/module? Even better, can you find a minimal test case that fails?
  • If it's a feature, is it clear how to do it? Can you write a strawman proposal?
  • Are any mocks needed?
  • Are there any developer notes that can be added to make a task more useful?

Developer Notes

Developer notes are added to tasks during evaluation to share information with the rest of the team that is helpful for them to discuss and estimate the task. If a task needs input from the design lead, @mention them on the task.

Possible outcomes of evaluation

  • You might find it's super easy and solve the issue. Post a patch and ping the tech lead to get it fast-tracked.
  • You might discover its priority is wrong - for example, if a bug impacts 0.001% of page views, there's probably not much urgency to fix it.
  • You might discover it's a bigger problem than we first thought. Bump the priority.
  • You'll make the task easier for whoever ends up working on it.
  • You may determine the work is better suited to another team and should not be in our backlog.

What to do when evaluation has completed

When you have finished an analysis, move the task to the "Incoming" column. It's important for the team to discuss the task as a group before continuing with it, so work should not continue on the task until this has happened and a point estimate has been added.

Estimating tasks

We estimate as a team, discussing the task together so that everyone has a shared understanding of the work. Use this time to ask questions, raise concerns, and offer suggestions. This is not a deep dive into the task, but a (hopefully) brief conversation for everyone to gauge how much work it will be.

Use the Fibonacci sequence for points; a rough conversion to days is sketched after the list below:

  • 1 = 0.5 days
  • 2 = 1 day
  • 3 = 1.5 - 2 days
  • 5 = 2 - 3 days
  • 8 = 5 days
  • 13 = 1+ week
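
For instance, a hypothetical helper (not part of any team tooling) could use roughly the upper end of each range above to sanity-check how much work a set of estimated tasks represents:

  # Hypothetical helper: translate story points into a rough day estimate
  # using (approximately) the upper end of each range in the list above.
  POINTS_TO_DAYS = {1: 0.5, 2: 1, 3: 2, 5: 3, 8: 5, 13: 7}

  def rough_days(task_points):
      """Sum the approximate days for a list of per-task point estimates."""
      return sum(POINTS_TO_DAYS[p] for p in task_points)

  print(rough_days([2, 3, 5, 8]))  # -> 11 days, probably too much for one person in one sprint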

Each person offers their estimate via a tool like planning poker, where all estimates are initially hidden. Once everyone has submitted their estimate, the numbers are revealed. We then discuss any discrepancies among the estimates until we arrive at a consensus.

The estimation is added to the task via the "story point" field. The task can then be moved from the "Incoming" column to "Ready for Development" when the sprint kickoff begins.
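As a sketch only (estimates are normally entered through the web UI), the same field could be set from a script via Conduit's maniphest.edit endpoint, assuming the "points" transaction type is enabled on the install; the task number and token below are placeholders:

  # Sketch: set the story point field on a task via Conduit maniphest.edit.
  # Assumes Maniphest points are enabled; the task ID and token are placeholders.
  import requests

  def set_points(task_id, points, api_token):
      params = {
          "api.token": api_token,
          "objectIdentifier": "T%d" % task_id,
          "transactions[0][type]": "points",
          "transactions[0][value]": points,
      }
      response = requests.post(
          "https://phabricator.wikimedia.org/api/maniphest.edit",
          data=params, timeout=30)
      response.raise_for_status()
      return response.json()

  # e.g. set_points(123456, 3, "api-...")  # hypothetical task number and token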

3-day / 2-week guideline

The 3-day guideline

If an in-progress task has not had any update in 3 days, and/or has not moved from a column in 3 days, the team will take this as a signal to investigate why. The team understands that there are valid reasons for this (vacation, illness, work being slightly harder than expected, etc.), and simply wants to make sure the assignee of the ticket has what they need to proceed. If a task must remain static for longer than 3 days, the team may consider removing it from the board (and stopping work on it), or spinning off follow-up tasks, etc.
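
To make the guideline concrete, a minimal sketch (illustrative only, not team tooling) could flag open tasks on a sprint project that have not been modified in 3 days, assuming Conduit's maniphest.search supports the modifiedEnd constraint; the project slug and token are placeholders:

  # Illustrative sketch: find open tasks on a sprint project with no update in 3 days.
  import time
  import requests

  PHAB_API = "https://phabricator.wikimedia.org/api/maniphest.search"
  THREE_DAYS = 3 * 24 * 60 * 60

  def stale_tasks(sprint_project_slug, api_token):
      params = {
          "api.token": api_token,
          "constraints[statuses][0]": "open",
          "constraints[projects][0]": sprint_project_slug,
          # Only tasks last modified before the 3-day cutoff (epoch seconds).
          "constraints[modifiedEnd]": int(time.time()) - THREE_DAYS,
      }
      response = requests.post(PHAB_API, data=params, timeout=30)
      response.raise_for_status()
      return response.json()["result"]["data"]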

The two-week guideline

If a task has not been resolved within 2 weeks of entering the team's "in-progress" board (typically a Kanban board), or is lingering in a column for an extended period of time, suggesting it will not be resolved within 2 weeks, the team interprets this as a red flag and investigates. When noticing this is the case, any team member should raise the situation as a concern (ideally in the team standup, but email or in person also works). There are various reasons a task may be blocked in a column or not resolved, and it's worth asking the following questions:

  • Has scope crept?
    • It's possible the task was not well defined up front, and/or new information has emerged. This happens. However, it might indicate one of several things:
      • The task is not understood
      • New tasks need to be created, including spikes
  • Is another team not prioritising this task as highly as us?
    • In some cases, we will work with another team and it might be the case that they are not prioritising the work as highly as us. In this situation we might want to consider rescheduling the work or finding other ways to make progress that are not blocked on others.
    • We'll probably want to move the task out of the sprint board and pursue it separately in a way that doesn't distract the rest of the team.
  • Is the task actually an epic?
    • If various subtasks are being created (or related tasks that are not associated with the task), this has probably happened. Label it as an epic and move it to the epics column (or the backlog if the product owner doesn't think this is something we should commit to).
  • Is there a misunderstanding of what done means for the task?
    • If a task lacks acceptance criteria, it's possible this has happened.
    • Understand the expectations/confusions in the task.
    • Consider splitting out into multiple tasks if different disciplines are expecting different things.
      • Example: A task is created to build a prototype, but the card has remained in design review for 3 weeks. An engineer may have expected "done" to mean "deploy the prototype", whereas the designer may be running design research on the prototype and expect "done" to mean "run and finish the design research". In this case, we might amend the current task or create two tasks, "Deploy the prototype" and "Perform design research on the prototype", so it's clearer.