Core Platform Team/PET Work Processes/PET Ideal User Story

Overview
Here we cover the general outline of an optimal User Story, one that meets the Platform Engineering Team's Platform Intake Standards.

The outline is based on an existing task, T252202, that was built using a TDD process involving Product and Engineering breaking down the user story together and focusing on atomic-level work.

This document will be refined and updated as our User Story process improves.

User Story Goals
Our core goals in using user stories are the following:


 * 1) A clear story that upon completion delivers value
 * 2) To be able to iteratively and continuously release work
 * 3) To have clear done criteria that drive testing and validation

How to use this doc
The first section below outlines some example User Stories that build into an overarching Epic of work. These stories can be used as a guide to gauge the level of atomicity we're aiming for with User Stories.

The second section details the process we took to break down an initial User Story into more atomic representations that would allow us to iteratively deliver value by completing and merging each unit of work.

Epic
As a Contributor, I want to get a list of revisions that I have made, to get a sense of the scale and importance of my own work

Story 1
[ ] A non-logged-in client will receive a Status 401

Story 2
[ ] Returns a list of N page revisions by the current logged-in user


 * Response object must be JSON
 * Response object must contain the following fields:
    * revisions: an array of 0 to segment_size revision objects, each with the following information:
       * id: revision id
       * comment: edit summary of the change, provided by the user
       * timestamp: date of change, YYYY-MM-DDTHH:MM:SSZ
       * size: count of bytes
       * page: page that was modified, a JSON object with the following properties: {id, key, title} (see the schema for details)
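A minimal sketch of what a single revision object from Story 2 might look like, with a helper that checks the required fields. The field names follow the list above; every value here is invented for illustration:

```python
# Hypothetical sample of one revision object; field names follow the
# Done Criteria above, but all values are invented.
revision = {
    "id": 987654,
    "comment": "Fix typo in lead section",
    "timestamp": "2020-05-28T14:03:21Z",
    "size": 1234,
    "page": {"id": 42, "key": "Example_page", "title": "Example page"},
}

REQUIRED_FIELDS = {"id", "comment", "timestamp", "size", "page"}

def validate_revision(rev):
    """Return True if the revision object carries every required field."""
    return REQUIRED_FIELDS <= rev.keys()

assert validate_revision(revision)
```

A check like this maps directly onto the done criteria, which is the point of writing them as testable statements.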

Story 3
[ ] Client should be able to request the previous (later in time) segments. The response should contain the following properties:


 * "older": Full link to API endpoint + "before|" + rev ID or rev timestamp. May be null if there are no known older revisions.
 * "newer": Full link to API endpoint + "after|" + rev ID or rev timestamp
 * "latest": Full link to API endpoint without parameters

Approach
The goal of a User Story is twofold: first, to deliver, upon completion, clear value that furthers the objective of an overarching Epic; second, to be atomic enough to represent a single unit of work.

Our initial User Story was: "As a Contributor, I want to get a list of revisions that I have made, to get a sense of the scale and importance of my own work."

Our goal was to break this work down into a consistent, actionable structure containing:


 * Story
 * Done Criteria
 * Design/Interface/Mockups/References

Our initial User Story looked like this:

Story
"As a Contributor, I want to get a list of revisions that I have made, to get a sense of the scale and importance of my own work."

A rough equivalent of the user contributions page in REST form. Compare T235073, which gets the contributions for other users.

Done Criteria
[ ] A non-logged-in client will receive a Status 401

[ ] Returns a list of N page revisions by the current logged-in user


 * Response object must be JSON
 * Response object must contain the following fields:
    * revisions: an array of 0 to segment_size revision objects, each with the following information:
       * id: revision id
       * comment: edit summary of the change, provided by the user
       * timestamp: date of change, YYYY-MM-DDTHH:MM:SSZ
       * size: count of bytes
       * page: page that was modified, a JSON object with the following properties: {id, key, title} (see the schema for details)

[ ] Suppressed revisions are not exposed to unauthorized users, but visible to authorized users

[ ] Client should be able to request the next (earlier in time) segments. The response should contain the following properties:


 * "older": Full link to API endpoint + "before|" + rev ID or rev timestamp. May be null if there are no known older revisions.

[ ] Client should be able to request the previous (later in time) segments. The response should contain the following properties:

 * "older": Full link to API endpoint + "before|" + rev ID or rev timestamp. May be null if there are no known older revisions.
 * "newer": Full link to API endpoint + "after|" + rev ID or rev timestamp
 * "latest": Full link to API endpoint without parameters

[ ] There is a stable chronological order among the revisions, based on the combination of revision ID AND revision timestamp
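The stable-ordering criterion could be sketched as a sort over the (timestamp, ID) pair, which breaks ties between revisions saved in the same second:

```python
def sort_revisions(revisions):
    """Order revisions newest-first, breaking timestamp ties by revision ID
    so the ordering is stable and unambiguous."""
    return sorted(revisions, key=lambda r: (r["timestamp"], r["id"]), reverse=True)

revs = [
    {"id": 100, "timestamp": "2020-05-28T14:03:21Z"},
    {"id": 101, "timestamp": "2020-05-28T14:03:21Z"},  # same second as id 100
    {"id": 99, "timestamp": "2020-05-01T09:00:00Z"},
]
assert [r["id"] for r in sort_revisions(revs)] == [101, 100, 99]
```

This works because ISO 8601 timestamps (YYYY-MM-DDTHH:MM:SSZ) sort correctly as plain strings.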

[ ] Each revision should also have the field: delta: +/- count of bytes changed from previous

[ ] Each revision should also have the field: tags: array of Tag objects with [ tag, url ], per schema Patch

Designs/Interface/Mockups
GET /me/contributions?segment=&limit=

Segmented contribution history for the current logged-in user. (I'm calling it "segmented" instead of "paged" so we don't all go crazy.)

segment_marker: ( "before" | "after" ) "|" timestamp "|" revision_id

segment_size: size of the segment to get; minimum is 1, default is 20, maximum is 100
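The parameter grammar above could be sketched as a small parser and bounds check. These helper names are hypothetical, not part of the design:

```python
def parse_segment_marker(marker):
    """Split a marker of the form ("before" | "after") "|" timestamp "|" revision_id."""
    direction, timestamp, revision_id = marker.split("|")
    if direction not in ("before", "after"):
        raise ValueError("direction must be 'before' or 'after'")
    return direction, timestamp, int(revision_id)

def validate_segment_size(size=None):
    """Apply the documented bounds: minimum 1, default 20, maximum 100."""
    if size is None:
        return 20
    if not 1 <= size <= 100:
        raise ValueError("segment size out of bounds")  # would map to a 400
    return size

assert parse_segment_marker("before|2020-05-28T14:03:21Z|987654") == (
    "before", "2020-05-28T14:03:21Z", 987654)
assert validate_segment_size() == 20
```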

Request body: none

Notable request headers: none

Status:


 * 200 - OK


 * 401 - not authenticated


 * 400 - invalid segment marker or segment size is out of bounds
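As an illustration, the status selection can be sketched as a tiny decision function. Checking authentication before parameter validity is an assumption; the design does not state an order of precedence:

```python
def response_status(authenticated, marker_valid, size_in_bounds):
    """Map request conditions to the status codes listed above.

    Assumption: the 401 check takes precedence over the 400 check.
    """
    if not authenticated:
        return 401
    if not (marker_valid and size_in_bounds):
        return 400
    return 200

assert response_status(False, True, True) == 401
assert response_status(True, False, True) == 400
assert response_status(True, True, True) == 200
```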

Notable response headers: none

Body: JSON, an object with the following fields:

 * older: full link to the API endpoint for the next older segment of results. The oldest segment will have this field null; all other segments will have a value.
 * newer: full link to the API endpoint for the next newer segment of results. This will never be null (to account for edits occurring after we get results).
 * latest: full link to the API endpoint for the latest values, usually just this endpoint with no parameters
 * revisions: an array of 0 to 20 revision objects, in reverse chronological order, each with the following information:
    * id: revision id
    * comment: edit summary of the change, provided by the user
    * timestamp: date of change, YYYY-MM-DDTHH:MM:SSZ
    * delta: +/- count of bytes changed from previous
    * size: count of bytes
    * page: page that was modified, a JSON object with the following properties: {id, key, title} (see schema for details)
    * tags: array of Tag objects (see schema)
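Putting the pieces together, an end-to-end sample of the response body might look like this. The field layout follows the design above; the URLs and all values are invented:

```python
# Invented sample of the full response body; field names follow the
# design above, but every URL and value here is hypothetical.
body = {
    "older": "https://example.org/w/rest.php/me/contributions?segment=before|2020-05-01T09:00:00Z|99",
    "newer": "https://example.org/w/rest.php/me/contributions?segment=after|2020-05-28T14:03:21Z|101",
    "latest": "https://example.org/w/rest.php/me/contributions",
    "revisions": [
        {"id": 101, "comment": "Update infobox", "timestamp": "2020-05-28T14:03:21Z",
         "delta": -25, "size": 2301,
         "page": {"id": 7, "key": "Example", "title": "Example"}, "tags": []},
        {"id": 99, "comment": "Create article", "timestamp": "2020-05-01T09:00:00Z",
         "delta": 2326, "size": 2326,
         "page": {"id": 7, "key": "Example", "title": "Example"}, "tags": []},
    ],
}

# Revisions must appear in reverse chronological order.
timestamps = [r["timestamp"] for r in body["revisions"]]
assert timestamps == sorted(timestamps, reverse=True)
```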

Obstacles to complete the story
The story now contained a clear definition of done and a detailed design of the required interface. However, as we began to progress the work, it became clear that the task was not in any way atomic:


 * 1) It covered several distinct areas of functionality that individually delivered value
 * 2) In order to deliver these features, the entire task would need to be completed, meaning the work could not be done iteratively
 * 3) The task was much larger than could be completed within a single sprint

Solution
To make the work actionable, we broke each of the identified Done Criteria out as an individual story, each delivering value when completed and allowing us to release progress iteratively.