Talk:Wikimedia Language engineering/Definition of Done


TODO - define "done" for the following

These need to be clarified:

  • Manual testing tasks
      • (These would include tests that need manual verification)
      • The functionalities to be tested are clearly identified and verified against the designs
      • The test cases, with the steps to follow, are clearly documented
      • The test case set can be replicated and provides for recording test results and observations
      • A test environment is set up (either by the team or with the help of a system administration team member) prior to the start of manual testing
      • Test events are announced and conducted within the team
      • Test events are announced and conducted with participants from outside the team
      • Test results are collated and published

Testing Tasks

  • Test Specification and Preparation task
      • The test scenarios for the UI functionality are written in GWT (Given-When-Then) form. This is the first step, before automatic or manual test cases are prepared from them. (A sample scenario is sketched after this list.)
      • A test environment is set up (either by the team or with the help of a system administration team member) prior to the start of testing.
  • Automatic Testing task
      • The GWT test scenarios for which automatic tests can be prepared are identified (this is a separate task so that the load/complexity of the actual test-writing task can be estimated).
          • This should probably be part of story analysis, not a task. Do you believe it is big enough to be a separate task? [AA]
              • Hmm, I think you are right. This should be part of story analysis. [RB]
      • The automatic tests for the identified GWT scenarios are written in Ruby and the code is reviewed. This will require coordination with QA. (See the Ruby sketch after this list.)
          • And if my reply above is true, then this is just a regular coding task. [AA]
  • Verification Testing task
      • Completed code reviews are verified for functionality by a second developer/designer.
          • Looks good to me, but up for discussion. [AA]
  • Manual Testing task (if required)
      • What's the difference from the previous one? Is it a big full-regression testing task, like what we did before the deployment? [AA]
      • The GWT test scenarios for which manual tests are required are identified.
      • A document with usable test cases, including the steps to follow, is prepared. This test case set can be replicated and provides for recording test results and observations.
      • Test events are announced and conducted within the team.
      • Test events are announced and conducted with participants from outside the team. This will require coordination.
      • Test results are collated, summarized and published.
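
As an illustration of the GWT (Given-When-Then) format referenced above, a scenario for the UI might read as follows; the feature, wording and page are invented purely for illustration:

    Scenario: Selecting a target language for translation
      Given I am logged in and on the Special:Translate page
      When I choose "French" as the target language
      Then the message list shows the untranslated messages for French

For the automatic tests written in Ruby, one possible (and deliberately minimal) way to drive such a scenario through a browser is sketched below, using the watir-webdriver gem. The URL, element ids and assertion are assumptions for illustration, not the actual implementation:

    # Minimal sketch only: automates the hypothetical GWT scenario above.
    # The URL and element ids are assumptions, not the real page structure.
    require 'watir-webdriver'

    browser = Watir::Browser.new :firefox

    # Given: open the page under test
    browser.goto 'https://example.org/wiki/Special:Translate'

    # When: choose the target language (selectors are hypothetical)
    browser.text_field(id: 'target-language').set 'French'
    browser.button(id: 'language-confirm').click

    # Then: assert that untranslated messages are listed
    raise 'No untranslated messages shown' unless browser.div(id: 'message-list').present?

    browser.close

In practice such scripts would plug into whatever shared test runner and setup QA uses, which is why the item above calls out coordination with QA.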

Coding Tasks

We should also add:

  • No (known) silent errors.
  • No (known) problems with the public method-call API.
  • jsdoc for all public methods, documenting parameters and return values (including their types); see the sketch below.
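
A sketch of what such jsdoc could look like for a hypothetical public method; the method name, parameters and behaviour are invented for illustration:

    /**
     * Change the language of the user interface. (Hypothetical example.)
     *
     * @param {string} languageCode Language code to switch to, e.g. 'fi'
     * @param {boolean} [savePreference] Whether to remember the choice for the user
     * @return {jQuery.Promise} Promise that resolves once the interface has been updated
     */
    function setInterfaceLanguage( languageCode, savePreference ) {
        // ... implementation omitted ...
    }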