Quality Assurance/Strategy

Manual testing
This group will become our central repository for information about manual/functional/feature testing.

Goals

 * Improve Wikimedia software products:
   * User-perceived quality.
   * Areas difficult to cover with automated testing.
 * Grow the Wikimedia technical community:
   * Accessible entry point for Wikimedia users and editors. No technical skills required.
   * Good motivation for experienced and professional testers.

We still need a central "these are our QA priorities" page.

Volunteer profiles

 * Wikipedia/Wikimedia editors
 * Motivated users willing to try what's coming next.
 * Experienced / professional testers willing to contribute.
 * Companies developing products where Wikipedia/Wikimedia software needs to run well.
 * ... and of course other regular contributors at https://bugzilla.wikimedia.org/ willing to get involved in a more structured way.

Consolidating a testing team
We need to identify, empower, and hand leadership to those experienced in testing and QA, and to those experienced in Wikimedia software and community.

We must build a healthy meritocratic structure with a dose of fun, and incentives for those making great progress and helping others progress as well.

Based on this, there is another view of the profiles required:


 * Senior testers - can help and teach others.
 * Organizers - can increase the quantity and quality of QA activities.
 * Connectors - can bridge QA volunteers efforts with development teams.
 * Promoters - can help reach out to new volunteers.

Activities
In theory, almost all combinations of the following apply:


 * Testing / bug triaging.
 * Online / on-site.
 * DIY / team sprint.
 * Synchronous / asynchronous.

However, not all combinations are equally productive in different contexts and toward different goals. For instance, a face-to-face team sprint requires a well-defined scope and goals, and heavy involvement from the development team. Individual tasks can deliver great results as long as they are not tied to urgent deliveries or critical paths.

We need good documentation so we can efficiently replicate at least these cases:


 * Online testing sprint: how to organize, announce, perform, evaluate.
   * Proposed: right after deployment of new MediaWiki versions to non-Wikipedia wikis.
   * Proposed: right after feature deployments.
   * Note: this requires effective announcements and release notes.
   * See Mozilla Test Days, Fedora Test Days, and our own Weekend Testing Americas held on 2012-05-05 and Article Feedback Testing on 2012-06-09.
   * See also Session-based testing.
 * Individual testing: tasks that a person can perform and report on anytime / anywhere.
 * Individual bug triaging: reports to look at, and instructions to improve their status.

Reaching out
We need to go beyond sporadic, isolated efforts and build a continuous, incremental flow of activities. The success of each activity must contribute to future successes.

We need to let people know about ongoing / DIY opportunities as well as events. We need to reach out to the current MediaWiki / Wikimedia communities as well as to external groups and potential new contributors.


 * Calendar: a central place where activities are announced. It should be possible to subscribe and receive notifications of new activities.
 * QA communities: reaching out and having processes in place to promote our activities.
 * Work with promoters to spread the news.
 * Contact companies testing Wikipedia in their products, e.g. browser developers.
 * Organize on-site activities engaging local groups.

Follow-up activities
Testing events require follow-up to:


 * Evaluate and announce the outcome.
 * Triage and process the feedback received into the regular development flow.
 * Keep the contributors engaged.
 * Warm up for the next event.

For instance, it is a good idea to organize an online bug triaging sprint after a testing event.

We need a way to keep in touch with participants in events and get back to them.

Measuring success
Team activities:
 * Number of events.
 * Diversity of events across development areas, promoters and locations.
 * Number of participants.
 * Effectiveness in welcoming new contributors.
 * Effectiveness in retaining and promoting current contributors.
 * Quantity and quality of reports created / triaged.
 * Impact on software releases and Bugzilla statistics.
 * Contributions to the goals of the Wikimedia Foundation.
 * Feedback from the related development team.

Individual activities:
 * How easy it is, and how long it takes, to make a first contribution.
 * Positive / negative feedback, complaints, bugs.
 * Individual contributors showing up in community channels and team activities.
 * Statistics on individual contributors (to be defined).
 * QA contributor retention.

Community incentives
To be defined. Some ideas:


 * Tester barnstar.
 * "I test Wikipedia" shirt.
 * Sponsored training e.g. AST courses.
 * Sponsored travel to Wikimania.

Test automation

 * Browser testing Group
 * How community works for browser test automation
 * Jenkins host with connections to Sauce Labs
 * QA/test backlog

Future
Emphasis on good unit tests
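As a sketch of what an emphasis on good unit tests can look like in practice, here is a minimal illustrative example in Python's `unittest` style (MediaWiki code itself is tested with PHPUnit and QUnit, but the principles are the same). The `normalize_title` function is hypothetical, invented for this example; it is not MediaWiki's actual title-normalization code:

```python
import unittest


def normalize_title(title: str) -> str:
    """Collapse runs of whitespace and replace spaces with underscores.

    Hypothetical helper, loosely inspired by how page titles appear in
    URLs -- not MediaWiki's real algorithm.
    """
    collapsed = " ".join(title.split())
    return collapsed.replace(" ", "_")


class NormalizeTitleTest(unittest.TestCase):
    """Qualities of a good unit test: one behavior per test,
    descriptive names, no shared state, fast and deterministic."""

    def test_spaces_become_underscores(self):
        self.assertEqual(normalize_title("Main Page"), "Main_Page")

    def test_surrounding_and_repeated_whitespace_is_collapsed(self):
        self.assertEqual(normalize_title("  Main   Page "), "Main_Page")
```

Such a file would typically be run with `python -m unittest`; tests this small and self-contained are cheap to write, cheap to maintain, and catch regressions long before a manual testing sprint would.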