Language Testing Plan/Testing Strategy

Goals of this document
This document describes the current and future testing procedures and strategy of the Wikimedia Language Engineering team.

Testing workflow
The current testing workflow is not well defined or well documented. We use two types of testing in our workflow: manual and automated.

For each developer's testing methodology, see the spreadsheet at https://docs.google.com/a/wikimedia.org/spreadsheet/ccc?key=0AjtYqLvFZZVndFhCcFM0aU5WUkwtRW9OQ2l0dU5IS2c&usp=drive_web#gid=1

Design reference

 * Feature conception
 * GWTs

Manual testing
 * Manual running of unit tests such as PHPUnit and QUnit tests where applicable. See the 'Unit tests' section below for the number of unit tests in our extensions at the moment.
 * Manual testing of patches submitted to Gerrit.
 * Manual testing of MLEB is done once a month (the release is usually near the end of the month). Currently this covers the Universal Language Selector (ULS) extension; the other extensions receive little or no manual testing in MLEB.
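As an illustration, a QUnit-style unit test of the kind counted below might look like the following sketch. The `normalizeLanguageCode` function is hypothetical, and a tiny stand-in for QUnit's `test`/`assert` API is included so the snippet runs on its own; in an extension the real QUnit harness provides these:

```javascript
// Minimal stand-in for QUnit's test()/assert API so this sketch is
// self-contained; in a real extension the QUnit harness provides these.
function test( name, callback ) {
	var assert = {
		strictEqual: function ( actual, expected, message ) {
			if ( actual !== expected ) {
				throw new Error( name + ': ' + message );
			}
			console.log( 'ok - ' + name + ': ' + message );
		}
	};
	callback( assert );
}

// Hypothetical function under test: normalize a language code.
function normalizeLanguageCode( code ) {
	return String( code ).toLowerCase().replace( /_/g, '-' );
}

test( 'normalizeLanguageCode', function ( assert ) {
	assert.strictEqual( normalizeLanguageCode( 'pt_BR' ), 'pt-br',
		'underscores become hyphens and case is lowered' );
	assert.strictEqual( normalizeLanguageCode( 'EN' ), 'en',
		'plain codes are lowercased' );
} );
```

Running such a test manually before submitting a patch is the kind of check meant by the first bullet above.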

Automated testing
 * Automated browser testing on beta wikis and several instances.
 * Automated testing after a patch is submitted to Gerrit. This checks the patch with jslint, PHP CodeSniffer and other syntax checks. This procedure (along with the release process) is documented in the Continuous Delivery Process diagram.


 * Browser tests: coding, review and maintenance (i.e. how they are changed when the original feature changes or bugs are fixed).


 * Pairing with QA on:
 * Jenkins configuration fixes (optional)
 * Bug fixes.
 * Beta labs reliability.

Minimal testing
'Minimal testing' is defined here as the testing we should always do before and after submitting changes to production.

Manual

 * The change passes the QUnit tests.
 * The change passes the PHPUnit tests.
 * The change passes the browser tests.

Automated

 * The change passes all browser tests. Failures are reported on CloudBees or via email (if an email address is set in the job).
 * The change satisfies all criteria of what the patch is supposed to do, as described in the commit message.

Average testing
An average testing scenario includes everything in the minimal testing case plus the following test cases:

Manual browser testing

 * We run the browser tests ourselves, i.e. run the tests manually against the patch.
 * We test the change with different browsers.

Ideal testing
Ideal testing includes all scenarios from minimal and average testing plus the following. The ideal scenario also relies on a testing procedure that is well documented and uses systems like a Test Case Management System (TCMS).


 * A unit test is written for each backend patch/change.
 * A browser test is written for each frontend change.
 * We test the change with several aspects in mind: for example, we do stress testing, try to make the tests fail, and try edge cases.
 * We use different browsers to test changes.
 * Several people are involved in testing when needed, i.e. more eyes.
 * If the change is big, we set up instance(s) on Labs for more robust testing (we have already done this for CX, for example).
 * We use a Test Case Management System for manual testing. (In progress)
 * Browser tests are always green!

Browser tests
We run browser tests on several beta wikis and instances. The following are the numbers of scenarios for each extension our team maintains:


 * Universal Language Selector: 45
 * Translate: 35
 * Content Translation: 11
 * Twn Main Page: 23

Unit tests
We have two types of unit tests for our extensions. There is ongoing discussion about writing unit tests for the Node.js part of Content Translation.

QUnit tests
ULS also inherits the following QUnit tests from the upstream jquery.* libraries. Note that the numbers are the total assertions in the unit tests.
 * Universal Language Selector: 3
 * jquery.i18n: 160
 * jquery.ime: 5109
 * jquery.uls: 48
 * jquery.webfonts: 29
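Taken together, the assertion counts that the ULS stack inherits can be totalled; a small sketch using the figures listed above:

```javascript
// Assertion counts listed above: ULS plus the upstream jquery.* libraries.
var assertionCounts = {
	'Universal Language Selector': 3,
	'jquery.i18n': 160,
	'jquery.ime': 5109,
	'jquery.uls': 48,
	'jquery.webfonts': 29
};

// Sum the per-library counts to get the total assertions in the stack.
var totalAssertions = Object.keys( assertionCounts ).reduce( function ( sum, key ) {
	return sum + assertionCounts[ key ];
}, 0 );

console.log( totalAssertions ); // 5349
```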


 * Translate: 3
 * Content Translation: 0
 * Twn Main Page: 0

PHPUnit tests
Note: these numbers can be misleading, as each file is counted as one 'unit', and in some cases a single file contains several tests. Still, they give a clear idea of which extension has the most PHPUnit tests as of now.


 * Universal Language Selector: 2
 * Translate: 45
 * Content Translation: 0
 * Twn Main Page: 2

ULS

 * Stage 0:
 * ULS inherits testing from the upstream jquery components (jquery.uls, jquery.ime, jquery.i18n and jquery.webfonts).
 * Unit tests are mostly written for the jquery.* libraries. This is a one-time procedure, with updates to the code as required.


 * Stage 1:
 * Frontend features are tested with browser testing, which involves coding, review, local testing, various validations and pairing with QA.
 * Once a browser test is merged in Gerrit, Jenkins runs it regularly, twice a day. Failures are reported via email to the developers listed in the Jenkins configuration file.


 * Stage 2:
 * Browser test failures are fixed by developers and/or QA. Pairing sessions are held for this.


 * Manual tests are often performed for ULS's various features, such as verification of the Autonym font, IMEs and other components.

WebFonts
Webfonts are part of the main ULS repository.


 * The webfonts code is in the jquery.webfonts GitHub repository.
 * The actual webfonts are in the ULS fontrepo, which contains a webfont testing interface that needs manual testing when fonts are updated or changed by a developer. Manual font testing also comes as feedback from various language communities.
 * Some webfont testing is often done as part of Translate.
 * RTL testing and feedback is done by the team and the community as part of automated and manual testing.

MLEB
MLEB test cases are repeated for four browsers (Google Chrome, Firefox, Safari and Internet Explorer) with two releases (stable and legacy-stable at the time of testing).


 * Total number of test cases per browser, per release: 153
 * Total test cases (all): 612
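As a quick check on the figures above, the stated total corresponds to the per-browser figure multiplied across the four browsers (the two releases evidently being covered within the 153 per-browser cases, since 153 × 4 = 612):

```javascript
// MLEB test case totals as listed above.
var casesPerBrowser = 153;
var browsers = 4; // Chrome, Firefox, Safari, Internet Explorer

var totalCases = casesPerBrowser * browsers;
console.log( totalCases ); // 612, matching the stated total
```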

Update: the TestLink setup has now reduced the MLEB test cases to roughly 60% of the above, after reviewing and removing possible duplicates.

Platforms
We use the following platforms in our testing:

Beta labs

 * Beta labs has an (almost) similar setup to the production wikis.
 * It is used as the test wiki in browser testing.
 * Current list of test wikis used by Language Engineering team is: https://www.mediawiki.org/wiki/Language_portal/Test_wikis

CloudBees
CloudBees is used for automated browser testing jobs.

Recommendations
1. A Labs instance for each project, where applicable.

Example: the Content Translation (CX) instance http://language-stage1.wmflabs.org has been set up exclusively for testing. The Content Translation server is set up at http://cxserver.wmflabs.org and is also used in the automated browser testing procedure.

2. Automated browser testing instances with a controlled environment.

Example: an instance with webfonts enabled by default in the ULS extension at http://language-browsertests.wmflabs.org. Such a setup eases testing of things that are difficult to test in the default scenarios.

3. Test Case Management System(s).

This will reduce the time spent on manual testing such as MLEB testing. We can easily compare previous results with current results. It will also help us track bugs reported by such tests.

4. Automated unit testing. This is an experimental idea and can be pursued after discussion with the team.

5. Test driven development.

6. System- and performance-level testing, for example page loading.