Manual:JavaScript unit testing

Unit testing of MediaWiki's JavaScript code base is performed using the QUnit JavaScript testing framework.

The unit tests are located in the tests/qunit directory. Tests are organized into a directory structure that matches the directory structure of the code they are testing. For example, the unit tests for the file resources/mediawiki.util/mediawiki.util.js can be found in tests/qunit/suites/resources/mediawiki.util/mediawiki.util.test.js.
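
In general the pattern looks like this (the angle-bracket names are placeholders, not literal paths):

  resources/<module-dir>/<module>.js
  tests/qunit/suites/resources/<module-dir>/<module>.test.js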

Running the unit tests


Run the unit tests from the browser:
 * 1) Browse to the test runner page, tests/qunit/index.html, in your MediaWiki installation.
 * 2) There's no step 2, sit back and wait.
 * 3) Nope, no step 3 either.
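
For a typical local installation (assuming the /phase3 document root mentioned in the TestSwarm section below; your setup may differ), the runner page would be at something like:

  http://localhost/phase3/tests/qunit/index.html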

Completeness test


In order to enable the completeness test plugin, set " " in the query string:
 * Browse to the test runner page (tests/qunit/index.html) with that parameter appended to the URL.

How to help?
If you're not a developer or don't feel comfortable enough to write a unit test, you can still be of great help by running unit tests:
 * Make it a habit to run the unit tests after updating your Subversion checkout. Any problems? See if you can find the cause, and let the committer know by posting a comment under that revision (mark it FIXME as well).
 * Join Wikimedia's TestSwarm and let it run in the background (TestSwarm is not configured yet)

Another way to help out is by expanding our unit test coverage of the MediaWiki JavaScript library. Run the CompletenessTest to find out which methods don't have unit tests yet, and write some.

How to write a QUnit test in MediaWiki

 * Todo


 * The file should be named after the module (or file) it is testing.
 * Inside the test suite file there should be exactly one call to QUnit's module(). The string passed to module() must match the filename (but without the .test.js part). This enables sectioning in the QUnit output, and allows TestSwarm to test each module separately, in a distributed and asynchronous fashion. See the example below.
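
For illustration, a minimal test suite for mediawiki.util might look like the following sketch (using the global QUnit API of the time; the rawurlencode assertion is just an example, not a prescribed test):

  // tests/qunit/suites/resources/mediawiki.util/mediawiki.util.test.js
  // The module name matches the filename, minus the ".test.js" part.
  module( 'mediawiki.util' );

  test( 'rawurlencode', function () {
      expect( 1 );
      // mw.util.rawurlencode is based on encodeURIComponent
      equal(
          mw.util.rawurlencode( 'Test:A & B/Here' ),
          'Test%3AA%20%26%20B%2FHere',
          'Reserved characters are percent-encoded'
      );
  } );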

TestSwarm

 * Todo: Branch off to a separate wiki page

TestSwarm is a system for parceling out JavaScript-based tests to multiple browsers. As many people as desired can connect, and tests are automatically sent to whichever browsers are connected, with the results recorded per test and browser version. Because MediaWiki jobs submitted to TestSwarm are assigned to all the browsers we support at that time (including rarely-used browsers, regardless of whether they are connected to the swarm when the commit is made), we can ensure that the tests get run on all of them, not just on whatever a developer has handy when they make a commit.


 * On Toolserver: http://toolserver.org/~krinkle/testswarm/
 * MediaWiki commits to core modules and test suites (previously, pre-: here).

"Automated distributed continuous unit testing for JavaScript" (where QUnit="unit testing", and TestSwarm="Automated distribution, continuously.") However PHP is pretty much static/similar on all platforms. Which is why running PHPUnit on your localhost is enough to know that the code is good if (it passes the test). But due to cross-browser differences, running QUnit on your localhost in Firefox doesn't say all that much. That's why we need to run tests in all browsers we support (IE7,8,9, Firefox 3,4,5 etc.).
 * From conversation with Krinkle: "TestSwarm does this: It automatically checks out every new revision (related to js-resources) from SVN and then it sends them to different browsers to run the test and summarizes the results – in order to easily track if something brakes, and when it does we can see the exact revision in which the breakage started (even if new commits were made), and where (modulename and testfunction) of the breakage."
 * In a sentence, relation between QUnit and TestSwarm is:
 * It's a bit like CruiseControl for PHP Unit ("Automated continues integration for PHP"), except for the "distribution" part. Which CruiseControl doesn't do, but also doesn't have to do, because:
 * Every browser is different in terms of how far and which version of the ECMAScript-specification was implemented (ie. JavaScript).
 * TestSwarm keeps track of which module in which revision is tested each of these browsers. And whenever a client connects TestSwarm sends tests that haven't been run in that svn-revision / browser-name-version combination yet.

<!-- TODO: we might want to use a drawing to explain it all -->

With TestSwarm we crowdsource the "grid". Aside from the advantage in maintenance (all clients being crowdsourced), TestSwarm generally does a better job of presenting information about the status of the unit tests, and it has the ability to proactively correct bad results coming in from clients (as any web developer knows, browsers are surprisingly unreliable: inconsistent results, browser bugs, network issues, etc.). Another great aspect of the swarm is that people can opt in with their browsers at any time, as the swarm keeps all old revisions. So if a certain type of browser hasn't been in the swarm, it can catch up at any time. Or a project manager could reset an old revision to 'untested' and have the swarm redistribute it as if it were a new one.


 * More about TestSwarm functioning and comparison to other platforms

jQuery's manual describes this setup with TestSwarm and QUnit as "Automated distributed continuous integration for JavaScript" (where QUnit provides the "unit testing", and TestSwarm the "automated distribution, continuously").

Developers write unit tests for their modules with QUnit. Both the modules and the test suites are in Subversion. Developers can also run the test suite locally (/phase3/tests/qunit/index.html).

TestSwarm automatically checks out every new revision (a job) and creates to-do runs (a run is a combination of a browser engine version, a MediaWiki module, and a Subversion revision). When a client connects, the swarm sends out runs that haven't been run yet, and the client pushes back the results.
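
Conceptually (the field names below are illustrative, not TestSwarm's actual schema), a to-do run can be thought of as a record like this:

  // Illustrative sketch only – not TestSwarm's real data model
  var run = {
      job: 'r88734',             // the SVN revision TestSwarm checked out
      module: 'mediawiki.util',  // the QUnit module to execute
      browser: 'Firefox 4.0',    // browser engine + version
      status: 'pending'          // becomes 'passed' or 'failed' once a client reports back
  };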

Current Problems

 * The test runner page has no pointer to this documentation (added r88734; not pretty but at least it's in)
 * Test files often have the same name as the files they are testing, e.g. 'mediawiki.js' or 'jquery.colorUtil.js'. This makes code maintenance more difficult, because bare filenames are often all that is shown when navigating or working with code. Consider using the word 'test' visibly in the filename.
 * For instance, our PHPUnit test case files are almost always named by appending 'Test' to the tested class/file's name:
 * Block.php <-> BlockTest.php
 * IP.php <-> IPTest.php
 * Title.php <-> TitleTest.php
 * LanguageConverter.php <-> LanguageConverterTest.php
 * It looks like new test files must be manually added to index.html. Among other things, this static list means that extensions cannot add test cases except by implementing a second test runner page?
 * Krinkle has some ideas for this, but for now we're concentrating on the core tests, which are all bundled together and thus easy enough to work with in the meantime. For more info/discussion see this thread.