Testing portal

Testing approaches and how we do them
There are several overlapping approaches to testing:
 * fuzz testing: injecting random values and looking at the results. Developers, testers, and users do this manually.
 * unit testing: making sure your API's public methods work as expected. Automated via PHPUnit, QUnit, and CruiseControl (a minimal PHPUnit sketch follows this list).
 * integration testing: testing how the integrated whole operates together, e.g. examining an installation, trying to break it, and filing bug reports; done manually or via automation.
 * behaviour testing: trying to reproduce a user scenario (as you can program into Selenium).
 * logical testing: inverting logical conditions (e.g. == becomes != and != becomes ==).
 * code coverage: injecting values into a method so that all possible code paths and conditions are exercised (usually automated; in our case via parser tests and CruiseControl).
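
To make the unit-testing approach concrete, here is a minimal PHPUnit sketch. The TitleFormatter class is a made-up example, defined inline only so the test file is self-contained; it is not MediaWiki code, and the real MediaWiki suites live under tests/ and t/ (see below).

 <?php
 // Minimal PHPUnit sketch of the unit-testing approach.
 // TitleFormatter is a made-up example class, not MediaWiki API;
 // it is defined inline only so this test file is self-contained.
 class TitleFormatter {
     /** Replace underscores with spaces, the way page titles are displayed. */
     public function normalize( $text ) {
         if ( $text === '' ) {
             return false;
         }
         return str_replace( '_', ' ', $text );
     }
 }

 // PHPUnit 3.x-style base class, autoloaded by the phpunit runner.
 class TitleFormatterTest extends PHPUnit_Framework_TestCase {
     public function testUnderscoresBecomeSpaces() {
         $formatter = new TitleFormatter();
         $this->assertEquals( 'Main Page', $formatter->normalize( 'Main_Page' ) );
     }

     public function testEmptyTitleIsRejected() {
         $formatter = new TitleFormatter();
         $this->assertFalse( $formatter->normalize( '' ) );
     }
 }

Running "phpunit TitleFormatterTest.php" executes both tests; the same pattern scales up to exercising the public methods of real MediaWiki classes.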

MediaWiki internal

 * ParserTests (a sample test case format follows this list)
   * Currently run automatically on trunk and integrated into the CodeReview overview
   * Hooks for extensions to add tests (not yet automated):
     * Cite (ref)
     * LabeledSectionTransclusion
     * ParserFunctions
     * Poem
     * SlippyMap
 * t/ unit tests
   * not totally functional (bug 20112)
   * Some of the more generic ones (EOL checks, BOM checks) moved to tools/code-utils in r54922
 * tests/ unit tests
   * not totally functional (bug 20077)
 * checkSyntax
   * In maintenance/
   * Needs upload capability
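
For reference, a ParserTests case in the plain-text parserTests.txt format looks roughly like this; the exact section keywords have varied across MediaWiki versions, and this sketch uses the older input/result form:

 !! test
 Simple italics
 !! input
 Hello ''world''
 !! result
 <p>Hello <i>world</i>
 </p>
 !! end

Extensions have typically made their own test files visible to the runner by appending them to the $wgParserTestFiles global in their setup file (e.g. $wgParserTestFiles[] = dirname( __FILE__ ) . '/citeParserTests.txt';), though the exact registration mechanism may differ per extension.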

External suites
Extension test framework

Client-side test suites

 * Some testing work is going on in the UsabilityInitiative extension, not yet automated.
 * Some talk of doing Selenium-based, browser-hosted testing, but this has not yet been implemented.
 * Some Selenium tests
 * Selenium is no longer the way forward. Sumanah 19:15, 1 July 2011 (UTC)
 * Using an HTML rendering engine (Gecko? WebKit?) to run user interface tests
 * some JS testing notes
 * Once you have QUnit tests, have a look at TestSwarm, a distributed JavaScript test runner.

Performance testing
It would likely be wise to set up some automated performance testing: e.g. checking how much time is spent, how many DB queries are issued, and how much memory is used during various operations (perhaps the same tests listed above).

While performance indicators can vary with the environment and other non-deterministic factors, logging and graphing the results of multiple iterations from a consistent testing environment can help us with two important things (a minimal logging sketch follows this list):
 * Identify performance regressions when we see an unexpected shift in the figures
 * Confirm performance improvements when we make an optimization
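
A minimal sketch of what such instrumentation could look like, assuming we only record wall-clock time and peak memory around an arbitrary operation. The profileOperation() helper, the CSV log path, and the sample workload are all placeholders for illustration, not existing MediaWiki tooling:

 <?php
 // Sketch: log wall-clock time and peak memory for one operation.
 // Counting DB queries would additionally require hooking into the
 // database layer, which is out of scope for this sketch.
 function profileOperation( $label, $operation ) {
     $startTime = microtime( true );

     $operation(); // e.g. parse a page, run an API request, ...

     $elapsedMs = ( microtime( true ) - $startTime ) * 1000;
     $peakBytes = memory_get_peak_usage();

     // Append one CSV line per run so results can be graphed over time.
     $line = sprintf( "%s,%s,%.1f,%d\n", date( 'c' ), $label, $elapsedMs, $peakBytes );
     file_put_contents( '/tmp/perf-log.csv', $line, FILE_APPEND );
 }

 // Hypothetical usage: time a text-processing operation ten times.
 for ( $i = 0; $i < 10; $i++ ) {
     profileOperation( 'strrev-1MB', function () {
         strrev( str_repeat( 'wiki', 250000 ) );
     } );
 }

Graphing the resulting CSV across successive revisions in a fixed environment would give regression curves comparable to the Mozilla graphs linked below.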

See, for instance, Mozilla's performance test graphs for Firefox (more Firefox material via https://developer.mozilla.org/en/Tinderbox ):
 * http://graphs.mozilla.org/dashboard/?tree=Firefox

Other notes

 * Extension Testing
 * Mobile browser testing
 * Database testing

Links
Search for testing on MediaWiki.org.