Quality Assurance/Browser testing


Status

2014-11-monthly:

In November the CentralNotice repo acquired its first browser tests. In addition to adding new test coverage, we continue to refactor existing tests across all of our repositories. While the primary purpose of this refactoring is to update all of the tests' assertions from RSpec 2 to RSpec 3 syntax, we are also taking the time to address technical debt and sort out other issues in the tests, such as removing inefficient code and replacing explicit wait statements with dynamic wait-for statements. This not only improves the speed at which the tests run, but also helps immensely with maintenance and usability going forward.
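To illustrate the wait refactoring, here is a minimal plain-Ruby sketch of the difference between an explicit wait and a dynamic wait-for; the helper names are invented for this example, not taken from the actual test repositories (which use the waiting facilities of their browser-automation libraries):

```ruby
require 'timeout'

# Explicit wait: always burns the full delay, even if the page was ready sooner.
def explicit_wait(seconds)
  sleep seconds
end

# Dynamic wait-for: polls a condition and returns as soon as it holds,
# raising Timeout::Error if it never does within the allotted time.
def wait_until(timeout: 5, interval: 0.1)
  Timeout.timeout(timeout) do
    loop do
      return true if yield
      sleep interval
    end
  end
end

# Usage: instead of sleeping a fixed five seconds, poll the condition.
ready_at = Time.now + 0.3
wait_until(timeout: 2) { Time.now >= ready_at }
```

Because the dynamic version returns as soon as the condition is satisfied, a suite full of such waits finishes much faster than one padded with fixed sleeps, and it tolerates slow runs up to the timeout instead of failing outright.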


Rationale

Why automate tests at the browser level? They find bugs!

Browser tests are most useful for:

  • Beta features: checking that they work in all browsers.
  • JavaScript: different browsers interpret JS in different ways.
  • Features requiring navigation: unit tests don't usually navigate.
  • Fragile: if something breaks often, we want to test it regularly.
  • Critical: features that must work all the time.

It's tedious to manually test Wikimedia sites across the many browser and operating system combinations out there, so we automate testing to find errors and assure quality.

We use Cucumber because it lets you write tests in plain English; you don't need to know much Ruby to be effective with Cucumber. Cucumber implements an idea called Acceptance Test Driven Development. The plain English test specifications open a communication channel with non-technical people who wish to contribute to browser test automation.
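For instance, a Cucumber scenario reads as plain English; the feature and steps below are illustrative, not taken from an actual repository:

```gherkin
Feature: Log in

  Scenario: Logging in with valid credentials
    Given I am at the log in page
    When I log in with a valid username and password
    Then I should see a confirmation that I am logged in
```

Each step is then mapped to a short Ruby step definition, so a non-technical contributor can propose or review scenarios without touching the Ruby underneath.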

Check some real examples.

How to contribute

Here's one way to dive in:

  • Analyze failing tests in Jenkins browser test runs.
  • Investigate a recent failing test:
    • Drop in to the #wikimedia-qa IRC channel and ask if it's a known issue.
    • Look at the test's Build Artifacts. There may be .png files that are screenshots of failed tests.
    • Watch the screencast of the failing test and/or view the test's code.
  • If you manually go through the test's steps on a wiki site (see Quality Assurance/Feature testing) and the test failure reveals a problem, then file a bug against the feature under test.

This doesn't require any local code at all; it's all on the web.

The next level is to fix broken tests and write new ones.
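New and fixed tests in these repositories are written with Cucumber and the page-object pattern. As a toy illustration of that pattern in plain Ruby (the real suites use the page-object gem; the class, the element names, and the hash standing in for a browser are all invented here):

```ruby
# Page-object pattern sketch: the page class owns the knowledge of which
# elements to drive, so test steps stay readable and element changes are
# fixed in one place.
class LoginPage
  def initialize(browser)
    @browser = browser
  end

  def login_with(user, password)
    # A real page object would fill form fields via the browser driver;
    # here the "browser" is just a hash so the sketch runs anywhere.
    @browser[:username]  = user
    @browser[:password]  = password
    @browser[:submitted] = true
  end
end

browser = {}
LoginPage.new(browser).login_with('Selenium_user', 'secret')
```

A step definition can then call `LoginPage.new(browser).login_with(...)` and stay one readable line long, which is what makes the Gherkin scenarios cheap to maintain.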

None of these steps requires a working local MediaWiki installation, though you may find it easier to deploy a MediaWiki-Vagrant virtual machine "appliance", which automates installation of some of the tools you need and obtains the MediaWiki source code with Git. This also gives you a working local MediaWiki instance, so you can go a level deeper and modify the code of the feature under test to fix it yourself.

If you are interested in automated browser tests, join the proposed MediaWiki Group Browser testing.

Resources