Selenium/Ruby/Writing tests


 * To get started with browser testing, see How to contribute.

Test automation provides channels for non-programmers and inexperienced programmers to be valuable contributors. Non-programmers, particularly people involved in the product and project side of software development, can contribute well designed acceptance tests for features. Less experienced people who know the Document Object Model and browser behavior can contribute page objects for the automated tests. And more experienced programmers can contribute test steps that tie together acceptance tests and page objects by automating browsers.

Acceptance Test Driven Development
ATDD is a practice where we specify the behavior of software features in plain-language sentences, often in some specific format. We then transform these specifications into executable code and treat them as automated acceptance tests for that feature. When the acceptance tests pass, the feature is known to be done. ("Done" has a specific meaning in agile software development: that the code is fit for use and ready to deploy.)

The canonical example of ATDD has the automated acceptance tests created before the code to satisfy them is created. In practice, this requires a level of cooperation between project/product people and development/test people that many organizations find difficult to achieve.

The current practice at WMF is to develop the feature specifications for Wikipedia software after or as the software is developed. There is nothing intrinsically wrong with this approach, although it is generally desirable to have testing activity match development activity as closely as possible, even to the point of moving testing activity *ahead* of development activity. Doing true ATDD is a great aid to focus and completeness of features.

In practice, acceptance tests for software features may be created and automated at any time, after which they serve as useful automated regression tests for those features. This is the point where non-programming members of the community, people with an interest in the proper function of Wikipedia software, can contribute.

We are using the Cucumber ATDD framework for browser automation. Cucumber requires that feature tests be specified as a series of statements of the form "Given ... When ... Then ...". People from any area of the Wikipedia community are potential contributors of acceptance test criteria for features. Wikipedia editors would certainly like to be assured that their favorite tools in their favorite browsers work and continue to work in the ways that they expect; Wikipedia user advocates such as the Teahouse contributors and Ambassadors likewise. WMF staff are also invested in the behavior of software features, with perhaps the Product area being the most concerned. Finally, there is a community of people outside Wikipedia who are interested in browser testing in and of itself, and they are a significant potential source of automated acceptance tests.

Designing and defining excellent acceptance test criteria in the form of browser behavior can be tricky, and we aim to train our community to do it well. For more information, see Writing feature descriptions.

How to write WMF browser tests
Automated browser tests involve:
 * 1) Features and Scenarios
 * 2) Page Objects
 * 3) Test steps
 * 4) API-based setup methods

You'll also need to watch out for aspects of the test environment.

Potential contributors who are not programmers will be most interested in the description of Features and Scenarios. If you want to dig a bit deeper into how this project works, look at Page Objects. People actually creating end-to-end tests will be particularly interested in information about Test steps. All can read Quality Assurance/Browser testing/Very Basic Howto.

To run tests locally, see Running tests.

Tests are in a subdirectory of each project; see, for example, the browser tests in core. The links section of the mediawiki-selenium README file lists the repositories that contain browser tests.

Features and Scenarios
Features and Scenarios in Cucumber are an implementation of a design pattern for Acceptance Test Driven Development (ATDD). They follow a generally accepted convention for such tests:

 Feature: My Feature
   Scenario: Testing some aspect of My Feature
     Given ...
     When ...
     Then ...

Any Given/When/Then statement may also have an arbitrary number of "And" clauses:

 Scenario: Testing some complicated aspect of My Feature
   Given ...
   And ...
   When ...
   And ...
   And ...
   Then ...
   And ...

A Given statement should only ever mention starting conditions. A When statement should always be some sort of verb, or action. A Then statement always has the words "should" or "should not", or an equivalent. While not required, it is good practice for technical reasons to use "should" when implementing the automated test steps later.

The Scenarios are the most important part of the tests. Take time to make them well considered and granular, and create as many individual Scenarios as necessary to adequately test the feature in question. A good question to ask is "when this test fails, what will that failure tell us about the software?" The language of the Scenarios is reported directly in the output when the test fails, so it is important that each step in each Scenario be designed well. This is the heart of ATDD.

Here are scenarios for a search test in use right now (from features/search.feature):

 Feature: Search

   Scenario: Search suggestions
     Given I am at random page
     When I search for: main
     Then a list of suggested pages should appear
     And Main Page should be the first result

   Scenario: Fill in search term and click search
     Given I am at random page
     When I search for: ma
     And I click the Search button
     Then I should land on Search Results page

Feature files containing Scenarios are named "foo.feature" and reside in the /browsertests/features/ directory. Some good examples already exist.

Page Objects
Just as Cucumber implements an ATDD design pattern, we are using the "page-object" Ruby gem to implement a design pattern called Page Object. The idea of a Page Object is that every element on every page in the test suite is defined once and only once, in the context of the page being tested. This makes for ease of maintenance, since any change to that page element only requires updating one line in browser tests.

Such a design allows more radical changes as well. For example, it would be possible to swap out entire systems, say mobile for desktop, or a different language for English, by changing only the page objects and not the actual tests.

The definition of page objects for a simple search of Wikipedia comes from the CirrusSearch browser tests. "SearchPage" is our class name for the page being tested. The code following "PageObject" identifies the elements that tests reference on this page – buttons, links, message boxes, etc. The syntax to identify an element is a domain-specific language and is slightly odd:

 element_type(:your_element_name, attribute_identifying_the_element: "value of attribute", ... [more attributes])

The first parameter to the function is your name for the element, prefixed with a colon (making it a Ruby Symbol). For readability and maintainability, we order the page elements alphabetically by this name.

The elements documentation lists all the element_types available; there are elements corresponding to most HTML elements. For each element_type it presents the attributes you can use to locate the element, including several specific to form elements.
 * A digression on Ruby: key: "value" is a key-value pair in a Ruby 1.9+ hash where the key is a Symbol object. Some online documentation uses the older Ruby 1.8 "hash rocket" syntax for this, with a leading colon: :key => "value". We prefer the former.
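The once-and-only-once idea behind the page-object gem can be sketched in plain Ruby, with no gems. This is an illustrative mimic, not the real page-object API or the actual CirrusSearch page object; the class, element names, and locators here are hypothetical.

```ruby
# Minimal sketch of the Page Object pattern: a class-level macro
# defines each element exactly once, and generates an accessor so
# tests never repeat the locator themselves.
class PageSketch
  def self.element(name, locator)
    locators[name] = locator
    # Generated accessor: the single path to this element's locator.
    define_method(name) { self.class.locators[name] }
  end

  def self.locators
    @locators ||= {}
  end
end

class SearchPageSketch < PageSketch
  # Each element defined once, ordered alphabetically, using a
  # Ruby 1.9+ hash (key: "value") to identify it by attribute.
  element :search_button, id: "searchButton"
  element :search_input,  id: "searchInput"
end

page = SearchPageSketch.new
puts page.search_input  # the one place this locator is defined
```

If the page's markup changes, only the one `element` line changes; every test that uses the accessor keeps working unmodified.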

Locating by id is bulletproof; otherwise, using a CSS selector can work well. Note that identifying by CSS does not use jQuery/Sizzle; instead it's the W3C CSS selector that the target browser implements, so you can't necessarily copy a jQuery selector. For example, you can locate with

 button(:change_post_save, css: "ul.flow-edit-post-form .mw-ui-constructive")

but not

 button(:wont_work, css: ".flow-edit-post-form [class^='flow']:last")

You can usually specify more than one attribute to locate a single element (because we are using the watir-webdriver API for Selenium).

Often you need to locate a page element inside another, such as an item within a menu, or the title of the second item on the page. You can:
 * craft a CSS selector that does the right thing
 * craft an xpath that navigates the DOM
 * identify the parent and then nest the definition of the other element
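The XPath option above can be illustrated with Ruby's standard-library REXML on a small hypothetical fragment (real tests would use the page-object/watir APIs instead; the markup and class names here are made up):

```ruby
require "rexml/document"

# Hypothetical markup standing in for part of a rendered page:
# we want the title of the second item in a list.
html = <<~XML
  <ul class="results">
    <li><span class="title">First page</span></li>
    <li><span class="title">Second page</span></li>
  </ul>
XML

doc = REXML::Document.new(html)

# An XPath that navigates the DOM: the span inside the second li.
second_title = REXML::XPath.first(doc, "//li[2]/span[@class='title']")
puts second_title.text  # => "Second page"
```

A CSS selector such as `ul.results li:nth-child(2) .title` would express the same navigation in the CSS style.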

Page-object files reside in the features/support/pages subdirectory.

Test steps
With a Scenario (or several) in place, and a Page Object defined for the page to be tested, now we can tie them together with test steps. This is the programming part, and it is helpful to have some grasp of the technologies being used.

An implementation of a Search test using the Page Object above looks like (from features/step_definitions/search_steps.rb):

Each line in the .feature file becomes part of a regular expression in each test step. Just as we specify each element in each page only once and we specify each page only once, we also specify any particular aspect of the test such as text only once.

Since many tests may use a random page, RandomPage is specified separately. Because each line in the .feature file becomes a regular expression, we can use matching within the expression to capture strings from the .feature file for use in the step. Using a regex means the text under test lives in the .feature file, not in the steps file.

Note that the lines "Given I am at random page" and "When I search for:" are used twice in the .feature file. In the test steps, those lines are implemented once and called twice. Any single test step is implemented only once and may be used as often as necessary by the Feature. In the example above, we need only one test step to search for "main" and to search for "ma" using a regex as noted above.
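The regex mechanism can be sketched in plain Ruby (this is not the actual step file; the pattern simply mirrors the "I search for:" lines quoted above):

```ruby
# How one step definition serves multiple .feature lines: Cucumber
# matches each line against a regular expression and passes the
# captures to the step's block. Plain-Ruby sketch, not real Cucumber.
SEARCH_STEP = /^I search for: (.+)$/

def run_search_step(line)
  match = SEARCH_STEP.match(line)
  return nil unless match
  match[1]  # the captured search term, which lives in the .feature file
end

puts run_search_step("I search for: main")  # => "main"
puts run_search_step("I search for: ma")    # => "ma"
```

One pattern thus handles both the "main" and "ma" searches; the step code never hard-codes either string.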

In terms of architecture, it is Cucumber that ties together the lines in the .feature file and the test steps. It is the page_object gem that provides the means to identify elements on pages. The page_object gem is aware of both watir-webdriver API syntax and also pure selenium-webdriver syntax, so either approach is acceptable.

And for assertions, Cucumber uses the assertion framework RSpec. This is why every "Then" statement in the test steps must involve an expect(...).to clause (or, in the older syntax, a "should" or "should_not" clause): otherwise the test will always pass and never fail. RSpec also provides rich and complex ways to create potentially elaborate expectations for tests to satisfy beyond "should" and "should_not".


 * Update 2014: prefer the expect(x).to syntax to the trailing x.should syntax; see http://myronmars.to/n/dev-blog/2012/06/rspecs-new-expectation-syntax

Again, a Then step should always include an assertion of some kind; it shouldn't just end with something that is merely a precondition for something else.
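Why an assertion-free Then step is useless can be shown in plain Ruby, with an exception standing in for an RSpec expectation (the page text here is hypothetical):

```ruby
# Sketch of why a Then step needs an assertion. A test harness marks
# a step failed only if it raises; a step that merely computes a value
# can never fail, no matter how wrong the value is.
def step_passes?
  yield
  true          # nothing raised: the harness reports a pass
rescue StandardError
  false
end

page_text = "Search results"   # hypothetical, wrong page contents

# Without an assertion the step "passes" even though the text is wrong:
no_assertion = step_passes? { page_text.include?("Thanks") }

# With an assertion (RSpec's expect/should plays this role), it fails:
with_assertion = step_passes? do
  raise "expected page to contain 'Thanks'" unless page_text.include?("Thanks")
end

puts no_assertion    # => true
puts with_assertion  # => false
```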

Comprehensive documentation for Cucumber and RSpec is beyond the scope of this document, but is readily available.

Debugging
See ../Debugging

When tests fail

 * Is there a better page for this @skip? It is not the same as a straight Cucumber tag like @firefox

It's important that tests run by Jenkins be green. If a test that is tricky to fix is failing regularly, you can skip running it as a temporary measure:
 * 1) create a Phabricator task in the code's project and in #qa to fix the problem
 * 2) add the @skip tag above the failing Feature or Scenario
 * 3) add a comment above the @skip tag indicating some actionable follow-up (TODO, FIXME), with a link to the task, so that this doesn't become a mechanism for simply silencing noise

Example:
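A sketch of what a skipped Scenario might look like, reusing the search scenario from earlier (the tag and comment convention are as described above; the task link is a placeholder, not a real task):

```gherkin
# TODO: flaky on Jenkins, skipped until the underlying bug is fixed;
# see the Phabricator task filed for this failure
@skip
Scenario: Fill in search term and click search
  Given I am at random page
  When I search for: ma
  And I click the Search button
  Then I should land on Search Results page
```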

At the end of a run, cucumber will print "5 skipped".

@skip is implemented in Gather and MobileFrontend. Note that @skip is different from other Cucumber tags such as @internet_explorer_10: those tag the tests you want to run when you execute cucumber -t @internet_explorer_10.

If a test is so flaky that it needs to be skipped for the indefinite future, you might want to rethink the test implementation or simply remove the test. Testing is all about tradeoffs and, if you can't find the sweet spot in implementing a scenario, many times it may be better not to introduce it at all lest you degrade the trust in the test suite as a whole.

Skipping test programmatically
Internally, the @skip tag makes a skip call.

You can call this programmatically if you determine a test is inappropriate.

Another user of this is MediaWiki-Selenium: it supports tags that skip a Feature or Scenario if a named extension isn't available.

Miscellaneous

 * See also Quality Assurance/Browser testing/Guidelines and good practices

Timeouts
Tests take time to run, especially in a VM on SauceLabs across a network or on your own slow MediaWiki instance.

So tests may fail due to timeouts when waiting a few more seconds would allow them to complete successfully. Inserting explicit sleeps in tests is bad design. Instead, poll for conditions. The polling helpers all take an optional timeout parameter; the default is 5 seconds.
 * polls until an element can be engaged
 * polls until a condition returns true
 * polls until a condition returns false
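What those helpers do internally can be sketched in plain Ruby (the real helpers come from watir-webdriver; the names and the simulated page below are ours, not the library's):

```ruby
# Polling instead of sleeping: re-check a condition at short intervals
# until it holds or the timeout expires.
def poll_until(timeout: 5, interval: 0.1)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise "timed out after #{timeout}s" if Time.now >= deadline
    sleep interval
  end
end

# Usage: wait until a (simulated) page contains "Thanks". The thread
# stands in for the browser finishing an asynchronous operation.
page_text = ""
Thread.new { sleep 0.3; page_text = "Thanks for your feedback" }
poll_until(timeout: 5) { page_text.include?("Thanks") }
puts "condition met"
```

The test proceeds the moment the condition holds, instead of always paying a fixed sleep, and fails promptly with a clear error if it never does.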

That aft5 test is a good example:

This says: ''We'll hang out until the AFT post is processed. We know that the processing is finished when the page contains the text "Thanks". At that point we should have a message showing a link to the feedback page.''

Parallel tests
If you run tests in parallel with parallel_cucumber and PhantomJS you may speed up test time dramatically. These steps got the CirrusSearch tests running in parallel:
 * Make sure all tests pass in PhantomJS. Mostly, this shouldn't require any work beyond installing PhantomJS, pointing the browser environment variable at it, and running the tests.
 * Make sure your tests are in many small features rather than a few huge ones.
 * Add the parallel_tests gem to your Gemfile and run bundle install.

Running tests is now a two-step process. First, run the tests in parallel:

 bundle exec parallel_cucumber --nice -n 5 features/

Some of them will fail because of general flakiness. You can rerun the failing tests like this:

 cat cucumber_failures.log | xargs bundle exec cucumber

Common functions
We wrote a RubyGem that provides several common functions, such as logging in and creating a page. Quality Assurance/Browser testing/Shared features describes its features.

API-based test setup methods
To facilitate more deterministic browser tests, we wrote an API client RubyGem. The API client can be used to create dependent wiki resources from within test steps via the MediaWiki action API. See the RubyGem and API documentation for further info on each available method and its required input parameters.

An API client is available from within your step definitions.

Example
Feature file:

Step definition:
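A feature line like "Given a page named "Browser test page" exists" would map to a step that creates the page via the API rather than the browser. The sketch below is entirely illustrative: the step name, page title, and stub client are ours, and the stub merely stands in for the real API client; in actual step definitions the body would live inside a Cucumber Given(...) block.

```ruby
# API-backed setup sketch: create the resource a scenario depends on
# through the action API instead of driving the browser.
api = Object.new

# Stub standing in for the real API client's page-creation call,
# which would POST action=edit to the wiki's api.php.
def api.create_page(title, text)
  "created #{title}"
end

def page_exists_step(api, title)
  # No browser involved: setup happens over HTTP, which is faster
  # and more deterministic than clicking through an edit form.
  api.create_page(title, "Test page content")
end

puts page_exists_step(api, "Browser test page")  # => "created Browser test page"
```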

API login
Many API actions require that the client has first logged in. This authentication is done automatically, provided the right environment variables have been set.

You can also explicitly log in with the API as a particular user; the Echo notification browser tests do this, for example.

Note that by design, logging in with the API bypasses the browser, so you can't log in via the API and then visit a page in the browser as that logged-in user. (Although supporting this might be a possibility to speed up tests that log in.)

Anonymous tests
Browser tests should never execute an explicit logout, because it interferes with other tests using Selenium_user. Instead, browser tests should use the API to create the appropriate conditions before launching a browser session as an anonymous user.

Further information
This document does not describe every aspect of the design and architecture of the browser automation system. It is intended to explain enough that any interested person might get a good understanding of how things are put together and how to get started. The administrators of the project are happy to elaborate on aspects of the system not covered here.

Manual:Coding conventions/Selenium has the coding conventions we follow.