Core Platform Team/Initiative/Add API integration tests/Epics, User Stories, and Requirements


 * Work with Release Engineering to validate the technology choice: ensure it can be moved to CI, and that there isn't a preferred existing system already.
 * Write a spec for the test runner.
 * Write the test runner (custom, or Selenium?).
 * Develop a way to automate the setup and teardown of a test wiki based on the developer environment (Docker).
 * Identify the first trivial tests.
 * Identify critical cross-module use cases to cover in the first iteration of this project. See stories.
 * Write the first tests.

Features, MVP:

 * The test runner executes each test case against the given API and reports any responses that fail to match the expected results
 * Human-readable plain text output
 * Support regular-expression-based value matching (see the sketch after this list)
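
To make this concrete, here is a sketch of what a minimal MVP test case might look like in a hypothetical YAML syntax (all field names, and the !regex tag, are illustrative assumptions, not a committed format):

    # Hypothetical phester syntax; field names are illustrative only.
    suite: siteinfo-smoke-test
    tests:
      - description: action=query siteinfo returns a site name
        request:
          method: GET
          path: api.php
          parameters: { action: query, meta: siteinfo, format: json }
        response:
          status: 200
          headers:
            content-type: !regex "application/json.*"
          body:
            query:
              general:
                sitename: !regex ".+"   # regular-expression-based value matching

A runner satisfying the MVP would execute the request, compare the response against the expected values, and print a plain text pass/fail report.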

Features required to fulfill this project's goals:

 * support for variables (see the sketch after this list):
    * extracted from a response
    * randomized
    * loaded from a config file
 * support for cookies and sessions
 * discover tests by recursively examining directories
 * support for fixtures:
    * execute fixtures in order of dependency
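
A rough sketch of the three variable sources in a single test file (hypothetical syntax; the !config, !random, and !extract tags and the {{...}} interpolation are illustrative assumptions):

    # Hypothetical syntax illustrating the three variable sources.
    variables:
      root_user: !config root.username      # loaded from a config file
      page_title: !random "PhesterTest-"    # randomized, with a fixed prefix
    tests:
      - description: create a page and capture the new revision id
        request:
          method: POST
          path: api.php
          parameters:
            action: edit
            title: "{{page_title}}"
            text: Created by phester.
            token: '+\'                     # MediaWiki's constant anonymous CSRF token
            format: json
        response:
          body:
            edit:
              newrevid: !extract rev_id     # extracted from the response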

Features expected to become useful or necessary later, or for stretch goals:
(roughly in order of relevance)

 * discover tests specified by extensions (needs MW-specific code?)
 * allow test suites to be filtered by tag (see the sketch after this list):
    * execute only the fixtures needed by the tagged tests (plus any fixtures those depend on)
 * allow for retries
 * allow tests to be skipped (preconditions)
 * support JSON output
 * support running the same tests with several sets of parameters (test cases)
 * support for cookie jars
 * allow tests for interactions spanning multiple sites
 * support HTML output
 * parallel test execution
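
For the tag filtering and parameter-set items above, something like the following could work (again purely hypothetical syntax):

    # Hypothetical syntax: suite tags plus parameterized test cases.
    suite: siteinfo-formats
    tags: [read-only, smoke]        # enables filtering by tag
    tests:
      - description: siteinfo works in several output formats
        cases:                      # the same test runs once per parameter set
          - { format: json }
          - { format: xml }
        request:
          method: GET
          path: api.php
          parameters: { action: query, meta: siteinfo, format: "{{format}}" }
        response:
          status: 200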

Requirements that seem particular to (or especially important for) the Wikimedia use case:

 * HTTP-centric paradigm, focusing on specifying the headers and body of requests, and running assertions against the headers and body of the response.
 * Declarative test definition, no conditionals or loops. This keeps tests honest and enables mapping to other frameworks.
 * Support for running assertions against parts of a structured (JSON) response: JSON-to-JSON comparison, with the ability to use the more human-friendly YAML syntax (see the sketch after this list).
 * filtering by tags (because we expect to have a large number of tests)
 * parallel execution (because we expect to have a large number of tests)
 * YAML-based declarative tests and fixtures: tests should be language agnostic, so that people working in different language ecosystems and code bases find it easy to write them. This also avoids lock-in to a specific tool, since YAML is easy to parse and convert.
 * generalized fixture creation, entirely API based, without the need to write "code" other than specifying requests in YAML.
 * randomized fixtures, so we can create privileged users on potentially public test systems.
 * control over cookies and sessions
 * ease of running in dev environments without the need to install additional tools or infrastructure (this might be a reason to switch to Python for the implementation; Node.js is also still in the race).
 * discovery of tests defined by extensions.
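
The JSON-to-JSON comparison could work by writing the expected structure in YAML and matching it recursively as a subset of the actual JSON response, for example (hypothetical syntax):

    # Hypothetical: the expected body is written in YAML and matched recursively
    # as a subset of the actual JSON response.
    response:
      status: 200
      body:
        query:
          general:
            sitename: My test wiki
            # fields not listed here (generator, phpversion, ...) are simply not checked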

Usage of variables and fixtures:

 * have a well-known root user with fixed credentials in the config.
 * the name and credentials of that user are loaded into variables that can be accessed from within the YAML files.
 * YAML files that create fixtures, such as pages and users, can read but also define variables.
 * Variable values are either random with a fixed prefix, or extracted from an HTTP response.
 * Variable value extraction uses the same mechanism as checking/asserting responses (see the sketch after this list).
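
Putting those rules together, a fixture file might look like this (hypothetical syntax; !random and !extract as in the earlier sketches):

    # Hypothetical fixture: creates a page and exports variables for later tests.
    fixture: test-page
    variables:
      fixture_title: !random "PhesterFixture-"   # random, with a fixed prefix
    setup:
      - request:
          method: POST
          path: api.php
          parameters:
            action: edit
            title: "{{fixture_title}}"
            text: Fixture content, first revision.
            token: '+\'                          # anonymous CSRF token
            format: json
        response:
          body:
            edit:
              result: Success
              newrevid: !extract fixture_revid   # extraction reuses the assertion mechanism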

Functional requirements:

 * Run the test suites in sequence, in the order given on the command line.
 * Execute the requests within each suite in sequence.
 * The runner can be invoked from the command line (see the example below).
 * Required input: the base URL of a MediaWiki instance.
 * Required input: one or more test description files.
 * The test runner executes each test case against the given API and reports any responses that fail to match the expected results
 * Human-readable plain text output
 * Support regular-expression-based value matching
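
For example, an invocation might look like: phester --base-url http://localhost:8080/w tests/edit.yaml tests/query.yaml (the command name and the --base-url flag are placeholders here, not a settled interface).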

Implementation notes:

 * The test runner should be implemented in PHP. Rationale: it is intended to run in a development environment used to write PHP code. Also, we may want to pull this into the MediaWiki core project via Composer at some point.
 * Use the Guzzle library for making HTTP requests
 * The test runner should not depend on MediaWiki core code.
 * The test runner should not hard code any knowledge about the MediaWiki action API, and should be designed to be usable for testing other APIs, such as RESTbase.
 * The test runner MUST ask for confirmation that it is OK for any information on the system behind the target API to be damaged or lost (unless --force is specified)
 * No cleanup (tear-down) is performed between tests.

Step 3: Try writing basic tests
The tests below should work with the MVP, so they must not require variables; that rules out anything involving login.

For MediaWiki, relevant stories to test are:
 * fixture that consists of a page with two revisions
 * anonymous page creation and editing, verifying the change in content (verifying access to old revisions requires variable support) via API:Edit (see the sketch after this list)
 * re-parse of dependent pages (red links turning blue, missing templates getting used after being created) via API:Edit and then fetching the rendered page from the article path (index.php?title=xyz)
 * page history with edit summary, size diff (testing minor edits and user names requires variables for login) see API:Revisions
 * recent changes with edit summary, size diff, etc API:RecentChanges
 * renaming/moving a page (basic) via API:Move
 * pre-save transform (PST) (via API:Edit)
 * template transclusion via API:Parsing wikitext
 * parser functions via API:Parsing wikitext (some)
 * magic words via API:Parsing wikitext (some)
 * diffs (use relative revision ids) via API:Compare
 * fetching different kinds of links / reverse links (see API:Query)
 * listing category contents (see API:Query)
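
As a concrete starting point, the anonymous edit story might look roughly like this (hypothetical syntax; MediaWiki accepts the constant CSRF token +\ for anonymous users, which is what makes this test possible without variable support):

    # Hypothetical phester syntax for the anonymous edit story (API:Edit).
    suite: anonymous-edit
    tests:
      - description: an anonymous user can create a page
        request:
          method: POST
          path: api.php
          parameters:
            action: edit
            title: Phester_anon_edit_test
            text: First revision.
            token: '+\'          # MediaWiki's constant CSRF token for anonymous users
            format: json
        response:
          status: 200
          body:
            edit:
              result: Success
      - description: the new content appears in the rendered page
        request:
          method: GET
          path: index.php
          parameters: { title: Phester_anon_edit_test }
        response:
          status: 200
          body: !regex "First revision\\."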

In addition, it would be useful to see one or two basic tests for Kask and RESTbase.

It would be nice to see what these tests look like in our own runner (phester) and for some of the other candidates, like tavern, behat, codeception, or dredd. It's not necessary to write all the tests for all the different systems, though.

Step 4: Try writing tests using variables
Without support for variables in phester, these tests have to be written "dry", with no way to execute them yet (see the sketch after the list).

 * site stats (may need support for arithmetic functions)
 * watchlist (watch/unwatch/auto-watch)
 * changing preferences
 * minor edits
 * bot edits
 * auto-patrolling
 * patrolling, patrol log
 * page deletion/undeletion (effectiveness)
 * page protection (effectiveness, levels)
 * user blocking (effectiveness, various options)
 * user script protection (can only edit own)
 * MediaWiki namespace restrictions
 * Site script restrictions
 * remaining core parser functions
 * remaining core magic words (in particular REVISIONID and friends)
 * newtalk notifications
 * user preference changes
 * Signatures (PST)
 * undo
 * rollback
 * diffs with fixed revision IDs (test special case for last and first revision)
 * listing users
 * media file uploads (need to be enabled on the wiki)
 * renaming/moving a page (with or without leaving a redirect, also OVER a redirect, with subpages, with talk page, etc)
 * listing user contributions
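
As an example of such a "dry" test, a minor edit check might log in as the well-known root user and then make a minor edit, along these lines (hypothetical syntax; the earlier steps that extract login_token and csrf_token are omitted for brevity):

    # Hypothetical "dry" test requiring variables: minor edits.
    suite: minor-edit
    variables:
      root_user: !config root.username
      root_password: !config root.password
    setup:
      - request:                             # session cookies carry over to later requests
          method: POST
          path: api.php
          parameters:
            action: clientlogin
            username: "{{root_user}}"
            password: "{{root_password}}"
            logintoken: "{{login_token}}"    # extracted in an omitted earlier step
            loginreturnurl: "{{base_url}}"
            format: json
    tests:
      - description: log in and make a minor edit
        request:
          method: POST
          path: api.php
          parameters:
            action: edit
            title: Phester_minor_edit_test
            text: Second revision.
            minor: true
            token: "{{csrf_token}}"          # extracted in an omitted earlier step
            format: json
        response:
          body:
            edit:
              result: Success

A follow-up request against API:Revisions with rvprop=flags would then assert that the minor flag is actually set on the new revision.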

It would be nice to see what these tests look like in our own runner (phester) and for some of the other candidates, like tavern, behat, codeception, or dredd.

Step 5: Pick a test runner (our own, or an existing one)
See: https://docs.google.com/spreadsheets/d/1G50XPisubSRttq4QhakSij8RDF5TBAxrJBwZ7xdBZG0

Candidates:

 * Phester
 * strst
 * tavern
 * behat
 * codeception
 * cypress.io
 * RobotFramework
 * SoapUI
 * dredd

Criteria:

 * runtime
 * execution model
 * ease of running locally
 * ease of running in CI
 * test language
 * ease of editing
 * ease of migration
 * scope/purpose fit

 * stability/support
 * cost to modify/maintain
 * control over development
 * documentation
 * license model
 * license sympathy

 * recursive body matches
 * regex matches

 * variables
 * variable injection

 * global fixtures

 * JSON output

 * scan for test files
 * filter tests by tag

 * parallel execution

Step 6: Add any missing functionality to the selected runner.
If we decide to continue with phester, this means implementing the features listed above as "required to fulfill this project's goals".

Step 7 (stretch/future): Write a comprehensive suite of tests covering core actions that modify the database
That is, any API module that returns true from needsToken().

Step 8 (stretch/future): Write a comprehensive suite of tests covering core query actions
That is, any API module extending ApiQueryBase.