Parsoid/Round-trip testing

The Parsoid code includes a round-trip testing system that tests code changes. It is composed of a server that hands out tasks and presents results, and clients that do the testing and report back to the server. The code is in the testreduce repo, which is an automatic mirror of the repo in gerrit. The round-trip testing code has been fully puppetized.

There's a private instance of the server on  that currently tests a representative set of ~160,000 pages from different Wikipedia languages. You can access the web service at https://parsoid-rt-tests.wikimedia.org/

Private setup
The instructions to set up a private instance of the round-trip test server can be found here. A MySQL database is needed to keep the set of pages and the testing results.
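As a sketch of the database step only (the database name, user name, and password below are illustrative assumptions, not the actual testreduce schema or credentials):

```shell
# Hypothetical sketch: create a MySQL database and user for a private
# testreduce instance. Names and password are placeholders -- adjust them,
# and point the testreduce server config at whatever you choose.
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS testreduce;
CREATE USER IF NOT EXISTS 'testreduce'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON testreduce.* TO 'testreduce'@'localhost';
SQL
```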

RT-testing setup
The coordinator runs on . The RT-testing clients run on . You need access to a bastion server on wikimedia.org to access . See SSH configuration for access to production on wikitech. These clients access the Parsoid REST API that runs on scandium.

The clients are managed/restarted by systemd, and the config is in . Please do not modify the config on testreduce1001 directly (it will be overwritten by puppet runs every 30 minutes). Any necessary changes should be made in puppet and deployed.

To {stop,restart,start} all clients on a VM (not normally needed):
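With systemd-managed units, the commands would be along these lines. This is a hedged sketch: the unit name parsoid-rt-client is an assumption based on the service names this page mentions, so check the actual unit name on the VM first.

```shell
# Hypothetical sketch, assuming the clients run under a systemd unit
# named parsoid-rt-client (verify with: systemctl list-units 'parsoid*'):
sudo systemctl stop parsoid-rt-client
sudo systemctl start parsoid-rt-client
sudo systemctl restart parsoid-rt-client
```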

Client logs are in systemd journals and can be accessed as: . In the current setup, the testreduce clients talk to a global parsoid service that runs on . So, look at the logs of the parsoid service on scandium to find problems / bugs with the Parsoid code being tested. These logs are also mirrored to Kibana; you can find them on this dashboard.
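For example, assuming the same hypothetical unit name as above, the journals can be read with journalctl:

```shell
# Hypothetical sketch: follow the client logs live
# (the unit name parsoid-rt-client is an assumption).
sudo journalctl -u parsoid-rt-client -f

# Or dump the last 500 lines without following:
sudo journalctl -u parsoid-rt-client -n 500 --no-pager
```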

Starting a test run
It's probably best to check that we're not currently running tests on a parsoid commit. Use the  command on   to verify that it says "The server does not have any work for us right now".
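One hedged way to do that check, assuming the clients log to the systemd journal under a unit named parsoid-rt-client (an assumption, not the documented command):

```shell
# Hypothetical: look at recent client log lines for the idle message.
sudo journalctl -u parsoid-rt-client -n 50 --no-pager \
  | grep 'does not have any work for us right now'
```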

To start rt-testing a particular parsoid commit, run the following bash script on your local computer from your checked-out copy of Parsoid:

This updates the parsoid checkout on  and , and restarts the parsoid-php and parsoid-rt-client services.
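The actual script lives in the Parsoid checkout; a simplified sketch of what a script like this does might look as follows. The host names, checkout path, and flow here are assumptions for illustration, not the real script.

```shell
#!/bin/bash
# Hypothetical sketch of the deploy-and-restart flow described above.
# Host names and the /srv/parsoid path are illustrative assumptions.
set -e
commit="$1"   # the parsoid commit to rt-test

# Update the parsoid checkout on both hosts:
for host in scandium testreduce1001; do
  ssh "$host" "cd /srv/parsoid && git fetch && git checkout $commit"
done

# Restart the services so they pick up the new code:
ssh scandium 'sudo systemctl restart parsoid-php'
ssh testreduce1001 'sudo systemctl restart parsoid-rt-client'
```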

Updating the round-trip server code
The rt-server code lives in . You can run  on testreduce1001 if you need to update node modules.
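For instance, assuming the checkout lives at a path like /srv/testreduce (both the path and the use of plain npm here are assumptions):

```shell
# Hypothetical sketch: refresh the node modules for the rt-server code.
cd /srv/testreduce
npm install
```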

Running the regression script
After an rt run, we compare diffs with previous runs to determine if we've introduced some new semantic differences. However, since the runs happen on different dates and use production data, there's going to be some natural churn to account for. The regression script automates the process of rerunning the rt script on a handful of pages to determine if there are any true positives.

Note that the script will check out the specified commits while running, but it doesn't do anything for dependencies -- Parsoid will be running in integrated mode with the latest production mediawiki version(s) and their corresponding  packages (depending on the  field used in the request). So, at present, it isn't appropriate when bumping dependency versions between commits.

The  ("known good") and  ("to be tested") can be anything that git recognizes, including tag names -- they don't necessarily have to correspond to the hashes you provided to the rt server, although usually that's what you'll use.

Crashers prevent the script from running and may need to be pruned. They can then be tested individually as follows on . It's also a good idea to check the parsoid-tests dashboard for notices and errors.
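To illustrate the point about refs: anything git can resolve works, because a lightweight tag and the commit hash it points at name the same object. A self-contained demonstration in a throwaway repo (all names here are illustrative):

```shell
# Show that a tag and a commit hash resolve to the same object, which is
# why either form works as the "known good" / "to be tested" argument.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'known good state'
git tag known-good
test "$(git rev-parse known-good)" = "$(git rev-parse HEAD)" && echo same
# prints "same"
```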

Running Parsoid tools on scandium
Parsoid will run in integrated mode on scandium from , but it requires use of the  wrapper in order to configure  properly. More information on  is at Extension:MediaWikiFarm/Scripts. A sample command would look like: . Parts of that command are often abbreviated as a helper alias  in your shell to make invocations easier.
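As a hedged illustration only (the wrapper name, maintenance script, and wiki id below are assumptions; see Extension:MediaWikiFarm/Scripts for the real syntax):

```shell
# Hypothetical sketch: run a MediaWiki maintenance script through the
# farm wrapper so the right wiki configuration is selected for you.
echo 'Hello [[world]]' | mwscript maintenance/parse.php --wiki=enwiki

# An illustrative shell alias to shorten repeated invocations:
alias mwparse='mwscript maintenance/parse.php --wiki=enwiki'
```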

Todo / Roadmap
Please look at the general Parsoid roadmap.

Server UI and other usability improvements
We recently changed the server to use a templating system to separate the code from the presentation. Further improvements can now be made to the presentation itself.

Ideas for improvement:

 * Improve pairwise regressions/fixes interface on commits list. Done!
 * Flag certain types of regressions that we currently search for by eye: create views with
   * Regressions introducing exactly one semantic/syntactic diff into a perfect page, and
   * Other introductions of semantic diffs to pages that previously had only syntactic diffs.
 * Improve diffing in results views:
   * Investigate other diffing libraries for speed,
   * Apply word-based diffs on diffed lines,
   * Diff results pages between revisions to detect new semantic/syntactic errors,
   * Currently new diff content appears before old, which is confusing; change this.