Parsoid/Round-trip testing


The Parsoid code includes a round-trip testing system that tests code changes. It is composed of a server that hands out tasks and presents results, and clients that do the testing and report back to the server. The code is in the testreduce repo, which is an automatic mirror of the repo in Gerrit. The round-trip testing code on scandium has been fully puppetized.

There's a private instance of the server on scandium that currently tests a representative set of pages (~160,000) from different Wikipedia languages. You can access it by setting up an SSH tunnel to scandium:

ssh -L 8003:localhost:8003 USERID@scandium.eqiad.wmnet

This lets you access the web service at http://localhost:8003 locally on your computer.

Private setup

The instructions to set up a private instance of the round-trip test server can be found here. A MySQL database is needed to keep the set of pages and the testing results.
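As a rough sketch, creating the backing database might look like the following. The database and user names here are illustrative assumptions, not the names used by the actual setup instructions; substitute whatever those instructions specify.

```sql
-- Hypothetical names; follow the real setup instructions for specifics.
CREATE DATABASE testreduce;
CREATE USER 'testreduce'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON testreduce.* TO 'testreduce'@'localhost';
```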

RT-testing setup

The coordinator runs on scandium. RT-testing clients also run on scandium and shut themselves down when the revision of the Parsoid checkout changes. You need access to the wikimedia.org bastion to reach scandium.
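Since scandium is only reachable through the bastion, it is convenient to configure the jump host once in your SSH config. A sketch, where the bastion hostname is a placeholder — substitute the actual Wikimedia bastion you have access to:

```
# ~/.ssh/config (BASTION_HOST and USERID are placeholders)
Host scandium.eqiad.wmnet
    User USERID
    ProxyJump USERID@BASTION_HOST
```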

The clients are managed and restarted by systemd; the config is in /lib/systemd/system/parsoid-rt-client.service. Please do not modify the config on scandium directly (it will be overwritten by puppet runs every 30 minutes). Any necessary changes should be made in puppet and deployed.
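For orientation, a service unit of this kind typically looks roughly like the following. This is a sketch only — the real unit is generated by puppet, and the paths and user shown here are assumptions:

```ini
# Sketch of /lib/systemd/system/parsoid-rt-client.service;
# the actual puppet-generated unit will differ in detail.
[Unit]
Description=Parsoid round-trip testing client
After=network.target

[Service]
ExecStart=/usr/bin/node /srv/testreduce/client.js
Restart=always
User=testreduce

[Install]
WantedBy=multi-user.target
```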

To {stop,restart,start} all clients on a VM (not normally needed):

sudo service parsoid-rt-client stop
sudo service parsoid-rt-client restart
sudo service parsoid-rt-client start

Client logs are in systemd journals and can be accessed as:

### Logs for the parsoid-rt-client service
# equivalent to tail -f <log-file>
sudo journalctl -f -u parsoid-rt-client
# equivalent to tail -n 1000
sudo journalctl -n 1000 -u parsoid-rt-client

### Logs of the parsoid-rt testreduce server
sudo journalctl -f -u parsoid-rt

### Logs for the parsoid service
sudo journalctl -f -u parsoid

In the current setup, the testreduce clients talk to a global parsoid service that runs on scandium. So, look at the logs of the parsoid service to find problems / bugs with the parsoid code being tested. These logs are also mirrored to Kibana which you can find on this dashboard.

Updating the code to test (and run by the clients)

To update rt-testing code, run the following on scandium:

update_parsoid.sh

This updates the Parsoid checkout and restarts the parsoid and parsoid-rt-client services.

To kick off a new round of testing, edit /srv/parsoid-testing/tests/testreduce/parsoid.rt-test.ids and add a new entry on a new line. To ensure the test run starts right away (rather than waiting for the clients to notice the change), you can run sudo service parsoid-rt-client restart. Then watch the logs of the parsoid-rt service (see the logging information in the setup section above).

Updating the round-trip server code

The rt-server code lives in /srv/testreduce and runs off the ruthenium branch.

After review and merge on master, check out the ruthenium branch locally and merge master into it. If you need to update node_modules/, do that as well and commit it, then push the branch to gerrit (you should have push rights).

cd /srv/testreduce
## Verify you are on the ruthenium branch before the git pull
git pull
sudo service parsoid-rt restart

Running the regression script

After an rt run, we compare diffs with previous runs to determine if we've introduced some new semantic differences. However, since the runs happen on different dates and use production data, there will be some natural churn to account for. The regression script automates rerunning the rt script on a handful of pages to determine whether any of the reported regressions are true positives.

# on local machine
# Setup an ssh tunnel to get the results of the rt run
ssh -L 8003:localhost:8003 scandium.eqiad.wmnet
# Copy the regressions from the following sources to ~/f on scandium
# http://localhost:8003/commits
# http://localhost:8003/rtselsererrors

# on scandium
# Confirm that an rt run isn't in progress
sudo service parsoid-rt-client status
sudo service parsoid-rt-client stop
# Run the script
cd /srv/parsoid-testing
node tools/regression-testing.js --proxyURL http://scandium.eqiad.wmnet:80 --parsoidURL http://DOMAIN/w/rest.php -f ~/f -o oldcommit -c newcommit

Note that the script will check out the specified commits while running, but it does nothing about dependencies. So, at present, it isn't appropriate when dependency versions are bumped between the two commits. Crashers prevent the script from completing and may need to be pruned from the title list. They can then be tested individually as follows.

cd /srv/parsoid-testing
git checkout somecommit
node bin/roundtrip-test.js --proxyURL http://scandium.eqiad.wmnet:80 --parsoidURL http://DOMAIN/w/rest.php --domain en.wikipedia.org "Sometitle"
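Pruning a crasher from the title list (~/f) amounts to deleting its line. A quick sketch — the file is simulated with a temp file here, and the titles are stand-ins:

```shell
# Simulate the regression title list (~/f) with a temp file;
# 'Crashing_page' stands in for the title that crashes the script.
f=$(mktemp)
printf '%s\n' 'Page_one' 'Crashing_page' 'Page_two' > "$f"
# Drop the crashing title (exact-line match) and keep the rest.
grep -v -x 'Crashing_page' "$f" > "$f.pruned" && mv "$f.pruned" "$f"
cat "$f"
```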

It's also a good idea to check on the parsoid-tests dashboard for notices and errors.

Running Parsoid tools on scandium

Parsoid will run in integrated mode on scandium from /srv/parsoid-testing, but it requires the MWScript.php wrapper in order to configure mediawiki-core properly. More information on mwscript is at Extension:MediaWikiFarm/Scripts. A sample command would look like:

$ echo '==Foo==' | sudo -u www-data php /srv/mediawiki/multiversion/MWScript.php /srv/parsoid-testing/bin/parse.php --wiki=hiwiki --integrated

Parts of that command are often abbreviated into a helper alias mwscript in your shell to make invocations easier.
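For example, a shell function along these lines would do it. This is a sketch — the actual helper used on scandium may be defined differently:

```shell
# Hypothetical helper; the real mwscript alias/function on scandium may differ.
mwscript() {
    sudo -u www-data php /srv/mediawiki/multiversion/MWScript.php "$@"
}
# Usage (equivalent to the full sample command above):
#   echo '==Foo==' | mwscript /srv/parsoid-testing/bin/parse.php --wiki=hiwiki --integrated
```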

Todo / Roadmap

Please look at the general Parsoid roadmap.

Server UI and other usability improvements

The server was recently changed to use a templating system that separates the code from the presentation, so further improvements can now be made to the presentation itself.

Ideas for improvement:
  • Improve pairwise regressions/fixes interface on commits list bug 52407. Done!
  • Flag certain types of regressions that we currently search for by eye: create views with
    • Regressions introducing exactly one semantic/syntactic diff into a perfect page, and
    • Other introductions of semantic diffs to pages that previously had only syntactic diffs.
  • Improve diffing in results views:
    • Investigate other diffing libraries for speed,
    • Apply word based diffs on diffed lines,
    • Diff results pages between revisions to detect new semantic/syntactic errors,
    • Currently new diff content appears before old, which is confusing; change this.