Parsoid

The Parsoid team is developing a wiki runtime which can translate back and forth between MediaWiki's wikitext syntax and an equivalent HTML / RDFa document model with better support for automated processing and visual editing. Its main use currently is the VisualEditor project. A major (and not easy) requirement is to avoid 'dirty diffs' or information loss in the conversion. A good overview can be found in this blog post. Our roadmap describes what we are currently up to.
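For a flavour of the HTML/RDFa document model, a wikilink such as [[Foo]] comes out roughly as follows. This is a simplified illustration; see the Parsoid/MediaWiki DOM spec for the authoritative markup, including the data-parsoid attributes used for clean round-tripping.

```
<!-- Simplified: the wikitext [[Foo]] rendered as Parsoid-style HTML/RDFa -->
<a rel="mw:WikiLink" href="./Foo">Foo</a>
```

The rel="mw:WikiLink" RDFa annotation is what lets tools (and the serializer) recognize this as a wikilink rather than a plain external anchor.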



Getting started
For a quick overview, you can test drive Parsoid using a node web service. Development happens in the Parsoid extension in Git. If you need help, you can contact us in #mediawiki-parsoid or on the wikitext-l mailing list.

Parsoid setup
If you want to do an anonymous checkout:

git clone https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Parsoid.git

Or, if you plan to hack on Parsoid, please follow the Gerrit 'getting started' docs and use an authenticated checkout URL instead, such as

git clone ssh://USERNAME@gerrit.wikimedia.org:29418/mediawiki/extensions/Parsoid.git

Use node.js 0.8 or 0.10
The developers use node.js 0.8. We have recently merged a patch to support node 0.10, but (as of 2013-08-14) we do not use it in production. We do not support any version of node below 0.8.

Run node --version to confirm that you are running node 0.8 or 0.10.

(Optional) Install nave
If you are not running an appropriate version of node, or would like to more easily test Parsoid with different versions of node, one Parsoid developer recommends using nave. This lets you easily switch between different versions of node (for example, with nave use 0.8).

(Optional) Install node.js 0.8
Alternatively, if you would like to install node 0.8 system-wide, the following should work.

On Debian:

On the Wikimedia Ubuntu Precise Labs machines, you can use:

On other Ubuntu machines, you should use Chris Lea's Launchpad repository with nodejs 0.8 (see https://chrislea.com/2013/03/15/upgrading-from-node-js-0-8-x-to-0-10-0-from-my-ppa/).

Install dependencies
First, install the npm dependencies:

cd Parsoid/js
npm install

Note that some users have reported problems with express 2.5.x; see bug 52840 for details.

Configuration
If you would like to point the Parsoid web service to your own wiki, go to the api directory and create a localsettings.js file based on localsettings.js.example. Use setInterwiki to point to the MediaWiki instance(s) you want to use.

Optionally, you can enable debugging by setting parsoidConfig.debug to true in that file.
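A minimal localsettings.js might look like the following sketch. The property names here (setup, setInterwiki, debug) are illustrative of the configuration interface; check localsettings.js.example in your checkout for the authoritative names and defaults.

```javascript
// localsettings.js -- sketch of a local Parsoid service configuration.
// Treat the names below as illustrative; your checkout's
// localsettings.js.example is authoritative.
exports.setup = function(parsoidConfig) {
	// Point the 'localhost' interwiki prefix at your own MediaWiki API.
	parsoidConfig.setInterwiki('localhost', 'http://localhost/w/api.php');

	// Enable verbose debug output.
	parsoidConfig.debug = true;
};
```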

Run the server
You should be able to run the Parsoid web service using:

cd Parsoid/js
node api/server.js

This will start the Parsoid HTTP service on port 8000. To test it, point your browser to http://localhost:8000/.

Converting simple wikitext
You can convert simple wikitext snippets using our parse.js script:

cd Parsoid/js/tests
echo 'Foo' | node parse

More options are available with

node parse --help

Running the tests
To run all parser tests:

cd Parsoid/js
npm test

parserTests has quite a few options now, which can be listed using node parserTests.js --help.

An alternative wrapper taking wikitext on stdin and emitting HTML on stdout is modules/parser/parse.js:

cd Parsoid/js/tests
echo '{{:en:Main Page}}' | node parse.js

This example will transclude the English Wikipedia's en:Main Page, including its embedded templates. Also check out node parse --help for options.

You can also try to round-trip a page and check for the significance of the differences. For example, try

cd Parsoid/js/tests
node roundtrip-test.js --wiki mw Parsoid

This example will run the roundtripper on this page (the one you're reading, including all of this text) and report the results. It will also attempt to determine whether the differences in wikitext create any differences in the display of the page. If not, it reports the difference as "syntactic".
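The "syntactic" classification can be thought of along the following lines. This is a deliberately tiny sketch, not Parsoid's actual implementation: real round-trip testing compares rendered output, whereas here whitespace normalization stands in for rendering.

```javascript
// Toy illustration of "syntactic vs semantic" diff classification.
// A diff is "syntactic" when the two wikitext versions differ only in
// ways that (in this toy model) cannot change the rendered page.
function normalize(wikitext) {
	// Collapse insignificant whitespace as a stand-in for rendering.
	return wikitext
		.split('\n')
		.map(function(line) { return line.replace(/\s+/g, ' ').trim(); })
		.join('\n');
}

function classifyDiff(original, roundTripped) {
	if (original === roundTripped) {
		return 'identical';
	}
	return normalize(original) === normalize(roundTripped) ?
		'syntactic' : 'semantic';
}

console.log(classifyDiff('* foo', '*  foo'));  // syntactic
console.log(classifyDiff('* foo', '* bar'));   // semantic
```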

Finally, if you really wanted to hammer the Parsoid codebase to see how we're doing, you can try running the roundtrip testing environment on your computer with a list of titles.

As if that weren't enough, we've also added a --selser option, with multiple related options, to the parserTests.js script. The way it works:

cd Parsoid/js/tests
node parserTests.js --selser

You can also write out change files, read them in, and specify any number of iterations of random changes to go through. There's also a plan to pass in actual changes to the tests, but those plans are still in progress.

Monthly high-level status summary
(See all status reports)

Todo
Our big plans are spelled out in some detail in our roadmap. Smaller-step tasks are tracked in our bug list.

If you have questions, try to ping the team on #mediawiki-parsoid, or send a mail to the wikitext-l mailing list. If all that fails, you can also contact Gabriel Wicke by mail.

Architecture
The broad architecture looks like this:

| wikitext
V
PEG wiki/HTML tokenizer        (or other tokenizers / SAX-like parsers)
| Chunks of tokens
V
Token stream transformations
| Chunks of tokens
V
HTML5 tree builder
| HTML5 DOM tree
V
DOM Postprocessors
| HTML5 DOM tree
V
(X)HTML serialization
|
+--> Browser
|
V
Visual Editor

This is essentially an HTML parser pipeline, with the regular HTML tokenizer replaced by a combined wiki/HTML tokenizer, and with additional functionality implemented as (mostly syntax-independent) token stream transformations.


 * 1) The PEG-based tokenizer produces a combined token stream from wiki and HTML syntax. The PEG grammar is a context-free grammar that can be ported to different parser generators, mostly by adapting the parser actions to the target language. Currently we use pegjs to build the actual JavaScript tokenizer for us. We try to do as much work as possible in the grammar-based tokenizer, so that the emitted tokens are already mostly syntax-independent.
 * 2) Token stream transformations are used to implement context-sensitive wiki-specific functionality (wiki lists, quotes for italic/bold, etc.). Templates are also expanded at this stage, which makes it possible to still render unbalanced templates like table start / row / end combinations.
 * 3) The resulting tokens are then fed to an HTML5 tree builder (currently the 'html5' node.js module), which builds an HTML5 DOM tree from the token soup. This step already sanitizes nesting and enforces some content-model restrictions according to the rules of the HTML5 parsing spec.
 * 4) The resulting DOM is further manipulated using postprocessors. Currently, any remaining top-level inline content is wrapped into paragraphs in such a postprocessor. For output intended for viewing, further document-model sanitation can be added here to get very close to what Tidy does in the production parser.
 * 5) Finally, the DOM tree can be serialized as XML or HTML.
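The five stages above can be sketched as a function pipeline. This is a deliberately tiny model: the stage names mirror the architecture, but each body is a placeholder for illustration, not Parsoid's real code.

```javascript
// Toy model of the Parsoid pipeline: each stage is a function, and the
// parser is their composition. All bodies are placeholders.
function tokenize(wikitext) {
	// Stage 1: PEG tokenizer -- here, simply one token per line.
	return wikitext.split('\n').map(function(line) {
		return { type: 'line', text: line };
	});
}

function transformTokens(tokens) {
	// Stage 2: token stream transformations (lists, quotes, templates).
	// Here: mark lines starting with '*' as list items.
	return tokens.map(function(tok) {
		return tok.text.charAt(0) === '*' ?
			{ type: 'li', text: tok.text.slice(1).trim() } : tok;
	});
}

function buildTree(tokens) {
	// Stage 3: tree builder turning the token soup into a DOM-like tree.
	return { name: 'body', children: tokens };
}

function postprocess(tree) {
	// Stage 4: DOM postprocessing, e.g. wrapping stray top-level inline
	// content into paragraphs.
	tree.children = tree.children.map(function(node) {
		return node.type === 'line' ? { type: 'p', text: node.text } : node;
	});
	return tree;
}

function serialize(tree) {
	// Stage 5: (X)HTML serialization.
	return tree.children.map(function(node) {
		return '<' + node.type + '>' + node.text + '</' + node.type + '>';
	}).join('');
}

function parse(wikitext) {
	return serialize(postprocess(buildTree(transformTokens(tokenize(wikitext)))));
}

console.log(parse('Hello\n* item'));  // <p>Hello</p><li>item</li>
```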

Technical documents

 * Parsoid/Roadmap: What we are up to.
 * Parsoid/MediaWiki DOM spec: Wiki content model spec using HTML/XML DOM and RDFa. The external interface for Parsoid, and designed to be useful as a future storage format.
 * Parsoid/limitations: Limitations in Parsoid, mainly contrived templating (ab)uses that don't matter in practice. Could be extended to be similar to the preprocessor upgrade notes.
 * Parsoid/Round-trip testing: The round-trip testing setup we are using to test the wikitext -> HTML DOM -> wikitext round-trip on actual Wikipedia content.
 * Parsoid/test cases: Please add interesting snippets or pages.
 * If you feel masochistic, check out our broken wikitext tar pit.
 * Minimization of DOM tags: primarily used for minimizing the nesting of inline tags (mainly bold and italic).