Parsoid/Roadmap

After the successful December release and some follow-up clean-up work, we are now considering the next steps for Parsoid. The medium-term plan for the summer is to support VisualEditor (VE) in becoming the default editor on all wikis.

The main tasks we see on the Parsoid side to get there by July are:


 * Performance improvements: Loading a large wiki page through Parsoid into VisualEditor can currently take over 30 seconds. We want to make this instantaneous by generating and storing the HTML after each edit. This requires a throughput that can keep up with the edit rates on major wikis (~10 Hz on enwiki).


 * Features and refinement: Localization support will enable the use of Parsoid on non-English wikis. VisualEditor needs editing support for more content elements including template parameters and extension tags. As usual, we will also continue to refine Parsoid's compatibility in round-trip testing and parserTests.

Apart from these main tasks closely connected to supporting VE, we also need to look at the longer-term Parsoid and MediaWiki strategy. Supporting VE on small wiki installations without requiring a Parsoid install is one aspect. Another is evolving MediaWiki's templating facilities towards better support for visual editing and smarter caching. In short, this is about integrating some of Parsoid's advances back into MediaWiki core. The clean and information-rich HTML-based content model in particular opens up several attractive options.

Features: Editing support for images and categories, localization
Editing support for images still needs to be implemented; categories are already supported but need testing.

Localization support for non-English wikis (namespaces, magic words, link trails and prefixes, language variants, etc.) will be developed. Per-wiki configuration information is retrieved through the API (already implemented). The HTML DOM interface abstracts these localization issues away from the VisualEditor.
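As a small illustration of what the per-wiki configuration drives, consider link trails: the text directly after a [[link]] that becomes part of the link label varies per language. The sketch below uses invented regexes and function names; the real patterns come from the per-wiki configuration (action=query&meta=siteinfo in the MediaWiki API).

```javascript
// Sketch of per-wiki link trail handling. The regexes here are
// illustrative stand-ins for the values delivered by meta=siteinfo.
const linkTrail = {
  enwiki: /^([a-z]+)/,
  dewiki: /^([a-zäöüß]+)/,
};

// Split the text following a [[link]] into the trail (which joins
// the link text) and the remainder.
function splitLinkTrail(wiki, text) {
  const m = text.match(linkTrail[wiki]);
  return m ? [m[1], text.slice(m[1].length)] : ['', text];
}

// On dewiki, "[[Haus]]es wurde" renders with "es" as part of the link.
const [trail, rest] = splitLinkTrail('dewiki', 'es wurde');
// trail === 'es', rest === ' wurde'
```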

Performance: Generate and store HTML DOM on edit
Instead of converting a (potentially large) wiki page to HTML when a user loads the page into the VisualEditor, we will do so in the background after each edit. The result will be stored in the database, which will make loading a page into the VisualEditor practically instantaneous since no more conversion needs to be performed.

The HTML/RDFa DOM content model we developed aims to be an equivalent representation of the content. It can contain fully expanded templates while still providing the metadata needed to re-expand a template later. This makes the HTML DOM an equivalent representation of a revision, with the added capability to persistently cache template expansions and extension output inline. The inline cache enables further performance improvements for subsequent edits and refreshLinks jobs, which we describe further down in this document.

Adding HTML storage will probably involve adding a separate text table and adapting the regular Revision storage logic to optionally use it. Storage space itself does not seem to be an issue (todo: double-check with ops!). The HTML table can reuse the text id of the corresponding wikitext, which avoids any schema changes in the revision table.

Testing / good to have: Start recording performance data from round-trip testing
For capacity planning and optimization progress tracking we need performance information on as many pages as possible. It should not be too hard to extend our round-trip testing infrastructure to collect this information. We will probably not have time for this project ourselves, but it is quite self-contained and well-suited as a project for an external contributor.

Features: Editing support for citations, template parameters and tag extensions
The main focus is on making citations and their associated templates editable, so that VisualEditor users can properly reference their sources. We will rework our version of the Cite extension to support dynamic re-expansion of the references tag. This will be needed both on the server side (for incremental updates) and the client side (inside the VisualEditor, potentially).

Template parameter editing and extension tag editing will be wikitext-based. This accommodates unbalanced template parameters, which are sadly relatively common in existing content. Both parameters and extension tag bodies will be restricted syntactically, so that wikitext edits in these cannot affect other parts of the page.
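The syntactic restriction could look roughly like the sketch below. The rules and function name are invented for illustration and are not Parsoid's actual validator; the idea is simply to reject edits that could close the surrounding transclusion or open new block structures.

```javascript
// Sketch: reject parameter values whose wikitext could escape the
// parameter and affect the rest of the page (rules are illustrative).
function isSafeParamValue(text) {
  // Forbid opening/closing transclusions ({{ }}) and tables ({| |}).
  return !/\{\{|\}\}|\{\||\|\}/.test(text);
}

isSafeParamValue('born 1815, London');  // true: plain text is fine
isSafeParamValue('1815 }} [[oops]]');   // false: would close the transclusion
```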

Features: HTTP API to render extension tags directly
We currently use an action=parse API hack to expand extension tags to HTML. Instead of this hack, we want to add a dedicated extension tag expansion endpoint that can also be used by the VisualEditor to update or insert extension tags inline.

Performance: Incremental re-parsing after wikitext edit
After an edit to a wiki page using the wikitext UI, we currently re-parse the entire page. In most cases only a small part of the page was actually modified, so a full re-parse is not really needed.

Using the DSR (DOM source range) information stored in the HTML DOM, we can match the position of a wikitext diff to a containing HTML DOM structure and re-parse only the modified version of that node. This would normally be a top-level element like a paragraph, which does not depend on nested parser state for correct rendering. Expensive operations like template expansions would normally not need to be re-performed, which would make parse times proportional to the edit size rather than the page size.
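The matching step can be sketched as follows. This is a hypothetical helper, not actual Parsoid code; each entry mimics the stored DSR attribute as [start, end] offsets into the wikitext source.

```javascript
// Find the single top-level node whose DSR range contains the edit;
// only that node's wikitext then needs to be re-parsed.
function findAffectedNode(nodes, editStart, editEnd) {
  return nodes.find(({ dsr: [s, e] }) => s <= editStart && editEnd <= e) || null;
}

const topLevelNodes = [
  { id: 'p1',     dsr: [0, 120] },
  { id: 'table1', dsr: [121, 900] },
  { id: 'p2',     dsr: [901, 1040] },
];

// An edit at offsets 200-210 falls inside the table, so only that
// subtree needs re-parsing; an edit spanning a node boundary finds
// no single container and falls back to a full re-parse.
findAffectedNode(topLevelNodes, 200, 210); // → the 'table1' node
findAffectedNode(topLevelNodes, 50, 130);  // → null (full re-parse)
```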

Research / prototype: HTML-only wiki support
The Parsoid web service adds a complex dependency to MediaWiki installations, which is problematic for simple MediaWiki installations that just want to use the VisualEditor. Wikis interested in editing through the VisualEditor exclusively don't necessarily need wikitext-based storage. Instead, they could use HTML storage natively. We already intend to add the capability for HTML storage in MediaWiki, which makes the storage part relatively easy.

In addition, HTML-only wikis will need an HTML-based diff implementation, similar to the one in localwiki or the XyDiff XML diff algorithm, to replace the wikitext source-based diff.

We will investigate which other issues we need to solve to make an HTML-based wiki possible.

Research / prototype: DOM-based templating
HTML-only wikis might want to provide similar templating functionality as the existing wikitext-based template system. This could be DOM-based.

The main things we need are
 * Expressions: provide access to modules and logic, but without the ability to define variables or express infinite loops
 * Iteration: Iterate over finite data structures (JSON objects for example)
 * Conditionals: Include / evaluate a sub-DOM depending on an expression
 * Variable interpolation in attributes and text content

This functionality is pretty simple to implement on the DOM (possibly using JS/jQuery or XPath). It would also provide an opportunity to define very minimal, service-like (RESTful, for example) extension interfaces, which extensions could port to for a gradual transition.
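A minimal sketch of these primitives, operating on a plain-object stand-in for the DOM (all names and the node shape are invented for illustration). Expressions are limited to property lookups, so a template cannot define variables or loop forever:

```javascript
// Limited expression evaluation: dotted property lookup only.
function lookup(scope, path) {
  return path.split('.').reduce((o, k) => (o == null ? o : o[k]), scope);
}

function render(node, scope) {
  if (typeof node === 'string') {
    // Variable interpolation in text content
    return node.replace(/\{\{([\w.]+)\}\}/g, (_, p) => lookup(scope, p));
  }
  if (node.each) {
    // Iteration over a finite data structure (a JSON array here)
    return lookup(scope, node.each)
      .map((item) => render({ ...node, each: undefined }, { ...scope, item }))
      .join('');
  }
  if (node.if && !lookup(scope, node.if)) return ''; // conditional sub-DOM
  const body = (node.children || []).map((c) => render(c, scope)).join('');
  return `<${node.tag}>${body}</${node.tag}>`;
}

const tmpl = {
  tag: 'ul',
  children: [{ tag: 'li', each: 'authors', children: ['{{item.name}}'] }],
};
const html = render(tmpl, { authors: [{ name: 'Ada' }, { name: 'Alan' }] });
// html === '<ul><li>Ada</li><li>Alan</li></ul>'
```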

One popular option is to embed control structures in attributes similar to TAL, Distal or Genshi. Another option is to provide a separate binding to a plain HTML document as in Pure. Templates themselves would still be valid HTML, which might make it possible to implement some sort of visual editing mode for templates. Any serious logic would live in Lua or JavaScript modules working on DOM fragments, JSON objects or strings instead of being embedded in the template itself. The limited expressions supported by ESI might serve as an inspiration.

Features: Support HTML-only wikis without Parsoid
We will support simple HTML-only wikis with VisualEditor front-end without the need for a Parsoid installation.

Performance: Fragment caching and incremental updates
Parsoid encapsulates the parts of the DOM generated from template expansions, extensions, etc. This can be used to classify current templates into those emitting self-contained (properly nested) DOM output and those emitting just a start or end tag (table start / row / end templates, for example). Fortunately, most templates produce properly nested output. Those that don't can be marked with a flag in the database, after which proper nesting can be enforced for all other templates. Unbalanced templates are encapsulated in a combined DOM block, which as a whole is properly nested again; this can also be enforced when re-expanding the combined block of templates.

With proper nesting enforced and all template parameters available, re-rendering a template only swaps out a DOM subtree. This makes it possible to cache fast-changing templates or extension output (WikiData infoboxes, for example) as fragments in the edge caches or the DB, or to update them dynamically in clients.

With more per-fragment metadata (reference-counted links and a list of recursively used templates), the LinksUpdate jobs can be restricted to re-expanding the affected template transclusions rather than the full page. The general idea is to collect all dependencies during evaluation and encode this information efficiently (likely outside the DOM) to enable quick dependency and validity checks.
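A possible shape for this per-fragment metadata is sketched below. All field names are invented for illustration; the point is that the metadata alone decides whether a cached fragment is affected by a change, without walking the DOM.

```javascript
// Hypothetical per-fragment dependency record.
const fragmentDeps = {
  fragment: 'mwt12',                        // id of the encapsulated fragment
  templates: ['Template:Infobox_person'],   // recursively used templates
  modules: ['Module:Wikidata'],             // scripting modules used
  links: { 'Ada_Lovelace': 2 },             // reference-counted outbound links
};

// A LinksUpdate job for a changed template or module only needs to
// re-expand fragments whose dependency list mentions it.
function affectedBy(deps, title) {
  return deps.templates.includes(title) || deps.modules.includes(title);
}

affectedBy(fragmentDeps, 'Template:Infobox_person'); // true
affectedBy(fragmentDeps, 'Template:Unrelated');      // false
```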

Some extensions like Cite use global state, for example to number citations. Sadly, this complicates independent re-expansions. It does, however, seem to be possible to implement numbering and similar page-wide operations using CSS and/or JS, which would also benefit the VisualEditor. Most other extensions used by the WMF (math, poem, timeline, etc.) are order-independent, so this seems to be a solvable issue for the extensions we currently care about.
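As one illustration of the CSS approach, citation numbers could be derived from document order with CSS counters rather than stored in each fragment, so a re-expanded reference fragment does not need to know its position (the class names here are illustrative, not the Cite extension's actual markup):

```css
/* Sketch: number citation markers client-side with CSS counters. */
body { counter-reset: refnum; }
.reference { counter-increment: refnum; }
.reference::after { content: "[" counter(refnum) "]"; }
```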

Parsoid can (and, in the VE deployment, does) use the PHP preprocessor via the 'expandtemplates' web API method. This lets it fully support parser functions, internal PHP interfaces, Lua scripting, etc. without having to re-implement that functionality. The result is pre-expanded wikitext, which Parsoid then parses and encapsulates. Tag extensions are expanded independently (currently via an action=parse API call).

In the longer term, we could extend the PHP API to provide more dependency information along with the expanded output. A list of templates, parser functions and Lua scripts used in the expansion would provide pretty complete dependency information for caching / incremental update purposes.

Treating the PHP preprocessor and its associated extensions as a self-contained 'legacy' component side-steps the problems associated with their wikitext-centric interfaces. Emulating the wikitext-based parameters and frame objects passed to (for example) Lua from a token-based parser will probably never work perfectly and would involve a lot of work. The performance of template expansions should not matter that much with incremental updates, as they would be relatively rare. For new pages, all template expansions can be performed in parallel (Parsoid currently sends one parallel API request per transclusion), which could be refined with some batching to amortize fixed connection overheads.
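The batching refinement can be sketched as follows. Here expandBatch stands in for a hypothetical API wrapper that expands several transclusions in one request; batching amortizes the fixed per-request overhead while keeping the batches themselves parallel.

```javascript
// Sketch: expand all transclusions on a page in parallel, grouped
// into batches of batchSize per request.
async function expandAll(transclusions, batchSize, expandBatch) {
  const requests = [];
  for (let i = 0; i < transclusions.length; i += batchSize) {
    // One request per batch; all batches are issued in parallel.
    requests.push(expandBatch(transclusions.slice(i, i + batchSize)));
  }
  return (await Promise.all(requests)).flat();
}

// Usage with a fake backend: three transclusions, two requests.
const fake = async (batch) => batch.map((t) => `<expanded:${t}>`);
expandAll(['a', 'b', 'c'], 2, fake)
  .then((r) => console.log(r.length)); // logs 3
```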

If still necessary for performance: Fast and integrated C++ implementation
The original plan was to speed up and integrate Parsoid by moving the implementation to C++. This implementation would provide parallel template expansions complete enough to be a drop-in replacement for the PHP preprocessor. Having such an implementation, with its raw efficiency and integration potential, is still very desirable (and fun to write!), but it would also come at the cost of a long delay in Parsoid development with the currently available resources. Tackling the C++ port, VE-driven tweaks to the existing JS implementation, and HTML DOM storage with its related optimizations all in parallel does not appear realistic unless there is a sudden surge in manpower.

If we reach our goal of having the VE as the default editor on all Wikipedias this summer, demand for VE-powered MediaWiki installs will probably be high outside Wikipedia too. If we make good progress on a HTML DOM based infrastructure in the meantime, HTML-only wikis with VE could be a possibility by then.

The role of Parsoid would probably change to a conversion tool and wikitext editor for HTML content at some point, for which very high optimization might not be necessary any more. Some of the ideas above also show ways to make the existing JavaScript implementation fast enough by being smart about avoiding unnecessary repeated work.