Parsoid/Roadmap

After the successful December release and some follow-up clean-up work, we are now considering the next steps for Parsoid. The continued work on the JavaScript implementation has given us new information and ideas which might influence our priorities a bit.

Storing HTML DOM
The HTML/RDFa DOM spec we developed is already very close to an equivalent representation of the content which is easier to work with than pure wikitext. It can contain fully expanded templates while still providing the metadata needed to re-expand a template later.

In the shorter term, storing / re-generating this HTML DOM after each edit would usually avoid the wait time currently experienced by VisualEditor users on large articles. In the longer term, storing the HTML DOM can enable several interesting options:

Fragment caching and incremental updates
Parsoid encapsulates the parts of the DOM generated from template expansions, extensions etc. This encapsulation can be used to classify existing templates into those emitting self-contained (properly nested) DOM output and those emitting just a start or end tag (table start / row / end templates, for example). Fortunately, most templates produce properly nested output. Those that don't can be marked with a flag in the database, after which proper nesting can be enforced for all other templates. Unbalanced templates are encapsulated together in a combined DOM block, which as a whole is properly nested again; this nesting can also be enforced when the combined block of templates is re-expanded.
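The classification step can be illustrated with a small sketch (this is not Parsoid's actual implementation): scan a template's rendered HTML and track open/close tags with a stack, flagging output that opens or closes tags it does not balance.

```javascript
// Tags that never need a closing counterpart.
const VOID_TAGS = new Set(['br', 'hr', 'img', 'meta', 'link', 'input']);

// Returns true if every opened tag is closed in the right order,
// i.e. the template output is a self-contained DOM fragment.
function isBalanced(html) {
  const tagRe = /<\/?([a-zA-Z][a-zA-Z0-9]*)[^>]*>/g;
  const stack = [];
  let m;
  while ((m = tagRe.exec(html)) !== null) {
    const name = m[1].toLowerCase();
    if (VOID_TAGS.has(name)) continue;
    if (m[0][1] === '/') {
      // Closing tag must match the most recently opened tag.
      if (stack.pop() !== name) return false;
    } else {
      stack.push(name);
    }
  }
  return stack.length === 0;
}

// A table-row-start template emitting only '<tr><td>' would be flagged
// as unbalanced and grouped with its matching row-end template.
```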

With proper nesting enforced and all template parameters available, re-rendering a template only swaps out a DOM subtree. This makes it possible to cache fast-changing templates or extension output (Wikidata infoboxes, for example) as fragments in the edge caches or the DB, or to update them dynamically in clients.
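The subtree swap itself is cheap. A minimal sketch on a plain-object tree (the node shape, the `about` id convention, and the helper name are illustrative, not Parsoid's API):

```javascript
// Replace the subtree whose 'about' id marks a transclusion's output.
// Returns true once the fragment has been swapped.
function replaceFragment(node, aboutId, replacement) {
  if (!node.children) return false;
  for (let i = 0; i < node.children.length; i++) {
    const child = node.children[i];
    if (child.about === aboutId) {
      node.children[i] = replacement;
      return true;
    }
    if (replaceFragment(child, aboutId, replacement)) return true;
  }
  return false;
}

const page = {
  name: 'body',
  children: [
    { name: 'p', children: [] },
    { name: 'div', about: '#mwt1', children: [] },
  ],
};

// Re-rendering the infobox template only touches its fragment.
replaceFragment(page, '#mwt1', { name: 'div', about: '#mwt1', text: 'fresh infobox' });
```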

With more per-fragment metadata (reference-counted links and a list of recursively used templates), the LinksUpdate jobs can be restricted to a re-expansion of the affected template transclusions rather than the full page. The general idea is to collect all dependencies during evaluation and encode this information efficiently (likely outside the DOM) to enable quick dependency and validity checks.
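A possible shape for such a per-fragment dependency record, with a validity check against current template revisions (the record layout and field names here are an assumption for illustration):

```javascript
// Dependency record for one stored fragment: which templates its
// rendering recursively used, keyed by the revision id seen at render time.
const fragmentDeps = {
  fragment: '#mwt7',
  templates: { 'Template:Infobox': 12345, 'Template:Flag': 677 },
};

// A fragment is stale if any template it used has a different revision now;
// only then does its transclusion need re-expansion.
function isStale(deps, currentRevs) {
  return Object.keys(deps.templates)
    .some((t) => currentRevs[t] !== deps.templates[t]);
}
```

Keeping this record outside the DOM means validity checks never need to parse or load the stored HTML.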

Some extensions like Cite use global state, for example to number citations. Sadly, this complicates independent re-expansions. It does however seem to be possible to implement numbering and similar page-wide operations using CSS and/or JS, which would also benefit the VisualEditor. Most other extensions used by the WMF (math, poem, timeline, etc.) are order-independent, so this seems to be a solvable issue for the extensions we currently care about.
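To make the Cite case concrete, here is a sketch of the kind of page-wide numbering pass that could move out of the parser into client-side JS (or be expressed with CSS counters): assign numbers to references in document order, reusing the number when a named reference repeats.

```javascript
// Given the ref names encountered in document order, assign each
// distinct name the next number and reuse it on repeats. This is the
// order-dependent state that currently blocks independent re-expansion.
function numberCitations(refNames) {
  const seen = new Map();
  return refNames.map((name) => {
    if (!seen.has(name)) seen.set(name, seen.size + 1);
    return seen.get(name);
  });
}

numberCitations(['a', 'b', 'a', 'c']); // → [1, 2, 1, 3]
```

Run as a post-pass over the whole page, this leaves each individual fragment free of global state.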

Parsoid can (and does in the VE deployment) use the PHP preprocessor via the 'expandtemplates' web API method. This lets it fully support parser functions, internal PHP interfaces, Lua scripting etc without having to re-implement this functionality. The result is pre-expanded wikitext, which is then parsed and encapsulated in Parsoid. Tag extensions are expanded independently (via an action=parse API call currently).
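The API call itself is straightforward. A sketch of building the `expandtemplates` request (the parameter names follow the MediaWiki action API; the wiki URL is a placeholder):

```javascript
// Build the query string for an action=expandtemplates request, which
// asks the PHP preprocessor to return pre-expanded wikitext.
function expandTemplatesQuery(wikitext, title) {
  return new URLSearchParams({
    action: 'expandtemplates',
    format: 'json',
    title: title,
    text: wikitext,
  }).toString();
}

const q = expandTemplatesQuery('{{echo|hi}}', 'Main_Page');
// POST this to e.g. https://en.wikipedia.org/w/api.php and read the
// pre-expanded wikitext out of the JSON response, which Parsoid then
// parses and encapsulates.
```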

In the longer term, we could extend the PHP API to provide more dependency information along with the expanded output. A list of templates, parser functions and Lua scripts used in the expansion would provide pretty complete dependency information for caching / incremental update purposes.

Treating the PHP preprocessor and its associated extensions as a self-contained 'legacy' component side-steps the problems associated with the wikitext-centric interfaces used. Emulating the wikitext-based parameter and frame object interfaces passed to (for example) Lua from a token-based parser would probably never work perfectly and would involve a lot of work. The performance of template expansions should not matter that much with incremental updates, as full re-expansions would be relatively rare. For new pages, all template expansions can be performed in parallel (Parsoid currently sends one parallel API request per transclusion), which could be refined with some batching to amortize fixed connection overheads.
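The batching refinement could look roughly like this sketch, where `expand` stands in for one API round-trip and the batch size is an arbitrary assumption:

```javascript
// Split a list into fixed-size chunks.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Expand all transclusions, one batch at a time; expansions within a
// batch run in parallel, and the batch amortizes fixed connection costs.
async function expandAll(transclusions, expand, batchSize = 10) {
  const results = [];
  for (const batch of chunk(transclusions, batchSize)) {
    results.push(...await Promise.all(batch.map(expand)));
  }
  return results;
}
```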

HTML-only wikis
New wikis using the VisualEditor UI exclusively could avoid the need for external dependencies by storing HTML exclusively. This will require an HTML-based diff implementation similar to the one in localwiki, or XML diff algorithms like XyDiff, to replace the wikitext source-based diff.

This diff algorithm and UI could also be applied to old wikitext-based page revisions by converting those revisions to HTML on demand (or in a background job).
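At its simplest, such a diff could compare two revisions as lists of serialized top-level nodes via a longest-common-subsequence table, as in this sketch (real candidates like XyDiff also handle moves and attribute changes):

```javascript
// Diff two revisions represented as lists of serialized block-level
// nodes; report each block as kept, removed, or added.
function diffBlocks(oldBlocks, newBlocks) {
  const n = oldBlocks.length, m = newBlocks.length;
  // lcs[i][j] = length of the longest common subsequence of the suffixes.
  const lcs = Array.from({ length: n + 1 }, () => new Array(m + 1).fill(0));
  for (let i = n - 1; i >= 0; i--) {
    for (let j = m - 1; j >= 0; j--) {
      lcs[i][j] = oldBlocks[i] === newBlocks[j]
        ? lcs[i + 1][j + 1] + 1
        : Math.max(lcs[i + 1][j], lcs[i][j + 1]);
    }
  }
  // Walk the table to emit the edit script.
  const ops = [];
  let i = 0, j = 0;
  while (i < n && j < m) {
    if (oldBlocks[i] === newBlocks[j]) { ops.push(['keep', oldBlocks[i]]); i++; j++; }
    else if (lcs[i + 1][j] >= lcs[i][j + 1]) { ops.push(['remove', oldBlocks[i]]); i++; }
    else { ops.push(['add', newBlocks[j]]); j++; }
  }
  while (i < n) ops.push(['remove', oldBlocks[i++]]);
  while (j < m) ops.push(['add', newBlocks[j++]]);
  return ops;
}
```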

DOM-based templating
HTML-only wikis might want to provide similar templating functionality as the existing wikitext-based template system. This could be DOM-based.

One popular option is to embed control structures in attributes similar to TAL or Genshi.

The main things we seem to need are:
 * Expressions: provide access to modules and logic, but cannot create infinite loops or define variables
 * Iteration: iterate over finite data structures (JSON objects, for example)
 * Conditionals: include / evaluate a sub-DOM depending on an expression

This functionality is pretty simple to implement on the DOM (possibly using JS/jQuery, XPath and/or XSLT). It would provide an opportunity to define very minimal service-like (RESTful, for example) extension interfaces, which extensions could port to for a gradual transition. Templates themselves would still be valid HTML, which might make it possible to implement some sort of visual editing mode for templates.
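The three primitives above can be sketched as a small recursive renderer over a DOM-like tree. The attribute names (`if`, `each`, `bind`) and node shape are made up for illustration; TAL and Genshi use namespaced attributes for the same purpose.

```javascript
// Render a template tree against a data object. 'if' conditionally
// includes a subtree, 'each' repeats it per item of a finite list,
// and 'bind' fills a node's text from the data. Returns a node list.
function render(node, data) {
  if (node.if !== undefined && !data[node.if]) return [];
  if (node.each !== undefined) {
    return data[node.each].flatMap((item) =>
      render({ ...node, each: undefined }, { ...data, item }));
  }
  const out = { name: node.name };
  if (node.bind !== undefined) out.text = data[node.bind];
  if (node.children) {
    out.children = node.children.flatMap((c) => render(c, data));
  }
  return [out];
}

// A template that is itself valid (attribute-annotated) markup:
const tmpl = { name: 'ul', children: [{ name: 'li', each: 'items', bind: 'item' }] };
```

Because expansion is just a tree walk over finite data, termination is guaranteed by construction, which is what rules out infinite loops in expressions.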

Incremental re-parsing after wikitext edit
After an edit to a wiki page using the wikitext UI, we currently re-parse the entire page. In most cases only a small part of the page was actually modified, so a full re-parse is not really needed.

Using the DSR (DOM source range) information stored in the HTML DOM, we can match the position of a wikitext diff to a containing HTML DOM structure and re-parse only the modified version of that node. This would normally be a top-level element like a paragraph, which does not depend on nested parser state for correct rendering. Expensive operations like template expansions would normally not need to be re-performed, which would make parse times proportional to the edit size rather than the page size.
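The offset-to-node lookup can be sketched as follows, using DSR-style half-open [start, end) source ranges on a plain-object tree (the node shape is illustrative; Parsoid stores DSR in per-node metadata):

```javascript
// Find the smallest DOM node whose source range contains the wikitext
// edit offset; only that subtree needs to be re-parsed.
function nodeForOffset(node, offset) {
  const [start, end] = node.dsr;
  if (offset < start || offset >= end) return null;
  for (const child of node.children || []) {
    const hit = nodeForOffset(child, offset);
    if (hit) return hit;
  }
  return node; // no child contains the offset: this is the smallest match
}

const doc = {
  name: 'body', dsr: [0, 100],
  children: [
    { name: 'p', dsr: [0, 40] },
    { name: 'table', dsr: [41, 99], children: [{ name: 'tr', dsr: [50, 80] }] },
  ],
};

nodeForOffset(doc, 60).name; // → 'tr'
```

An edit at offset 60 thus re-parses one table row, not the page, so parse time tracks the edit size.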

Fast and integrated C++ implementation
The original plan was to speed up and integrate Parsoid by moving the implementation to C++. We planned to provide parallel template expansions with functionality equivalent to the PHP preprocessor's.

Advantages:
 * Raw efficiency and performance: ASIO event loop with a parallel worker pool, C++ memory management, opportunities to optimize behind sane C++ interfaces
 * Opportunities for integration with other libraries (Lua, DOM etc) and PHP

Disadvantages:
 * Adds a compiled library dependency to MediaWiki, unclear what the strategy for simple 'shared hosting' wikis would be
 * Despite the prototype work already done in C++, several months of rewrite time
 * Highly optimized parser pipeline might not actually be needed in the longer term, especially with incremental update optimizations described above

Where to start
With unlimited resources, the ideas above could simply be combined. Since we don't have unlimited resources, it seems to make sense to prioritize parts of the system that will provide most benefit in the longer run.

Moving towards the HTML DOM as the master storage format is quite radical. It does however seem to help with a lot of issues we currently face, and it can coexist with wikitext storage, or at least a wikitext UI, practically forever. The biggest technical question mark we have identified so far is the order dependency of extensions similar to Cite.

If we decide to go for the HTML-based storage vision, Parsoid would eventually become a wikitext UI for HTML-based wiki content. With the optimizations described above, performance would likely already be adequate (HTML to wikitext serialization is relatively fast).

DOM-based templating, incremental updates and fragment caching on the other hand would be used in the longer term for both mixed HTML/wikitext and HTML-only wikis, and could benefit from optimization and careful design.