Last updated: December 2014


A dedicated project was created for Parsoid. Status updates prior to this date were included in the Visual editor updates.

2012-08-20 (MW 1.20wmf10)

The Parsoid team worked on the final tasks in the JS prototype, in preparation for the C++ port. The port will allow efficient integration with PHP and Lua, improve performance, and, in the longer term, allow parallelization of the parser in preparation for production use.

An important milestone was the implementation and verification of the template DOM range encapsulation algorithm, which now identifies all template-affected parts of the DOM for round-tripping and for protection in the VisualEditor. We are currently implementing template round-tripping based on this. Other new features include oldid support, so that previous versions of pages can be edited rather than just the current one, and more complete error reporting in the web service. Wikitext escaping in the serializer is much improved, and now also handles interactions across multiple DOM nodes. An ongoing task has been improving test coverage, which lets us refactor code with more confidence and also helps verify the correctness of the C++ port.
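To illustrate the encapsulation idea, here is a toy sketch: contiguous runs of template-produced sibling nodes are grouped under a shared "about" id so each transclusion can be protected and edited as one atomic unit. The node shape and helper name are hypothetical; Parsoid's real algorithm computes ranges over a live DOM.

```javascript
// Toy sketch of template output encapsulation (illustrative only):
// group contiguous template-produced sibling nodes under a shared
// "about" id so each run can be treated as one atomic unit.

function encapsulate(nodes) {
  let aboutCounter = 0;
  let current = null; // templateId of the run currently being grouped
  for (const node of nodes) {
    if (node.templateId != null) {
      if (node.templateId !== current) {
        current = node.templateId;
        aboutCounter++;
      }
      node.about = '#mwt' + aboutCounter;
    } else {
      current = null; // non-template content ends the run
    }
  }
  return nodes;
}
```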

Most details of the C++ port were researched. A basic build system including the selected libraries was set up, and design work on the basic data structures has started, ahead of full porting which we expect to start next iteration.

The full list of Parsoid bugs closed in the last two weeks is available in Bugzilla.


The Parsoid team reached a major milestone in August by implementing a template output encapsulation algorithm, and started to use it to support expanded template round-tripping. In parallel with this and the usual smaller tweaks, work on a C++ port of the parser was started. The port is expected to allow an efficient integration with PHP and Lua, improve performance and allow the parallelization of the parser in the longer term.

2012-09-03 (MW 1.20wmf11)

The Parsoid team reached a major milestone with basic round-tripping of expanded templates and the Cite extension. This includes the protection of closely coupled and unbalanced table start / row / end templates, which makes it possible to protect and later edit these in the VisualEditor.

On the C++ side, work has now started on porting the existing JavaScript code, beginning with the tokenizer. Basic token data structures and a reference counting scheme are implemented. Next steps are the integration of the boost.asio event loop for asynchronous and parallel operations and the adaptation of the libhubbub HTML5 tree builder and the libxml2 DOM.

The full list of Parsoid commits is available in Gerrit.


In the JavaScript Parsoid implementation, we further improved support for round-tripping of templates and numerous other constructs. We now have an additional thirty parser tests and a similar number of round-trip tests passing. We started work on automated round-trip testing on dumps to provide a benchmark for progress and to identify the most important problem areas to focus on. We also added edit support for behavior switches and category links. To support selective serialization of the edited sections of the document without dirty diffs in unmodified sections, we are now associating DOM nodes with the source wikitext that produced that DOM.

On the C++ port, the data structures and synchronization / queueing strategies are now nearly complete. The tokenizer can handle very simple content (mainly headings) and populate the data structures. We started work on TokenTransformManagers. However, due to resource constraints, the C++ port is currently a part-time effort, with most effort going into the interim JavaScript implementation as the safest bet for the December release.

The full list of Parsoid commits is available in Gerrit.


The Parsoid team spent September improving the JavaScript prototype to get it ready for the December release, and improving the C++ port for longer-term deployment. The original plan to finish the C++ port before the December release looks very risky with the limited resources available, so the plan is to release the JavaScript prototype instead.

On the JavaScript side, the focus was on round-tripping of templates and other constructs such as the Cite extension, support for category links and "magic words". Many parser tests were added, and a new milestone of 603 passing round-trip tests (with 218 to go) was reached. First steps towards round-trip testing on a full dump were taken.

In the C++ implementation, the tokenizer can now handle very simple content and use it to populate the internal data structures. Basic interfaces for asynchronous and parallel processing were defined. An XML DOM abstraction layer was introduced to make DOM-related algorithms independent of the underlying DOM library. The focus on the JavaScript prototype for the release limited progress on the C++ implementation.


JavaScript implementation:

  • Many improvements to template round-tripping and DOM source range calculations
  • Reworked paragraph wrapping to be more bug-for-bug compatible with the PHP parser
  • Many small tokenizer and round-trip fixes
  • Added many new parser tests
  • 603 round-trip tests passing, 218 to go

C++ implementation:

  • Basic token transformer skeletons are hooked up, with boost.asio integration
  • New XML DOM abstraction interface separates DOM-based code from the underlying DOM library; using the pugixml DOM backend for performance and a small memory footprint
  • Takes a back seat to the JavaScript prototype implementation due to resource constraints

The full list of Parsoid commits is available in Gerrit.


JavaScript implementation:

  • Many improvements to template round-tripping and DOM source range calculations
  • Added many new parser tests
  • Test runner now runs various round-trip test modes based on parser tests
  • Wikitext-to-wikitext round-trip tests up from 608 to 618; 1343 tests passing in total
  • Set up continuous integration with Jenkins, which runs parser tests on a separate virtual machine on each commit
  • Created round-trip test infrastructure for full dumps, with diffs classified as syntactic-only or semantic, and added a distributed client-server mode to speed it up
  • Big articles like Barack Obama are now close to round-tripping without semantic differences

C++ implementation:

  • Generalized pipeline interfaces
  • Implemented HTML5 tree builder with XML DOM backend
  • Designed and implemented token stream transformer APIs with usability improvements on the JavaScript version
  • Added Scope class (~preprocessor frame) and simplified expansion logic vs. JavaScript implementation
  • Parses simple wikitext all the way to XML DOM

The full list of Parsoid commits is available in Gerrit.


The Parsoid team focused on testing the JavaScript prototype parser against a corpus of 100,000 randomly selected articles from the English Wikipedia. A distributed MapReduce-like system, which uses several virtual machines on Wikimedia Labs, constantly converts articles to HTML DOM and back again to wikitext using the latest version of Parsoid. For a little over 75% of these articles, this results in exactly the same wikitext, as intended. For another 18% of these articles, there are some differences in the wikitext, but these are so minor that they don't result in any differences in the produced HTML structure when it is re-parsed. In the production version of Parsoid, which will attempt to retain the original wikitext as far as possible, these minor differences will only show up, if at all, around content that the user edited. Finally, just under 7% of articles still contain errors that change the produced HTML structure. These issues are the focus of the current work in preparation for the December release.
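The classification described above can be sketched as follows; `toHtml` and `toWikitext` are toy stand-ins for Parsoid's wikitext-to-HTML and HTML-to-wikitext passes, not its real API:

```javascript
// Toy sketch of the round-trip classification described above.

function classify(origWt, toHtml, toWikitext) {
  const rtWt = toWikitext(toHtml(origWt));
  if (rtWt === origWt) return 'clean'; // exact round-trip
  // If re-parsing both versions yields the same HTML, the wikitext diff is
  // purely syntactic (whitespace, quote style, etc.) and does not change
  // the rendered page.
  return toHtml(rtWt) === toHtml(origWt) ? 'syntactic' : 'semantic';
}

// Toy converters: bold markup normalizes, so '''x''' round-trips exactly,
// while a literal <b>x</b> comes back as '''x''' (a syntactic-only diff).
const toHtml = wt => wt.replace(/'''(.*?)'''/g, '<b>$1</b>');
const toWikitext = html => html.replace(/<b>(.*?)<\/b>/g, "'''$1'''");

classify("'''bold'''", toHtml, toWikitext); // 'clean'
classify('<b>bold</b>', toHtml, toWikitext); // 'syntactic'
```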


Most of the work has been on the JavaScript implementation and testing infrastructure, in preparation for the December release. In the automated wikitext->HTML->wikitext testing, 75.8% of articles now return exactly the same wikitext, and 94.5% show changes that do not change the nature of the page (the additional ~19% have only syntactic changes in the source wikitext). This is up from about 85% two weeks ago (rather than the 93% reported previously; we discovered and fixed a bug in the error accounting process). The Barack Obama article now round-trips without any diffs.

A first iteration of the selective serialization algorithm is in development. This algorithm will hide purely syntactic differences in unmodified parts of the page by using the original wikitext for those. It heavily relies on our calculation of source ranges for each DOM element.
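A minimal sketch of the selective-serialization idea, assuming each DOM node records whether it was modified and its source range (the names here are illustrative, not Parsoid's actual API):

```javascript
// Minimal sketch of selective serialization: unmodified nodes reuse their
// original wikitext via recorded source ranges (dsr), and only edited
// nodes go through the full serializer.

function selectiveSerialize(nodes, origWt, serializeNode) {
  return nodes.map(node => {
    if (!node.modified && node.dsr) {
      // dsr = [start, end) offsets into the original wikitext
      return origWt.slice(node.dsr[0], node.dsr[1]);
    }
    return serializeNode(node); // full serialization only where edited
  }).join('');
}
```

Because unmodified spans are copied verbatim, purely syntactic normalizations never leak into the diff for untouched parts of the page.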

The full list of Parsoid commits is available in Gerrit.


In preparation for the upcoming deployment on the English Wikipedia, the Parsoid team concentrated on the preservation of existing content. Automated round-trip testing on 100,000 randomly chosen pages from the English Wikipedia using distributed test runners helped to identify many issues, which were fixed and often resulted in new minimal test cases being added to the parser test suite. Currently, 79.4% of test articles (up from about 65% last month) round-trip without any differences at all, another 18% round-trip with only minor differences (whitespace, quote style, etc.), and the remaining 2.6% of pages have differences that still need fixing (down from about 15% last month). Selective serialization will further avoid dirty diffs in unmodified parts of a page by using the original wikitext for those parts; this will also help with the roughly 20% of pages that show any kind of difference in wikitext. The implementation of this algorithm is currently being finalized.


The Parsoid project reached a major milestone with its first deployment to the English Wikipedia along with the VisualEditor. This was a major test for Parsoid, as it needed to handle the full range of arbitrary and complex existing wiki content including templates, tables and extensions for the first time.

As witnessed by the clean edit diffs, Parsoid passed this test with flying colors. This represents very hard work by the team (Gabriel Wicke, Subramanya Sastry and Mark Holmquist) on automated round-trip testing and the completion of a selective serialization strategy just in time for the release.

After catching their breath, the team now has its sights on the next phase in Parsoid development. This includes a longer-term strategy for the integration of Parsoid and HTML DOM into MediaWiki, performance improvements and better support for complex features of wikitext.


In January, the Parsoid team did some spring cleaning and bug fixing. The serialization subsystem was overhauled: it now features simpler and more robust separator handling. Selective serialization was rewritten to deal with content deletions; it also features DOM diff-based change detection that does not rely on client-side change marking. Support for non-English wikis and local configurations was also significantly improved, and will likely stabilize in the next weeks.

The team also discussed and documented the longer-term Parsoid / MediaWiki strategy in the Parsoid roadmap. The performance-oriented C++ port was deprioritized in favor of DOM-based performance improvements and HTML storage. The basic idea behind storing (close to) fully processed HTML is to speed things up by doing no significant parsing on page view at all. In the longer term, VisualEditor-only wikis can avoid a dependency on Parsoid by switching to HTML storage exclusively. Overall, the plan is to leverage the Parsoid-generated HTML/RDFa DOM format inside MediaWiki core to enable better performance and editing capabilities in the future.


The Parsoid team continued to improve support for non-English wikis. This involved exposing more configuration information through the MediaWiki API and using it throughout Parsoid. The support is now reasonably complete, but needs testing. The round-trip testing framework needs to be adapted to support running tests on pages from multiple wikis.

A new contributor, C. Scott Ananian, improved Parsoid's performance by switching the DOM library from JSDom to Domino. He also improved image handling and contributed numerous other patches.

The tokenizer was modified to parse one top-level block at a time, which helps to spread out API requests and minimize the number of tokens in flight. The serializer is in the process of being rewritten to work on DOM input to benefit from the context provided by the DOM. This rewrite is expected to simplify the logic significantly, and help fix some more selective serialization issues that are blocking a deployment to production.

We also used the ops and core hackathon to discuss and refine our storage plans. Finally, we wrote a blog post about Parsoid on the WMF tech blog.


In March, the Parsoid team continued with improvements to internationalization, serialization, and extension handling.

The parser test framework now supports language-specific tests, which required support for loading language-specific default settings in Parsoid.

The serializer is now fully DOM-based and uses constraint-based newline / whitespace separator handling, which will make the serializer less sensitive to newlines and whitespace in the HTML. Round-trip test results of 82% (pages without any diffs) and 98% (pages without semantic diffs) indicate that the new serializer is on par with the old one currently deployed in production.

Extension content is now parsed all the way to DOM, which enforces proper nesting. The generic support for balanced fragment parsing will later also be applied to templates. Parsing of transclusion directives (includeonly and friends) has also been improved and simplified.

The DOM specification for images and templated / extension content was fleshed out in preparation for full editing support.

Late in March, C. Scott Ananian joined us as a contractor. Welcome!


In April, the Parsoid team successfully deployed the cumulative work done over the last four months. This includes support for non-English wiki configurations, a rewritten serialization subsystem based on server-side DOM diffs, category link and basic template parameter editing support and a long list of fixes and improvements.

Several other features for the July release are on track. The specifications for extensions containing templates and for templates containing extensions were fleshed out and are currently being implemented. Similarly, our specs for images and thumbnails were vastly improved, so that we will soon support full editing of all parameters.

We also improved our code quality and testing infrastructure.

In preparation for the July release, we did more benchmarking and capacity planning. A caching strategy that avoids overwhelming the API with requests was developed, hardware to run Parsoid was ordered and work on the implementation started.


In May, the Parsoid team implemented several new features, as well as important performance optimizations in preparation for the July VisualEditor release.

A major image handling overhaul enabled rendering and editing of all image-related parameters with a relatively simple DOM structure. Template and extension editing was improved to support editing of templates within extensions. This lets editors modify and add templated citations in VisualEditor, an important feature to improve the quality of articles in Wikipedia.

On the performance front, we are now reusing expensive template, extension and image expansions from our own previous output to avoid most API queries after an edit. This is necessary to avoid overloading the API when tracking all edits on Wikimedia projects. A cache infrastructure with appropriate purging was set up and will be tested at full load through June.

At the Amsterdam hackathon, we helped developers leverage our rich HTML+RDFa DOM output for projects like a Wikipedia-to-SMS service or the Kiwix offline Wikipedia reader.


Early this month, we deployed Parsoid to the new cluster and started to track all edits and template / image updates from all Wikipedia sites, which is close to the full load we'll see when VE is deployed to all of them. Our earlier optimization work paid off as the Parsoid cluster and the associated Varnish caches are handling the load very well. The extra load we put on the API cluster is low enough to not cause a problem. As expected, the VisualEditor deployment to the English Wikipedia hardly showed up in the load graphs.

Despite being very short-staffed this month (only two full-time developers), the absence of performance issues left us enough time for a lot more polishing before the VisualEditor release on July 1. As a result, the release went very well with clean diffs on almost all pages.

While more work is left to do, it is now clear that we have fundamentally achieved our goal of a clean translation between wikitext and HTML + RDFa. This not only enables visual HTML editing, but also makes Wikipedia's content easily accessible in a standardized format. It also opens up new opportunities for MediaWiki's core architecture, which we'll pursue this fiscal year.


In July, the Parsoid team supported the deployment of VisualEditor as the default editor on eight Wikipedias, continuing to monitor bug reports, feedback pages, and village pumps, and fixing a number of reported bugs to eliminate instances of dirty diffs and other corruption. The absence of performance issues let us focus our attention on functionality and dirty-diff-related bugs, which continued to be the primary focus of our work this month.

On the staffing side, C. Scott Ananian joined the Parsoid team as a full-time employee; he has been working with us since earlier this year, first as a volunteer and then as a contractor. Marc Ordinas i Llopis from Spain and Arlo Breault from Canada joined the Parsoid team as contractors this month.


In August, the Parsoid team continued to polish compatibility with existing wikitext. User feedback after the July VisualEditor release was instrumental in the identification of issues and the development of support for important use cases of creative templating.

The increased team size also allowed us to perform some long-standing code cleanup, make Parsoid compatible with Node 0.10, and improve testing. The round-trip testing infrastructure received a much-needed overhaul. The storage back-end switched from SQLite to MySQL, which greatly improved throughput and allows us to test new code far more quickly than before. Performance statistics are now recorded, which will let us identify performance bottlenecks as well as catch performance regressions.

During Wikimania, the Kiwix team used Parsoid output to create an offline copy of Wikivoyage. With standard HTML libraries and the rich RDFa information in the Parsoid DOM, downloading and modifying the HTML representation was done in about 1000 lines of JavaScript.


We fixed a few bugs reported in production, added performance stats to our RT-testing framework (discovering and fixing a couple of bugs as a result), and did some long-standing cleanup work in our codebase. September also saw the all-staff meeting at the WMF offices in San Francisco, which gave us the opportunity to work in person and discuss some proposals. We planned out an implementation strategy for language variant support, and started researching and experimenting with HTML storage options, which are required for a number of projects in our roadmap.


In October, the Parsoid team continued to refine the parser's behavior in edge cases. Performance was improved by increasing the parallelism of API requests and by separating page updates from dependency-triggered updates in the job queue. The round-trip testing server's performance was improved so that we can now run round-trip tests on 160,000 pages overnight. Support for private wikis was also added this month.

We also made additional progress on Rashomon, the revision storage service based on Apache Cassandra. Rashomon is initially going to be used for implementing HTML and metadata storage for Parsoid output. Rashomon was deployed on a test cluster and import/write tests were performed.


November saw the deployment of major changes to the DOM spec in coordination with the VisualEditor team. Link types are now marked up by semantics rather than syntax, interwiki links are detected automatically, categories are marked as page properties and more. During the deployment, we found that the newer libraries used by the web service front-end were buggy. We reverted the library upgrade and contributed fixes upstream. This incident prompted us to work on tests for the HTTP web service to catch issues like this in continuous integration.

After these issues were sorted out, we continued with continuous improvement and fixes. Editing support for magic words and categories was improved, several dirty diff issues were fixed and the API was refined for page-independent wt2html and html2wt conversion. See our deployment page for details.

Cassandra load testing for the Rashomon storage service continued and uncovered several issues that were reported back upstream. With Cassandra 2.0.3 the 2.0 branch is now stabilizing in time to make deployment in December feasible. Cassandra is now stable at extremely high write loads of around 900 revisions per second, which is more than 10 times the load we experience in production.


In December, the relentless Parsoid team continued squashing bugs and incompatibilities; see our deployments page for details. During the node 0.10 migration, we ran into some issues caused by changed garbage collector behavior, and rolled back to 0.8. We spent some time investigating and fixing this; initial testing on our round-trip testing setup indicates that this is now fixed.

Our testing infrastructure is now exercising the entire stack including the web server, which will help to make sure that we also catch issues in HTTP libraries before deployment.

We wrote several RFCs about embracing a service architecture, PHP bindings for services, a general-purpose storage service based on our Rashomon revision store, and a public content API based on this.

Part of the team worked on a new PDF rendering infrastructure using Parsoid HTML, node and PhantomJS. Part of the team has also been mentoring two Outreach Program for Women (OPW) interns.


In January, the Parsoid team did a lot of bug fixing around images, links, references and various other areas. See the deployment page for a summary.

Part of the team has been mentoring two Outreach Program for Women (OPW) interns. Others are mentoring a group of students in a Facebook Open Academy project to build a Cassandra storage back-end for the Parsoid round-trip test server.

We also participated in the architecture summit, where our RFCs about embracing a service architecture, PHP bindings for services, a general-purpose storage service based on our Rashomon revision store, and a public content API based on this were well received.

Following up on this, we started Debian packaging for Parsoid, which will soon make the installation of Parsoid as easy as apt-get install parsoid.


In February, the Parsoid team continued with bug fixes and improved image support. See the deployment page for a summary of deployments and fixed bugs in February.

Part of the team has continued to mentor two Outreach Program for Women (OPW) interns. This internship ends mid-March. Others are mentoring a group of students in a Facebook Open Academy project to build a Cassandra storage back-end for the Parsoid round-trip test server.

We have a first version of a Debian package for Parsoid ready. This package is yet to find a home base (repository) from which it can be installed. This will soon make the installation of Parsoid as easy as apt-get install parsoid.


Presentation slides from the Parsoid team's quarterly review meeting on March 28

March saw the Parsoid team continuing with a lot of unglamorous bug fixing and tweaking. Media / image handling in particular received a good amount of love, and is now in a much better state than it used to be. In the process, we discovered a lot of edge cases and inconsistent behavior in the PHP parser, and fixed some of those issues there as well.

We wrapped up our mentorship of Be Birchall and Maria Pecana in the Outreach Program for Women. We revamped our round-trip test server interface and fixed some diffing issues in the round-trip test system. Maria wrote a generic logging backend that lets us dynamically map an event stream to any number of logging sinks, a huge step up from the basic console.error-based error logging we had until now.

We also designed and implemented an HTML templating library which combines the correctness and security benefits of a DOM-based solution with the performance of string-based templating. This is implemented as a compiler from KnockoutJS-compatible HTML syntax to a JSON intermediate representation, plus a small and very fast runtime for the JSON representation. The runtime is now also being ported to PHP in order to gauge the performance there as well. It will also be a test bed for further forays into HTML templating for translation messages and eventually wiki content.
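A toy illustration of the compile-to-JSON idea (not the team's actual IR format): a template compiles once into a JSON structure of static strings and variable references, and a small runtime walks that structure with a data model, escaping values as it goes.

```javascript
// Hypothetical "compiled" IR: static strings interleaved with variable
// references, standing in for the output of a template compiler.
const ir = ['<p>Hello, ', { var: 'name' }, '!</p>'];

// Minimal runtime: concatenate strings, look up and HTML-escape variables.
function render(ir, model) {
  return ir.map(part =>
    typeof part === 'string'
      ? part
      : String(model[part.var]).replace(/[&<>]/g,
          c => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;' }[c]))
  ).join('');
}

render(ir, { name: 'World' }); // '<p>Hello, World!</p>'
```

Because the IR is plain JSON, the same compiled template can be executed by runtimes in other languages, which is what makes a PHP port of the runtime straightforward.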


In April, the Parsoid team continued to fix bugs and tweak code. Two areas in particular received a lot of attention: template encapsulation and link handling. We ironed out a whole bunch of edge case handling in template encapsulation code and its interaction with fostered content from tables (caused by misnested tags in tables). We also fixed many unhandled scenarios and edge cases parsing and serializing links.

In addition to bug fixes, we also improved the performance of the parsing pipeline; some pages like Barack Obama should now parse 30% faster than before. We continued migrating our debugging and tracing code to use our new logger. April also saw additional progress providing support for visual editing of transclusion parameters; this should land on master soon.


In May, the Parsoid team continued with ongoing bug fixes and bi-weekly deployments. Besides the user-facing bug fixes, we also improved our tracing support (to aid debugging), and did some performance improvements. We also finished implementing support for HTML/visual editing of transclusion parameters. This is not yet enabled in production while we finish up any additional performance tweaks on it.

GSoC 2014 also kicked off in May; we have one student working on a wikilint project to detect broken/bad wikitext in wiki pages.

We also started planning and charting goals for 2014/2015.


In June, the Parsoid team continued with ongoing bug fixes and bi-weekly deployments; the selective serializer, parsing support for some table-handling edge cases, nowiki handling, and parsing performance are some of the areas that saw ongoing work. We began work on supporting language converter markup.

We added CSS styling to the HTML to ensure that Parsoid HTML renders like PHP parser output, and continued to tweak the CSS based on rendering differences we found. We also started work on computing visual diffs by taking screenshots of the rendered output of Parsoid and PHP HTML. This initial proof of concept will serve as the basis for larger-scale automated testing and identification of rendering diffs.

The GSoC 2014 LintTrap project saw good progress, and a demo LintBridge application showing the wikitext issues detected by LintTrap was made available on wmflabs.

We also had our quarterly review this month and contributed to the annual engineering planning process.


In July, the Parsoid team continued with ongoing bug fixes and bi-weekly deployments.

With an eye towards supporting Parsoid-driven page views, the Parsoid team strategized on addressing Cite extension rendering differences that arise from site-message-based customizations, and is considering a pure CSS-based solution for the common use cases. We also finished developing the test setup for mass visual diff tests between PHP parser rendering and Parsoid rendering. It was tested locally, and we started preparations for deploying it on our test servers; this will go live in late July or early August.

The GSoC 2014 LintTrap project continued to make good progress. We had productive conversations with Project WikiCheck about integrating LintTrap with WikiCheck in a couple of different ways. We hope to develop this further over the coming months.

Overall, this was also a month of reduced activity, with Gabriel now officially full time on the Services team and Scott focused on the PDF service deployment that went live a couple of days ago. The full team is also spending a week at an off-site meeting, working and spending time together in person prior to Wikimania in London.


In August, we wrapped up our face-to-face off-site meetup in Mallorca and attended Wikimania in London, which was the first Wikimania event for us all. At the Wikimania hackathon, we co-presented (with the Services team) a workshop session about Parsoid and how to use it. We also had a talk at Wikimania about Parsoid.

The GSoC 2014 LintTrap project wrapped up and we hope to develop this further over the coming months, and go live with it later this year.

With an eye towards supporting Parsoid-driven page views, the Parsoid team worked on a few different tracks. We deployed the visual diff mass testing service, we added Tidy support to parser tests and updated tests, which now makes it easy for Parsoid to target the PHP Parser + Tidy combo found in production, and continued to make CSS and other fixes.


In September, we continued to fix bugs, upgraded libraries, and made additional progress towards improving compatibility with PHP parser + Tidy rendering. Specifically, Parsoid's paragraph wrapping now targets the PHP parser + Tidy output rather than plain PHP parser output. We also continued to update Parsoid's CSS / rendering to more closely match the current rendering, and improved Parsoid's robustness in edge-case scenarios (pathological backtracking, parsing of very large wikitext tables). Part of the Parsoid team was also busy launching the PDF rendering service, which went live successfully at the end of September.


October has seen a lot of maintenance-type work in Parsoid land. We updated our logging infrastructure to send our event logs to LogStash, which makes it far simpler to identify production errors. We also started a round of code cleanup, improving readability and maintainability by moving to the Promises API, which eliminates the callback nesting common in node.js code. We helped debug some Varnish-related issues that were causing more timeouts than necessary; the issue is now fixed, and we improved the efficiency of queued Parsoid parsing jobs. At the end of October, we also initiated an upgrade of the Parsoid production cluster from node 0.8 to node 0.10.
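As a generic illustration of that cleanup (not actual Parsoid code), here is the same asynchronous step in callback and Promise style:

```javascript
// Callback style: each processing step adds another level of nesting,
// and every callback must check and forward errors by hand.
function parseCb(wt, cb) {
  process.nextTick(() => cb(null, '<p>' + wt + '</p>'));
}

// Promise style: steps chain flatly, and one .catch() handles all errors.
function parseP(wt) {
  return Promise.resolve('<p>' + wt + '</p>');
}

parseP('Hello')
  .then(html => html.toUpperCase())
  .then(html => console.log(html)) // prints <P>HELLO</P>
  .catch(err => console.error(err));
```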

In non-maintenance work, we continued to fix bugs and compatibility issues with the current default rendering. We have also been working on entirely CSS-based solutions for citation customizations. This CSS-based approach will simplify customization and free clients from having to worry about MediaWiki site messages. This is still a work in progress as we find and fix gaps.


In November, the Parsoid team continued to work through the big blockers to using Parsoid HTML for read views. We made further progress on customizing the Cite extension via CSS, and started work on supporting templates that Parsoid does not yet handle properly. These templates, used on a subset of pages on Wikipedia, generate both attributes and content of a table, and do not fit well within Parsoid's DOM-based model. We expect both blockers to be lifted by early January, which significantly furthers our goal of serving read views via Parsoid's HTML. Besides this, we continued with ongoing code cleanup, maintenance, bug fixes, and regular deployments.


In December, we finished work on supporting templates that generate both attributes and content of a table and do not fit well within Parsoid's DOM-based model. Besides that, we improved error reporting when handling images, which lets clients like Flow and VE handle them better. With a view to reducing the HTML size that clients need to load and parse, we stripped the private data-parsoid attribute from templated content, where it is unnecessary. We also continued with code cleanup and paying back technical debt. Specifically, we made a number of fixes to our nowiki handling when HTML is serialized to wikitext: we improved robustness and correctness, reduced the number of scenarios where nowikis are needed for quotes, and made it simpler to detect nowiki scenarios for other wikitext constructs, applying this specifically to links of all flavors.