Parsoid/Todo

If you would like to hack on the Parsoid parser, these are the tasks we currently see ahead. Some of them are marked as especially well suited for newbies. If you have questions, try to ping gwicke on #mediawiki or send a mail to the wikitext-l mailing list. If all that fails, you can also contact Gabriel Wicke by mail.

June release tasks
We plan to release a basic demo of the Visual editor and Parsoid working in tandem in June. Apart from UI improvements, this demo will have the ability to edit basic content in a test wiki. Parsoid parses wikitext to HTML DOM, which is then edited on the client and finally posted back to Parsoid for serialization. For now Parsoid will simply return the serialized wikitext, which the client can then save to the wiki by reusing its authentication session.

In preparation for this release, the following tasks are important:


 * fix round-tripping for
 * strip the first paragraph tag in a list item or table cell context if there are no attributes on it and stx:html is not set
 * escape wikitext-like text content using nowiki -- Gabriel Wicke (GWicke) (talk) 22:28, 18 June 2012 (UTC)
 * duplicate  and a few other bugs in  and

Tokenizer
Earlier minor syntactical changes in Tim's preprocessor rewrite:
 * Tim's Preprocessor ABNF
 * User documentation for preprocessor rewrite

Low-hanging fruit
Simple and fun tasks for somebody wishing to dive into the tokenizer.
 * magic words (the __UNDERSCORED__ variant): extraction mostly implemented by Ori and Adam, see for example https://gerrit.wikimedia.org/r/4050. The actual behavior still needs to be implemented.
 * ISBN, RFC, PMID - implemented by Ori
 * Language variants ('-{')
 * Add tokenizer support for Template:(( et al. (see the 'see also' section in that template's documentation).

Clean-up

 * Move the list handler from the tokenizer to a sync23-phase token transformer to support list items from templates (work by Adam: https://gerrit.wikimedia.org/r/3729). Landed, but there is still some interaction with quotes to investigate (the test '5 quotes, code coverage +1 line' changed output after the list patch).

Round-trip info

 * Add more round-trip information using the dataAttribs object property on tokens. This is serialized as JSON into a data-mw attribute on DOM nodes (see the illustration after this list).
 * HTML vs wiki syntax
 * Try hard to preserve variable whitespace: Search for uses of the space production (or equivalent) in the grammar and capture the value into round-trip info.
 * Add source offset information to most elements to support selective re-serialization of modified DOM fragments to minimize dirty diffs
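A small illustration of this round-trip data flow; the specific dataAttribs fields shown here are assumptions for the example, not a fixed schema:

    // A token carries extra round-trip info in its dataAttribs property while
    // moving through the token transform pipeline ...
    var token = {
        name: 'td',
        attribs: [],
        dataAttribs: { stx: 'html', sourceOffsets: [120, 135] }  // illustrative fields
    };
    // ... and at tree-building time this info is serialized as JSON into a
    // data-mw attribute on the corresponding DOM node, roughly:
    // <td data-mw='{"stx":"html","sourceOffsets":[120,135]}'>...</td>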

Extension tags
Make sure that (potential) extension end tags are always matched, even if parsing the content causes a switch to a plain-text parsing mode. An example would be an unclosed HTML comment (<!--).

Comments in links
Comments inside link targets, for example [[<!-- comment -->Main Page]].

These cannot generally be supported without stripping comments before parsing. Even if parsed, this type of comment could not be represented in the DOM. Before deployment, we should check if this is common enough to warrant an automated conversion. Grepping a dump works well for this check.

Mis-nested parser functions
The grammar-based tokenizer assumes some degree of sane nesting. Parser functions can return full tokens or substrings of an attribute, but not the first half of a token including half an attribute. Similar to the issues above, this limitation could largely be removed by dumbing down the tokenizer and deferring actual parsing until after template expansion, at the cost of performance and PEG tokenizer grammar completeness. Mis-nested parser functions are hard for humans to figure out too, and are better avoided or removed where possible.

Example: search for 'style="}}' in Template:Navbox. There are a total of 12 templates and no articles matching that string in the English Wikipedia. (used the following statement: )

expands to font-weight: bold">Some text in the PHP parser, but expands to {{#if:foo| Some text in Parsoid. This can be fixed by modifying the source to  . Hopefully this is rare enough to allow fixing this manually. Reliable detection of this case is needed to analyze how common this is.

Token stream transforms
See the recipe map in mediawiki.parser.js for the current parser transforms and their phases.

Cache transforms per token type, and add more specific 'any tag' transform category
TokenTransformManager.prototype._getTransforms in mediawiki.TokenTransformManager.js could be sped up a bit by caching the merged per-token-type and 'any' transform lists. This cache needs to be cleared or rebuilt whenever transform registrations change. Additionally, it would be useful to add a separate 'tag' '*' category that matches any tag token, but no text. The AttributeExpander would be a good use case for this.
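A minimal sketch of the caching idea; the internal field names used here (transformers, rank, _transformCache) are illustrative, not the actual TokenTransformManager internals:

    // Merge the type-specific transforms with the 'any' transforms once per token
    // name, and reuse the merged, rank-sorted list on subsequent lookups.
    function getTransformsCached(manager, token) {
        var key = token.name || token.constructor.name;
        manager._transformCache = manager._transformCache || {};
        if (!manager._transformCache[key]) {
            manager._transformCache[key] = (manager.transformers[key] || [])
                .concat(manager.transformers.any || [])
                .sort(function (a, b) { return a.rank - b.rank; });
        }
        return manager._transformCache[key];
    }
    // The cache has to be cleared (manager._transformCache = {}) whenever a
    // transform is registered or removed.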

Sanitizer
Filter attributes, and convert non-whitelisted tags into text tokens. See ext.core.Sanitizer.js for an outline; this should be a relatively straightforward port from the PHP version. A good task if you'd like to dive into the JS parser.
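A rough sketch of the direction, not the actual Sanitizer port; the whitelists and token shapes below are illustrative only:

    var tagWhitelist = { a: true, b: true, i: true, p: true, span: true, table: true };
    var attrWhitelist = { 'class': true, id: true, title: true, href: true };

    function sanitizeToken(token) {
        if (token.type === 'TAG' || token.type === 'ENDTAG') {
            if (!tagWhitelist[token.name]) {
                // Convert the non-whitelisted tag back into a plain text token.
                return { type: 'TEXT', value: token.source || ('<' + token.name + '>') };
            }
            // Drop attributes that are not on the whitelist.
            token.attribs = (token.attribs || []).filter(function (kv) {
                return attrWhitelist[String(kv.k).toLowerCase()];
            });
        }
        return token;
    }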

Internal links, categories and images
The tokenizer is still independent of configuration data, so it does not pay attention to a wiki link's namespace. This means that image parameters are not parsed differently from normal link parameters, leaving specialized treatment to the LinkHandler token stream transformer. For images, arguments need to be separated from the caption. Full rendering requires information about the image dimensions, which needs to be retrieved from the wiki using either the generic fall-back described in Parsoid/Interfacing with MW API, or a specialized image-specific API method. For action=parse, templates and template arguments in image options need to be fully expanded using the AttributeExpander before converting the options back to wikitext. Fortunately, the (mostly) plain-text nature of the options makes this quite easy. External link tokens produced by a link= option need to map back to the plain URL.

The LinkHandler transformer also needs to handle the Help:Pipe_trick while rendering, and it should learn about mw:Manual:$wgCapitalLinks.

Parser functions and magic words
Some implementation and lots of stubs (FIXME, quite straightforward!) in ext.core.ParserFunctions.js. Many magic words in particular depend on information from the wiki. The idea for now is to fall back to the action=parse API for extensions and other unsupported constructs: build a page of unsupported elements in document order, with each element prefixed/postfixed with unique (non-wikisyntax) delimiters, then extract the results between the delimiters. See Parsoid/Interfacing with MW API and Wikitext_parser/Environment.
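A hedged sketch of that delimiter-based fallback; the function, the use of the 'request' npm module, and the delimiter format are assumptions for illustration, while the actual interface is described in Parsoid/Interfacing with MW API:

    var request = require('request');  // assumes the 'request' npm module is available

    function expandUnsupported(apiURL, wikitextFragments, cb) {
        // Wrap each unsupported construct in unique, non-wikisyntax delimiters.
        var delim = 'PARSOID-DELIM-' + Date.now() + '-';
        var page = wikitextFragments.map(function (frag, i) {
            return delim + i + ':\n' + frag + '\n' + delim + i + ':';
        }).join('\n');

        request.post({
            url: apiURL,
            form: { action: 'parse', text: page, format: 'json' }
        }, function (err, res, body) {
            if (err) { return cb(err); }
            var html = JSON.parse(body).parse.text['*'];
            // Extract each rendered fragment from between its delimiters.
            var results = wikitextFragments.map(function (frag, i) {
                var parts = html.split(delim + i + ':');
                return parts.length >= 3 ? parts[1] : null;
            });
            cb(null, results);
        });
    }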

HTML5 microdata
Track provenance on all tokens, preserve this information while building the tree and convert it into microdata / RDFa markup. This also needs to handle foster parenting. See

Miscellaneous

 * Handle dynamically generated nowiki sections: . Template arguments are already tokenized and expanded before substitution, so we'd need to revert this. Idea: Re-serialize tokens to original text using source position annotations and other round-trip information. Icky, but doable. Try to structure HTML DOM to WikiText serializer around SAX-like start/end handlers, so that the same handlers can serialize the token stream back to wikitext.

DOM tree builder
Generally we would like to avoid any changes to the default HTML5 tree builder algorithm, as this would allow us to use the built-in HTML parser in modern browsers, or unchanged libraries. There are, however, some tasks that seem hard to solve otherwise and require only small changes to the tree builder. These tasks all have to do with unbalanced token soup, which should be confined to the server.

Self-closing tags like meta, as well as text, are never stripped, but might be subject to foster parenting. The relative order between text and self-closing tags is stable, and attributes on self-closing tags are preserved, as a quick browser-console check shows (see the example below).
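A minimal reconstruction of that console check (the markup of the original example was lost, so this input is illustrative), in an HTML5-compliant browser:

    document.body.innerHTML = '<table><meta id="x">foo<tr><td>bar</td></tr></table>';
    console.log(document.body.innerHTML);
    // The meta tag and the text "foo" are foster-parented in front of the table,
    // in their original relative order and with attributes intact, e.g.:
    // <meta id="x">foo<table><tbody><tr><td>bar</td></tr></tbody></table>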
 * Spurious end-tags are ignored by the tree builder, while (some) are displayed as text in current MediaWiki. Text display is helpful for authors. The necessary change to the html tree builder to replicate this would be small, but is not possible if a browser's built-in parser is used. The visual editor hopefully reduces the need for this kind of debugging aid in the medium term.
 * Propagate attribute information for end tag tokens (especially source information for tokens originating from templates) to matching start tag, to make sure that the full scope of template-affected subtrees is captured. This is hard to do without a modification to the tree builder. Only needs to be performed on the server side. Relatively simple modification.
 * Idea for a possible solution:

Before tree building:
    number all start and self-closing tokens in an attribute
    if the first template (or extension, ...) token is not a start or self-closing token:
        insert a meta tag with the info and a unique id (can be the token number)
    else:
        add the info and id as an attribute to the first token
    insert a meta tag with a ref to the first token's id after the last token or text
        add the number of preceding space characters to it, for foster-parenting handling

On the DOM:
    traverse while looking for custom attribs on end meta tags with a ref:
        find the matching start and propagate the info up the DOM subtree to the level of the highest element with the counter start
        use the token counter attribute to figure out the content between the start and end of the template / extension block

Inline element nesting minimization
Consider this wikitext example:. There are two distinct DOMs that we can parse this into:


 * Non-minimal DOM: .  Serialized wikitext of this DOM =
 * Minimal DOM: .  Serialized wikitext of this DOM =

If the 5 leading apostrophes are parsed as  followed by  , we get the first result. But if they are parsed as  followed by  , we get the second form. So, given the above wikitext, we have two possible DOMs, and clearly the minimal DOM is the desirable one in this example. While it might seem that we could simply pick the right parse order, there is no context-free way of determining it. To illustrate this, consider a different wikitext example: . The minimal parse order here requires us to parse the 5 leading apostrophes as  followed by  , which is the flip of the first example. Since there can be arbitrary wikitext between the first 5 apostrophes and the closing apostrophes, we would need to look ahead as far as necessary to match up the apostrophes appropriately.

So, we have to use a deterministic parsing order (always parse 5 apostrophes as  followed by  , or the other way around -- it shouldn't matter for this problem). We then have the following strategies for generating a minimal DOM:
 * Transform the token stream to reorder tokens appropriately.
 * Generate a DOM and process the DOM to generate a minimal DOM for I and B HTML tags.

It seems simpler to use the second strategy, because we can recognize this DOM structural pattern:   and reorganize it using extremely simple rules (see the sketch below). The first strategy (token stream transformation) would, in the general case, effectively require a deep stack to push the DOM subtrees, which is wasteful from a performance standpoint, since we would be half-building the DOM only to reorder tokens and then discard it. Unless there is information in the token stream that lets us extract this without a stack, the second strategy is preferable. (Original Gabriel text: Minimization involves opening the inline element with the longest span first, so requires look-ahead. There is code in mediawiki.DOMConverter.js that extracts run lengths of inline elements that could be used as a starting point for a DOM minimization pass.)
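As a rough illustration of the second strategy; the single pattern handled here is an assumption for the example, not the full set of rules a real minimization pass would need:

    // Hedged sketch: merge the sibling pattern <b><i>X</i></b><i>Y</i> into the
    // minimal form <i><b>X</b>Y</i>. A real pass would cover more patterns and
    // more inline tags (i, b, u, span, ...).
    function minimizeBoldItalic(document) {
        Array.prototype.slice.call(document.querySelectorAll('b')).forEach(function (b) {
            var inner = b.firstChild;
            var next = b.nextSibling;
            if (inner && b.childNodes.length === 1 && inner.nodeName === 'I' &&
                    next && next.nodeName === 'I') {
                var i = document.createElement('i');
                var newB = document.createElement('b');
                while (inner.firstChild) { newB.appendChild(inner.firstChild); }
                i.appendChild(newB);
                while (next.firstChild) { i.appendChild(next.firstChild); }
                b.parentNode.replaceChild(i, b);
                next.parentNode.removeChild(next);
            }
        });
    }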

Note that this pass needs to be run on both the DOM produced by the parser, and the DOM returned by the editor for serialization back to wikitext. The DOM returned by the editor needs to be minimized in case it introduced excess tags, the user added explicit HTML tags, used wikitext of the form above, etc. This serialization will be run only on the modified parts of the DOM that the editor returns (the editor is responsible for marking modified DOM subtrees).

However, note that always running this minimization pass will introduce dirty diffs in certain scenarios (for example, consider the wikitext ). This kind of content seems to be relatively rare, and a simplification / minimization should be desirable in the long run.

Also note that the problem is broader than just the i and b tags: this minimization routine will be run on a larger set of inline tags, with i, b, u and span being definite candidates.

Misc

 * Some document model enforcement on the HTML DOM to aid the editor; this should be able to run either on the server or the client.
 * Longer term fun project: move DOM building and transformations to webworker to provide fast Lua-extension-like or DOM/Tal/Genshi template functionality and multi-core support. See some ideas.

Parser web service
Create a node web service that returns the parsed HTML DOM given a page title (REST-style /parse/Title or similar). Multi-process workers using the cluster module would be nice to have, but are not mandatory. For now, this could perform pretty much the same actions as in parse.js, and simply pass in. Fetching the source would be even better (see TemplateRequest), as this would handle noinclude in the page correctly. It might be worth moving that class to a separate API module.
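A minimal sketch of such a service, not the actual implementation; parsePage() is a placeholder for whatever combination of TemplateRequest and the parser pipeline (as driven by parse.js) ends up being used:

    var http = require('http');

    function parsePage(title, cb) {
        // Placeholder: fetch the page source (cf. TemplateRequest), run it through
        // the parser pipeline as parse.js does, and call back with the HTML string.
        cb(null, '<html><body><p>' + title + '</p></body></html>');
    }

    http.createServer(function (req, res) {
        var m = /^\/parse\/(.+)$/.exec(decodeURIComponent(req.url));
        if (!m) {
            res.writeHead(404, { 'Content-Type': 'text/plain' });
            return res.end('Usage: /parse/Title');
        }
        parsePage(m[1], function (err, html) {
            if (err) {
                res.writeHead(500, { 'Content-Type': 'text/plain' });
                return res.end(String(err));
            }
            res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });
            res.end(html);
        });
    }).listen(8000);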

Add saving support to the web service
Accept a posted HTML DOM, the page title and base revision from the editor, serialize it back to wikitext and save it through the API.

There is a good opportunity to collaborate with Ashish Dubey on the server portion (he is working on collaborative editing as GSoC project).

Alternative web service variant we do not use for now
Shell-out to parse.js.
 * Neil created an API wrapper for parse.js. Would need to be modified to use the HTML DOM serialization instead of JSON. IIRC there were some issues with WikiDom vs. the data model the API expects.
 * Reconstruction of the tokenizer takes a few seconds for each request, so it would need caching
 * parse.js and node would need to be installed on the full cluster. It is more practical at least for development to run the parse service on a labs VM that we can quickly update.

Wikitext serializer
Basic idea: ( HTML DOM -> ) tokens -> SAX-style serializer handlers -> wikitext
 * needs to use round-trip data
 * Can be based on serializer code in the editor. Mainly needs handling of round-trip data, and the handlers need to be converted to support per-token (or SAX-event-style) serialization; per-token handlers allow us to use the same handlers for DOM or token stream serialization (see the sketch after this list).
 * Will introduce some normalization -- at the very least the tree builder has to fix up stuff when building a tree from tag soup, so the full round-trip cannot be 100% perfect for broken inputs.
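A rough sketch of the per-token handler shape; the token fields and the handler registry are illustrative assumptions, not the current serializer API:

    // SAX-style per-token serializer handlers. The same handler table can be
    // driven from a token stream or from a DOM walk that emits start/text/end events.
    var handlers = {
        b:  { start: function () { return "'''"; }, end: function () { return "'''"; } },
        i:  { start: function () { return "''"; },  end: function () { return "''"; } },
        li: { start: function () { return '* '; },  end: function () { return '\n'; } }
    };

    function serializeTokens(tokens) {
        return tokens.map(function (tok) {
            var h = handlers[tok.name];
            if (tok.type === 'text') { return tok.value; }
            if (h && tok.type === 'start') { return h.start(tok); }
            if (h && tok.type === 'end') { return h.end(tok); }
            return '';  // a real version would fall back to round-trip data here
        }).join('');
    }

    // Example: [{type:'start',name:'b'},{type:'text',value:'hi'},{type:'end',name:'b'}]
    // serializes to "'''hi'''".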

Serializing only modified parts of a page

 * The round-trip info on elements contains source offsets in original wikitext (very incomplete currently)
 * The editor marks modified parts of the DOM
 * The serializer splices original source of unmodified DOM parts with serialization of modified subtrees. This avoids dirty diffs from normalization in unmodified parts of the page.
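A condensed sketch of the splicing described in this list, assuming source offsets are available on each top-level node in its round-trip data (the sourceOffsets field name and offset format are assumptions):

    // Reuse original wikitext for unmodified top-level nodes and re-serialize
    // only the modified ones, to avoid dirty diffs from normalization.
    function selectiveSerialize(origSrc, body, serializeNode, isModified) {
        var out = '';
        Array.prototype.forEach.call(body.childNodes, function (node) {
            var rt = node.getAttribute ? JSON.parse(node.getAttribute('data-mw') || '{}') : {};
            if (!isModified(node) && rt.sourceOffsets) {
                out += origSrc.slice(rt.sourceOffsets[0], rt.sourceOffsets[1]);
            } else {
                out += serializeNode(node);
            }
        });
        return out;
    }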

VE's serializer wishlist

 * Fix round-tripping of underscores vs spaces in internal link targets (right now all underscores are converted to spaces)
 * Fix round-tripping of preformatted text; currently  becomes  instead of  (note that the space ends up before, not after, the \n)
 * Feature request: for round-tripping of internal link targets, let sHref (in data-mw) override href. Right now the target of an internal link is built by taking the href attribute and removing the first character (without checking that it's a slash as expected!). sHref is very useful for us so we can display the target of a link, but it's ignored by the serializer.
 * Support round-tripping of templates, right now round-tripping a template call produces the results of the template expansion rather than the template call itself
 * When adding round-trip info for this, make it so that VE can easily tell whether a given element was generated by a template
 * It would be nice to have span-wrapping for inline template calls. For example,  should produce something like   rather than marking the entire paragraph as template-generated.
 * Add round-tripping of category links and the like, right now these are lost
 * Fix the newline-at-the-end-of-an-li-or-before-a-ul behavior such that
   * the parser doesn't output newlines before each  and before each  -within-a-
   * the serializer doesn't depend on these newlines to output correct wikitext
   * (newline handling in general is slated to be revamped, but I wanted to document this case specifically because VE works around it)
 * Feature request: the first p tag inside an li should be ignored, whereas every subsequent p should be treated as if it had stx=html (the latter is already done). This means that  should be serialized to
 * This is because the parser doesn't wrap the text in a list item in a paragraph (i.e. the text is directly in the list item), whereas VE's linear model does wrap it in a paragraph because listItem nodes can't contain text directly. The HTML->linmod converter can deal with adding the paragraphs quite easily and cleanly, but removing these paragraphs in the linmod->HTML converter with conditions this specific is a pain (we currently do this as a workaround, but it's ugly). So we can tolerate input that doesn't have wrapped first paragraphs, but Parsoid doesn't tolerate input that does have wrapped first paragraphs; it would make our lives easier if it did

Testing
See tests/parser, in particular parserTests.js.

parserTests

 * Set up a more complete testing environment including the time, predefined images and so on (see phase3/tests/parser/parserTests.inc).

Round-trip tests
There is a dumpReader in tests/parser, which can be used to run full dumps through the parser. For round-tripping, the wikitext serializer needs to be ported and extended from the client code. See.

Pre-save transform functionality
Much of the current parser's pre-save transform functionality can be moved to the editor, but there are also cases where this won't be possible. An example of the latter is the self-preserving en:Template:Selfsubst template.