Parsoid/Todo

If you would like to hack on the Parsoid parser, these are the tasks we currently see ahead. Some of them are marked as especially well suited for newcomers. If you have questions, try pinging gwicke on #mediawiki or send mail to the wikitext-l mailing list. If all that fails, you can also contact Gabriel Wicke directly by mail.

Tokenizer
General tokenizer support for larger structures is already relatively good, but some details are still missing. A few simple things are absent entirely, but easy to add:
 * magic words (the __UNDERSCORED__ variant)
 * signatures and timestamps
 * ISBN, RFC, PMID
 * language conversion syntax ('-{')
 * Add more round-trip information using the dataAttribs object property on tokens. This is serialized as JSON into a data-mw-rt attribute on DOM nodes.
   * HTML vs wiki syntax
   * Try hard to preserve variable whitespace: search for uses of the space production (or equivalent) in the grammar and capture the value into round-trip info.
 * Add source offset information to most blocks
 * Add source offset/range annotations to elements to support selective re-serialization of modified DOM fragments and minimize dirty diffs
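As a rough sketch of the round-trip mechanism above: token shapes and the dataAttribs contents here are hypothetical illustrations, but the data-mw-rt JSON serialization is the scheme the list describes.

```javascript
// Sketch: serialize a token's round-trip info (dataAttribs) into a
// JSON-valued data-mw-rt attribute for the corresponding DOM node.
// The token shape and dataAttribs keys are illustrative only.
function tokenToAttributes(token) {
  const attribs = {};
  for (const [k, v] of (token.attribs || [])) {
    attribs[k] = v;
  }
  if (token.dataAttribs && Object.keys(token.dataAttribs).length) {
    attribs['data-mw-rt'] = JSON.stringify(token.dataAttribs);
  }
  return attribs;
}

// Hypothetical table-cell token that was written as '||' with a space:
const tk = {
  name: 'td',
  attribs: [],
  dataAttribs: { startTagSrc: '||', spaces: ' ' }
};
// tokenToAttributes(tk)['data-mw-rt'] → '{"startTagSrc":"||","spaces":" "}'
```

The serializer can later read data-mw-rt back to reproduce the original syntax variant instead of emitting a normalized form.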

Issues:
 * Make sure that (potential) extension end tags are always matched, even if parsing the content causes a switch to a plain-text parsing mode.
 * Configuration-dependent syntax. It would be nice to keep the tokenizer independent of local configurations. This appears to be difficult at least for url protocols recognized in links. Most other configuration-dependent things including extensions can however be handled in token stream transforms.
 * Comments in arbitrary places cannot generally be supported without stripping comments before parsing. Even if parsed, this type of comment could not be represented in the DOM. Before deployment, we should check whether this is common enough to warrant an automated conversion; grepping a dump works well for this check.
 * Noinclude spanning template or link parameters:
   * Simple attribute-only solution: keep arguments in a single token chunk, and split only after expansion.
 * Wider issue: noinclude/extension blocks that start inside something normally parsed as an attribute and end outside of it (or the other way around).
   * 98% solution: don't try to handle this, as it is too rare (check!). Try to detect and warn about it to allow fixing the source. We can still handle some common special cases, but should warn anyway so that things can be fixed over time.
   * Very invasive, but more powerful (100%?) option: make the tokenizer less eager to figure out higher-level structures, producing a flat and relatively verbose token stream. This moves much more parsing work into the early (synchronous) token transform stage. Taken to the extreme it can handle pretty much everything, but it makes the grammar even less expressive and requires porting more custom token stream transforms.
   * Nowiki is handled in the tokenizer, so it does not have this issue.

Things to check:
 * Tim's Preprocessor ABNF
 * User documentation for preprocessor rewrite

Token stream transforms

 * More complete implementation of parser functions and magic words. There is some implementation and plenty of stubs (FIXME, quite straightforward!) in ext.core.ParserFunctions.js.
 * Fall back to the action=parse API for extensions and other unsupported constructs. Basically, build a page of unsupported elements in document order, with each element prefixed/postfixed with unique (non-wikitext) delimiters, then extract the results between the delimiters. See ParserNotesExtensions and Wikitext_parser/Environment.
 * Filter attributes and convert non-whitelisted tags into text tokens. See ext.core.Sanitizer.js for an outline; this should be a relatively straightforward port from the PHP version, and a good task if you'd like to dive into the JS parser.
 * Internal links: handle images, files, categories. The tokenizer classifies everything after the first pipe as description. Image parameters fortunately end up in a plain text token and can be easily re-parsed in a token stream transformer.
 * Generic attribute expansion to support templates and template arguments in attributes: expand all non-string values, which should be convertible to plain text after phase 2. Use AttributeTransformManager, and move its use out of TemplateHandler. It might be cleaner to split attribute expansion into phases 1 and 2 instead of calling both from the AsyncTokenTransformer (phase 2). Improvement: only expand branches selected by parser functions.
 * Map template attributes to HTML5 microdata.
 * Fix-ups for things documented in the following parser tests: 'External links: wiki links within external link (Bug 3695)'
 * Optimize token representation: Plain string for text, objects with appropriate constructor for others. Basically eliminate the type attribute.
 * Handle table foster-parenting with round-tripping by reordering and marking tokens
 * Handle dynamically generated nowiki sections. Template arguments are already tokenized and expanded before substitution, so we need to revert this. Idea: re-serialize tokens to the original text using source position annotations and other round-trip information. Icky, but doable. Try to structure the HTML DOM to wikitext serializer around SAX-like start/end handlers, so that the same handlers can serialize the token stream back to wikitext.
 * Validate bracketed external link targets. Validation was removed from the tokenizer to support templates and arguments in link targets. Links with non-validating targets need to be turned back into plain text (including the brackets). We need to investigate whether there are cases where an invalid link target would change the parsing of surrounding structures, e.g. by matching the closing bracket with an earlier opened bracket. If so, can these be handled in the token stream?
 * Handle named template parameters whose name comes from a template or template argument: such an expansion should, for example, set the argument named 'style' on the template transclusion of Template:Foo. Mark these as being in attribute position and re-parse after expansion in AttributeTransformManager?
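The delimiter-based action=parse fallback described above can be sketched as follows; the marker format and function names are hypothetical, but the idea matches the list item: wrap each unsupported snippet in unique, non-wikitext markers, parse the concatenation in one API call, then split the rendered HTML back into per-snippet results.

```javascript
// Sketch of the batching scheme for the action=parse fallback.
// Wrap each unsupported source snippet with a unique marker so one
// API round trip can render all of them in document order.
function wrapForBatchParse(snippets) {
  const uid = 'mw-parsoid-' + Date.now().toString(36);
  const page = snippets
    .map((src, i) => `${uid}-${i}\n${src}\n${uid}-${i}`)
    .join('\n');
  return { uid, page };
}

// Extract the rendered result for each snippet from the combined HTML
// returned by the API, by slicing between the matching markers.
function splitBatchResult(uid, count, html) {
  const results = [];
  for (let i = 0; i < count; i++) {
    const marker = `${uid}-${i}`;
    const start = html.indexOf(marker) + marker.length;
    const end = html.indexOf(marker, start);
    results.push(html.slice(start, end).trim());
  }
  return results;
}
```

A real implementation would also need to escape or reject source that happens to contain the marker, and to strip any markup the parser wraps around the markers themselves.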

Not really a token stream transform, but closely related:
 * Port the visual editor's wikitext serializer to work on the HTML DOM, and convert its handlers to per-token (or SAX-event-style) serialization. Per-token handlers allow us to use the same handlers for DOM or token stream serialization.
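A minimal sketch of the shared-handler idea: one table of per-tag start/end handlers drives both a token stream walk and a DOM tree walk. The handler table, token shapes, and DOM node shapes here are all hypothetical.

```javascript
// One handler table with SAX-like start/end callbacks per tag name.
const handlers = {
  b: { start: () => "'''", end: () => "'''" },
  i: { start: () => "''", end: () => "''" }
};

// Drive the handlers from a flat token stream.
function serializeTokens(tokens) {
  let out = '';
  for (const t of tokens) {
    if (typeof t === 'string') { out += t; }
    else if (t.type === 'start') { out += handlers[t.name].start(); }
    else if (t.type === 'end') { out += handlers[t.name].end(); }
  }
  return out;
}

// Drive the same handlers from a DOM-like tree instead.
function serializeDOM(node) {
  const h = handlers[node.name];
  const inner = (node.children || [])
    .map(c => typeof c === 'string' ? c : serializeDOM(c))
    .join('');
  return h ? h.start() + inner + h.end() : inner;
}
```

Because both drivers only ever call start() and end(), a handler written once works for either serialization path.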

DOM tree builder
Generally we would like to avoid any changes to the default HTML5 tree builder algorithm, as this would allow us to use the built-in HTML parser in modern browsers, or unchanged libraries. There are however some tasks that are very hard to solve otherwise, and require only small changes to the tree builder. These tasks all have to do with unbalanced token soup, which should be confined to the server.

 * Spurious end tags are ignored by the tree builder, while (some) are displayed as text in current MediaWiki. Text display is helpful for authors. The change to the HTML tree builder needed to replicate this would be small, but it is not possible if a browser's built-in parser is used. The visual editor will hopefully reduce the need for this kind of debugging aid in the medium term.
 * Propagate attribute information for end tag tokens (especially source information for tokens originating from templates) to matching start tag, to make sure that the full scope of token-affected subtrees is captured. This is hard to do without a modification to the tree builder. Only needs to be performed on the server side. Relatively simple modification.
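The end-tag propagation above could look roughly like this when the tree builder pops its open-element stack; the tsr (token source range) name echoes the source-offset annotations mentioned earlier, but the element and token shapes are hypothetical.

```javascript
// Sketch: when an end tag closes an element, merge the end tag's source
// range into the element's round-trip data so the recorded range covers
// the whole subtree, including the end tag itself.
function popWithEndTagInfo(openStack, endTag) {
  const el = openStack.pop();
  const endTsr = endTag.dataAttribs && endTag.dataAttribs.tsr;
  if (endTsr) {
    el.dataAttribs = el.dataAttribs || {};
    const startTsr = el.dataAttribs.tsr || [endTsr[0], endTsr[1]];
    // Extend [start, end] so the range ends where the end tag ends.
    el.dataAttribs.tsr = [startTsr[0], endTsr[1]];
  }
  return el;
}
```

This only needs to run in the server-side tree builder; a browser's built-in parser would simply not record the extended range.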

DOM postprocessing

 * Enforce some document model constraints on the HTML DOM to aid the editor; this should be able to run on either the server or the client.
 * Longer term fun project: move DOM building and transformations to webworker to provide fast Lua-extension-like or DOM/Tal/Genshi template functionality and multi-core support. See some ideas.

Middlewarish bits

 * Wikitext source / modified DOM serialization splicing
 * Set up a basic web service wrapper; Neil mostly did this already by hooking parse.js up to the MW API. We will need a longer-running node server at some stage to support splicing and to avoid reconstructing the tokenizer for each request (which takes a few seconds). It also needs to be modified to use the HTML DOM serialization instead of JSON.
 * Clean up round-trip data-wiki* attributes for pure-view HTML