Parsoid/Language conversion/Preprocessor fixups

The Problem
LanguageConverter markup was not well-integrated with the wikitext parser (or with subsequent additions to wikitext), resulting in a number of unusual corner cases when pages contain -{ ... }- markup. T54661 tracks a number of these, which have been steadily fixed without much issue.

The remaining bug is T146304: the wikitext preprocessor doesn't understand language converter markup, so it splits template arguments incorrectly when language converter markup is present. For instance:

Such markup is interpreted as two arguments to the template, split at the pipe inside the language converter construct, instead of one argument using language converter markup.
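The splitting behavior can be sketched outside the parser. The snippet below is a minimal illustration, not MediaWiki's actual preprocessor code; the sample markup and function names are invented for the illustration:

```python
def split_args_naive(arg_string):
    """Split a template's argument string on every '|':
    language converter markup is not understood."""
    return arg_string.split('|')

def split_args_lc_aware(arg_string):
    """Split on '|', but skip pipes inside -{ ... }- constructs."""
    args, cur, depth, i = [], [], 0, 0
    while i < len(arg_string):
        two = arg_string[i:i + 2]
        if two == '-{':
            depth += 1; cur.append(two); i += 2
        elif two == '}-' and depth > 0:
            depth -= 1; cur.append(two); i += 2
        elif arg_string[i] == '|' and depth == 0:
            args.append(''.join(cur)); cur = []; i += 1
        else:
            cur.append(arg_string[i]); i += 1
    args.append(''.join(cur))
    return args

# LC markup with a flag uses a pipe, e.g. -{H|...}-:
markup = '-{H|zh-hans:computer;zh-hant:computer}-'
print(split_args_naive(markup))     # two "arguments"
print(split_args_lc_aware(markup))  # the intended single argument
```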

The most straightforward fix makes the preprocessor recognize -{ ... }- constructs in all wikitext, since preprocessor operation is not dependent on page language (although actual "conversion" of -{ ... }- constructs still only occurs on pages whose page language defines variants). This is mostly safe, but markup such as the following, from en:Amoxicillin on enwiki, breaks under the new preprocessor rules:
 * IUPAC_name = (2S,5R,6R)-6-{[(2R)-2-amino-2-(4-hydroxyphenyl)-acetyl]amino}-3,3-dimethyl-7-oxo-4-thia-1-azabicyclo[3.2.0]heptane-2-carboxylic acid
The -{ sequence in this wikitext begins a new "language converter" construct, and although we have preprocessor rules to handle "unclosed constructs" and precedence, those kick in only when the template is finally closed with a }} sequence. All the template arguments between the -{ and the }} are swallowed up as apparent arguments to the language converter construct, and thus the template appears broken.

The fix is simple: -{ sequences need to be nowiki'ed, either by wrapping <nowiki>...</nowiki> around an entire phrase or argument, or simply by separating the - and the { (for example, with a <nowiki/> between them).
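A quick way to see why the separation works: the preprocessor (under the new rules) looks for the literal two-character sequence -{, so any markup that breaks that sequence, while rendering to the same text, is enough. A small sketch, assuming a simple regex stands in for the preprocessor's opener check:

```python
import re

# Stand-in for the (new) preprocessor's language converter opener.
LC_OPEN = re.compile(r'-\{')

# Abbreviated from the Amoxicillin IUPAC name above:
fragment = '(2S,5R,6R)-6-{[(2R)-2-amino-2-(4-hydroxyphenyl)-acetyl]amino}'
print(bool(LC_OPEN.search(fragment)))   # the fragment trips the opener

# Separating the "-" and the "{" with a self-closing nowiki leaves
# the rendered text unchanged but breaks the two-character sequence:
fixed = fragment.replace('-{', '-<nowiki/>{')
print(bool(LC_OPEN.search(fixed)))      # no longer matches
```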

Occasionally the -{ sequence appears in URLs, and in that case one of the characters needs to be URL-encoded instead, as on en:Alan Turing:
 * Alan Turing RKBExplorer

How do we tell how widespread this is?
Used 2017-03-20 dumps of all wiki projects with >1,000,000 articles, plus one additional wiki where I had stumbled across some problematic markup.

Command used:

  for wiki in $(cat ~/DumpGrepper/wikis.txt) ; do
    bzcat ~/DumpGrepper/$wiki-20170320-pages-articles.xml.bz2 | \
      node ./index.js --line -- '-[{]' '!' '<!--+[{]' '-[{][{]' | \
      tee results/$wiki-results.txt
  done

This uses a fork of DumpGrepper which allows line-by-line searches and the equivalent of a grep | grep -v pipeline. The command above says: print all lines which contain -{ but not <!--{ (or a comment opener with more dashes) or -{{, since the normal preprocessor precedence yields the expected (ie, non-language-converter) result for those constructs.
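The filter the pipeline applies to each line can be restated in Python, using the same four patterns: one match pattern, then exclusion patterns after the '!' separator. The helper name is invented; this is a sketch of the filter's logic, not the DumpGrepper fork itself:

```python
import re

MATCH = re.compile(r'-[{]')        # lines containing -{
EXCLUDE = [
    re.compile(r'<!--+[{]'),       # -{ opened inside an HTML comment
    re.compile(r'-[{][{]'),        # -{{, where {{ wins by precedence
]

def problematic(line):
    """True if the dump-grepper command would report this line."""
    if not MATCH.search(line):
        return False
    return not any(p.search(line) for p in EXCLUDE)

lines = [
    'heptane-2-carboxylic acid',   # no -{ at all: not reported
    '6-{[(2R)-2-amino]}',          # reported: bare -{
    '<!--{ commented out -->',     # excluded: inside a comment
    'foo-{{bar}}',                 # excluded: template braces win
]
print([l for l in lines if problematic(l)])  # ['6-{[(2R)-2-amino]}']
```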