User:HappyDog/WikiText parsing

This is a technical page describing how the MediaWiki engine parses a page of wiki markup to create the page you see. I wrote it on my own wiki a long time ago in order to help me understand how the engine works, not to explain wiki syntax. I have reposted it here in case it is of any use to anyone, in particular with regard to the Markup spec project. It may not be 100% accurate or complete, so no guarantees are made!

This page describes the anatomy of the addWikiText function of the OutputPage class (instantiated as $wgOut in the code). The function translates the wiki markup it receives as an argument into HTML text, which it adds to the final page using the class's addHTML function. This description does not include any details about parsing external to this function, for example redirects, although these may be added at a later date.

The separating and recombining of <nowiki>, <math> and <pre> tags (steps 1 to 3 and the final step) are carried out by the addWikiText function, whilst the rest of the parsing is carried out by doWikiPass2, a separate function of the OutputPage class. doWikiPass2 calls several other class functions to convert the text, and these are indicated where relevant.

The information on this page is based on an unmodified MediaWiki version 1.3.10 (I think!).


# <nowiki> content is separated out.
#* Neither the opening nor the closing <nowiki> tag is case sensitive, and they may contain whitespace between the word nowiki and the brackets. No whitespace is allowed between the opening bracket and the slash on the </nowiki> tag, however.
#* A closing </nowiki> tag is not required. If it is missing then the rest of the supplied text is treated as nowiki content.
#* Text within the nowiki tags has all ampersands and triangular brackets replaced by the appropriate HTML entity codes. This means that HTML markup won't work within the nowiki tags.
#* No further parsing is done on the text within the tags.
# If TeX support is enabled then maths content is separated out and rendered.
#* Maths content is specified using the <math> and </math> tags.
#* The <math> tags are processed in the same way as the <nowiki> tags, specifically:
#** Neither the opening nor the closing tag is case sensitive, and they may contain whitespace between the word math and the brackets. No whitespace is allowed between the opening bracket and the slash on the </math> tag, however.
#** A closing </math> tag is not required. If it is missing then the rest of the supplied text is treated as part of the maths mark-up.
#* The contents of the <math> tags are rendered by calling renderMath. The details of this function are not yet included on this page.
#* If TeX support is disabled (global variable $wgUseTeX == false) then any maths markup (including the tags themselves) is treated as normal wiki code.
# Any text enclosed by <pre> tags is separated out.
#* Text within <pre> tags is treated exactly the same as text within <nowiki> tags (including how the tags are parsed and how the text is treated), with the following minor differences:
#** Any maths content within a <pre> tag will already have been separated out and rendered.
#** The <pre> tags are retained, and continue to enclose the text, whereas the <nowiki> tags are removed from the final output.
# HTML tags are validated (using the function removeHTMLTags).
#* This is quite a complex procedure, which I may go into in more detail on a separate page, but for the moment it can be summarised as follows:
#** HTML comments are removed.
#** Any tags that are not allowed by the software (e.g. <script> tags) are replaced by HTML entities, so they display as literals and are not treated as HTML by the browser.
#** Any badly formed tags (e.g. tags nested where they shouldn't be, or tags appearing outside their required enclosing tag) are also replaced by HTML entities so they are not treated as HTML.
#** Any attributes that are not allowed by the software (e.g. onMouseOver) are removed from otherwise valid tags.
#** A small amount of minor source formatting is applied (basically, the removal of unnecessary whitespace).
#** A closing tag is added at the end for all tags that are not closed properly. Note that some tags (e.g. <br>) don't need to be closed.
# Built-in wiki variables are replaced (using the function replaceVariables).
# Horizontal lines are generated.
#* Any occurrence of four or more hyphens at the start of a line is replaced by an HTML <hr> tag.
#* Any capitalised <HR> tags are made lower case.
# Bold and italic formatting is applied (using the function doAllQuotes).
# Headings are formatted (using the function doHeadings).
# List and indentation formatting is applied (using the function doBlockLevels).
# If dynamic dates are enabled, dates are reformatted appropriately (using the function $wgLang->replaceDates).
#* Dynamic dates allow users to select a custom date format in their preferences, and are enabled using the global variable $wgUseDynamicDates.
#* If dynamic dates are disabled then no replacement is made.
# External wiki links are created (using the function replaceExternalLinks).
# Internal wiki links are created (using the function replaceInternalLinks).
# ISBN numbers are made into links (using the function magicISBN).
# RFC numbers are made into links (using the function magicRFC).
#* This function currently does nothing. I am assuming it will eventually turn RFC numbers into links in a similar manner to the way ISBN numbers are handled.
# Headings are formatted (using the function formatHeadings).
# The text is passed to the Skin object for formatting (using the function transformContent in the user's Skin class).
#* This allows the skin to add any of its own formatting that may be required, e.g. table background colours.
#* The default skin does not make any alterations at this stage.
# Finally, the <pre>, maths and <nowiki> content is recombined with the fully rendered wiki code and the whole HTML text is output.
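The nowiki separation described above can be sketched as follows. The real code is PHP inside OutputPage; the regex, the placeholder-marker scheme and the exact escaping set here are my own illustrative assumptions, but they follow the rules given above (case-insensitive tags, whitespace allowed around the word, optional closing tag).

```python
import re

def strip_nowiki(text):
    """Separate out <nowiki> sections: the tag name is case-insensitive,
    whitespace is allowed between the word 'nowiki' and the brackets
    (but not between '<' and '/'), and a missing closing tag means the
    rest of the text is treated as nowiki content."""
    # DOTALL so a nowiki section may span multiple lines.
    pattern = re.compile(r'<\s*nowiki\s*>(.*?)(?:</\s*nowiki\s*>|\Z)',
                         re.IGNORECASE | re.DOTALL)
    sections = []

    def replace(match):
        content = match.group(1)
        # Escape so HTML markup inside nowiki displays as literal text.
        content = (content.replace('&', '&amp;')
                          .replace('<', '&lt;')
                          .replace('>', '&gt;'))
        marker = f'\x07NOWIKI{len(sections)}\x07'  # unique placeholder
        sections.append(content)
        return marker

    return pattern.sub(replace, text), sections
```

For example, `strip_nowiki("a <NoWiki >b <i>c</i></nowiki> d")` leaves only the placeholder in the text to be parsed, while the escaped section is saved for the recombination step at the end.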
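The maths step can be sketched the same way, with the $wgUseTeX gate: when TeX support is off, the markup (tags included) is left alone to be parsed as ordinary wiki text. The `render_math` callable stands in for renderMath, whose details are not covered on this page.

```python
import re

# Same tag rules as nowiki: case-insensitive, optional closing tag.
MATH_RE = re.compile(r'<\s*math\s*>(.*?)(?:</\s*math\s*>|\Z)',
                     re.IGNORECASE | re.DOTALL)

def strip_math(text, use_tex, render_math):
    """If TeX support is enabled, replace each maths section with its
    rendered output; otherwise return the text untouched (mirroring
    the $wgUseTeX == false case)."""
    if not use_tex:
        return text
    return MATH_RE.sub(lambda m: render_math(m.group(1)), text)
```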
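The HTML validation pass (removeHTMLTags) can be summarised in code roughly as below. This is only a sketch of the behaviours listed above; the whitelists shown are small illustrative samples, not MediaWiki's actual lists, and the real procedure is considerably more involved.

```python
import re

# Illustrative whitelists -- the real lists in MediaWiki are longer.
ALLOWED_TAGS = {'b', 'i', 'p', 'pre', 'ul', 'ol', 'li', 'br', 'hr'}
ALLOWED_ATTRS = {'title', 'align'}       # e.g. onMouseOver is NOT here
UNCLOSED_OK = {'br', 'hr', 'li'}         # tags that need no closing tag

def escape_tag(token):
    """Show a rejected tag as literal text instead of live HTML."""
    return token.replace('<', '&lt;').replace('>', '&gt;')

def sanitise(html):
    """Rough sketch of the validation steps: remove comments, escape
    disallowed or badly formed tags, drop disallowed attributes, and
    close anything left open at the end."""
    html = re.sub(r'<!--.*?-->', '', html, flags=re.DOTALL)  # comments
    out, open_stack = [], []
    for token in re.split(r'(<[^>]*>)', html):
        m = re.match(r'<(/?)(\w+)([^>]*)>$', token)
        if not m:
            out.append(token)                     # plain text
            continue
        closing, name, attrs = m.group(1), m.group(2).lower(), m.group(3)
        if name not in ALLOWED_TAGS:
            out.append(escape_tag(token))         # disallowed tag
        elif closing:
            if open_stack and open_stack[-1] == name:
                open_stack.pop()
                out.append('</' + name + '>')
            else:
                out.append(escape_tag(token))     # badly formed
        else:
            # Keep only whitelisted attributes.
            kept = [(a, v) for a, v in re.findall(r'(\w+)="([^"]*)"', attrs)
                    if a.lower() in ALLOWED_ATTRS]
            if name not in UNCLOSED_OK:
                open_stack.append(name)
            out.append('<' + name
                       + ''.join(f' {a}="{v}"' for a, v in kept) + '>')
    # Close any tags that were never closed, innermost first.
    out.extend('</' + t + '>' for t in reversed(open_stack))
    return ''.join(out)
```

So `sanitise('<b onMouseOver="y">hi')` drops the disallowed attribute and appends the missing closing tag, giving `<b>hi</b>`.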
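Variable replacement can be pictured like this. The variable names are real MediaWiki built-ins, but the set shown is a small sample and the lookup mechanism is a simplification of replaceVariables.

```python
import datetime
import re

def replace_variables(text):
    """Replace built-in wiki variables such as {{CURRENTYEAR}} with
    their current values; unknown names are left untouched."""
    now = datetime.datetime.now()
    variables = {
        'CURRENTYEAR': str(now.year),
        'CURRENTMONTH': f'{now.month:02d}',
        'CURRENTDAY': str(now.day),
    }
    return re.sub(r'\{\{(\w+)\}\}',
                  lambda m: variables.get(m.group(1), m.group(0)),
                  text)
```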
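The horizontal-line step is a simple pair of substitutions, roughly:

```python
import re

def do_horizontal_lines(text):
    """Replace four or more hyphens at the start of a line with an
    <hr> tag, and lower-case any capitalised <HR> tags."""
    text = re.sub(r'^-{4,}', '<hr>', text, flags=re.MULTILINE)
    text = re.sub(r'<\s*hr\s*>', '<hr>', text, flags=re.IGNORECASE)
    return text
```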
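The bold/italic pass can be reduced to two substitutions for the common case. The real doAllQuotes handles many edge cases (unbalanced quotes, ''''' for bold-italic) that this sketch ignores.

```python
import re

def do_quotes(line):
    """Minimal sketch of quote conversion: '''text''' becomes bold,
    ''text'' becomes italic."""
    line = re.sub(r"'''(.+?)'''", r'<b>\1</b>', line)
    line = re.sub(r"''(.+?)''", r'<i>\1</i>', line)
    return line
```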
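Heading formatting (the doHeadings pass) maps a line wrapped in n equals signs to a level-n heading, roughly:

```python
import re

def do_headings(text):
    """A line wrapped in n '=' signs (1 to 6) becomes an <hn> element."""
    def repl(m):
        level = len(m.group(1))
        return f'<h{level}>{m.group(2).strip()}</h{level}>'
    # \1 ensures the closing run of '=' matches the opening run.
    return re.sub(r'^(={1,6})(.+?)\1\s*$', repl, text, flags=re.MULTILINE)
```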
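The list/indentation pass (doBlockLevels) tracks nesting depth from the leading markup characters. This toy version handles only '*' bullet lists, but the depth-tracking idea is the same for the other list types.

```python
def do_block_levels(lines):
    """Consecutive lines starting with '*' become a <ul>, with nesting
    depth taken from the number of leading '*' characters."""
    out, depth = [], 0
    for line in lines:
        stars = len(line) - len(line.lstrip('*'))
        while depth < stars:              # deeper: open lists
            out.append('<ul>'); depth += 1
        while depth > stars:              # shallower: close lists
            out.append('</ul>'); depth -= 1
        out.append(f'<li>{line[stars:].strip()}</li>' if stars else line)
    while depth:                          # close anything still open
        out.append('</ul>'); depth -= 1
    return out
```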
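The ISBN magic-link step can be pictured as below. The target URL shape and the loose number pattern are simplifying assumptions on my part, not the exact behaviour of magicISBN.

```python
import re

def magic_isbn(text):
    """Turn 'ISBN <number>' into a link to a book-sources page,
    keeping the original number as the link text."""
    def repl(m):
        digits = m.group(1).replace('-', '')
        return ('<a href="/wiki/Special:Booksources?isbn=' + digits
                + '">ISBN ' + m.group(1) + '</a>')
    return re.sub(r'ISBN ([0-9Xx][0-9Xx\-]+)', repl, text)
```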
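The final recombination step simply substitutes each saved section back in place of its placeholder. This assumes sections were replaced during separation by unique markers of the form shown (an illustrative scheme; the real code uses its own tokens).

```python
def recombine(text, sections):
    """Substitute each saved nowiki/math/pre section back in place of
    its unique placeholder marker before the HTML is output."""
    for i, content in enumerate(sections):
        text = text.replace(f'\x07NOWIKI{i}\x07', content)
    return text
```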