User:OrenBochman/ParserNG

These pages are an attempt at documenting my ANTLR-based parser spec for creating a new, efficient analysis chain for indexing wikisource.

 * 1) The ANTLR spec needs to be able to tokenize wikisource.
 * 2) The tokens should then be tagged.
 * 3) Next, the tokens can be filtered.
 * 4) Tokens that are not removed by the filter may be augmented with payloads.
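The four steps above could be sketched as a plain-Java pipeline. This is a hypothetical sketch only (the class and method names are mine, not from the project); the real chain would be built on Lucene's TokenStream/TokenFilter API, with the ANTLR lexer supplying the tokens:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the four-stage chain: tokenize -> tag -> filter -> payload.
public class ChainSketch {
    record Token(String text, String tag, String payload) {}

    // 1) tokenize: split on whitespace (a stand-in for the ANTLR lexer)
    static List<Token> tokenize(String input) {
        List<Token> out = new ArrayList<>();
        for (String t : input.split("\\s+")) {
            if (!t.isEmpty()) out.add(new Token(t, null, null));
        }
        return out;
    }

    // 2) tag: mark tokens that look like wiki markup
    static List<Token> tag(List<Token> in) {
        List<Token> out = new ArrayList<>();
        for (Token t : in) {
            String tag = t.text().startsWith("[[") ? "LINK" : "TEXT";
            out.add(new Token(t.text(), tag, t.payload()));
        }
        return out;
    }

    // 3) filter: drop pure-markup tokens
    static List<Token> filter(List<Token> in) {
        return in.stream().filter(t -> !t.tag().equals("LINK")).toList();
    }

    // 4) payload: attach a boost to the surviving tokens
    static List<Token> payload(List<Token> in) {
        return in.stream().map(t -> new Token(t.text(), t.tag(), "boost=1.0")).toList();
    }

    // each step consumes its predecessor's output
    static List<Token> analyze(String input) {
        return payload(filter(tag(tokenize(input))));
    }
}
```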

Specs
Although ANTLR can generate PHP and JavaScript code:
 * I no longer see a need for a monolithic parser in ANTLR.
 * Lucene favours analyser chains; each step in the chain consumes its predecessor's output.
 * These ANTLR grammars stay small because they do little more than document the wiki syntax while creating a parse tree.

Analysis Chain
The specs are planned as parts of a parser chain.


 * 1) preprocessor
 * 2) comments
 * 3) templates
 * 4) magic words
 * 5) parser functions
 * 6) core
 * 7) extensions
 * 8) maths
 * 9) date
 * 10) etc
 * 11) tables
 * 12) links, images
 * 13) other simple syntax
 * 14) formatting
 * 15) extension tags
 * 16) cite.
 * 17) others


 * Ideally, the parser should be able to transform the input into an output format.
 * For search, however, the overriding concern is to capture the search points in their context, together with any boosting factors.

To build fully functional output would necessitate:
 * 1) transliteration tables
 * 2) a mechanism to resolve:
 * 3) * parser functions,
 * 4) * magic words,
 * 5) * extension actions (not realistic unless they can be invoked fast from a mock PHP Doc interface),
 * 6) * globalization of information,
 * 7) a mechanism to resolve transclusion of:
 * 8) * templates,
 * 9) * non-template NS content.

This in turn would require:
 * 1) parser specs
 * 2) integration with the parser specs
 * 3) transforming the parse tree into the output tree using tree grammars
 * 4) using a StringTemplate file to construct the output.

An analysis of the input for search, on the other hand, need only produce a basic parse tree.


 * [[Manual:Extending_wiki_markup]]

Current Specs

 * WikiTable
 * Preprocessor
 * Translator
 * Awk ANTLR
 * Sanitizer Antlr Scrubber

Parsing Options
Goal: specify the parser in Antlr
 * would provide documentation
 * would be more efficient and robust.
 * would simplify other parsing effort
 * can produce different language targets (PHP, JavaScript, Java, C++, Python) for use by many tools
 * can be used to migrate, translate to a better format.
 * can be extended

Challenges of Parsing MediaWiki Syntax
Based on: and
 * 1) The set of all inputs is not fixed.
 * 2) External references:
 * 3) *templates
 * 4) *transclusion
 * 5) *extensions
 * 6) Command order mismatch:
 * 7) *output is a single file. input can a recursive set of files.
 * 8) *templates require out-of-order processing and extensions too.
 * 9) the lexer must be context-sensitive
 * 10) Need to look forward, and sometimes backwards too:
 * 11) * backwards to determine the meaning of a curly-brace construct (possibly until end of file).
 * 12) * the same goes for include-only, no-include, comments and nowiki.
 * 13) The language is big, and its statements (magic words) can be changed externally.
 * 14) some language statements are very similar
 * 15) * [ can mean several things (internal link, external link, audio, picture, video, etc.)
 * 16) * { can mean several things.
 * 17) * a run of ' characters can mean several things: a literal ' followed by markup, or markup followed by a literal '.
 * 18) White space adds some complexity.
 * 19) * TOC placement
 * 20) * indentations does matter
 * 21) * single vs multiple new lines matter too.
 * 22) Optional case sensitivity in a literal's first letter, but not in commands.
 * 23) Error recovery is important
 * 24) Good error reporting is not.
 * 25) Poor documentation.
 * 26) * The language is not well-defined and is sparsely documented;
 * 27) * it was hacked on for ages by non-language designers;
 * 28) * The only definition is in the working code of the above hacks.
 * 29) The Translator should be fast and modular.
 * 30) * However the current parser is very slow.
 * 31) * it would be hard to be slower
 * 32) * extensive caching compensates for slowness in many situations
 * 33) * modularity and simplicity are more important.
 * 34) content has comments and markup that can occur anywhere in the input and must end up in the output at the proper locations.
 * 35) multiple syntax for features:
 * 36) * tables
 * 37) * headers, bold italic can be wiki or html based
 * 38) * output need not be human editable
 * 39) input size - can be massive, e.g. wikibooks.
 * 40) * imposes limits on # of passes.
 * 41) * imposes limits on the viability of memoization.
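To make the apostrophe challenge concrete: MediaWiki disambiguates a run of apostrophes largely by its length. The following classifier is a sketch of the commonly documented conventions (the helper name is hypothetical; the real Parser.php logic also depends on balancing quotes across the whole line):

```java
// Hypothetical classifier for a run of N consecutive apostrophes.
public class QuoteRuns {
    static String classify(int n) {
        switch (n) {
            case 0:  return "none";
            case 1:  return "literal";              // a plain apostrophe
            case 2:  return "italic";               // ''x''
            case 3:  return "bold";                 // '''x'''
            case 4:  return "literal+bold";         // one literal ' then bold markup
            case 5:  return "bold-italic";          // '''''x'''''
            default: return "literal+bold-italic";  // extras treated as literal text
        }
    }
}
```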

Open Questions

 * 1) what are, and what should be, the parser's:
 * 2) * error handling.
 * 3) * error recovery capability.
 * 4) Is a major move to simplify the language being considered?
 * 5) *reducing construct ambiguity.
 * 6) *reducing context dependency.
 * 7) **Links, images etc in
 * 8) *simple is not necessarily weaker.
 * 9) how does/should the extension mechanism interact with the parser.
 * 10) * protect the parser from extension's bugs.
 * 11) * give extension's services.
 * 12) * separate implementation.
 * 13) is the Antlr backend for PHP or JavaScript good enough to generate the parser with?
 * 14) what is the importance of semantics when parsing MediaWiki content, as opposed to parsing just the syntax?
 * 15) templates seem important
 * 16) can the parser's complexity be reduced if it had access to semantic metadata?
 * 17) scoping rules (templates, variables, references)
 * 18) * are the required variables defined already?
 * 19) * when does a definition expire

Enhancements

 * 1) dynamic scoping of template args
 * let the called template see named variables defined in its parent's call
 * as above but with name munging like super.argname
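Dynamic scoping of template arguments could be sketched as a chain of frames, where a lookup falls back to the parent call's frame, and a super.argname prefix skips the local frame. All names here (Frame, lookup, define) are hypothetical illustrations, not existing MediaWiki code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of dynamically scoped template arguments.
public class TemplateFrames {
    static class Frame {
        final Frame parent;                          // the calling template's frame
        final Map<String, String> args = new HashMap<>();

        Frame(Frame parent) { this.parent = parent; }

        void define(String name, String value) { args.put(name, value); }

        // "super.argname" style name munging looks up in the parent first.
        String lookup(String name) {
            if (name.startsWith("super.") && parent != null)
                return parent.lookup(name.substring("super.".length()));
            if (args.containsKey(name)) return args.get(name);
            return parent == null ? null : parent.lookup(name);
        }
    }
}
```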


 * 1) parser functions which evaluate (mathematical) expressions within variables.
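An expression-evaluating parser function in the spirit of ParserFunctions' {{#expr:}} could start from a small recursive-descent evaluator like this sketch (the real #expr supports many more operators; this handles only + - * / and parentheses):

```java
// Minimal recursive-descent evaluator for + - * / with parentheses.
public class ExprEval {
    private final String s;
    private int pos;

    ExprEval(String s) { this.s = s.replaceAll("\\s+", ""); }

    static double eval(String expr) { return new ExprEval(expr).parseExpr(); }

    private double parseExpr() {            // term (('+'|'-') term)*
        double v = parseTerm();
        while (pos < s.length() && (s.charAt(pos) == '+' || s.charAt(pos) == '-')) {
            char op = s.charAt(pos++);
            double r = parseTerm();
            v = (op == '+') ? v + r : v - r;
        }
        return v;
    }

    private double parseTerm() {            // factor (('*'|'/') factor)*
        double v = parseFactor();
        while (pos < s.length() && (s.charAt(pos) == '*' || s.charAt(pos) == '/')) {
            char op = s.charAt(pos++);
            double r = parseFactor();
            v = (op == '*') ? v * r : v / r;
        }
        return v;
    }

    private double parseFactor() {          // number | '(' expr ')'
        if (s.charAt(pos) == '(') {
            pos++;                          // consume '('
            double v = parseExpr();
            pos++;                          // consume ')'
            return v;
        }
        int start = pos;
        while (pos < s.length() && (Character.isDigit(s.charAt(pos)) || s.charAt(pos) == '.')) pos++;
        return Double.parseDouble(s.substring(start, pos));
    }
}
```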

Existing Documentation

 * Preprocessor
 * Markup Spec
 * Alternative_parsers
 * Parser Testing script + Test Cases
 * Extending_wiki_markup Parser hooks for the extension mechanism
 * hooks:

 * Category:ParserBeforeStrip: extensions that rely on the ParserBeforeStrip hook.
 * Category:ParserAfterStrip: extensions that rely on the ParserAfterStrip hook.
 * Category:ParserBeforeInternalParse: extensions that rely on the ParserBeforeInternalParse hook.
 * Category:OutputPageBeforeHTML: extensions that rely on the OutputPageBeforeHTML hook.
 * Category:ParserBeforeTidy: extensions that rely on the ParserBeforeTidy hook.
 * Category:ParserAfterTidy: extensions that rely on the ParserAfterTidy hook.

Missing Specs

 * Language conversion -{ }- syntax
 * sanitation
 * Operator precedence
 * Error recovery

Tools

 * Mediawiki\maintenance\tests
 * Parser Playground gadget

Antlr

 * How to remove global backtracking from your grammar
 * look ahead analysis
 * (...)? optional sub-rule
 * (...)=> syntactic predicate
 * {...}? hoisting disambiguating semantic predicate
 * {...}?=> gated semantic predicate

Java Based Parsers
The last is the most promising!
 * http://code.google.com/p/gwtwiki/
 * http://rendering.xwiki.org/xwiki/bin/view/Main/WebHome
 * http://sweble.org/wiki/Sweble_Wikitext_Parser

Todo

 * 1) finish the dumpHtmlHarness class.
 * 2) add more options.
 * 3) benchmarking.
 * 4) log4j output.
 * 5) implement extension tag loading mechanism.
 * 6) implement magic word (localised) loading mechanism.
 * 7) input filter support.
 * 8) different parser implementation via dependency injection
 * 9) write a JUnit test which runs the tests in Mediawiki\maintenance\tests\parser\parserTests.txt
 * 10) write a JUnit test which runs real page content.
 * 11) get the lot into Jenkins CI.
 * 12) fix one of the above parsers.
 * 13) test the ANTLR version.
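A JUnit runner for parserTests.txt would first need to read the file's case format. The sketch below assumes the classic section markers (!! test / !! input / !! result / !! end); newer versions of the file add further sections (options, articles), which this deliberately ignores:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a reader for the parserTests.txt case format. A JUnit
// runner would turn each Case into one parameterised test.
public class ParserTestReader {
    record Case(String name, String input, String result) {}

    static List<Case> read(String text) {
        List<Case> cases = new ArrayList<>();
        String name = null, input = null, result = null, section = null;
        StringBuilder buf = new StringBuilder();
        for (String line : text.split("\n", -1)) {
            if (line.startsWith("!!")) {
                String marker = line.substring(2).trim().toLowerCase();
                // store the section we just finished reading
                if ("test".equals(section)) name = buf.toString().trim();
                else if ("input".equals(section)) input = buf.toString().trim();
                else if ("result".equals(section)) result = buf.toString().trim();
                buf.setLength(0);
                if (marker.equals("end")) {
                    if (name != null && input != null && result != null)
                        cases.add(new Case(name, input, result));
                    name = input = result = null;
                    section = null;
                } else {
                    section = marker;
                }
            } else if (section != null) {
                buf.append(line).append("\n");
            }
        }
        return cases;
    }
}
```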