WTTP Client


This page describes how a (theoretical) WTTP client may work, both internally and externally.

Note: This is a general idea of how a client may work eventually. It will evolve as the WTTP spec evolves.

General

When a user opens an article in the client, several things happen. If the server is one the client has not used before, the client logs in to that server. A request is then made to the server to retrieve the article. The article's wikitext is sent back, and the client begins to parse it as it arrives. As it parses, the client builds a list of the requests needed to finish parsing (a rough sketch follows the list below). These may include:

  • Templates
  • Whether or not a page exists
  • URLs of images
  • The wiki's configuration
  • UI messages
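
A rough sketch of that first-pass scan in Python: the regexes, the request tuples, and the function name are all invented for illustration and are not part of the WTTP spec.

    import re

    # Very rough first-pass scan of streamed wikitext: collect the follow-up
    # requests (templates, page-existence checks, and so on) that the full
    # parse will need. Patterns and request tuples are illustrative only.
    def collect_requests(wikitext_chunks):
        needed = []
        for chunk in wikitext_chunks:            # chunks arrive as the page streams in
            for name in re.findall(r"\{\{([^}|]+)", chunk):
                needed.append(("template", name.strip()))
            for target in re.findall(r"\[\[([^]|]+)", chunk):
                needed.append(("page-exists", target.strip()))
        return needed

    # Example:
    # collect_requests(["{{Infobox person}} born in [[London"])
    # -> [("template", "Infobox person"), ("page-exists", "London")]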

As this work completes, or while it is still in progress, the formatted article is displayed to the user, who can then read or edit it.

Architecture

Parsing runs asynchronously from requesting. When the parser finds something that needs a request, it adds the request to a queue. When a given request finishes, it would be best to update the appropriate part of the document immediately (instead of waiting until the parser finishes the primary page).
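
One way to sketch that decoupling, assuming a hypothetical in-process queue where each queued item carries both the request and a callback that patches the waiting part of the document (all names here are invented):

    import queue, threading

    # Illustration of parser/requester decoupling: the parser queues work, a
    # worker performs it, and each completion immediately patches the piece of
    # the document that was waiting on it, rather than waiting for the full parse.
    request_queue = queue.Queue()

    def request_worker(perform_request):
        while True:
            request, patch_document = request_queue.get()   # (what to fetch, how to apply it)
            patch_document(perform_request(request))         # update that fragment right away
            request_queue.task_done()

    def start_worker(perform_request):
        threading.Thread(target=request_worker, args=(perform_request,), daemon=True).start()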

Internal parser

The parser mentioned above is a two-pass parser. Pass one is performed as the page is being received, using as few look-aheads as possible. Anything that cannot be completed without another request is flagged, with the request noted and queued.

Pass two, which may begin before pass one finishes, inserts and modifies the elements of the article that needed a second request to complete.
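
A skeleton of the two passes, under the assumption that pass one emits a flat list of nodes with some flagged as needing a request; the node type, the naive block split, and the response mapping are all illustrative:

    from dataclasses import dataclass

    # Illustrative node type: pass one marks nodes that still need a request,
    # pass two fills them in once the responses have arrived.
    @dataclass
    class Node:
        text: str
        needs_request: tuple = None      # e.g. ("template", "{{Infobox person}}")
        resolved: bool = True

    def pass_one(wikitext):
        nodes = []
        for block in wikitext.split("\n\n"):              # naive split, minimal look-ahead
            if "{{" in block:                             # cannot finish without the template
                nodes.append(Node(block, ("template", block), resolved=False))
            else:
                nodes.append(Node(block))
        return nodes

    def pass_two(nodes, responses):
        for node in nodes:
            if not node.resolved and node.needs_request in responses:
                node.text = responses[node.needs_request]  # splice in the fetched content
                node.resolved = True
        return nodes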

Templates

Templates need special consideration for two reasons:

  • They may complete otherwise incomplete wikitext
  • They require both passes of the parser to be run again.

One way to keep everything correct is to parse and display the elements that contain no templates, or that contain only pre-defined templates (DATE, TIME, SERVER, NAMESPACE, PAGENAME, PAGENAMEE, localurl, etc.), since these can be determined without another request, while flagging the incomplete elements. When the missing templates are retrieved, the elements that contain them are parsed and displayed.
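
For example, a client-side dispatch along those lines might look like the following; the local expansions are placeholders, and a real client would take these values from its own state and the wiki's configuration:

    import datetime

    # Hypothetical split between templates the client can expand locally and
    # those that need a server round-trip. All values here are placeholders.
    LOCAL_TEMPLATES = {
        "DATE": lambda ctx: datetime.date.today().isoformat(),
        "TIME": lambda ctx: datetime.datetime.now().strftime("%H:%M"),
        "SERVER": lambda ctx: ctx["server"],
        "NAMESPACE": lambda ctx: ctx["namespace"],
        "PAGENAME": lambda ctx: ctx["pagename"],
    }

    def expand_template(name, context, request_queue):
        if name in LOCAL_TEMPLATES:
            return LOCAL_TEMPLATES[name](context)     # no extra request needed
        request_queue.append(("template", name))      # flag the element, fetch later
        return None                                   # element stays incomplete for now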

Extensions

XML-style extensions also need special care (this excludes internal tags, such as <nowiki> and <gallery>). Standard extensions (<math>, <hiero>, <timeline>, etc.) can be handled by the client. Non-standard extensions require the server to process them, which means another request must be made; they are treated like any other queued request.
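
In code this is a three-way dispatch on the tag name; the tag sets and the placeholder renderer below are assumptions, not a registry defined by the spec:

    # Illustrative dispatch on XML-style tags: internal tags belong to the
    # parser, standard extensions are rendered by the client, and anything
    # else is queued for the server to process.
    INTERNAL_TAGS = {"nowiki", "gallery"}
    STANDARD_EXTENSIONS = {"math", "hiero", "timeline"}

    def render_locally(tag, body):
        # Placeholder for client-side rendering of a standard extension.
        return "<span class='%s'>%s</span>" % (tag, body)

    def handle_extension(tag, body, request_queue):
        if tag in INTERNAL_TAGS:
            return body                                        # parser handles these itself
        if tag in STANDARD_EXTENSIONS:
            return render_locally(tag, body)                   # client-side rendering
        request_queue.append(("render-extension", tag, body))  # server must process it
        return None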

Media

Obtaining the URL of a media file's content (including images) requires only a HEAD request; the URL is returned in the x-wiki-media-url header.
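
A sketch of that lookup with Python's standard HTTP client; the host and the path layout are assumptions, while the x-wiki-media-url header is the one described above:

    import http.client

    # Fetch only the headers for a media page and read the x-wiki-media-url
    # header. Host and path layout are assumptions for illustration.
    def media_content_url(host, media_page_path):
        conn = http.client.HTTPConnection(host)
        conn.request("HEAD", media_page_path)
        url = conn.getresponse().getheader("x-wiki-media-url")   # None if absent
        conn.close()
        return url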

Request mechanism

The requesting mechanism is just a standard HTTP library with a thin layer to handle WTTP-specific headers and meta-data.
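
Such a layer could be as small as the following wrapper over Python's standard urllib; the header name used here is made up for illustration, since the spec has not fixed the real ones yet:

    import urllib.request

    # Thin wrapper over a standard HTTP library that attaches WTTP-specific
    # headers. The header name is invented for illustration.
    def wttp_request(url, client_name="example-client/0.1"):
        req = urllib.request.Request(url, headers={"X-WTTP-Client": client_name})
        with urllib.request.urlopen(req) as response:
            return dict(response.headers), response.read()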

Caching

All request results are cached. If a cached result has not expired, the request returns immediately with the cached result. If it has expired, the request is added to the queue to be processed.
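
A small sketch of that policy; the expiry time and the queue shape are assumptions:

    import time

    # Cache of request results keyed by the request itself. The time-to-live
    # and the queue shape are assumptions for illustration.
    CACHE = {}                # request -> (expires_at, result)
    TTL_SECONDS = 300

    def cached_request(request, request_queue):
        entry = CACHE.get(request)
        if entry and entry[0] > time.time():
            return entry[1]                 # not expired: answer immediately
        request_queue.append(request)       # expired or missing: re-fetch via the queue
        return None

    def store_result(request, result):
        CACHE[request] = (time.time() + TTL_SECONDS, result)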

Idea: if a cached result has expired, can/should a HEAD request be performed first to see whether the page has changed? What is the overhead of a HEAD request vs. a GET? Is there any performance gain from sending only the headers?

Display mechanism

For the immediate future, clients may use the MonoBook skin and a browser engine, such as Gecko or the WebBrowser control (gasp!), to create pages and modify them through the DOM. Other clients, such as exporters or converters, may use another library to accomplish this. (Note: this is written Windows-centric because of the ease of using the WebBrowser control (the Internet Explorer engine) or OLE and the object model of Word/Office.)

Eventually, specific display code will be developed to improve performance and to simplify display. Since the top/side bars remain fairly static, a full browser engine is not needed to render them. For the article itself, it will likely