User:Daniel Kinzler (WMDE)/Frontend Architecture

This document describes how MediaWiki's user interface should function. It is intended to provide constraints and guiding principles for feature development.

The front-end architecture defines the flow of information and control between the client device and the application servers (or, more generally, the data storage layer). In practice, this mostly means defining when, where, and how HTML is generated, manipulated, and assembled.

Goal
''Note: this is a straw-man vision. It's here for the sake of discussion.''

Provide two user experiences:

1) A (mostly) data driven single page web application that heavily relies on client side JS and uses a modern framework for state management and template rendering. This should be the default view for both desktop and mobile clients.

2) A (mostly) static view for use by client software that lacks the level of JS support needed for the single page app. This may still have some optional bits of "old school" JS. The static view is served as a full HTML page, rendered server side from the same templates used on the client side by the single page experience.

Both views share the same URLs, and the appropriate view is selected by detecting client capabilities. This could perhaps be done by initially serving the static view, which then gets replaced with the single page app if possible.
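As a sketch, the client-side bootstrap could test for the required features before swapping the static view for the single page app. The concrete feature list and function names here are illustrative assumptions, not a proposed API; per assumption 4 below, Web Worker support could serve as the bar for "has JS support at all".

```javascript
// Illustrative capability check for deciding whether to boot the
// single page app. The feature list is an assumption for this sketch.
function supportsSinglePageApp(global) {
  return Boolean(
    global.Worker &&                // Web Workers (see assumption 4)
    global.history &&
    global.history.pushState &&     // needed for shared URLs between views
    global.fetch                    // API-driven rendering
  );
}

// Usage (browser only): boot the app, otherwise keep the static view.
// if (supportsSinglePageApp(window)) { bootSinglePageApp(); }
```

Serving the static view first and upgrading in place keeps a single URL per page, as required.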

Turn MediaWiki from a monolith into a language-agnostic framework:

Through the use of API routing, dependency injection, server side template rendering and page composition at the edge, eventually allow APIs, HTML output and internal services to be implemented in PHP, JS, or some other language. The freedom this provides however has to be balanced against the overhead of crossing the language barrier, against the requirements of the installation environment, as well as the complexity of managing the deployed application. That is, we want to use this freedom, but use it sparingly.

Assumptions
These are straw-man assumptions, presented here to be challenged!
 * 1) The basic version of MediaWiki has to be installable on a plain LAMP stack without root access (shared hosting use case).
 * 2) We cannot rely on being able to run PHP and JS code in the same process / without the need for communication via the network stack (no v8js in PHP). This implies that we can't call out from PHP to JS code for template rendering, nor can we call PHP code from JS for localization.
 * 3) We cannot rely on Service Workers as a mature technology.
 * 4) We can rely on Web Workers being available on the client. It would be OK to treat clients that do not support Web Workers as having no JS support at all.
 * 5) We want to allow fully data driven UIs (meaning that all HTML is generated on the client). This requires all functionality to be available via an API.
 * 6) We support consumption of all content, including meta-data, on HTML-only devices with no JS support.
 * 7) Not all user workflows have to be supported on no-JS clients.
 * 8) Editing of text-based page content needs to be possible on no-JS clients.
 * 9) Basic curation needs to be possible on no-JS clients (watch, undo/rollback, delete/undelete, protect, block).
 * 10) We do not want to serve different content to different kinds of clients. We may, however, need the ability to serve a "static" or a "dynamic" page depending on client capabilities.
 * 11) We want the ability to serve different renderings/presentations of content depending on the user's preferences (including information from cookies for anons). In particular, we want to be able to serve multi-lingual pages rendered for different target languages. URLs for different renderings (languages) of content should be different, to allow explicit linking to a specific rendering.
 * 12) However, we also want the ability to serve reactive content to the client which adapts to the target device's capabilities.
 * 13) We cannot assume that all functionality can be converted to a data driven approach right away; it will take some time for e.g. all extensions to be converted. We need to cater to transitional stages. We aim to make the most common workflows available via APIs soon.
 * 14) We'd like the ability to render declarative templates on the client as well as on the server.
 * 15) We want consistent localization when rendering on the client or on the server, including message strings, parameter substitution, plural handling, and formatting for numbers and dates. This probably means maintaining a full JS port of the relevant formatting code, since proper data-driven UIs are blocked on this (they need client side formatting of data values).
 * 16) We won't render actual Content (like wikitext) on the client.
 * 17) We want single-page applications to be possible based on our APIs.
 * 18) A single-page approach may become the default for JS enabled clients, while no-JS clients would use the static page views. Static page views and "dynamic" views for the same page should have the same URL, and look much the same (graceful degradation). This should be achieved by re-using the template code used on the client for the dynamic views to generate the static view on the server.
 * 19) The web interface (JS client) should be using the same APIs as the native clients (apps).
 * 20) The web interface (JS client) should be the same for desktop and mobile devices. We expect the line between mobile and desktop to blur and finally disappear over the next 5 years.
 * 21) We want full Multi-Data-Center support. This means all information that is needed to decide whether a request needs to be routed to the master DC needs to be in the request. For the application servers, this mostly means "don't write to the database in GET requests".
 * 22) We want a single source of truth for rendering wikitext (and any other content type) as HTML.
 * 23) We want to expose narrow, stable interfaces for client-side customization (gadgets).
 * 24) It should become possible to implement an API module and an associated special page purely in JS.
 * 25) It should remain possible to implement an API module and an associated special page purely in PHP.
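For the client side of assumption 15, the standard Intl APIs already cover plural selection and number formatting. A minimal sketch follows; the message-bundle shape and `$1` placeholder handling are invented for illustration and are not MediaWiki's actual message format:

```javascript
// Sketch of client side plural handling and number formatting using
// the standard Intl APIs. The bundle format below is invustrative only;
// it is not MediaWiki's actual message format.
const bundle = {
  'watchers': { one: '$1 watcher', other: '$1 watchers' },
};

function formatMessage(lang, key, n) {
  const form = new Intl.PluralRules(lang).select(n); // 'one', 'other', ...
  const text = bundle[key][form] || bundle[key].other;
  return text.replace('$1', new Intl.NumberFormat(lang).format(n));
}
```

For example, `formatMessage('en', 'watchers', 1)` yields "1 watcher", while `formatMessage('en', 'watchers', 1234)` yields "1,234 watchers". A full JS port of the server side formatting code would still be needed wherever Intl and MediaWiki's conventions diverge.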

Components
The following classes of things generate HTML and need to be considered below: skins, ContentHandlers, input forms for Special pages and action handlers, results (listings) for Special pages and action handlers, and dynamic content for some page types (file pages, category pages).
 * Page composition at the network edge (probably using ESI). API for serving HTML snippets (some cacheable globally, some per user, some not at all)
 * This should be used at least to combine the parts of the skin with the page content.
 * Usage of this mechanism is optional. The index.php entry point still needs to deliver a fully composed page by default, so MW is usable without an ESI capable caching layer.
 * This requires a dependency tracking and purging engine (using Kafka as a bus and some graph database for storing the dependency graph). It would be used to re-generate HTML snippets (and other derived artifacts).
 * It would perhaps be useful to support additional massaging/hydration beyond what ESI supports. This would allow us to do localization here, as well as adapt for client devices.
 * ESI requires the caching layer to be able to predict what kind of content it will be getting by looking at the request. This means e.g. that Special pages that serve non-HTML output would not be possible, or would have to be explicitly registered.
 * Unified Dependency Tracking and Purging based on an Event Bus.
 * This makes it easy to introduce new kinds of artifacts or change granularity, without having to implement a tracking and updating mechanism for each use case. This is particularly important for the HTML snippets / ESI, as well as for allowing caching for the APIs that support a data driven UI.
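The purging side of such an engine boils down to a reachability query over the dependency graph. A toy in-memory sketch follows; in a real deployment the graph would live in a graph database and change events would arrive via an event bus (e.g. Kafka), as described above, and all names here are illustrative:

```javascript
// Toy dependency-tracking/purge engine: artifacts declare which inputs
// they depend on; a change event purges everything transitively affected.
class DependencyTracker {
  constructor() { this.dependents = new Map(); } // input -> Set(artifacts)

  register(artifact, inputs) {
    for (const input of inputs) {
      if (!this.dependents.has(input)) this.dependents.set(input, new Set());
      this.dependents.get(input).add(artifact);
    }
  }

  // Returns all artifacts to purge when `input` changes (transitively).
  affectedBy(input) {
    const result = new Set();
    const queue = [input];
    while (queue.length) {
      for (const artifact of this.dependents.get(queue.shift()) || []) {
        if (!result.has(artifact)) { result.add(artifact); queue.push(artifact); }
      }
    }
    return result;
  }
}
```

For example, if a sidebar snippet is registered as depending on a site message, and a page on that snippet, a change to the message purges both the snippet and the page in one query.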
 * Template engine with a JS and a PHP implementation.
 * Only if we have localization implemented in both JS and PHP can templates be fully data driven.
 * If we have PHP code as the single source of truth for l10n handling, templates can:
 * A) Use pre-formatted data: this would probably require an API mode that doesn't return abstract JSON nor rendered HTML, but some kind of intermediate form (which may be annotated HTML). This could be interpreted as being a "view-model" in the sense of an MVVM architecture.
 * B) Call back to PHP for formatting. That's probably too slow, but might be possible in some cases with appropriate batching.
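Option A amounts to rendering a logic-less template against a pre-formatted view-model. A tiny illustrative renderer is sketched below; a real system would use a template language with maintained implementations in both PHP and JS (e.g. Mustache) rather than this toy, and the template syntax here is a placeholder:

```javascript
// Toy logic-less template renderer over a pre-formatted view-model.
// All values arrive already localized and formatted (option A), so the
// template itself needs no l10n logic. Illustrative only.
function render(template, viewModel) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in viewModel ? viewModel[key] : match);
}

// The view-model carries display-ready strings, not raw data:
const viewModel = { title: 'Main Page', lastModified: '12 May 2016' };
const html = render('<h1>{{title}}</h1><p>Last modified {{lastModified}}</p>', viewModel);
```

Because the template contains no formatting logic, the same template source can be rendered by a PHP implementation on the server and a JS implementation on the client.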
 * Rendering Special pages can be HTML based (legacy) or data driven (modern):
 * HTML based special pages should generate annotated "semantic" HTML, somewhat similar to the output of Parsoid, that allows easy massaging for different target devices.
 * Data driven Special pages are just glue that applies template rendering to data returned by an API call. The template rendering would happen on the server or the client, as need be.
 * A common REST API interface for functionality implemented in PHP, JS, or whatever other language.
 * Clients should not be aware of where and how an API is implemented.
 * Routing should be possible in the CDN layer / load balancer.
 * Routing should also be possible inside MW core, so it is available without a CDN layer.
 * The new API should map to the existing action API for most if not all cases, so we don't have to re-implement all API functionality.
 * Endpoint for serving HTML snippets (for use by ESI).
 * Must at least serve all bits needed for the skin and rendered page content (including special pages, action handlers, etc)
 * Could also serve bits of composite page content (e.g. infoboxes)
 * Blurry distinction from template rendering API
 * JS framework that maps between a data model and the DOM, and manages API calls to the backend (MVC/MVVM).
 * This kind of framework is designed for a fully data driven environment, with all rendering done in JS. However, we will still have HTML snippets coming from the backend, at the very least for rendered page content. These snippets need to be integrated into the DOM, and they may need massaging/hydration.
 * This requires JS template rendering (see the template engine above for the related l10n issues).
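At the core of such a framework sits an observable data model that the view layer subscribes to; a hydrated server snippet can then hook into the same model as a client-rendered view. A minimal observable sketch, with illustrative names rather than a proposed API:

```javascript
// Minimal observable model underlying an MVC/MVVM binding layer.
// A view (a client-rendered template, or a server snippet after
// hydration) subscribes and updates the affected DOM on change.
class ObservableModel {
  constructor(data) { this.data = { ...data }; this.listeners = []; }
  subscribe(fn) { this.listeners.push(fn); }
  set(key, value) {
    this.data[key] = value;
    for (const fn of this.listeners) fn(key, value);
  }
  get(key) { return this.data[key]; }
}
```

Hydration of a backend snippet would then mean: insert the HTML into the DOM, locate its interactive elements, and subscribe them to the shared model exactly as client-rendered views are.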
 * A single REST API for accessing rendered page content (most importantly for wikitext), which yields output similar to the output of Parsoid.
 * This should probably be backed by a PHP-based Parsoid port, to avoid calling back from JS to PHP code for each template, parser function, etc.
 * A REST API for asset delivery
 * For JS and CSS resources, associated icons
 * For localization resources (message bundles)
 * Should make aggressive use of caching on all levels.
 * Not for embedded media (probably)
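On the client, aggressive caching of message bundles and similar assets mostly means versioned URLs plus a cache that deduplicates loads. A sketch with an injected loader follows; the versioning scheme and all names are invented for illustration:

```javascript
// Sketch of a client-side cache for message bundles fetched from an
// asset-delivery REST API. Versioned cache keys make entries immutable
// and thus safe to cache indefinitely at every layer (browser, CDN,
// and here). The loader and key scheme are illustrative assumptions.
class BundleCache {
  constructor(loader) { this.loader = loader; this.cache = new Map(); }

  get(lang, version) {
    const key = `${lang}@${version}`; // versioned => safe to cache forever
    if (!this.cache.has(key)) this.cache.set(key, this.loader(lang, version));
    return this.cache.get(key);
  }
}
```

Repeated requests for the same language and version never hit the loader twice; a new deployment simply bumps the version, so no explicit invalidation is needed on the client.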

TBD:
Show which assumption informs which component design.

Layers of output synthesis:
 * 1) Full HTML page / DOM
 * 2) HTML snippets (exposed by API, used by JS and ESI)
 * 3) Pre-formatted data (view-model, exposed by API, used for template rendering)
 * 4) external data model (JSON, exposed by API, needs l10n-aware formatting)
 * 5) internal data model (PHP, not exposed)