Manual:MediaWiki architecture

This page contains the actual text of the MediaWiki architecture document project. This content will be merged into the appropriate mediawiki.org pages once the specific formatting required for inclusion in the AOSA book is no longer needed.

We are tentatively using an "architecture through history" approach, where MediaWiki's history is told chronologically, and its different parts presented along a timeline, surrounded by the context (reasons, assumptions, constraints) in which they were introduced.

Introduction: Wikipedia's software

 * context
 * Wikipedia and sister sites
 * optimization and performance driven by the requirements of high-profile sites like those operated by Wikimedia


 * software originally written for a very specific purpose: serving a community for the creation & curation of knowledge; not the case for most CMSes.
 * constant design decisions that never come up in other CMSes
 * openness of wikis; technology response: abusefilter, etc.
 * the community defines roles and processes (to deal with vandalism, for example) that don't come up in other contexts, calling for technological answers that regular CMSes don't need; other CMSes target corporate customers and existing workflows
 * needs of the community
 * collaboration
 * the community and its needs evolved over time; it wouldn't have been possible to foresee the features the community would need
 * interplay of editing community and development

The architecture of MediaWiki has often been driven by initiatives started or requested by the community, e.g. the creation of Wikimedia Commons, FlaggedRevs, Wiki Loves Monuments.


 * Performance
 * Security
 * Languages
 * open-source


 * Regular releases and installer. Making sure the software was easy to set up was a BIG factor in getting more volunteer developers involved, both directly (interested people facing a low barrier to entry before starting work) and indirectly (easier third-party usage, leading to people sending fixes and customizations upstream).


 * Deploy-then-release


 * endless compromise between performance and features, e.g. the template system degraded performance; one possibility is to turn templates into dedicated extensions


 * hurt performance: the awful, idiosyncratic syntax makes it harder to plug in better-performing parser components


 * Factors that contribute to a 'performance culture':
 * We have a few very expert people on board who know a lot about performance optimization (DB performance is a big chunk of it, but the rest is important too)
 * MediaWiki must run on a top-ten web site. Things that would not scale to that size are fixed, reverted, or put behind a config var
 * MediaWiki must run on a top-ten web site operated on a shoestring budget, which has additional implications for performance and caching


 * a couple of slides from the talk at CCC in 2004

Phase I: UseModWiki
Wikipedia was launched in January 2001. At the time, it was mostly an experiment, to try and boost the production of content for Nupedia, a free-content, but peer-reviewed, encyclopedia created by Jimmy Wales. Because it was an experiment, Wikipedia was originally powered by UseModWiki, an existing GPL wiki engine written in Perl, using CamelCase and storing all pages in individual text files.

It soon appeared that CamelCase wasn't really appropriate for naming encyclopedia articles. In late January 2001, UseModWiki developer and Wikipedia participant Clifford Adams added a new feature to UseModWiki: free links, i.e. the ability to link to pages with a special syntax (double square brackets) instead of automatic CamelCase linking. A few weeks later, Wikipedia upgraded to the new version of UseModWiki supporting free links, and enabled them.

While this initial phase isn't about MediaWiki per se, it provides some context and shows that, even before MediaWiki was created, Wikipedia started to shape the features of the software that powered it. UseModWiki also influenced some of MediaWiki's features, for example its basic text formatting syntax.

Phase II: the PHP script
In 2001, Wikipedia was not yet a top 10 website; it was an obscure project sitting in a dark corner of the Interwebs, unknown to most search engines, and hosted on a single server. Still, performance was already an issue, notably because UseModWiki stored its content in a flat file database. At the time, Wikipedians were worried about being "inundated with traffic" following articles in the New York Times, Slashdot or Wired.

In Summer 2001, Wikipedia participant Magnus Manske (then a university student) started to work on a dedicated wiki engine for Wikipedia in his free time. The goal was to improve Wikipedia's performance using database-driven software, but also to be able to develop Wikipedia-specific features that couldn't be provided by a "generic" wiki engine. Written in PHP and backed by MySQL, the new engine was simply called the "PHP script", "PHP wiki", "Wikipedia software" or "Phase II".

The "PHP script" was made available in August 2001, shared on SourceForge in September, and tested until late 2001. As Wikipedia suffered from recurring performance issues because of increasing traffic, the English language Wikipedia eventually switched from UseModWiki to the PHP script in January 2002. Other language versions also created in 2001 were slowly upgraded as well, although some of them would stay powered by UseModWiki until 2004. An automated program, called "User:Conversion script", converted the last version of the existing articles to the phase II format; previous revisions of the articles' history pre-January 2002 were partly restored by developer Brion Vibber in September 2002.

As PHP software backed by a MySQL database, the PHP script was the first iteration of what would later become MediaWiki. It also introduced many critical features still in use today, like namespaces to organize content (including talk pages), skins, and special pages (including maintenance reports, a contributions list and the user watchlist).

Phase III: MediaWiki
Despite the improvements from the PHP script and database back-end, the combination of increasing traffic, expensive features and limited hardware continued to cause performance issues. In 2002, developer Lee Daniel Crocker rewrote the code again, calling the new software "Phase III". Because the site was experiencing frequent difficulties, Lee thought there "wasn't much time to sit down and properly architect and develop a solution", so he "just reorganized the existing architecture for better performance and hacked all the code".

The Phase III software kept the same basic interface, and was designed to look and behave as closely to the Phase II software as possible. A few new features were also added, like a new file upload system, side-by-side diffs of content changes, and interwiki links.

It was deployed to the English Wikipedia in July 2002, along with a hardware move to a new (but still single) server. Other features were added over 2002, like new maintenance special pages, or the "edit on double click" option. Performance issues quickly reappeared, though: for example, view counts and site statistics, which caused two database writes on every page view, were disabled in November 2002, then re-enabled. The site would occasionally be switched to read-only mode to maintain the service for readers, and expensive maintenance pages would be disabled during high-access times because of table locking problems.

In early 2003, developers discussed whether they should properly re-engineer and re-architect the software from scratch, before the fire-fighting became unmanageable, or whether they should continue to tweak and improve the existing code base. They chose the latter solution, mostly because the developers were sufficiently happy with the code base, and confident enough that further iterative improvements would be enough to keep up with the growth of the site.

In June 2003, a second server was added, serving as a database server separate from the web server; it also acted as the web server for non-English Wikipedia sites. Load balancing between the two servers would be set up later that year. A new page caching system was also enabled; it used the file system to cache rendered, ready-to-output pages for anonymous users.

June 2003 is also when Jimmy Wales created the Wikimedia Foundation, a nonprofit to support Wikipedia, and manage its infrastructure and day-to-day operations. The "Wikipedia software" was officially named "MediaWiki" in July, as a word play on the Wikimedia Foundation's name. What was thought at the time to be a clever pun would confuse generations of users and developers.

New features were added in July, like the automatically generated table of contents and the ability to edit page sections, both still in use today. The first release under the name "MediaWiki" happened in August 2003, concluding the long genesis of software whose overall structure would remain fairly stable from then on.

The coming of age
2003
 * December 2003: MediaWiki 1.1: User-editable interface messages through "MediaWiki namespace" and Magic words

2004
 * January 2004: still performance problems
 * January 29th, 2004: new features: Edit toolbar "can be enabled in prefs (works perfectly in IE, near perfect in Mozilla, not so great in most others); will be refined and possibly made default in the future", Extended image syntax "allowing automatic generation of small versions of images, and alignment of images without HTML"
 * January 30, 2004: Nine new Wikimedia servers, purchased using about $20K of the money generously donated in the December/January fundraising drive. Brought online over February 2004
 * Late May 2004: MediaWiki 1.3
 * December 12, 2004: Spam blacklist


 * Categorization system
 * 1.3: The MonoBook skin, categories, templates and extensions.
 * 1.5 schema refactor; Proposed Database Schema Changes/October 2004

2005
 * January 2005: MediaWiki 1.4 on the English Wikipedia
 * Hooks!
 * Major database redesign decoupling text storage from revision tracking
 * Page content must be encoded in UTF-8.

2006
 * ParserFunctions extension
 * API?

2007
 * Gadgets extension

2008
 * FlaggedRevisions extension
 * May: CentralAuth & Unified login
 * 1.12 - preprocessor rewrite (Tim); improvements to template performance

2009-2010
 * ResourceLoader; beginning of strong JavaScript module APIs
 * Usability Initiative (Vector skin, WikiEditor extension, etc.)

Use

 * reusability of MediaWiki by other people, not just for our purposes; it used to be difficult to install (command-line installer, many references to Wikipedia, hardcoded paths); now easier with the web-based installer
 * installer: a quick note about the installer would be good. During the early days (ask Tim Starling), you had to run a shell script to install MediaWiki. Then installation became a matter of just uploading the files and running the /config/ wizard.
 * Experimental web-based installer since 1.2 (March 2004)
 * Making it FLOSS since the very beginning was very important for its popularity. MediaWiki has become the 800-lb gorilla of wiki software, and it wouldn't have happened with a closed development model.
 * Reusing MediaWiki to build commons: not adapted to handling millions of media files.

backwards compatibility

 * Some aspects, such as hooks or configuration variables, remain very stable for a long time. When they change, they typically go through a slow deprecation process to allow users and extension authors to catch up.
 * However, our internal APIs change all the time, which can be frustrating to extension authors (and even core devs!)
 * I think this will improve in the coming releases though. A lot of our "omg rewrite" situations in the past ~2 years have been to bring ancient code into the 21st century.


 * Inconsistent. Such is the result of having many volunteer developers with many different opinions on this fraught issue.


 * Most old methods/functions are kept in the code; nowadays they are marked as deprecated and removed after 2 or 3 releases. We still support skins that are 10 years old.
 * The wikitext parser and renderer still support hacks we really want to remove for performance reasons.

PHP

 * Unprefixed class names.
 * PHP core and PECL developers have the attitude that all class names that are English words inherently belong to them, and that if any user code is broken when new classes are introduced, it is the fault of the user code.
 * e.g. bug 12294: the Namespace class was renamed to MWNamespace for PHP 5.3 compatibility, fixed in 1.13
 * Prefixing, e.g. with "MW", would have made it easier to embed MediaWiki inside another application or library (see the sketch below).
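
As an illustration, a hedged sketch of the collision (the class body is elided):

 // Before PHP 5.3, MediaWiki could declare a class named "Namespace".
 // PHP 5.3 made "namespace" a reserved word, so the declaration below
 // became a parse error and the class was renamed (bug 12294):
 // class Namespace { ... }
 class MWNamespace {
     // namespace-related helpers (body elided)
 }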


 * We try to do things cleanly if there are benefits to it (e.g. separation of logic and output in the architecture) but at the same time we're not afraid to ignore standards or rules if that's better for us (e.g. not fully complying with stupid provisions of HTML4, denormalizing the DB schema where that brings performance benefits)


 * MediaWiki seems to have been started mostly by people who weren't really very expert in their field, and as a result a lot of ugly old code is lying around that lacks proper logic/view separation and has other nasty issues

MediaWiki grew organically and is still evolving. It's hard to criticise the founders for not implementing some abstraction which we now find to be critical, when the initial codebase was so small, and the time taken to develop it so short.


 * relying on PHP/MySQL: probably not the best choices for performance, but they are very popular, which facilitates the recruitment of new devs
 * hurt: PHP has not benefited from the performance improvements that some other dynamic languages have seen in recent years (e.g. JavaScript VMs now have aggressive JITs, but Zend's PHP still doesn't ship an opcode cache, much less try to actually compile anything)
 * Obviously using Java would have been much better for performance, and scaling up the execution of backend maintenance tasks would have been simpler.

Configuration

 * Globals for configuration -- partially this is because we started out in the PHP4 world, but it has really hurt 3rd parties over time and made the software seem rather difficult to configure and maintain (see the sketch below)


 * Configuration variables are placed in the global namespace.
 * This had serious security implications with register_globals.
 * It limits potential abstractions for configuration, and makes optimisation of the startup process more difficult.
 * The configuration namespace is shared with variables used for registration and object context, leading to potential conflicts.


 * the configuration system hurt MediaWiki's performance.
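
To make the pattern concrete, here is a minimal LocalSettings.php fragment; the values are examples, but the variables are standard MediaWiki settings:

 # LocalSettings.php: configuration is plain PHP assignments to
 # "$wg"-prefixed global variables, evaluated on every request.
 $wgSitename = "My Wiki";       # site name shown in the interface
 $wgDBserver = "localhost";     # MySQL server
 $wgDBname   = "wikidb";        # database name
 $wgEnableUploads = true;       # allow media file uploads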

Structure and classes

 * main classes

We've seen major new architectural elements introduced to MediaWiki throughout its history, for example:


 * The Parser class
 * The SpecialPage class
 * The Database class
 * The Image class, then later the media class hierarchy and the filerepo class hierarchy
 * ResourceLoader
 * The upload class hierarchy
 * The Maintenance class
 * The action hierarchy

MediaWiki started without any of these things, despite the fact that all of them support features that have been around since the beginning. Many developers are driven primarily by feature development -- architecture is often left behind, only to catch up later as the cost of working within an inadequate architecture becomes apparent.


 * Internal API


 * Special pages
 * June 2003: PageHistory:Page name, UserContributions:User name, BackLinks:Page name

Database
MediaWiki has been using a relational database back-end since the Phase II software.

The default (and best supported) DBMS for MediaWiki is MySQL, which is the one used by all Wikimedia sites. Because alternative DBMSes (e.g. PostgreSQL, Oracle, SQLite) are not a priority for Wikimedia, their support has been slow to arrive and inconsistent. For example, PostgreSQL support was "largely working" in MediaWiki 1.4, dropped in 1.5, experimental in 1.7, and fully restored in 1.8. In recent years, volunteer developers have made efforts to support alternative DBMSes, to cater for the needs of third-party users. The DBMS can be chosen by the system administrator during installation, and MediaWiki provides both a database abstraction and a query abstraction layer, which simplify database access for developers.
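
A hedged sketch of the query abstraction layer (the table and fields are from the core schema; error handling is omitted):

 // Get a connection to a read-only slave from the load balancer.
 $dbr = wfGetDB( DB_SLAVE );
 // SELECT page_id,page_title FROM page WHERE page_namespace=0 LIMIT 10;
 $res = $dbr->select(
     'page',                           // table
     array( 'page_id', 'page_title' ), // fields
     array( 'page_namespace' => 0 ),   // WHERE conditions
     __METHOD__,                       // caller name, for profiling
     array( 'LIMIT' => 10 )            // options
 );
 foreach ( $res as $row ) {
     // each $row exposes page_id and page_title as properties
 }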

The database went through dozens of schema changes over the years, the most notable being the decoupling of text storage and revision tracking in MediaWiki 1.5. This change, aiming to make better use of the database cache and disk I/O, resulted in significant performance boosts for some operations, like rename and delete operations on pages with very long edit histories.

The current layout contains dozens of tables, many of which relate to the wiki's content (e.g. page, revision, text). Other tables include data about users, media files, caching and internal tools (e.g. msg_resource for ResourceLoader, job for the job queue), among others.

Because of the size of Wikimedia sites, their databases, and their readership, database handling in MediaWiki is extremely optimized for performance. As of 1.4.5 (2005), MediaWiki can store page text externally on a separate database server, and old page revisions can be compressed to reduce the size of the database. Indices and summary tables are used extensively in MediaWiki, since SQL queries that scan huge numbers of rows can be very expensive, particularly in the context of Wikimedia. Unindexed queries are usually discouraged in MediaWiki.

MediaWiki has built-in support for load balancing, added as early as 2004 in MediaWiki 1.2 (when Wikipedia got its second server — a big deal at the time). Load balancing is now a critical part of Wikimedia's infrastructure, which explains its influence on some algorithm decisions in the code. Also visible in the code is the influence of Wikimedia's multi-database environment, where writes are made to a single master database and then replicated to read-only slaves. MediaWiki natively manages replication lag, a concern few software developers usually have to worry about.


 * hurt performance: MySQL has had a few specific areas it's lagged in that have been problematic:
 * lack of full native UTF-8 support (this has finally landed in the latest versions, but you have to jump through some hoops and we have years of legacy databases)
 * no or limited online alter table makes even simple schema changes painful to deploy, slowing some development
 * data dump format is very hard to parallelize well; even with few changes to the database it takes forever to build one due to the compression.

Workflow of a request

 * index.php dispatches requests to the MediaWiki class, or to the SpecialPage class for special pages; regular page views go through the Title and WikiPage classes, backed by the page, revision and user tables (see the sketch below)
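
A hedged sketch of that dispatch flow, simplified to its main steps (the actual entry-point code differs between versions):

 // index.php sets up the environment, then hands off to the MediaWiki
 // class. The requested title becomes a Title object:
 $title = Title::newFromText( $wgRequest->getVal( 'title' ) );
 if ( $title->isSpecialPage() ) {
     // Special: pages are routed to the matching SpecialPage subclass.
     SpecialPage::executePath( $title );
 } else {
     // Regular views go through WikiPage/Article, backed by the
     // page and revision tables (and user, for attribution).
 }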

Caching
MediaWiki itself is optimized for performance because it plays a central role on Wikimedia sites, but it is also part of a larger operational ecosystem that has influenced its architecture. Wikimedia's caching infrastructure has imposed limitations on MediaWiki; developers worked around the issues, not by trying to shape Wikimedia's extensively optimized caching infrastructure around MediaWiki, but rather by making MediaWiki more flexible, so it could work within that infrastructure without compromising on performance and caching needs.

On Wikimedia sites, most requests are handled by reverse caching proxies (Squids), and never even make it to the MediaWiki application servers. Squids contain static versions of entire rendered pages, served for simple reads to users who aren't logged in to the site. MediaWiki natively supports Squid and Varnish, and integrates with this caching layer by, for example, notifying them to purge a page from the cache when it has been changed.

For logged-in users, and other requests that can't be served by Squids, Squid forwards the requests to the web server (Apache), adding the "X-Forwarded-For" header containing the remote address; preserving the client's address is important since MediaWiki uses IPs for a variety of purposes, e.g. to block users and to credit edits made without a user account.

The second level of caching happens when MediaWiki renders and assembles the page from multiple objects, many of which can be cached to minimize future calls. Such objects include the page's interface (sidebar, menus, UI text) and the content proper, parsed from wikitext. The in-memory object cache has been available in MediaWiki since the early 1.1 version (2003), and is particularly important to avoid re-parsing long and complex pages. The default object/parser cache uses memcached.
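
A minimal sketch of the object cache interface ($wgMemc and wfMemcKey() are from core; computeExpensiveResult() is a hypothetical placeholder):

 global $wgMemc;
 // wfMemcKey() builds a cache key prefixed with the wiki's ID.
 $key = wfMemcKey( 'mytool', 'expensive-result' );
 $data = $wgMemc->get( $key );
 if ( $data === false ) {
     // Cache miss: recompute and store for an hour.
     $data = computeExpensiveResult();
     $wgMemc->set( $key, $data, 3600 );
 }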

In 2011, Wikimedia developed a disk-backed object cache to supplement memcached, and increase the amount of space dedicated to caching parsed pages. The system uses a MySQL back-end, splitting the cache into several tables, to avoid table lock contention on servers with a high write load. This feature, now included in MediaWiki, tripled Wikimedia's parser cache hit ratio, now over 90%.

Login session data can also be stored in memcached, which lets sessions work transparently on multiple front-end web servers in a load-balancing setup (Wikimedia heavily relies on load balancing, using LVS with PyBal).

Since version 1.16, MediaWiki uses a dedicated object cache for localized UI text; this was added after noticing that a large part of the objects cached in memcached consisted of UI messages localized into the user's language. The system is based on fast fetches of individual messages, minimizing memory overhead and start-up time in the typical case.

The last caching layer consists of the PHP opcode cache, commonly enabled to speed up PHP applications. Compilation can be a lengthy process; to avoid compiling PHP scripts into opcode every time they're invoked, a PHP accelerator can be used to store the compiled opcode and execute it directly without compilation. MediaWiki will "just work" with many accelerators such as APC, PHP accelerator and eAccelerator.

Because of its Wikimedia bias, MediaWiki is optimized for this complete, multi-layer, distributed caching infrastructure. Nonetheless, it also natively supports alternate setups for smaller sites. For example, it offers an optional simplistic file caching system that stores the output of fully rendered pages, like Squid does. Also, MediaWiki's abstract object caching layer lets it store the cached objects in a number of different places, including the file system, the database, or the opcode cache.

ResourceLoader
As in many web applications, MediaWiki's interface has become more interactive and responsive over the years, mostly through the use of JavaScript. Usability efforts initiated in 2008, as well as advanced media handling (e.g. online editing of video files), called for dedicated front-end performance improvements.

ResourceLoader was developed to optimize the delivery of JavaScript and CSS assets. Started in 2009, it was completed in 2011 and has been a core feature of MediaWiki since version 1.17. ResourceLoader works by loading JS and CSS assets on demand, thus reducing loading and parsing time for unused features, for example in older browsers. It also minifies the code, groups resources to save requests, and can embed images as data URIs.
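
A hedged sketch of how an extension registers a module with ResourceLoader (the module and file names are hypothetical):

 $wgResourceModules['ext.myExtension'] = array(
     'scripts'       => 'ext.myExtension.js',  // minified, loaded on demand
     'styles'        => 'ext.myExtension.css',
     'dependencies'  => array( 'jquery.ui.dialog' ),
     'localBasePath' => dirname( __FILE__ ),
     'remoteExtPath' => 'MyExtension',
 );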

ResourceLoader is a particularly interesting case, as it's one of the few core components of MediaWiki that benefited from proper architecting prior to development. The main reason is that its developers not only wanted to "do it right", but also had the opportunity to do so, as the Wikimedia Foundation grew its engineering staff and more developers were available.

Security

 * Strongly focusing on security by providing wrappers around HTML output and DB queries that handle escaping for you, and making their use pretty much mandatory. This means that everyone is expected to write secure code, while at the same time writing secure code is made easy so everyone can do it. Thanks mostly to Tim Starling, we have institutionalized a security-minded development culture, and I think that contributes to the low number of security flaws found in MediaWiki.


 * WebRequest / Sanitizer
 * user input sanitization: the WebRequest class grabs parameters supplied by the user and makes them safe, to avoid code injection (see the sketch below)
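
A minimal sketch of both wrappers (the 'id' and 'comment' parameters are hypothetical; $wgRequest is the global WebRequest object):

 global $wgRequest;
 // WebRequest getters return typed, sanitized values, never raw input:
 $id = $wgRequest->getInt( 'id' );            // coerced to an integer
 $comment = $wgRequest->getText( 'comment' ); // normalized text
 // Html::element() escapes attributes and text content on output:
 $html = Html::element( 'span', array( 'class' => 'comment' ), $comment );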

Languages

 * omnipresent in MediaWiki
 * We are fully committed to internationalizing our software in any imaginable language. This i18n support is quite pervasive and impacts many parts of MediaWiki; despite that, we stuck with it, and we now have a very feature-rich i18n system.

some languages are really rare on the Internet; Wikimedia supports them, through MediaWiki

the reason why MediaWiki is so good at l10n is that content exists in so many languages


 * things we support that 'normal' people never think about:
 * internationalization in 350+ languages
 * the ability to have the interface and content in different languages
 * right-to-left languages
 * mixed directionality (i.e. interface language and content language have opposite directionality)

Content language

 * Using per-language encodings led to a lot of issues; eventually everything migrated to UTF-8, which makes things easier when you deal with hundreds of different languages. Per-language encoding was eliminated in 1.5, along with the "big schema change".

All languages have been available in UTF-8 mode since MediaWiki 1.3 (August 2004); UTF-8 became mandatory with 1.5, when Latin-1 support was dropped.

Interface language

 * localization and internationalization
 * interface messages
 * 1.9: Localized special pages
 * namespaces, magic words and special page names: another area of l10n, not as easy to translate; translating those can be disruptive and break bots, etc.


 * l10n and JavaScript


 * 2003: translators provided PHP patches and sent them to wikitech-l

called "Database messages". performance hit
 * MediaWiki 1.1, December 2003: The MediaWiki namespace, a new MediaWiki feature for interface translation and customization by sysops, has been switched on.
 * on by default since 1.2 (March 2004)
 * end of 2006: moving consistently from patch submitting to onwiki contributions
 * problem: language files shipping with MediaWiki are no longer up-to-date, and translation needs to happen on every wiki


 * Users can select from the available localizations to override the default user interface language: since 1.4
 * Traditional/Simplified Chinese conversion support since 1.4


 * translatewiki.net & translate extension
 * translate extension also used to translate pages


 * documentation in qqq
 * message keys in qqx


 * there were a lot of different l10n implementations when Siebrand came in (especially in extensions); he worked to standardize them, and now there is only one standard
 * the MediaWiki standard: PHP arrays of key-value pairs (see the sketch after this list)
 * chosen for legacy reasons and for flexibility, e.g. plural forms governed by multiple variables (not possible in gettext, for example)
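
A minimal sketch of the format (the message keys are hypothetical); note the PLURAL magic word embedded in the value, something gettext cannot express per variable:

 $messages['en'] = array(
     'mytool-summary'  => 'This page has $1 {{PLURAL:$1|revision|revisions}}.',
     'mytool-greeting' => 'Welcome, $1!',
 );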


 * influence of languages on caching layer
 * resolving speed issues in displaying the interface in the user's language
 * at one time, about 30% of cluster CPU was i18n
 * the localisation cache, introduced in 1.16, caches messages either in .cdb (constant database) files or in the database


 * LocalisationUpdate extension
 * compares the messages used on a local wiki to those used in the svn repo, and updates all the messages whose English version hasn't been customized
 * activated in Sep. 2009
 * when implemented, it really made a difference; the WMF had stopped running trunk, so localizations weren't getting updated and translators were getting dissatisfied
 * now translations are live within 24 hours

e.g. until recently, RTL and bidirectional support was pretty weak (strange overlaps, etc.); changes to improve this landed in 1.18.

Until 2-3 months ago, MediaWiki didn't have any dedicated language support developer; the WMF took it upon itself to start the i18n/l10n team to make the next big push.

2 types of technologies (to be pushed in the coming months):
 * for the reader: WebFonts; if a webpage contains a script you can't display, it will automatically download a font so you can display it
 * InputMethod / key mapping functionality: Narayam


 * fallback languages


 * parameters and switches: messages take numbered parameters ($1, $2, ...)
 * plural, gender and grammar switches adapt the message text to its parameters and context (see the sketch below)
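
A minimal sketch of consuming such a message (wfMessage() is the 1.18-era interface, older code used wfMsg/wfMsgExt; the key is the hypothetical one above):

 // numParams() formats the number for the user's language; the PLURAL
 // switch inside the message then picks the correct form.
 $text = wfMessage( 'mytool-summary' )->numParams( $count )->text();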

Authentication

 * A cleaner accounts system that spanned multiple sites from the beginning would have saved lots of trouble; CentralAuth is still a bit hacky to work with.

Permissions & blocks

 * Lack of a unified, pervasive permissions concept.
 * This stymied the development of new user rights and permissions features, and led to various security issues.


 * User rights management within the wiki: since 1.2 (March 2004)
 * 1.8: Allow blocks on anonymous users only.

Content structure

 * Page titles: CamelCase, then free links, which allowed for greater flexibility in link text and page names and made them less confusing. Free links have since become the de facto standard for internal links in most wiki software.
 * subpages (/talk subpages initially)
 * namespaces, parentheses in titles; related to the abandonment of subpages, decided by Larry Sanger in November 2001 after intense debate.


 * categories: since 1.3 (August 2004)

Namespaces

 * namespaces
 * The flat namespace for articles is too simple: for Wikipedia it encourages overly long pages (which leads to performance problems, since we have to parse and copy around huge chunks of text that will not usually be read all at once, and makes it harder to navigate to relevant, more digestible chunks of data). For other sites like Wikibooks, Wikisource, Wikiversity, heck even mediawiki.org, we could benefit a lot from more structured entities that consist of multiple subpages. It also means it's harder to separate draft or provisional pages from the published article space.


 * Magnus Manske implemented them in the first PHP script, then they were reimplemented a few times
 * way to separate the different spaces for the community to evolve
 * created the necessary preconditions for meta-level discussions, community processes and user profiles

Wikitext & Parser

 * a tokenizer to parse wikitext (JeLuF wrote it); unfortunately, poor performance due to PHP array memory allocations led to a revert after 3 days of running it on the live site. We have been back to the huge pile of regexps since then.
 * A cleaner markup syntax near the beginning would have simplified our lives a lot with templates and editing tools.


 * The parser wasn't formally spec'd from the beginning -- it just morphed and evolved as needs demanded. This makes it difficult for alternative parsers to exist, and has made changing the parser hard. The parser's spec is whatever the parser spits out, plus a few hundred test cases.


 * The parser has to remain very very stable. Hundreds of millions of wikipages worldwide depend on the parser to continue outputting HTML the way it always has. It makes changing the parser difficult.

Making wikitext such a complex and idiosyncratic language that parsing it with 'normal' parsers is very hard was definitely a bad move, and we're feeling the pain now


 * Editing toolbar for learning wiki syntax: since 1.2 (March 2004)

1.12: preprocessor rewrite: "The main motivation for making these changes was performance. The new two-pass preprocessor can skip "dead branches" in template expansion, such as unfollowed #switch cases and unused defaults for template arguments. This provides a significant performance improvement in template-heavy test cases taken from Wikipedia."

Editing

 * metadata (categories, interwiki links) lives in the body of the text; it should really have been stored in a separate table, behind a dedicated interface.


 * The visual editor project is way overdue. We're fixing it now, and that's good, but it's kind of ridiculous that, in 2011, the main interface of one of the largest sites on the web is still a <textarea> from the 90s

Templates
1.3 (August 2004): templates were expanded with parameters, and separated from the MediaWiki: localization scheme. 1.6: default values for template parameters.


 * While we have plenty of things to whinge about in the syntax, management etc, the ability to create partial page layouts and reuse them in thousands of articles with central maintenance has been a big boon.


 * The template argument default parameter feature was ultimately the most costly feature in terms of performance: it enabled the construction of a functional programming language implemented on top of PHP.

ParserFunctions

 * ParserFunctions never should've seen the light of day. Granted, we were responding to the needs of the time, but if we did it over, we probably should've taken the time to solve the problem properly rather than shipping ParserFunctions as a stopgap measure. Some pages onwiki (including their older revisions) document interesting history on the subject.

Collaboration and wiki features

 * Automatic merging of edit conflicts when possible since 1.3 (August 2004)
 * Notifications: RSS, e-mail
 * RSS 2.0 & Atom 0.3 feeds for Recent Changes and New Pages since 1.3; upgraded to Atom 1.0 in MW 1.6
 * RSS for watchlist since 1.16


 * Other features related to collaboration, and how they influenced the architecture of MW ?


 * spam, logs, vandalism
 * 'Recent changes patrol' to mark new edits that haven't yet been viewed: MW 1.4
 * rel="nofollow" activated and applied to all external links to prevent linkspam since 1.4


 * 1.5: "Updates to user talk pages and watchlist entries can optionally send e-mail notifications."
 * 1.9: Undo
 * 1.10: cascading protection
 * 1.14: edit notices

Media

 * DB & filesystem storage layout for media files is very awkward, with a number of problems that hinder our ability to mirror, cache, and do major online maintenance.


 * Image resizing and thumbnail generation: since 1.2 (March 2004)


 * File repository code could've been done slightly differently. Ideally, wikis should be able to upload *to* foreign repos, rather than just read from them. Also, most of the code assumes a local filesystem or NFS, which isn't very flexible -- other backends like the database, Swift, etc. shouldn't be so hard to add.


 * Replication support


 * SVG rasterization support since 1.4
 * thumbnails for DJVU files and multipage DJVU display support since 1.8


 * InstantCommons & ForeignRepo
 * External images

Beyond MediaWiki

 * Several levels:


 * System administrator:


 * Wiki administrator:


 * Common.js, Common.css, and skin-specific counterparts
 * Gadgets (per wiki, but can be enabled/disabled by each user)
 * New developments on gadgets (central repo, UI improvements)


 * User:


 * User:Foo/vector.css, User:Foo/common.js

Side programs

 * interaction with other pieces of software (e.g. LaTeX, ImageMagick, etc.)
 * January 6, 2003: Inline TeX math formulas are now supported

API

 * w:User:Yurik/Query API
 * API:Wikimania 2006 API discussion
 * 1.9: "experimental machine API interface" enabled by default, read-only
 * The creation of api.php, and the addition of write actions (including edit) to it; a sample query follows
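
For example, a typical read query against api.php fetches the current wikitext of a page (a standard request, parameters abbreviated):

 api.php?action=query&prop=revisions&rvprop=content&titles=Main%20Page&format=xml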

Skins & extensions

 * hooks; http://thread.gmane.org/gmane.science.linguistics.wikipedia.technical/14227
 * Extension architecture: fairly flexible infrastructure which has helped us make specialized code more modular, keeping the core software from expanding (too) much and making it easier for 3rd-party reusers to build custom stuff on top (see the sketch below).
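
A minimal sketch of the hook mechanism (the handler name is hypothetical; ArticleSaveComplete was a real core hook of this era):

 // An extension's setup file appends a callback to a named event;
 // core then invokes every registered handler via wfRunHooks().
 $wgHooks['ArticleSaveComplete'][] = 'MyExtension::onArticleSaveComplete';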


 * The skin system has been terrible since the beginning. It's damn near impossible for 3rd parties to write their own custom skins without reinventing the wheel.


 * Extension registration based on code execution at startup rather than cacheable data.
 * Limits abstraction and optimisation

"Optional modules" in 1.3: WikiHiero, Timeline


 * 1.4: "Skins system more modular: templates and CSS are now in /skins/ New skins can be dropped into this directory and used immediately."
 * "More extension hooks have been added. Authentication plugin hook."


 * Extensions are sometimes moved to Core


 * the extension registration system hurt MediaWiki's performance.

Gadgets and site/user JS/CSS

 * hugely impactful: this has greatly increased the democratization of MediaWiki's software development. Individual users are empowered to add features for themselves; power users can share these with others, both informally and through globally-configurable, admin-controlled systems.


 * Using jQuery for JavaScript.

Future

 * Current RfCs, etc.
 * New parser (specification), wiki dom, Visual editor


 * MediaWiki is a tool that's used for very different purposes: media categorization, management of structured data, curation of content, patrolling of content, corporate CMS
 * we'll see an increased need for specialized uses
 * e.g. MediaWiki used for Commons: there are benefits to doing that, but it wasn't built for that, so it needs improvements for that specific use case
 * other example: structured data
 * ways to make MediaWiki better: look at the workarounds (Toolserver, bots, etc.)