Requests for comment/Services and narrow interfaces

Problem statement
MediaWiki's codebase has mostly grown organically, which has led to wide or non-existent internal interfaces. This makes it hard to test parts of the system independently and tightly couples the development of different components. Reasoning about the interaction of different parts of the system is difficult, especially once extensions enter the mix. Fault isolation suffers for the same reason: a fatal error in a minor feature can bring down the entire system.
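To make the testing argument concrete, here is a minimal sketch (in Python, with hypothetical names like PageStore and word_count; MediaWiki itself is PHP) of how a narrow interface lets a feature be tested in isolation from the storage backend:

```python
from abc import ABC, abstractmethod

class PageStore(ABC):
    """Narrow interface: callers depend only on these two methods,
    not on whatever storage backend sits behind them."""

    @abstractmethod
    def get_wikitext(self, title: str) -> str: ...

    @abstractmethod
    def save_wikitext(self, title: str, text: str) -> None: ...

class InMemoryPageStore(PageStore):
    """Fake backend, sufficient for testing callers independently."""

    def __init__(self):
        self._pages = {}

    def get_wikitext(self, title):
        return self._pages[title]

    def save_wikitext(self, title, text):
        self._pages[title] = text

def word_count(store: PageStore, title: str) -> int:
    """A feature written against the interface works with any backend."""
    return len(store.get_wikitext(title).split())

store = InMemoryPageStore()
store.save_wikitext("Sandbox", "hello narrow interface")
print(word_count(store, "Sandbox"))  # 3
```

A failure inside one backend implementation stays behind the interface boundary, which is also what makes fault isolation and independent reasoning about dependencies feasible.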

Additionally, the way clients request data from MediaWiki is changing. Richer clients request more information through APIs, which should ideally perform well even for many relatively small requests. New features like notifications require long-running connections, which are difficult to implement in the PHP request-processing model. New technologies like node.js fit some of these applications well, and it would be nice to have a way to leverage them.

Another problem is organizational. We now have several teams at the Wikimedia Foundation working on new features. Currently, each team needs to handle the full stack, from the front end through caching layers and Apaches to the database. This tends to promote tight coupling between storage and the code using it, which makes independent optimization of the backend layers difficult. It also often leads to conflicts over backend issues as deployment approaches.

Using services to solve some of these issues
- modular / narrow interfaces; good for
  - reasoning about dependencies and interactions
  - testing
  - reuse outside of MW
- choice of tech
- reuse
- distribution
- scalability
- fault isolation
- model state as resources, manipulated through HTTP verbs
  - REST
  - caching
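The last group of points can be illustrated with a toy sketch (hypothetical names; no network layer, just the shape of the contract): state is modeled as resources addressed by path, clients manipulate them only through HTTP-verb-shaped operations, and ETags let caches validate entries cheaply with 304 responses.

```python
import hashlib

class ResourceService:
    """Toy REST-style service: resources addressed by path,
    accessed only via GET / PUT / DELETE semantics."""

    def __init__(self):
        self._resources = {}

    def _etag(self, body: str) -> str:
        # Content-derived validator, usable for cache revalidation.
        return hashlib.sha1(body.encode()).hexdigest()

    def get(self, path, if_none_match=None):
        if path not in self._resources:
            return 404, None, None
        body = self._resources[path]
        etag = self._etag(body)
        if if_none_match == etag:
            return 304, None, etag  # client/proxy cache is still valid
        return 200, body, etag

    def put(self, path, body):
        created = path not in self._resources
        self._resources[path] = body
        return (201 if created else 200), self._etag(body)

    def delete(self, path):
        if self._resources.pop(path, None) is None:
            return 404
        return 204

svc = ResourceService()
status, etag = svc.put("/page/Sandbox", "hello")          # 201 on create
status, body, etag = svc.get("/page/Sandbox")             # 200 with body
status, _, _ = svc.get("/page/Sandbox", if_none_match=etag)  # 304, cacheable
```

Because all state changes flow through these uniform verbs, intermediate HTTP caches (e.g. Varnish) can sit in front of such a service without knowing anything about its internals.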