Requests for comment/Structured logging

This is a request for comment about adding Structured logging to MediaWiki. It specifies a data model for MediaWiki log messages and an interface for generating log messages that conform to the model.

By "data model" we simply mean an agreed-upon set of fields containing metadata that describes the context in which the log message was generated. The model specifies the name of each field and the value it can hold. Log messages generated via the interface that we propose below would conform to this model, allowing them to be serialized to a machine-readable format.A standard for machine-readable metadata common to all log messages would make it possible to query, collate, and summarize operational data in ways that are currently very difficult to achieve.

We think that the ability to cross-reference logs and query by context will make troubleshooting bugs easier. We also think that ongoing analysis of aggregated log data would reveal which files, interfaces, and code paths are especially prone to bugs or poor performance, and that this information would help us make MediaWiki more reliable and performant.

Problems with the current interface
Most operational logging in MediaWiki is done via wfDebugLog() calls. Messages logged via wfDebugLog() specify a topic name (or log bucket). This name usually identifies the component that is emitting the log message. Some parts of the code that generate different kinds of log messages use compound topic names that describe the type of message being logged, usually in terms of severity ("memcached-serious", for example). This ad hoc overloading of the log group to encode severity twists the interface to overcome its limitations, and is a good indication that the current interface is inadequate.

Because there is no established standard for encoding severity, the density of logging calls varies greatly across the MediaWiki codebase. Access to the production logs is limited, and many developers will only ever review logs generated on their development instance, and consequently fail to appreciate the cost of excessively verbose logging at scale. The instrumentation of code is often pitched to its initial development rather than its ongoing maintenance.

To keep chatty code from drowning out important log data, the logging setup on the Wikimedia production cluster does not automatically transmit all logging topics to the log processor. A developer must first manually enroll the log bucket by adding it to the $wgDebugLogGroups configuration variable. The problem with this approach is that the absence of log data from a particular component is typically noticed when it is needed the most: that is, when the component is suspected of misbehaving in ways that are difficult to reproduce or reason about. Thus log buckets are usually enabled to help solve a particular bug, and they are commonly left enabled long after the motivation for enabling them has ceased to be relevant. The overall effect is that Wikimedia's logs are curated on the basis of historic interest rather than abiding relevance. A sketch of the current pattern follows.
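
For example, today's pattern looks roughly like this; the group names, key, server, and destination are invented for illustration:

<syntaxhighlight lang="php">
// Severity encoded ad hoc in the topic name:
wfDebugLog( 'memcached', 'cache miss for key enwiki:page:12345' );
wfDebugLog( 'memcached-serious', 'unable to connect to 10.0.0.1:11211' );

// Production collects a group only after it is manually enrolled:
$wgDebugLogGroups['memcached-serious'] = 'udp://logger.example.net:8420/memcached';
</syntaxhighlight>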

Design principles
Filtering logs by severity and grouping logs by attributes only work if log messages are uniform in structure and content.

What we would need:
 * tools accessible to a wider audience
 * aggregation
 * de-duplication
 * cross-system correlation
 * alerting
 * reporting

This is not a wholly new idea. Let's look at what's out there and see if we can find a solution or at least borrow the best bits.

Current logging

 * wfDebug( $text, $logonly = false )
   * Logs developer-provided free-form text plus an optional global prefix string
   * May have the time elapsed since the start of the request and real memory usage inserted between the prefix and the message
   * Delegates to wfErrorLog (writing to the $wgDebugLogFile sink) to actually write the message


 * wfDebugMem( $exact = false )
   * Uses wfDebug to log "Memory usage: N (kilo)?bytes"


 * wfDebugLog( $logGroup, $text, $public = true )
   * Logs either to a custom log sink defined in $wgDebugLogGroups or via wfDebug
   * Default:
     * Prepends "[$logGroup] " to the message
   * Custom sink:
     * May log only a fraction of occurrences via 'sample' configuration
     * Prepends a timestamp, hostname, and wiki ID to the message
     * Delegates to wfErrorLog to actually write to the sink


 * wfLogDBError( $text )
   * Enabled/disabled with the $wgDBerrorLog sink location
   * Logs "$date\t$host\t$wiki\t$text" via wfErrorLog to the $wgDBerrorLog sink
   * The date may use a custom timezone specified by $wgDBerrorLogTZ


 * wfErrorLog( $text, $file )
   * Writes $text to either a local file or a UDP packet depending on the value of $file (see the sketch below)
   * UDP:
     * If $file ends with a string following the host name/IP, it will be used as a prefix to the message
     * The final message, with the optional prefix added, will be trimmed to 65507 bytes and a trailing newline may be added
   * File:
     * $text will be appended to the file unless the resulting file size would be >= 0x7fffffff bytes (~2 GB)
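
A sketch of the two sink types; the path, host, and message are invented:

<syntaxhighlight lang="php">
// Append to a local file (subject to the ~2 GB size guard noted above):
wfErrorLog( "database timeout\n", '/var/log/mediawiki/errors.log' );

// Send as a UDP packet; the trailing "dberror" segment becomes a message prefix:
wfErrorLog( "database timeout\n", 'udp://logger.example.net:8420/dberror' );
</syntaxhighlight>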


 * wfLogProfilingData()
   * Delegates to wfErrorLog using the $wgDebugLogFile sink
   * Creates a tab-delimited log message including the timestamp, elapsed request time, requesting IPs, and request URL, followed by a newline and the profiler output
   * The date is the current GMT time


 * Recent changes logging
   * Transport and serialization format may be specified via $wgRCFeeds
   * Various implementations are provided, including IRC, UDP, and Redis

Serialization Format
Rather than dive down a rabbit hole trying to find a universal spec for log file formats, let's keep things simple. PHP loves dictionaries (well, it calls them arrays, but whatever; key=value collections) and has a pretty fast JSON formatter. So the simplest thing that will work reasonably well is to keep log events internally as PHP arrays and serialize them as JSON objects. This will be relatively easy to recreate in other internally developed applications as well, with the possible exception of apps written in low-level languages such as C that lack ready-made key=value data structures.
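
A minimal sketch of that approach; the field names anticipate the list in the next section, and the values are invented:

<syntaxhighlight lang="php">
$event = array(
	'timestamp' => gmdate( 'c' ),  // ISO 8601 formatted string
	'host'      => 'mw1017.example.net',
	'source'    => 'mediawiki',
	'severity'  => 'error',
	'channel'   => 'memcached',
	'message'   => 'unable to connect to server',
);
echo json_encode( $event ), "\n";
// {"timestamp":"2014-04-01T12:34:56+00:00","host":"mw1017.example.net",...}
</syntaxhighlight>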

Data collected
Here's a list of the data points that we should definitely have:
 * timestamp
 * Local system time at which the event occurred, either as a UNIX epoch timestamp or an ISO 8601 formatted string


 * host
 * FQDN of system where event occurred


 * source
 * Name of application generating events; correlates to APP-NAME of RFC 5424


 * pid
 * Unix process id, thread id, thread name or other process identifier


 * severity
 * Message severity (RFC 5424 levels)


 * channel
 * Log channel. Often the function/module/class creating the message (similar to wfDebugLog() groups)


 * message
 * Message body

Additionally, we suggest adding a semi-structured "context" component to logs. This would be a collection of key=value pairs that developers determine to be useful for debugging. There should be two different methods available to add such data: the first is an optional argument to the logging method itself, and the second is a global collection patterned after the Log4j Mapped Diagnostic Context (MDC).

The local collection is useful for obvious reasons such as attaching class/method state data to the log output and deferring stringification of resources when the runtime configuration is ignoring messages of the provided level. Examples of data that could be included (a usage sketch follows the list):


 * file
 * Source file triggering message


 * line
 * Source line triggering message


 * errcode
 * Numeric or string identifier for the error


 * exception
 * Live exception object to be stringified by the log event emitter


 * args
 * key=value map of method arguments
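
A sketch of the local collection in use via a PSR-3-style context argument, assuming a $logger instance; the message, keys, and values are invented:

<syntaxhighlight lang="php">
$logger->error(
	'Failed to save page {title}',
	array(
		'title'     => $title->getPrefixedText(),
		'errcode'   => 'edit-conflict',
		// Live exception object; stringified by the emitter only if the
		// event survives severity filtering.
		'exception' => $e,
	)
);
</syntaxhighlight>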

The global collection is very useful for attaching global application state data to all log messages that may be emitted. Examples of data that could be included (a sketch of such a collection follows the list):
 * vhost
 * Apache vhost processing request


 * ip
 * Requesting IP address


 * user
 * Authenticated user identity


 * req
 * Request ID; UUID or similar token that can be used to correlate all log messages connected to a given request
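
A minimal sketch of what such a global collection could look like; MediaWiki has no such class today, and the class name and API are hypothetical:

<syntaxhighlight lang="php">
/**
 * Global key=value context, patterned after Log4j's MDC.
 */
class MWLogContext {
	/** @var array Pairs merged into every emitted log event */
	private static $data = array();

	public static function set( $key, $value ) {
		self::$data[$key] = $value;
	}

	public static function all() {
		return self::$data;
	}
}

// Early in request setup (values are illustrative):
MWLogContext::set( 'req', $requestId );              // e.g. a per-request UUID
MWLogContext::set( 'ip', $_SERVER['REMOTE_ADDR'] );  // requesting IP address
</syntaxhighlight>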

API
The developer-facing API is the PSR-3 logging interface standard, with the possibility of MediaWiki-specific extensions.
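
In practice that means call sites like the following sketch; the channel name, message, and context values are invented, and the method names come from the PSR-3 LoggerInterface:

<syntaxhighlight lang="php">
$logger = MWLogger::getInstance( 'memcached' );
$logger->warning(
	'Unable to connect to {server}',
	array( 'server' => $serverName )
);
</syntaxhighlight>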

MWLogger also provides two static methods:

The MWLogger::getInstance method is the means by which most code would acquire an MWLogger instance. It will in turn delegate the creation of MWLoggers to a class implementing the MWLoggerSpi interface.

This service provider interface will allow the backend logging library to be implemented in multiple ways. The $wgMWLoggerDefaultSpi global provides the class name of the default MWLoggerSpi implementation. This can be altered via the normal means. Alternately, MWLogger::registerProvider can be invoked early in the application setup to inject an alternate SPI implementation.
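
For example, a wiki could swap in its own provider during setup; the custom SPI class named here is hypothetical:

<syntaxhighlight lang="php">
// In LocalSettings.php or other early setup code:
MWLogger::registerProvider( new MyCustomLoggerSpi() );
</syntaxhighlight>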

See the proof of concept patch for a full implementation, including a concrete MWLoggerSpi class that creates MWLogger instances backed by the Monolog logging library. Additional SPI implementations may follow if desired by the community.

The proof of concept code also demonstrates the use of a $wgUseMWLoggerForLegacyFunctions feature flag that configures the legacy global logging methods to emit logging events via MWLogger.

Managing third-party libraries
The use of PSR-3 and Monolog introduces the need to manage third-party code dependencies for MediaWiki core. Although there are some third-party components in includes/libs, to my knowledge this is the first large-scale use of external PHP code by MediaWiki core.

In a more perfect world, MediaWiki would already be a system that assembled a collection of libraries using a well-defined dependency management system. This has actually been envisioned in at least two RFCs (../MediaWiki libraries/ and ../Third-party components/).

The proof of concept implementation hews closely to the approach proposed in ../MediaWiki_libraries/, with the addition of Composer as a dependency management system.

A new lib directory is used to isolate the Composer-managed code from the rest of MediaWiki core. Within the lib directory a composer.json file defines the exact versions of external code to import. The committed file looks roughly like the following sketch (the version numbers are illustrative, not necessarily the exact ones imported):
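
<syntaxhighlight lang="json">
{
	"require": {
		"psr/log": "1.0.0",
		"monolog/monolog": "1.8.0"
	},
	"config": {
		"vendor-dir": "."
	}
}
</syntaxhighlight>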

The vendor-dir is set to ".", meaning the directory that contains the composer.json file. composer install is run once to import the initial libraries, generate a composer.lock file recording the origin of those dependencies, and create the autoload.php file that will be used to load the libraries. This entire collection of files is then committed as a patch to Gerrit to become an integral part of the MediaWiki core repository, along with a change to the includes/AutoLoader.php script to require the lib/autoload.php class autoloader script generated by Composer.

When new versions of the currently imported libraries are desired, or additional libraries are needed for new MediaWiki core components, Composer can be used to safely manage the change:
 1. Edit composer.json to add/update library dependencies.
 2. Run composer update inside the lib directory.
 3. (Optionally) Remove tests, documentation, and non-PHP 5.3 compatible files.
 4. Add and commit the changes as a Gerrit patch.
 5. Review and merge.

Since this composer.json file is not in the root of the MediaWiki core project, it should not conflict with the ../Extension management with Composer/ RFC.

Implementation
An original proof of concept implementation was submitted as a single patch to Gerrit. The approach of using Composer to manage external dependencies was given a +1 by Tim. The monolithic patch was then split into four smaller patches for closer review and approval:


 * Add Composer managed libraries
 * Import the Psr\Log and Monolog libraries into MediaWiki core in a new "lib" directory which is managed using Composer. The includes/AutoLoader.php script has been modified to require the lib/autoload.php class autoloader script generated by Composer.


 * Add a PSR-3 based logging interface
 * The MWLogger class is a thin wrapper around any PSR-3 LoggerInterface implementation. Named MWLogger instances can be obtained from the MWLogger::getInstance static method. MWLogger expects a class implementing the MWLoggerSpi interface to act as a factory for new MWLogger instances; a sketch of that interface's shape follows below. A concrete MWLoggerSpi implementation using the Monolog library is also provided.
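
A hedged sketch of the SPI shape implied by that description; the method name is assumed, and the authoritative signature lives in the patch itself:

<syntaxhighlight lang="php">
interface MWLoggerSpi {
	/**
	 * Get a logger instance for the given channel.
	 * @param string $channel Log channel name
	 * @return MWLogger
	 */
	public function getLogger( $channel );
}
</syntaxhighlight>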


 * Enable MWLogger logging for legacy logging methods
 * Introduces the $wgUseMWLoggerForLegacyFunctions feature flag that enables the use of the MWLogger PSR-3 logger for legacy global logging functions. When enabled, wfDebug, wfDebugLog and wfLogDBError will route their log messages to MWLogger instances, roughly as sketched below.
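
A sketch of the routing behavior when the flag is enabled; this illustrates the idea rather than the literal patch code, and the chosen severity level is assumed:

<syntaxhighlight lang="php">
$wgUseMWLoggerForLegacyFunctions = true;

// A legacy call such as:
wfDebugLog( 'memcached', 'unable to connect to server' );
// would then be handled approximately as:
MWLogger::getInstance( 'memcached' )->debug( 'unable to connect to server' );
</syntaxhighlight>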


 * Enable MWLogger logging for wfLogProfilingData
 * Output structured profiling report data from wfLogProfilingData when $wgUseMWLoggerForLegacyFunctions is enabled.