Analytics/Hardware

David Schoonover, March 2012

The Wiki Movement has a chronic need for analytics. We need it to understand our editors, to encourage growth, to engender diversity, to focus our resources, to improve our engineering efforts, and to measure our success. We are starved for it. This document describes the resources needed to sate our data-hunger; in addition, it outlines several operational proposals for partially feeding this need, though I suspect we will underestimate how hungry we are until after we get the first byte.

Overview
It is best to think of data processing as a search for relationships. A relationship may not be fully realized within a single record, so it is often necessary to scan the dataset multiple times, or to apply computationally expensive techniques. Searching a lot of data is hard. Searching for a complex relationship is hard. Searching a lot of data for a complex relationship is exponentially hard.

Mo data, mo processing.

Available query capacity, on the other hand, scales only linearly with the size of the cluster dedicated to stream and query processing. We can often partition a hard task efficiently, but partitioning does not make it less hard; more machines can make a problem tractable rather than intractable, but not easy rather than hard.

With this in mind, I have broken the hardware requirements into tranches based on the proportion of the incoming data stream that could be processed, in some cases for only a subset of high-level metric groups.

Tranche A: Incipient Data Services Platform

 * 30 commodity servers for stream and job processing.
 * 10 high-performance servers for the query workload and some stream processing.
 * 5-10 low-performance servers required for cluster support.
 * 3-5 commodity servers for test and staging cluster.

We strongly recommend this hardware profile, as it enables the Analytics team to meet all of its goals.

This would provide the processing, storage, and query capacity for:
 * All current request streams, including all wiki, media/upload, and mobile traffic.
 * Future data from click tracking, A/B testing, mobile instrumentation, MediaWiki internal instrumentation, and extension instrumentation via a MediaWiki internal API.
 * Up to 30% traffic growth.

This hardware profile would be capable of providing all typical web traffic analytics fully joined with wiki userdata classifiers, as well as campaign tracking, "clickstream" or referrer chain analysis, and conversion funnels. The same level of detail would be available for edit activity, combined with a limited amount of graph-processing on the article content dataset.

In addition, this cluster would support a public Query API. Sufficient hardware begets the apotheosis of analytics: a true data services platform, capable of providing realtime insight into community activity and a new view of humanity's knowledge to power applications, mash up into websites, and stream to devices.

Tranche B: Non-Media Analytics

 * 10 commodity servers for stream and job processing.
 * 10 high-performance servers for the query workload and some stream processing.
 * 5-10 low-performance servers required for cluster support.
 * 3-5 commodity servers for test and staging cluster.

To reduce hardware demands, we must reduce both the volume of data and the complexity of our processing. This profile retains the ability to provide the full space of analytics queries described in Tranche A, but it elides the Upload/Media data stream, reducing volume by approximately 55%. It could still handle projected 2013 traffic growth and the new instrumentation described above, though it would be unwise to attempt extensive graph processing on the content dataset. It would not be capable of servicing a public API.

Tranche C: Web Analytics

 * 10 high-performance servers for all query, stream, and job processing.
 * 5 low-performance servers required for cluster support.
 * 3-5 commodity servers for test and staging cluster.

This profile represents the minimum hardware allocation necessary to perform the analytics tasks identified as high priority for the foundation. It would process only edit activity and traffic from wiki, search, and mobile. Limited instrumentation could be serviced if campaigns are carefully scoped and monitored. Likewise, the full query space could still be explored, but ad-hoc queries would be executed serially; only cached, materialized data would be available for exploration in a dashboard. It would not be capable of servicing a public API.

Extrapolated Cluster Workload
Requests are a decent heuristic for cluster workload, as each request becomes a single event entering the cluster. Events are buffered, bundled, and compressed before being written to the cluster's ETL queue. Bundling reduces the number of writes in proportion to the size of the buffer window (at least 10:1), and compression reduces volume by about 3:1; each bundle is then replicated at least three times.

Extrapolating from these numbers, we can expect approximately 23.235 GB/day of storage consumed by the current full event stream (about 30 GB/day with 30% growth). The instrumentation stream, estimated from the above, would add an additional 7-10 GB/day.

Using those estimates, the storage currently attached to the Cisco machines (16.2 TB across 300 GB SAS 10k drives) would be exhausted after approximately 11 months.
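The arithmetic above can be sketched directly. The per-day figures come from this document; the midpoint instrumentation estimate and the ~20% of raw capacity reserved for headroom and temp space are assumptions on my part (with that reserve, the simple division roughly reproduces the 11-month figure):

```python
# Back-of-envelope projection of when the attached storage fills up.
stream_gb_per_day = 23.235          # current full event stream, post-bundle/compress
growth = 1.30                       # up to 30% traffic growth
instrumentation_gb_per_day = 8.5    # assumed midpoint of the 7-10 GB/day estimate

daily_gb = stream_gb_per_day * growth + instrumentation_gb_per_day
capacity_gb = 16.2 * 1000           # 16.2 TB across the 300 GB SAS drives
usable_gb = capacity_gb * 0.80      # assume ~20% reserved for headroom/temp space

days = usable_gb / daily_gb
print(f"~{daily_gb:.1f} GB/day; usable storage exhausted in ~{days / 30:.0f} months")
```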

System Overview
"Analytics cluster" is something of a misnomer for sufficiently big data; most of the cleverness goes into machine coordination, and most of the engineering energy goes into fault tolerance. The tools we use to perform these tasks are not specialized for analytics in the way of most data warehousing servers of the past. That said, a well-built cluster is still tremendously complex (even if it is also a powerful computational platform). This section traces the oft-convoluted path of data and processing.

Data Import
There are two continuous vectors of data import:
 * Pixel Service endpoint, a REST API for submitting events.
 * Streaming Agents on web and cache servers.

Both import vectors converge on the Bundler Gateway, a cluster of stateless aggregators ("Bundlers") that shard and buffer incoming data before writing it as a page to the ETL queue.
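The Bundler's role can be sketched as follows; the class, its parameters, and the flush callback are illustrative, not a real interface:

```python
import hashlib
import time
import zlib

class Bundler:
    """Buffers incoming events, shards them by a stable hash of the event
    key, and flushes each shard as one compressed page to the ETL queue."""

    def __init__(self, num_shards=4, window_seconds=60, flush_page=print):
        self.num_shards = num_shards
        self.window = window_seconds
        self.flush_page = flush_page   # callback that writes a page to the ETL queue
        self.buffers = {i: [] for i in range(num_shards)}
        self.window_start = time.monotonic()

    def submit(self, key: str, event: bytes) -> None:
        # Stable hash -> shard, so the same key always lands on the same buffer.
        shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % self.num_shards
        self.buffers[shard].append(event)
        if time.monotonic() - self.window_start >= self.window:
            self.flush()

    def flush(self) -> None:
        for shard, events in self.buffers.items():
            if events:
                page = zlib.compress(b"\n".join(events))  # ~3:1 on text-like events
                self.flush_page(shard, page)
        self.buffers = {i: [] for i in range(self.num_shards)}
        self.window_start = time.monotonic()
```

Because the Bundlers hold no state beyond the current window, any of them can fail or be replaced without coordination.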

Stream and Job Processing Workers
All workers are homogeneous in their storage membership and replication duties, but for processing jobs they are further partitioned into several virtual (logical) clusters, each optimized for a different work profile by exploiting data locality.

 * ETL Cluster: The so-called "short-request" cluster, configured for a write-dominated (low-read) workload. It is the target of the Bundler aggregations; its workload is near-continuous and its execution profile aims to be realtime. The ETL topology continuously processes new blocks into canonical form and writes them back to the cluster, with a TTL set on each block for future deletion. Standing queries beyond normal ETL will be serviced by additional Storm topologies running against this cluster. This cluster needs plenty of compute but comparatively little RAM (nearly no repeated reads).
 * Analytics Cluster: Configured for a moderately read-heavy load (probably 65-35, but it depends on the jobs), as this cluster will be the target of MapReduce jobs, Pig and Hive jobs, Storm topologies for CEP, and Storm DRPC topologies to service graph-traversal queries. This cluster needs lots of RAM and CPU, as read speed will depend on the hot-set, and write speed will depend on CPU.

ETL Phase (Hard Realtime)
The Extract-Transform-Load step parses events into a compact binary format, performs IP lookup for geo and/or mobile carrier along with subsequent anonymization, and canonicalizes verbose data (like User Agents) before writing the result back to the database. Additional high-priority, hard realtime standing queries (such as event-driven notifications or triggers) will run as independent-but-linked Storm topologies, triggered after the canonical ETL phase.
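A minimal sketch of this canonicalization step, assuming JSON-ish events and toy lookup tables standing in for real GeoIP and User-Agent databases (all field names are hypothetical, and note that a bare hash of an IP is weak anonymization; a real deployment would salt or truncate):

```python
import hashlib
import json

# Toy lookup tables standing in for real GeoIP / User-Agent databases.
GEO_DB = {"203.0.113": "AU", "198.51.100": "US"}
UA_CANON = {"Mozilla/5.0 (X11; Linux x86_64)": "desktop:linux:firefox"}

def etl(raw: bytes) -> dict:
    """Canonicalize one event: geo-tag, anonymize the IP, compact the UA."""
    event = json.loads(raw)
    ip = event.pop("ip")                   # drop the raw IP from the record
    prefix = ip.rsplit(".", 1)[0]          # /24 prefix for the geo lookup
    event["geo"] = GEO_DB.get(prefix, "??")
    event["ip_hash"] = hashlib.sha256(ip.encode()).hexdigest()[:16]
    event["ua"] = UA_CANON.get(event.get("ua", ""), "other")
    return event
```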

The data stream switches from push to pull at this point, like a waterfall terminating in a lake. Subsequent processing steps are batched by their staleness constraints.

Short-Batch Phase (Soft Realtime)
Jobs that desire an almost-realtime view (such as the index on edits, or events by geo/mobile) will run frequently to batch-process only new data. They are expected to be able to do their work on a subset of data, and to be able to terminate quickly. These jobs can be implemented on top of either Storm or Hadoop.
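The checkpoint pattern behind such jobs can be sketched as follows; the block-id scheme is an assumption, and real jobs would track Storm or Hadoop offsets instead:

```python
def run_short_batch(blocks, checkpoint, process):
    """Process only blocks newer than the last checkpoint, then advance it.

    blocks:     iterable of (block_id, payload), ids monotonically increasing
    checkpoint: dict holding {'last_id': int} between runs
    process:    function applied to each new payload
    """
    last = checkpoint.get("last_id", -1)
    for block_id, payload in blocks:
        if block_id <= last:
            continue            # already handled in an earlier run
        process(payload)
        last = block_id
    checkpoint["last_id"] = last
    return checkpoint
```

Because each run touches only the delta since the last checkpoint, the job stays short even as the total dataset grows.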

Batch
Jobs that require a full view of the data are relegated to MapReduce, and will use the normal Hadoop scheduler. Early on, this will likely include a variety of batch imports, such as MySQL exports, an XML article dump, fundraising data, community datasets (from, for example, Toolserver or stats.grok.se), historical logs, or 3rd party data (like WikiTrust or academic datasets).

Queries
Query Gateways will receive queries from both internal and external sources, handle API keys, track usage, and enforce rate limits. The internal query gateway can execute queries via a custom Storm DRPC topology for short-running scans or point queries, in addition to executing Hive/Pig jobs and notifying the user on completion. External API queries will use Storm DRPC and be restricted in runtime and resource use.
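Rate limiting at the gateway might take the shape of a per-key token bucket; this is a sketch under assumed rates, not a committed design:

```python
import time

class TokenBucket:
    """Per-API-key rate limiter: each key accrues tokens at a fixed rate,
    up to a burst ceiling; a query spends one token or is rejected."""

    def __init__(self, rate_per_sec=2.0, burst=10, clock=time.monotonic):
        self.rate, self.burst, self.clock = rate_per_sec, burst, clock
        self.state = {}   # api_key -> (tokens, last_seen_time)

    def allow(self, api_key: str) -> bool:
        now = self.clock()
        tokens, last = self.state.get(api_key, (self.burst, now))
        # Refill in proportion to elapsed time, capped at the burst ceiling.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self.state[api_key] = (tokens, now)
            return False
        self.state[api_key] = (tokens - 1.0, now)
        return True
```

The injectable clock keeps the limiter testable; in production the default monotonic clock applies.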

Support Servers
Distributed computing clusters are aptly thought of as an ecosystem, and thus a number of exotic creatures occupy niche-but-necessary support roles to maintain the health of the whole:
 * 3-5 ZooKeeper nodes (used by Hadoop and Storm to manage cluster membership)
 * Hadoop Job Tracker
 * Storm Job Tracker (Nimbus)
 * JMX Monitoring Server (probably Zabbix; should be switch-local as it is very chatty)

Secondary Testing Cluster
A small (3-5 node) secondary cluster would be physically and logically distinct from the production cluster, and would be loaded with a subset of the data to reduce job times.

It would be aimed at providing:
 * Pre-deploy integration testing
 * Job testing and debugging
 * A platform for staging and utility jobs

Comparisons
Several large consumers of big data have published detailed information about their data-processing products and clusters. A few are presented here for reference.

Twitter Rainbird (Cassandra, 2011)
Analytics for the Promoted Tweets advertising platform.
 * 100,000s writes/sec
 * 10,000s reads/sec
 * 100TB+ storage
 * Extremely low latency: <100ms reads
 * Events are batched for ~60 seconds
 * Parsing and structuring performed by bundlers
 * Clients submitting events are Rainbird-aware

Facebook Insights (HBase, 2011)
Social-plugin analytics for site owners.
 * 20 billion events/day
 * 200,000 events/second
 * <30s average delay before event surfaces in queries
 * 100+ metrics, but stored only as counters
 * Events are batched for ~1.5 seconds
 * Each node handling 10k writes/sec

Facebook Messages (HBase, 2010)
The Facebook messaging system.
 * 135 billion messages/month (~4.5B/day)
 * 1.5M+ operations/second at peak
 * ~55% reads: 825,000 reads/sec
 * ~45% writes: 675,000 writes/sec
 * 2PB+ (petabytes) data in storage