Wikimedia Discovery/External search metrics

This is a brief writeup of possible metrics for measuring the role external search plays in driving traffic to Wikimedia.

Background
We know from referer data that external search engines, particularly Google, play a large role in driving our traffic. We need to quantify and display this so that we can see the impact our changes have on our prominence and traffic, and be aware at an early stage of the impact of changes made by those search engines.

Referer count
The easiest metric to use is the count of how many pageviews come with referers from popular search engines - and what proportion of our pageviews that is. We're already part of the way towards tracking this, and have all the data we'd need to go ahead. This metric would then be generated each day, both overall and broken down by search engine, and visualised on a dashboard similar to the current suite of search dashboards.
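The counting step could be sketched as below. This is a minimal illustration, not the actual pipeline: the domain fragments are examples rather than the real list of tracked engines, and a production version would read from the pageview logs rather than an in-memory list.

```python
# Sketch: classify pageview referers by search engine and compute the
# proportion of traffic they account for. Domain fragments are illustrative.
from collections import Counter
from urllib.parse import urlparse

SEARCH_ENGINES = {
    "google": ("google.",),
    "bing": ("bing.com",),
    "yahoo": ("yahoo.",),
    "duckduckgo": ("duckduckgo.com",),
}

def classify_referer(referer):
    """Return the search engine name for a referer URL, or None."""
    host = urlparse(referer).hostname or ""
    for engine, fragments in SEARCH_ENGINES.items():
        if any(fragment in host for fragment in fragments):
            return engine
    return None

def referer_counts(referers):
    """Count pageviews per search engine, plus their overall proportion."""
    counts = Counter()
    total = 0
    for referer in referers:
        total += 1
        engine = classify_referer(referer)
        if engine:
            counts[engine] += 1
    proportion = sum(counts.values()) / total if total else 0.0
    return counts, proportion
```

Run daily over the pageview stream, this yields both the per-engine breakdown and the overall proportion described above, ready to plot on a dashboard.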

A big advantage of this metric is that it gives us a lot of historical data - we can go all the way back to mid-2013, since we can use the sampled logs to backfill. It's also very direct: it represents traffic we actually got as a result of our search engine placement, rather than informing us about that placement without telling us what it equates to in traffic. A disadvantage is that we're dependent on browser-reported data, and the historic (pre-switchover) data for HTTPS requests is likely to be simply wrong.

Search rankings
We can also use our search rankings directly. There are various tools for calculating a site's pagerank or the Bing equivalent, and we can use these to identify how we place in various prominent search engines and track it over time. The advantage is that it would be trivial to implement - we'd make a simple API call to those services. The disadvantages are that it's not granular (we can have a high theoretical pagerank without placement on prominent queries), that it ignores the interference of other UI elements (being second in a set of Google search results is nice, but doesn't necessarily convert to traffic if the knowledge graph box is taking all of the clicks), and that it requires relying on 'black box' technology - in the sense that we're trusting whatever third party we're getting the data from to get things right.
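The "simple API call" approach might look like the sketch below. The rank-checking service, its endpoint, and its response shape are all hypothetical stand-ins for whichever third-party tool we'd actually use; only the time-series bookkeeping is real code.

```python
# Sketch: poll a (hypothetical) third-party rank service and keep a
# per-engine time series. RANK_SERVICE and its response format are
# assumptions, not a real API.
import json
from urllib.request import urlopen

RANK_SERVICE = "https://rank-checker.example/api/v1/rank"  # hypothetical

def fetch_rank(site, engine):
    """Query the hypothetical third-party service for a site's rank score."""
    with urlopen(f"{RANK_SERVICE}?site={site}&engine={engine}") as response:
        return json.load(response)["rank"]

def record_rank(history, date, engine, rank):
    """Append one day's rank to a per-engine time series."""
    history.setdefault(engine, []).append((date, rank))
    return history
```

The 'black box' disadvantage shows up clearly here: whatever number fetch_rank() returns, we have no way of validating it ourselves.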

Search engine-reported traffic
A third option is to use Google Webmaster Tools, and the equivalents for other search engines, to gather metrics. This isn't actually a metric in and of itself so much as a possible implementation - and it would let us look at clickthroughs, pagerank, session length, and unique users, for example.

The advantage here is the depth of the data: we cannot get session length or unique users en masse with our existing infrastructure. The disadvantages are several: we're dependent on the third-party implementations and assuming they accurately represent the actual traffic; we have to build API connectors or frameworks for as many search engines as we're interested in (where they have analytics tools); and we end up with a much more nuanced, but much less simple, report.

Recommendation
My recommendation is simply to note that these options are not mutually exclusive. We should use referer counts as the baseline metric, and then get other metrics such as prominence, clickthrough rate or source from the search engines.

Referer counts are relatively easy to implement, and will provide us with a way of easily identifying which search engines are providing the most traffic (and how much that is), and which search engines are relatively unstable - in other words, which vectors for user engagement are most important to keep an eye on.

With that, one of the big issues with using search engine-reported data - having to implement so many connectors - goes away. If we find out Google is our biggest source of traffic, for example, we can start with Google, building out a connector and dashboard incrementally until it's stable, and then move on down the list.