Search/Old

This page describes the Wikimedia Foundation's activities surrounding our sites' search functionality.

Rationale
The Wikimedia search infrastructure hasn't seen significant development work in many years. The current system is a homegrown layer on top of Lucene (lsearchd) that has since been overtaken by much larger projects such as Solr. The search system frequently breaks in ways that are difficult to diagnose, and generally makes our Operations staff sad.

Goals for our current effort:
 * Make our existing tools more robust
 * Improve logging in our existing tools to make problems easier to diagnose
 * Migrate away from lsearchd to Solr (or something similar)

Our current search infrastructure is highly outdated and, because of the large amount of custom code, difficult to manage. We'd like to replace it with Solr (also based on Lucene): it's very stable, provides many of the features we need, and requires far less custom code to support.

Solr implementation plan
We don't yet have a firm timeline for a Solr migration. A few considerations:

 * 1) Solr is web-based and has its own query syntax (Solr query syntax)
 * 2) We have a rather complex set of search modes that we support in lsearchd (user documentation)
 * 3) As an initial step, we need to decide how much of the lsearchd syntax we want to support in the Solr implementation and if we want to enhance it in some way to take advantage of newer Solr capabilities (e.g. RegEx search). This will have a strong impact on the rest of the architecture since it determines what indices are generated.
 * 4) Based on this, we need to map out how the MWSearch extension needs to change for Solr.
 * 5) For a new implementation, some sort of incremental approach seems best where we deploy Solr for smaller wikis first, and learn from that experience for the larger wikis.
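To make points 3) and 4) concrete, one option is a small translation layer that rewrites lsearchd-style query prefixes into Solr field syntax before the request is sent. This is only a sketch: the prefixes and Solr field names below are illustrative assumptions, not the actual lsearchd grammar or index schema.

```python
# Hypothetical mapping from lsearchd-style prefixes to Solr field
# queries. The prefixes and field names are assumptions for
# illustration; the real set would come from the lsearchd syntax
# we decide to keep supporting.
PREFIX_MAP = {
    "intitle:": "title:",
    "incategory:": "category:",
}

def to_solr(query):
    """Rewrite a known lsearchd prefix into Solr field syntax;
    anything unrecognized is passed through unchanged."""
    for old, new in PREFIX_MAP.items():
        if query.startswith(old):
            return new + query[len(old):]
    return query
```

A layer like this would let MWSearch keep accepting the existing user-facing syntax while the backend changes underneath it.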

Requirements

 * A solid PHP library
 * Translation memory and GeoData both use Solarium, which is widely used and very robust.
 * The Solr library in PECL is poorly maintained and incomplete.

GeoData
The index is relatively small (so there is no need to make it distributed), but it requires a lot of computational power to work with. Full-text search is not currently used. Currently, data from all the wikis is stored in the same core; in the future we will need to split the data into multiple cores (the puppet changes for using multiple cores with shared configuration/schema are here, but need more work).
 * Load expectations: unclear, but will be high if we start using it heavily e.g. for maps display.
 * Backups: not really needed - if the master is down, just switch to a slave. If all servers are down, reindexing from scratch is quick.
 * Note: because GeoData's schema is very stripped-down, /admin/ping doesn't work - this should be remembered if someone rewrites the current monitoring.
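Since /admin/ping fails against GeoData's stripped-down schema, a monitoring probe could instead issue a cheap query against the select handler. A minimal sketch of building such a probe URL (the host, port, and core name are assumptions for illustration):

```python
from urllib.parse import urlencode

def health_check_url(base="http://localhost:8983/solr/geodata"):
    """Build a cheap liveness query against the select handler,
    as a stand-in for /admin/ping, which GeoData's stripped-down
    schema breaks. A match-all query with rows=0 touches the core
    without returning any documents."""
    params = urlencode({"q": "*:*", "rows": 0, "wt": "json"})
    return f"{base}/select?{params}"
```

A monitoring script would fetch this URL and treat any non-200 response (or a JSON parse failure) as the core being down.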

Nice to haves

 * A pony

In progress

 * Set up enwiki in labs and play with it.
   * Waiting on Peter to hook me up with real hardware to try this out.
 * Let users select search backend using url parameter
   * https://gerrit.wikimedia.org/r/#/c/76950/
 * Configure CirrusSearch search option for mediawiki.org
   * https://gerrit.wikimedia.org/r/#/c/78083/
 * Highlighting regression tests
   * https://gerrit.wikimedia.org/r/#/c/78383/
 * Figure out monitoring.
   * Peter is working on this.
 * Figure out metrics gathering.
   * https://gerrit.wikimedia.org/r/#/c/78414/

Maybe

 * Figure out how we want to secure elasticsearch and do it.
   * Downgraded after talking with Peter

Done

 * Package JMXTrans
 * Puppetize JMXTrans
 * Pool Counter for Solr Updates
 * Give ElasticSearch a shot.
   * Search redirects to a page somehow.
   * Indexed in a separate field with highlighting, etc.
   * Work done in Elasticsearch branch.
 * Use Pool Counter for Searches
 * Using upstream deb files for installation and default configuration.
 * Plan machines running in beta.
 * Prefix search uses edgengrams
 * Puppetize installation of elasticsearch.
 * Index pages when their templates change.
 * Split data into two indices - one for content and one for everything else.
 * Build puppet configuration for machines in beta.
 * Build puppet configuration for machines in production for mediawiki.org.
 * Configure accent squashing
 * Automated test suite
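The "prefix search uses edgengrams" item above refers to indexing every prefix of a title so that prefix matching becomes an exact term lookup. A minimal sketch of what an edge n-gram analyzer produces (parameter names here are illustrative, not the actual analyzer configuration):

```python
def edge_ngrams(term, min_len=1, max_len=10):
    """Generate edge n-grams for a term: all prefixes from
    min_len up to max_len characters, which is what an edge
    n-gram token filter emits at index time."""
    return [term[:n] for n in range(min_len, min(len(term), max_len) + 1)]

# At query time, a user's partial input is matched exactly against
# these indexed prefixes, so prefix search needs no wildcard scan.
```

For example, indexing "wiki" stores the terms "w", "wi", "wik", and "wiki", so typing any of those into the prefix search box hits the document directly.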

Rejected

 * Caching results from Solr.
   * We'll wait and see if we need this.
   * According to the mailing list, folks tend to cache Solr using Varnish. Lucky for us, we understand and like Varnish.

Documents

 * Search documentation on Wikitech: Search
 * Ram's setup instructions: wikitech:User:Ram/Search
 * Some notes from Brion in 2008
 * The MWSearch extension provides a SearchEngine subclass which contacts Wikimedia's Lucene-based search server. This replaces the older LuceneSearch extension which reimplemented the entire Special:Search page.
 * /2013-02 discussion - Discussion with Rainman about how the current system works