Search/Old

This page describes the Wikimedia Foundation's activities surrounding our sites' search functionality.

Rationale
The Wikimedia search infrastructure hasn't had significant development work for many years. The current system is based on a homegrown layer on top of Lucene (lsearchd) that has since been overtaken by much larger projects such as Solr. The search system frequently breaks in ways that are difficult to diagnose, and generally makes our Operations staff sad.

Goals for our current effort:
 * Make our existing tools more robust
 * Improve logging in our existing tools to make problems easier to diagnose
 * Migrate away from lsearchd to Solr (or something similar)

Our current search infrastructure is highly outdated and difficult to manage because of the large amount of custom code involved. We'd like to replace it with Solr (also based on Lucene): it is very stable, provides many of the features we need, and requires far less custom code to support.

Solr implementation plan
We don't yet have a firm timeline for a Solr migration. A few considerations:


 1. Solr is web-based and has its own query syntax (Solr query syntax).
 2. We support a rather complex set of search modes in lsearchd (user documentation).
 3. As an initial step, we need to decide how much of the lsearchd syntax we want to support in the Solr implementation, and whether we want to enhance it to take advantage of newer Solr capabilities (e.g. regex search). This will strongly affect the rest of the architecture, since it determines which indices are generated.
 4. Based on this, we need to map out how the MWSearch extension must change for Solr.
 5. For a new implementation, an incremental approach seems best: deploy Solr for smaller wikis first, and apply what we learn there to the larger wikis.
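To make point 1 concrete, here is a minimal sketch of what a Solr full-text query looks like on the wire: a plain HTTP GET against a core's /select handler, with the query expressed in Solr query syntax. The host and core names are placeholders, since the real deployment layout is still undecided.

```python
from urllib.parse import urlencode

def solr_select_url(host, core, query, rows=10):
    """Build a Solr /select URL for a basic full-text query.

    The host and core names are hypothetical; they stand in for
    whatever layout the eventual deployment uses.
    """
    params = {
        "q": query,    # Solr query syntax, e.g. 'title:foo AND text:bar'
        "wt": "json",  # response writer (output format)
        "rows": rows,  # number of results to return
    }
    return "http://%s/solr/%s/select?%s" % (host, core, urlencode(params))

# Example: a fielded title search, roughly analogous to lsearchd's
# intitle: mode
url = solr_select_url("solr.example.org", "enwiki", "title:Amsterdam")
```

Mapping each lsearchd search mode onto queries of this shape (or deciding that a mode needs its own index or request handler) is essentially the work described in point 3.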

Requirements

 * A solid PHP library
 * Translation memory and GeoData both use Solarium, which is widely used and very robust.
 * The Solr library in PECL is poorly maintained and incomplete.

GeoData
The index is relatively small (so there is no need to make it distributed), but it requires a lot of computational power to work with. Full-text search is not currently used. Currently, data from all the wikis is stored in the same core; in the future we will need to split the data across many cores (the puppet changes for using multiple cores with a shared configuration/schema are here; they need more work).
 * Load expectations: unclear, but load will be high if we start using it heavily, e.g. for map display.
 * Backups: not really needed. If the master is down, just switch to a slave; if all servers are down, reindexing from scratch is quick.
 * Note: because GeoData's schema is very stripped-down, /admin/ping doesn't work. This should be remembered if someone wants to rewrite the current monitoring.
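Since /admin/ping is unavailable on the GeoData core, one option for a rewritten monitor is to probe the core with a cheap select query instead and check the status field in the JSON response header. This is a sketch, not the current monitoring; the host and core names are placeholders.

```python
import json
from urllib.parse import urlencode

def healthcheck_url(host, core):
    # /admin/ping requires a ping request handler in solrconfig.xml,
    # which GeoData's stripped-down configuration lacks. A match-all
    # query returning zero rows is a cheap substitute probe.
    params = {"q": "*:*", "rows": 0, "wt": "json"}
    return "http://%s/solr/%s/select?%s" % (host, core, urlencode(params))

def is_healthy(response_body):
    """Treat status 0 in the responseHeader as healthy."""
    try:
        doc = json.loads(response_body)
        return doc.get("responseHeader", {}).get("status") == 0
    except ValueError:
        # Malformed/non-JSON response: the server is not healthy.
        return False
```

A monitoring check would fetch `healthcheck_url(...)` and alert when `is_healthy` returns False or the request times out.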

Nice to haves

 * A pony

Must

 * Handle "List Redirects"
 * Give ElasticSearch a shot.
   * Search redirects to a page somehow.
   * Currently it looks like they are indexed as a separate field with highlighting, etc.
   * Nik is on the case!
 * Use PoolCounter for searches.


 * Package Solr 4 for installation.
 * Secure Solr to only listen to localhost for administrative functions.
 * Puppetize installation of Solr 4.
 * Build Labs Solr machines out of puppet.
 * Plan machines running in beta.
 * Build puppet configuration for machines.
 * Puppetize zookeeper.
   * Work with Analytics on this; the initial discussion suggested sharing a ZK cluster.


 * Write safe scripts around creating, deleting, and updating shards.
 * Write scripts to update the config.


 * Setup enwiki in labs and play with it.
   * Nik is restoring enwiki right now. It'll be a process, though.

Should

 * Puppetize JMXTrans for Solr

Done

 * Package JMXTrans
 * Puppetize JMXTrans
 * Pool Counter for Solr Updates

Rejected

 * Caching results from Solr.
   * We'll wait and see if we need this.
   * According to the mailing list, folks tend to cache Solr using Varnish. Luckily for us, we understand and like Varnish.

Documents

 * Search documentation on Wikitech: Search
 * Ram's setup instructions: wikitech:User:Ram/Search
 * Some notes from Brion in 2008
 * The MWSearch extension provides a SearchEngine subclass which contacts Wikimedia's Lucene-based search server. This replaces the older LuceneSearch extension which reimplemented the entire Special:Search page.
 * /2013-02 discussion - Discussion with Rainman about how the current system works