Extension:Lucene-search

Lucene-search is a search engine back end for large MediaWiki websites, and is the search engine used by Wikimedia wikis. (Smaller sites may want to consider SphinxSearch.) It extends the Apache Lucene search API with page ranking based on the number of backlinks, distributed searching and indexing, parsing of wiki text, incremental updates, and more.

Lucene-search requires a front-end extension to fetch the results from the search engine:
 * Extension:MWSearch (for MediaWiki 1.13+)
 * Extension:LuceneSearch (for MediaWiki versions prior to 1.13)

Versions

 * 2.1 (development) - used on all Wikimedia Foundation wikis. Features:
 ** Result highlighting
 ** "Did you mean..." style query correction (spell checking)
 ** Advanced ranking based on term proximity, relatedness and anchor text

 * 2.0.2 (stable) - see Extension:Lucene-search/2.0 docs. Features:
 ** Distributed search
 ** Scalability
 ** Basic ranking
 ** Accentless search

The following documentation is for the latest development version (2.1). The old documentation is at Extension:lucene-search/2.0 docs.

Requirements

 * Linux
 * Java 6+ JDK (OpenJDK or Sun)
 * Apache Ant 1.6 (for building from source)
 * Rsync (required for distributed architecture)
 * Subversion client

Note to Windows users: from version 2.0 onward, the LSearch daemon does not support the Windows platform (it uses hard and symbolic file links). You can still use the old daemon written in C#; see the installation instructions.

Single Host Setup (MediaWiki & Lucene-Search On The Same Host)
1. If using MediaWiki 1.17 or earlier, ensure that AdminSettings.php is set up: rename AdminSettings.sample to AdminSettings.php and modify it so that it contains:

$wgDBadminuser = "database_admin_username";
$wgDBadminpassword = "database_admin_password";

2. Get Lucene-search:
 * Download the binary release and unpack it, or
 * download the source from Subversion and build the jar (recommended) by running:

ant

3. Generate configuration files by running:

./configure 


 * This script examines your MediaWiki installation and generates configuration files to match it. Before running configure, you may customize some options in template/simple/lsearch-global.conf, for example the language option. These options are explained below.

4. If everything went without exception, build indexes

./build


 * This will build search, highlight and spellcheck indexes from an XML database dump.
 * For small wikis, just put this script into a daily cron job and your installation is done (i.e. skip the ./update step below, since small wikis don't need the OAIRepository extension).
 * For larger wikis, install the Extension:OAIRepository MediaWiki extension and, after building the initial index, use the incremental updater:

./update


 * This will fetch the latest updates from your wiki and update the various indexes with search, page-link and spell-check data. Put this into a daily cron job to keep the indexes up to date.
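For example, the daily update can be scheduled with a crontab entry like the following (a sketch only: the /opt/lucene-search install path and the log location are assumptions about your setup):

```
# m  h  dom mon dow  command
# run the incremental updater every night at 03:30
30   3   *   *   *   cd /opt/lucene-search && ./update >> /var/log/lsearch-update.log 2>&1
```

Small wikis that skip ./update can schedule ./build in the same way instead.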

5. Start the daemon. Do this by running:

./lsearchd


 * Note: the Lucene-search daemon must be running for searching to work, and the package does not install an init.d script to start it automatically on boot. As noted in this post, an init.d script can be created manually and added to the startup sequence with a separate command. This should not be version specific, but it has been tested to work on Ubuntu 10.04 and 12.04 LTS. Alternatively, an rc.local entry (tested on Ubuntu 12.04 LTS) can be used.
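Such an rc.local entry might look like the following (a sketch only: the /opt/lucene-search install path and the lsearch service user are assumptions about your setup, not something the package creates):

```shell
# in /etc/rc.local, before the final "exit 0":
# start the Lucene-search daemon at boot as an unprivileged user
cd /opt/lucene-search
su lsearch -c './lsearchd > /var/log/lsearchd.log 2>&1' &
```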


 * Use the optional command line parameter -configfile to specify the path to the lsearch.conf file you wish to use. This is handy when invoking lsearchd by its absolute path:

/opt/lucene-search/lsearchd -configfile /opt/lucene-search/lsearch.conf

6. Install Extension:MWSearch and make sure to set $wgLuceneSearchVersion = 2.1.
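Put together, the MWSearch side of a single-host setup might look like the following in LocalSettings.php (a sketch: the extension path and the localhost value are assumptions for the single-host case; see the MWSearch documentation for details):

```php
require_once( "$IP/extensions/MWSearch/MWSearch.php" );
$wgSearchType = 'LuceneSearch';  // hand searches to the MWSearch back end
$wgLuceneHost = '127.0.0.1';     // host running lsearchd (same host here)
$wgLucenePort = 8123;            // the daemon's default search port
$wgLuceneSearchVersion = 2.1;    // match the 2.1 daemon
```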

7. Once the indexes have been built and MWSearch installed, run the daemon:

./lsearchd

The daemon listens on port 8123 for incoming search requests from MediaWiki, and on port 8321 for incoming incremental updates to the index. The MWSearch extension will route all search requests to this daemon.

You can quickly test the daemon by browsing to an HTTP URL of the form http://<host>:8123/search/<dbname>/<query>.

Global configuration is done in the lsearch-global.conf file. Each of the following sections needs to be updated to use the correct host name and database.

[Database] Section

 * List the databases to be indexed in the [Database] section. A database can be specified:
 * 1) by name, i.e. the name set in $wgDBname in your MediaWiki LocalSettings.php file, or
 * 2) via a {file://<path>} URL pointing to a file containing a list of database names.


 * After each database name, the index architecture and its parameters are given as a parenthesized tuple:
 * 1) the first value is the index architecture, one of:
 * ** single - the index is not distributed.
 * ** mainsplit - a two-part index: mainspace with the [0] namespace, restspace with all other namespaces. (recommended)
 * ** split - the index is split into a number of parts, with documents assigned to parts at random.
 * ** nssplit - the index is split by namespace list.
 * 2) optimize: true to optimize while indexing, false to skip. (optional)
 * 3) document cache size: default is 10. (optional)
 * 4) merge factor: default is 2. (optional)
 * 5) number of index subdivisions. (required for nssplit)


 * 1) (language,en) - the default language and stemming type
 * 2) optional parameters: (warmup,NUM) - bootstrap after an index update using NUM queries. This enables a smooth transition in performance by ensuring indexes are always well cached and buffered.

An example:

[Database]
{file:///home/wikipedia/common/all.dblist} : (single,true,20,1000) (prefix) (spell,10,3)
enwiki : (nssplit,2)
enwiki : (nspart1,[0],true,20,500,2)
enwiki : (nspart2,[],true,20,500)
enwiki : (spell,40,10) (warmup,500)
wikilucene : (single) (language,en) (warmup,100)
wikidev : (single) (language,sr)
splitLucene : (nssplit,3), (nspart1,[0]), (nspart2,[4,5,12,13]), (nspart3,[])
wikilucene : (language,en) (warmup,10)


 * all the databases listed in file:///home/wikipedia/common/all.dblist should be indexed
 * wikilucene : (single) (language,en) (warmup,100) declares that wikilucene is a single (nondistributed) index and that it should use English as its default language, and thus use English stemming. The optional (warmup,100) instructs Lucene to apply 100 queries to the index when an updated version is fetched, to warm it up.
 * splitLucene : (nssplit,3) ... declares that splitLucene is a distributed index with three parts:
 ** nspart1, which will store the index for namespace 0
 ** nspart2, which will store the index for namespaces 4, 5, 12 and 13
 ** nspart3, which will store the index for the other namespaces

[Search-Group] Section
 * Map your server host name to the databases being searched and indexed. Replace the example host name (oblak) with your local host name:

oblak : wikilucene wikidev

[Index] Section
Change oblak to your host name like you did for [Search-Group].

[Namespace-Prefix] Section
Add customized user namespaces used in the wiki to this section.
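As a purely hypothetical sketch of such entries (the prefix : namespace-number syntax shown here is assumed from the other key : value sections of lsearch-global.conf; the numbers 14 and 2 are MediaWiki's standard Category and User namespaces, while 100 stands in for a custom namespace on your wiki):

```
[Namespace-Prefix]
all : <all>
category : 14
user : 2
portal : 100
```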

For other properties you can leave default values.

Incremental updates
If you feel that periodically rebuilding the index puts too much load on your database, you can use the incremental updater. It requires some additional work:
 * 1) Install the OAI Repository extension for MediaWiki. This extension enables the incremental updater to fetch the latest articles. The installation is fairly complex, but it is the most practical way to keep your index up to date without causing serious performance issues. This is what is used on Wikimedia servers.

Distributed architecture
A common distribution is the many-searchers/one-indexer approach. A quick look at the global config file (lsearch-global.conf) should make it obvious how to distribute searching: you just need to add more host : dbname mappings and start lsearchd on those hosts. However, searchers need to be able to fetch and update their indexes, so:
 * 1) set up rsyncd.conf and start the rsync daemon on the indexer host (there is a sample config file in SVN), and
 * 2) add the rsync path on the indexer host to the global configuration in the Index-Path section.
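A minimal rsyncd.conf on the indexer host might look like the following (a sketch: the module name and index path are assumptions; the sample config file in SVN is the authoritative reference):

```
# /etc/rsyncd.conf on the indexer host
# module exposing the indexes for searchers to pull
[search-indexes]
path = /opt/lucene-search/indexes
read only = yes
uid = lsearch
```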

Split index
If your index is too big to fit into memory, you might want to split it up into smaller parts. There are a couple of ways to do this. The simplest is mainsplit, which splits the index into two parts: one with all articles in the main namespace, and one for everything else. You can also use nssplit, which lets you split the index by any combination of namespaces. Finally, there is the split architecture, which randomly assigns documents to one of N index parts. From a performance viewpoint it is best to split the index by namespaces, if possible as mainsplit; this works best if we assume the user almost always wants to search only the main namespace.

If you split the index across many hosts, usage will be load-balanced: at every search, a different combination of hosts holding the required index parts is queried. The MediaWiki Lucene-search extension doesn't need to worry about this; it just has to get the request to a host that has some part of the index.

There are examples of using these index architectures in lsearch-global.conf in the package.

This extension also supports more exotic options, such as:
 * index updates with custom rotation exceptions.

Setting Up Suggestions for the Search Box
To enable this feature:

1. Modify the global settings:

[Database] yourwiki: (prefix)

2. Re-run the build script to build the prefix index as well.

3. Update the MediaWiki installation to use Lucene as the back end for prefix matches by modifying LocalSettings.php.

One thing is tricky here: do not use substitutes such as localhost or 127.0.0.1 for $wgLucenePrefixHost. This value is injected into AJAX JavaScript that is sent to your clients' browsers, and there is no way a client browser can figure out where your server is unless you tell it. So put the real IP address or host name of the server where Lucene is running.
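The resulting LocalSettings.php lines might look like this (a sketch based on the MWSearch extension's prefix-search settings; the host name is an example, not a real server):

```php
$wgEnableLucenePrefixSearch = true;           // use Lucene for search-box suggestions
$wgLucenePrefixHost = 'search.example.org';   // real, client-reachable Lucene host
```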

Performance tuning
The default values for the Lucene indexer are tuned for minimal memory usage and a minimal number of segments. However, indexing might be very slow because of this. The default is 10 buffered documents and a merge factor of 2. You might want to increase these values, for instance to 500 buffered docs and a merge factor of 20. You can do this in the global configuration, e.g. wikidb : (single,true,20,500). Beware, however, that increasing the number of buffered docs will quickly eat up heap. It's best to try different values and see which work best for your memory profile.

If you run the searcher on a multi-CPU host, you might want to adjust SearcherPool.size in the local config file. The pool size corresponds to the number of IndexSearchers per index. Set it to at least the number of CPUs, or better, the number of CPUs + 1. This prevents CPUs from blocking each other by accessing the index via a single RandomAccessFile instance.
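For example, on a search host with four CPUs, the local lsearch.conf might contain (a sketch; only the SearcherPool.size property is named above, and the value follows the CPUs + 1 rule of thumb):

```
# lsearch.conf: one IndexSearcher per CPU, plus one spare
SearcherPool.size=5
```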

FAQ
Q1. Is a single search across multiple wikis using multiple databases possible?

A1. It is not supported. A possible workaround is to dump all the wikis into a single file and index that.

Q2. If Lucene-search is being used by WMF, why isn't it in the CommonSettings or InitialiseSettings files?

A2. It is, it's just not called "Lucene-search": look at CommonSettings.php, which refers to $wmfConfigDir/lucene.php.