Extension:Lucene-search/2.0 docs

This is the installation manual for version 2.0 of the lucene-search extension.

Installing LSearch daemon

Requires: Linux, Java 5, Apache Ant 1.6, Rsync (for distributed architecture)

Note for Windows users: the LSearch daemon from version 2.0 doesn't support the Windows platform, since it uses hard and soft file links. (It should be possible to get this to work in Vista with enough fiddling...) You can still use the old daemon written in C#; installation instructions are at m:Installing lucene search.

There are a few typical installation scenarios, depending on the size of your system.

Single-host setup

In most cases a single host will be able to handle both indexing and searching. Searching is typically very memory-hungry, and it is good practice to have at least half of the index buffered in memory. If the index is more than twice the size of the available memory, you'll probably experience serious performance degradation and should consider distributing search.

Typically, the search index is around 3-5 times smaller than the corresponding XML database dump.

For easy maintenance of a distributed architecture, configuration is split into two parts: global and local. In a single-host install you still need to set up both of them:

Local configuration

  1. Obtain a copy of the lsearch daemon and unpack it in e.g. /usr/local/search/ls2/
    If you downloaded from SVN, you'll also need mwdumper.jar in e.g. /usr/local/search/ls2/lib
  2. Make a directory where the indexes will be stored, e.g. /usr/local/search/indexes
  3. Edit the lsearch.conf file:
    • MWConfig.global - the URL of the global configuration file (see below), e.g. file:///etc/lsearch-global.conf
    • MWConfig.lib - the local path to the lib directory, e.g. /usr/local/search/ls2/lib
    • Indexes.path - the base path where you want the daemon to store the indexes, e.g. /usr/local/search/indexes
    • Localization.url - the URL to the MediaWiki message files, e.g. file:///var/www/html/wiki/phase3/languages/messages
    • Logging.logconfig - the local path to the log4j configuration file, e.g. /etc/lsearch.log4j (the lsearch SVN has a sample log4j file you can use, called lsearch.log4j-example)

For other properties you can leave default values.
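Putting the example values together, the relevant lines of lsearch.conf might look like the following. This is a sketch that assumes the Java-properties syntax of the sample lsearch.conf shipped with the daemon, and the example paths used above:

  # URL of the global configuration file
  MWConfig.global=file:///etc/lsearch-global.conf
  # local path to the lib directory
  MWConfig.lib=/usr/local/search/ls2/lib
  # base path for index storage
  Indexes.path=/usr/local/search/indexes
  # URL to the MediaWiki message files
  Localization.url=file:///var/www/html/wiki/phase3/languages/messages
  # local path to the log4j configuration file
  Logging.logconfig=/etc/lsearch.log4j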

The global configuration tells the daemon about your databases and your network setup.

Global configuration

Edit the lsearch-global.conf file. Each of these sections needs to be updated to use the correct host name and database.

[Database] section
Add some databases (where <database_name> is the database name set in $wgDBname in your MediaWiki LocalSettings.php file).
[Database]
#wikilucene : (single) (language,en) (warmup,0)
#wikidev : (single) (language,sr)
#wikilucene : (nssplit,3) (nspart1,[0]) (nspart2,[4,5,12,13]) (nspart3,[])
#wikilucene : (language,en) (warmup,10)
<database_name> : (single) (language,en) (warmup,10)
wikidb : (single) (language,en) - declares that wikidb is a single (nondistributed) index, and that it should use English as its default language, and thus English stemming.
The optional (warmup,100) instructs Lucene to run 100 queries against the index when an updated version is fetched, to warm it up. This smooths the transition in performance and ensures indexes are always well cached and buffered.
Warning: Make sure there are no spaces within the arguments (e.g. (warmup,10)). Stray spaces can cause failures to create the search, snapshot, or index folders when building the index.
[Search-Group] section
Map your server hostname to the database that's being searched and indexed.
[Search-Group]
#oblak : wikilucene wikidev+
<host_name> : <database_name>
Replace <host_name> with your local host name.
Warning: don't use localhost; use your hostname exactly as it appears in the environment variable $HOSTNAME. To find this value, run echo $HOSTNAME and use whatever it returns.
[Index] section
Change oblak to your host name, as you did for [Search-Group].
[Namespace-Prefix] section
Optionally, add your custom user namespaces to this section.

For other properties you can leave default values.

Build the JAR file

Next, if you didn't download the binary release, build LuceneSearch.jar by invoking ant.
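For example, assuming the install path from above and that ant is on your PATH:

  cd /usr/local/search/ls2
  ant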

Start the Daemon

Start the daemon with ./lsearchd. Note that you need to have java in your path.
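For example, to launch the daemon from its install directory and keep it running after you log out (nohup is just one common approach; the daemon itself doesn't require it):

  cd /usr/local/search/ls2
  nohup ./lsearchd > lsearchd.out 2>&1 &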

Test the Daemon

You can simply test search results by browsing to an HTTP URL of the form:

http://<hostname>:8123/search/<database_name>/<your_test_query>

For example, http://localhost:8123/search/wikidb/hello.
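You can also test from the command line with curl; a working daemon answers with a plain-text list of search results:

  curl http://localhost:8123/search/wikidb/hello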

Building the index

The simplest way to keep the index up to date is to rebuild it periodically. You can do this from a cron job, or with a script that rebuilds the index and then sleeps for some time; see the example cron entry below.
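For instance, a crontab entry like the following would rebuild the index every night at 3 a.m. The wrapper script path is hypothetical; the script would contain the dump and import commands shown below:

  0 3 * * * /usr/local/search/rebuild-index.sh >> /var/log/lsearch-rebuild.log 2>&1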

To build the index, you will need an XML dump of the database (use dumpBackup.php). To be able to make the XML dump you need to set up AdminSettings.php. Then use the helper tool Importer to rebuild the index. Here is a sample invocation (you might want to adjust the dump file path, etc.):

  php maintenance/dumpBackup.php --current --quiet > wikidb.xml &&
  java -cp LuceneSearch.jar org.wikimedia.lsearch.importer.Importer -s wikidb.xml wikidb
Warning: If your wiki is fairly large (for example, hundreds of thousands of articles), you will probably run out of heap space while building the index unless you put something along the lines of -Xmx2048m in front of the -cp in the second line above. This tells the JVM that it can use 2048 MB (2 GB) of heap.
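With the heap option added, the import line becomes:

  java -Xmx2048m -cp LuceneSearch.jar org.wikimedia.lsearch.importer.Importer -s wikidb.xml wikidb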

The Importer will import the XML dump and make an index snapshot (the -s option). The snapshot will be picked up by the lsearch daemon (which periodically checks for index snapshots), and the working copy of the index will be updated. Indexes for the lsearch daemon are stored in standard locations: if /usr/local/search/indexes is your root index path, then indexes/snapshot will contain snapshots, indexes/search the current working copy of the index, indexes/update the previous working copies and index updates, and so on.

And that's it: if you have correctly set up the MediaWiki extension, you should be able to search and have the index updated.

Troubleshooting

Due to the several components involved and the unnecessary complexity of these instructions, getting LuceneSearch up and running can be difficult. A few notes:

  • If you have curl installed in your PHP installation, it must work in order for the script to return results. Otherwise you will get a search failure notice.
  • The database ("wikidb" in the explanations above) must match the MySQL (or other) database in which the wiki to be indexed is stored.
  • If you do get a search failure notice, check the lsearchd output. With the default log4j configuration, this appears on the console where you started the daemon. If you see error messages, including Java exceptions (ArrayIndexOutOfBoundsException, NullPointerException), carefully check all your configuration settings for inconsistencies or mistakes.
    • If you changed your $HOSTNAME, you HAVE to update lsearch-global.conf accordingly.
  • If nothing seems awry in the lsearchd output, turn on MediaWiki logging as explained at How to debug.
  • Internally, the LuceneSearch extension queries the Lucene daemon via an HTTP request. You can find the requested URL in the MediaWiki log output, or construct it manually as described above. To test that the Lucene side is working, visit that URL in a web browser. If you get a text list of search results, it's working; if not, this should show you what is going wrong.

Advanced Options

Incremental Updates

If you feel that periodically rebuilding the index puts too much load on your database, you can use the incremental updater. It requires some additional work:

  1. Install the OAI Repository extension for MediaWiki. This extension enables the incremental updater to fetch the latest articles. The installation is fairly complex, but it is the most practical way to keep your index up to date without causing serious performance issues. This is what is used on the Wikimedia servers; see the invocation sketch below.
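Once the OAI repository is set up, the index is kept current by running the IncrementalUpdater tool shipped in LuceneSearch.jar, typically on a schedule. Treat the following as a sketch: the class path is taken from the lucene-search-2 source tree and should be verified against your build:

  java -cp LuceneSearch.jar org.wikimedia.lsearch.oai.IncrementalUpdater wikidb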

Distributed Architecture

A common distribution is the many-searchers/one-indexer approach. A quick look at the global config file (lsearch-global.conf) should make it obvious how to distribute searching: you just need to add more host : dbname mappings and start lsearchd on those hosts. However, searchers need to be able to fetch and update their indexes, so:

  1. Set up rsyncd.conf and start the rsync daemon on the indexer host (there is a sample config file in SVN)
  2. Add the rsync path on the indexer host to the global configuration, in the [Index-Path] section.

Restart everything (searchers and indexer) and they should now be aware of each other; searchers will periodically check for updates to the indexes they are assigned. You need to run the Importer on the indexer host, but you can run the IncrementalUpdater on any host, since it knows from the global configuration where the indexer is.
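As an illustration, a two-searcher/one-indexer setup for wikidb might be declared like this in lsearch-global.conf. The host names are hypothetical, and the [Index-Path] line is modeled on the sample file in the package, so check that file for the exact syntax:

  [Search-Group]
  searcher1 : wikidb
  searcher2 : wikidb

  [Index]
  indexer1 : wikidb

  [Index-Path]
  <default> : /mwsearch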

Split Index

If your index is too big to fit into memory, you might want to split it into smaller parts. There are a couple of ways to do this. The simplest is a mainsplit: the index is split into two parts, one with all articles in the main namespace and one with everything else. You can also do an nssplit, which lets you split the index by any combination of namespaces. Finally, there is a split architecture which randomly assigns documents to one of N index parts. From a performance viewpoint it is best to split the index by namespaces, if possible as a mainsplit; this works well because users almost always want to search only the main namespace.

If you split the index across many hosts, usage will be load-balanced: each search will go to a different combination of hosts holding the required index parts. The MediaWiki LuceneSearch extension doesn't need to worry about this; it just has to get the request to a host that has some part of the index.

There are examples of using these index architectures in lsearch-global.conf in the package.
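For instance, building on the commented nssplit sample shown earlier, the declarations might look like this (treat the exact keywords as something to verify against the examples in your copy of lsearch-global.conf):

  # two parts: the main namespace and everything else
  wikidb : (mainsplit)
  # or three parts split by explicit namespace lists
  #wikidb : (nssplit,3) (nspart1,[0]) (nspart2,[4,5,12,13]) (nspart3,[])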

Performance Tuning

The default values for the Lucene indexer favor minimal memory usage and a minimal number of segments. However, indexing might be very slow because of this. The default is 10 buffered documents and a merge factor of 2. You might want to increase these values, for instance to 500 buffered docs and a merge factor of 20. You can do this in the global configuration, e.g. wikidb : (single,true,20,500). Beware, however, that increasing the number of buffered docs will quickly eat up heap. It's best to try out different values and see which work best for your memory profile.

If you run the searcher on a multi-CPU host, you might want to adjust SearcherPool.size in the local config file. The pool size corresponds to the number of IndexSearchers per index. Set it to at least the number of CPUs, or better, the number of CPUs + 1. This prevents CPUs from blocking each other by accessing the index through a single RandomAccessFile instance.
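For example, on a four-CPU search host the local lsearch.conf entry might read (again assuming the properties syntax of the sample file):

  SearcherPool.size=5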