Help:CirrusSearch

Historically, MediaWiki used and extended the Lucene search engine. With CirrusSearch, it instead wraps Elasticsearch (itself built on Lucene), by way of the Elastica extension. CirrusSearch became available in June 2013. The Wikimedia Foundation migrated from MWSearch to CirrusSearch in late 2014 because CirrusSearch makes several key advances.
 * Faster updates to the search indexes: changes to articles are reflected in search results much more quickly.
 * Search index for templates: the words a template renders, and the subtemplates it names, are indexed on each page that transcludes it.
 * Better support for searching in different languages.
 * Regular expression searches.

There are several new search parameters. Some are simply welcome improvements, but others add powerful new functionality that calls for covering the core search concepts of an "indexed search" and "page ranking".

If your question is not answered here, the authors of this page are currently offering to answer questions or provide queries on the talk page.

Full text search
A "full text search" is an "indexed search". All pages are stored in the wiki database, and all the words in them are stored in the search database, which is an index to the full text of the wiki. Every word is indexed to the list of pages where it is found, so a search can be as fast as pulling a single-record. Furthermore, the search index is updated within seconds of any change on the wiki.

There are actually many indexes on the "full text" of the wiki, to facilitate the many types of searches needed. The full wikitext is parsed into many special-purpose indexes, each handling the wikitext in whatever way optimizes its use.

For example, "auxiliary" text includes hatnotes, captions, the table of contents, and any wikitext classed by the HTML attribute class=searchaux. "Lead-in" text is the wikitext between the top of the page and the first heading. The "category" text indexes the listings at the bottom of the page. Auxiliary text is weighted less, and lead-in text is weighted more. Templates are now indexed: if the transcluded words of a template change, then all the pages that transclude it are updated (this can take a while, depending on the job queue). Not just the visible words, but also the subtemplates, are indexed per page using them. Documents such as PDFs stored in the File/Media namespace are now indexed, and thousands of formats are recognized.

There is support for dozens of languages, but support for all languages is wanted. There is a list of currently supported languages at elasticsearch.org; see their documentation on contributing to submit requests or patches.

In any case, the end-user query is optimized, and post-processing is run on the results. Various search parameters and CirrusSearch syntax take advantage of the indexes and the post-processing. The resulting titles are weighted by relevance and then processed 20 at a time: snippets are generated, with the matching search terms highlighted in bold text.

Search results are often accompanied by preliminary reports, including Did you mean (spelling correction) and, when no results would otherwise be found, Showing results for (corrected query). Search instead for (your query).

Search features also include
 * Sorting navigation suggestions by the number of incoming links.
 * Starting a query with a tilde ~ character disables navigation suggestions in a way that preserves page ranking.
 * Normalizing (or "folding") characters, such as accented characters, into plain keyboard characters.
 * Highlighting matched words and phrases in bold on the search results page.

Words, phrases, and modifiers
The basic search term is a word, or a "phrase in quotes". Search recognizes a "word" to be
 * a string of digits
 * a string of letters
 * substrings of letters or digits, such as in txt2regex
 * substrings inside a compoundName using camelCase

A given word or phrase matches all content rendered on the page, while the insource parameter matches all wikitext. A search parameter is also given a word or phrase, but it can interpret it in its own way and search for it in its own index.

Spacing between words, phrases, and parameters is whitespace or, to use a more descriptive term, "greyspace". Greyspace is any string of spaces or non-alphanumeric characters: ~!@#$%^&*_+-={}|[]\:";'<>?,./. Greyspace is interpreted as a word boundary; ignoring greyspace is how indexes are made and how queries are interpreted. This fact will become useful later. The exceptions are the modifier characters, the colon, and the comma, but these all depend on syntax; the same characters are otherwise greyspace.
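As an illustration of the greyspace rule (a model of the described behaviour, not the actual CirrusSearch analyzer), tokenization can be sketched as a split on runs of non-alphanumeric characters:

```python
import re

# Sketch of "greyspace" tokenization: any run of characters that is
# not a letter or digit is treated as a word boundary and discarded.
def greyspace_split(text):
    return [t for t in re.split(r"[^0-9A-Za-z]+", text) if t]

print(greyspace_split("parser_function"))  # ['parser', 'function']
print(greyspace_split("e.g. ~50%+"))       # ['e', 'g', '50']
```

This is why parser_function, parser-function, and "parser function" all index the same two words.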

The modifiers are ~ * ? - " !. Depending on the syntax, they can apply to a word or phrase, a parameter, or an entire query. The word and phrase modifiers drive the wildcard, proximity, and fuzzy searches. Each parameter can have its own modifiers, but in general:
 * A fuzzy-word or fuzzy-phrase search suffixes a tilde ~ character (and a number telling the degree).
 * A wildcard character inside a word can be a question mark ? for one character or an asterisk * for more.
 * Truth-logic AND and OR are available, but not around parameters.
 * Truth-logic can prefix a - or ! to invert the usual meaning of a term from "match" to "exclude".
 * Quotes around words mark an "exact phrase" search.
 * Stemming is automatic but can be turned off by using an "exact phrase".
 * A string of words with a grey_space_join is an aggressive phrase search with the following extra properties (described further below): greyspace matches greyspace, camelCase, or txt2number cases, and the words are stemmed.

Either an "exact phrase" or an insource parameter interprets an unspaced colon : character as a letter. In other words, in that case they do not match "in", "this", or "word" separately; they can only find the one "word" with the colons intact. A similar exception applies to the comma character inside a number. Words are just "letters", "numbers", or a combination of the two, and case does not matter.

A default word search uses whitespace and is aggressive with stemming; when the same words are joined by greyspace characters, the search is aggressive with phrases and substrings. Substrings are defined by a change of letter case or a transition to or from a number. When common words like "of" or "the" are included in a greyspace-connected phrase, they are ignored, so as to match more aggressively. A phrase_by_greyspace term, a camelCase term, and a txt2number term match the signified words interchangeably; you can use any form. They simply provide for the case when these terms are known. Like the rest of search, substring "words" are case-insensitive. By comparison, the "exact phrase" finds whole words as defined by greyspace, and ignores numeric and letter-case transitions, and stemming.
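The subword rule above can be modeled with a short regular expression. This is an illustrative sketch only; the real index is built by Elasticsearch analyzers:

```python
import re

# Sketch of subword splitting: a change of letter case or a
# letter/digit transition starts a new subword, so "txt2regex"
# yields "txt", "2", "regex" and "camelCase" yields "camel", "case".
def subwords(word):
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|[0-9]+", word)
    return [p.lower() for p in parts]  # subwords are case-insensitive

print(subwords("txt2regex"))  # ['txt', '2', 'regex']
print(subwords("camelCase"))  # ['camel', 'case']
```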

For example:
 * parser_function -"parser function" finds joined forms such as parser_function or parserFunction, while excluding the exact two-word phrase.
 * Plan9 or Plan_9 finds any of Plan9, Plan 9, plan_9, or plan-9, because greyspace, camelCase, and letter-number transitions match interchangeably.
 * "plan9" only finds plan9 (case-insensitive).
 * Plan*9 uses the wildcard, matching words such as Plan9 or Plan29.

The wildcard * character matches within a rendered word. After one or more letters, it matches a sequence of letters or digits. After one or more digits, it matches a sequence of zero or more numbers; capital letters; ordinal suffixes (st, nd, rd); time abbreviations (am or pm); or parts of decimal numbers.
 * The comma is considered part of a number, but a decimal point is considered greyspace.
 * Inside an "exact phrase" it matches only word stems or compound words.
 * It can be considered an alternative to both stemming and regex. For example, "word1 word2*" applies the wildcard to word2, but not to word1.
 * It works for word, phrase, and insource searches, but not in the intitle parameter.
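The basic semantics of ? and * can be illustrated with Python's fnmatch module. This is only a stand-in for the matching behaviour; CirrusSearch applies wildcards within indexed words:

```python
from fnmatch import fnmatchcase

# Invented word list for illustration. "?" matches exactly one
# character; "*" matches any run of characters, including none.
words = ["plan9", "plan99", "plain9"]

def wildcard_matches(pattern):
    return [w for w in words if fnmatchcase(w, pattern)]

print(wildcard_matches("plan*9"))  # ['plan9', 'plan99']
print(wildcard_matches("pla?n9"))  # ['plain9']
```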

Putting a tilde ~ character after a word or phrase activates fuzziness. For an "exact phrase" this is termed a proximity search, because extra words are allowed to fit into the phrase, for example "exact one two phrase"~2; but for a word it means extra or changed characters. For an "exact phrase", fuzziness requires a whole number telling how many extra words to fit in, but for a word, fuzziness is a decimal fraction, defaulting to word~0.5 (word~.5). For a proximity phrase a large number can be used, but it is an "expensive" search. For a word, word~.1 is most fuzzy, word~.9 is least fuzzy, and word~1 is not fuzzy at all.
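Word fuzziness is essentially tolerance of edit distance (inserted, deleted, or substituted characters). The exact mapping from the ~ number to a distance is internal to Elasticsearch, so the following only sketches the underlying idea:

```python
# Classic dynamic-programming edit distance (Levenshtein).
# A fuzzy word search tolerates words within a small distance
# of the query term.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(edit_distance("cirrus", "cirus"))   # 1: one deleted letter
print(edit_distance("search", "serach"))  # 2: two substituted letters
```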

For the closeness value of words given in right-to-left order, count and discard all the extra words, then add twice the total count of remaining words minus one (in other words, add twice the number of segments). For the full proximity algorithm, see the Elasticsearch documentation.

An explicit AND is required between two phrases because of the "inner" quotation marks.

Quotes turn on exact term matching. You can append a ~ to the closing quote to go back to the more aggressive matcher you know and love.

Insource
The fastest searches use parameters to target precise search domains and indexes, so they have fewer pages to process. A query is usually refined iteratively on the search results page. During refinement, any regexp terms are processed last, and slowly, because they are not indexed searches.

Insource with the plain argument is the only way to find wikitext that is not shown on the rendered page (and so is not findable with a plain search) yet is shown while editing the "source" wikitext. It can find any word rendered on a page. It can find any phrase rendered on a page, with the rare exception of when there is a word in the wikitext between the words on the page; this happens inside template arguments, external links, wikilinks, tags, and so on.

Both versions of insource are for refining the usefulness of plain searches.
 * Neither version finds things "sourced" by a transclusion.
 * Neither version applies stemming; they want the fewest results, not the most.
 * Neither version offers proximity searches; they just scan character sequences.

Insource with the plain argument, insource:word or insource:"phrase", treats the entire set of non-alphanumeric characters as whitespace.

Insource with the slash-delimited argument, insource:/regexp/, is the regular expression version, and it is more subtle. When refining searches, a regexp is the only way to match the markup itself, and it can go places an indexed search never can; but regexps are intended for use only after wildcard and proximity searches are considered, and as a last resort.

The regexp version exists in an entirely different class of search tool. See below.

Prefix and namespace
One namespace can be specified at the beginning of a search. Two or more namespaces may be set from the search results page, Special:Search, in the Advanced pane of the search bar. Furthermore, this search domain "profile" can be set and remembered as a user preference there. Setting a namespace in the search box overrides all search bar settings or indications.

Enter a namespace name followed by a colon, or enter a lone colon : for the default namespace. Namespace aliases are accepted.

When the File namespace is involved, a namespace modifier has an effect; otherwise it is ignored.

You can now use an interwiki prefix as a namespace to search other projects.

The prefix syntax in its current form is relied upon for a great deal of functionality so it's been recreated as exactly as possible.

Note that the old rule of having to put prefix: at the end of the query still applies.

Prefix and namespace are used to set the initial search domain, but each is also a query. Like prefix, namespace can run alone; it will return the top twenty pages and show the total number of pages.

Filters
Filters are required to accompany a bare regexp search.

Any word or phrase is a filter, because a filter returns a yes/no answer for every page in its given search domain. If it can run standalone, a filter is also a query.

Intitle and incategory

 * intitle:foo
 * Find articles whose title contains foo. Stemming is enabled for foo.
 * intitle:"fine line"
 * Find articles whose title contains fine then line. Stemming is enabled. Matches The finest (lines) but not The finest ever lines.
 * intitle:foo bar
 * Find articles whose title contains foo and whose title or text contains bar.
 * -intitle:foo bar
 * Find articles whose title does not contain foo and whose title or text contains bar.
 * intitle: foo bar
 * Syntax error; devolves into searching for articles whose title or text contains intitle:, foo, and bar.


 * incategory:Music
 * Find articles that are in Category:Music
 * incategory:"music history"
 * Find articles that are in Category:Music_history
 * incategory:"musicals" incategory:"1920"
 * Find articles that are in both Category:Musicals and Category:1920
 * incategory:Felis_silvestris_catus|Dogs
 * Find articles that are either in category:Felis_silvestris_catus or in Category:Dogs
 * -incategory:"musicals" incategory:"1920"
 * Find articles that are not in Category:Musicals but are in Category:1920
 * cow*
 * Find articles whose title or text contains words that start with cow

Linksto

 * linksto:Help:CirrusSearch
 * Find articles that link to the page Help:CirrusSearch
 * -linksto:Help:CirrusSearch CirrusSearch
 * Find articles that mention CirrusSearch but do not link to the page Help:CirrusSearch

Hastemplate
You can find pages that use a certain template by adding the hastemplate: filter to the search. The usual "syntactic sugar" of template calls is provided for: the lenient pagename and fullpagename capitalization works, and the main-namespace abbreviation ":" works. For example, to find which pages transclude Quality image, the full search (in all your preferred namespaces) can be hastemplate:"Quality image". You can omit the quotes if the template title does not contain a space. Negating the filter with - keeps only the pages that do not contain that template.

For wikitext that calls a template directly, you can use insource:, but hastemplate: searches the post-expansion inclusions, so it can find a template acting only temporarily as a "secondary template" or "meta-template": one seen in neither the source nor the content, included only as a helper to another template that produces the final content. The relevant philosophy here is that all content from a template is reflected in search results.

Page weighting
Weighting determines snippets, suggestions, and page relevance. The normal weight is one; additional weighting is given through multipliers.

If the query is just words, pages that match them in order are given a boost. If you add any explicit phrase to your search, or make certain other additions, this "prefer phrase" feature is not applied.

Morelike
 * morelike:page
 * Find articles whose text is most similar to the text of the given page. Several page titles may be given, separated by pipe | characters; for example, naming a few articles about stinging insects finds more such articles.

The morelike query works by choosing a set of words in the input articles and running a query with the chosen words. You can tune the way it works by adding the following parameters to the search results URL. These settings can be made persistent by overriding the relevant system message (see Help:System message).
 * : Minimum number of documents (per shard) that need a term for it to be considered.
 * : Maximum number of documents (per shard) that have a term for it to be considered.
 * : Maximum number of terms to be considered.
 * : Minimum number of times the term appears in the input document to be considered. For small fields this value should be 1.
 * : Minimum length of a term to be considered. Defaults to 0.
 * : The maximum word length above which words will be ignored. Defaults to unbounded (0).
 * (comma-separated list of values): The fields to use.
 * : Use only the field data; by default, the system extracts the content of the field to build the query.
 * : The percentage of terms to match on. Defaults to 0.3 (30 percent).

Prefer-recent
You can give recently edited articles a boost in the search results by adding prefer-recent: anywhere in the query. By default this scales 60% of the score exponentially with the time since the last edit, using a half-life of 160 days. If you are interested in the last week, use a half-life of 7 days instead: articles older than seven days are then boosted half as much, articles older than 14 days half as much again, and so on. The boost is more than the usual multiplier; it is exponential, and the factor in the exponent is the time since the last edit.

The behavior can be modified with a comma-separated pair of numbers: prefer-recent:proportion_of_score_to_scale,half_life_in_days. The half-life works even when very small; it has been tested around .0001 days, which is 8.64 seconds.
 * Proportion_of_score_to_scale must be a number between 0 and 1 inclusive.
 * Half_life_in_days must be greater than 0 and may have a decimal point.
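The scoring effect described above can be sketched as follows. The formula is inferred from the description (a proportion of the score halves every half-life), not taken from the CirrusSearch source:

```python
# Sketch of the prefer-recent boost: `proportion` of the score decays
# exponentially with days since the last edit, halving every
# `half_life` days; the remaining (1 - proportion) is unscaled.
def recency_multiplier(days_since_edit, proportion=0.6, half_life=160):
    decayed = proportion * 0.5 ** (days_since_edit / half_life)
    return (1 - proportion) + decayed

print(round(recency_multiplier(0, 0.6, 7), 2))   # 1.0  (just edited)
print(round(recency_multiplier(7, 0.6, 7), 2))   # 0.7  (one half-life)
print(round(recency_multiplier(14, 0.6, 7), 2))  # 0.55 (two half-lives)
```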

This will eventually be on by default for Wikinews, but there is no reason why you can't activate it in any of your searches.

Boost-templates
You can boost pages' scores based on the templates they contain. This can be done directly in the search via the boost-templates:"" term, or you can set a default for all searches via a system message; the query term replaces the contents of the message if specified. The syntax is a bit funky but was chosen for simplicity. Some examples:


 * Find files in the China category sorting quality images first.


 * Find files in the China category sorting quality images first and low quality images last.


 * Find files about popcorn sorting quality images first and low quality images last. Remember that through the use of the  message this can be reduced to just.

Don't try to add decimal points to the percentages; they don't work, and search scoring is such that they are unlikely to matter much.

A word of warning about boost-templates: if you add really big or really small percentages, they can poison the full-text scoring. Imagine, for example, that enwiki boosted featured articles by a million percent. Then searches for terms mentioned in featured articles would find the featured articles before exact title matches of the terms. Phrase matching would be similarly blown away, so a search for the words of a title would find a featured article with those words scattered throughout it instead of the article itself.

Sorry for the inconsistent hyphenation in the name. Sorry again, but the quotes are required on this one. Sorry also for the funky syntax. Sorry we don't try to emulate the template transclusion syntax like we do with hastemplate:.

Regular expression searches
A basic indexed search finds words rendered visible on a page. Hyphenation, punctuation marks, brackets, slashes, and other math and computing symbols are merely boundaries for the words; it is not possible to include them in an indexed search.

A regexp search is not an indexed search; it scans the text of every page in its search domain, so a query that takes too long probably blocks other regexp users. Regexp searches return much, much faster when you limit the regexp search domain to the results of one or more index-based searches.

Warning: Do not run a bare insource:/regexp/ search. It will probably time out after 20 seconds anyway, while blocking responsible users.

An "exact string" regexp search is the most basic kind: simply "quote" the entire regexp, or "backslash-escape" all non-alphanumeric characters in the string. All regexp searches also require that the user develop a simple filter to generate the search domain for the regex engine to search.

The last example works from a link on a page, but {{FULLPAGENAME}} doesn't function in the search box. For example:
 * [[Special:Search/insource:/regex/ prefix:{{FULLPAGENAME}}]] finds the term regex on this page.

Any search with no namespace (or prefix) specified searches your default search domain, settable on any search results page, i.e. at Special:Search. Power users commonly reset the default search domain to All namespaces, i.e. the entire wiki; but if a bare regexp search runs against that on a large wiki, it will probably time out before completing.

A regexp search actually scours each page in the search domain character by character. By contrast, an indexed search queries a few records from a database maintained separately from the wiki database, and provides nearly instant results. So when using insource:// (a regexp of any kind), consider adding other search terms that will limit the regexp search domain as much as possible. There are many search terms that use an index and thus instantly provide a more refined search domain for the /regexp/. In order of general effectiveness:


 * insource:"" with quotation marks, duplicating the regexp except without the slashes or escape characters, is ideal.
 * intitle, incategory, and linksto are excellent filters.
 * hastemplate: is a very good filter.
 * "word1 word2 word3", with or without the quotation marks, is good.
 * namespace: is practically useless, but may enable a slow regexp search to complete.
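The point of filtering first can be sketched in Python, with invented page data. An index-like lookup shrinks the candidate set before the expensive character-by-character regexp scan runs:

```python
import re

# Invented sample pages for illustration.
pages = {
    "Pi": "The value of pi is 3.14159",
    "E": "The value of e is 2.71828",
    "Cat": "Cats are mammals",
}

def search(word_filter, pattern):
    # Cheap index-like filter first ...
    candidates = {t: x for t, x in pages.items() if word_filter in x.lower()}
    # ... then the regexp scans only the survivors.
    return sorted(t for t, x in candidates.items() if re.search(pattern, x))

print(search("value", r"3\.14"))  # ['Pi']
```

On a real wiki the filter eliminates millions of pages, which is why an accompanying indexed term makes the difference between a fast search and a timeout.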

The prefix: operator is especially useful with {{FULLPAGENAME}} in a search template, a search link, or an input box, because it automatically searches all subpages. To develop a new regexp, or refine a complex one, test it on a page with a sample of the target data.

Search terms that do not increase the efficiency of a regexp search are the page-scoring operators: morelike, boost-template, and prefer-recent.

Metacharacters
This section covers how to escape the metacharacters used in regexp searches. For the actual meaning of the metacharacters, see the explanation of the syntax.

The use of an exact string requires a regexp, and a regexp term obligates you to limit the search: always add another term, and never search a bare regexp. Start by noting the number of pages in a preliminary search before committing to the exact-string search; querying with an exact string requires a filtered search domain.

For example, when refining with an exact string, adding it as a single regexp term to an ongoing search lets you see the limited number of pages the regexp must crawl.
 * To gauge a namespace, query with a single term that is the namespace name alone; this lists the number of pages in that namespace.
 * When starting out to find again what you may have seen, like "wiki-link" or "(trans[in]clusion)", start with namespace and insource filters.
 * Refining an ongoing search with what you want to see, like "2 + 2 = 4" or "site.org", is the ideal use of a regexp.

You can start out intending an exact-string search, but keep in mind:
 * A regexp searches only the wikitext, not the rendered text, so there are some differences around the markup, and even the number of space characters must match precisely.
 * You are obligated to supply an accompanying filter.
 * You must learn how to escape regexp metacharacters.

There are two ways to escape metacharacters; both are useful at times, and they are sometimes concatenated side by side when escaping a string.
 * Backslash-escape one of them: \char. insource:/regexp/ uses slashes to delimit the regexp, so /reg/exp/ is ambiguous and you must write /reg\/exp/.
 * Put a string of them in double quotes: "string". Because escaping a character that needs no escape can't hurt, you can quote any characters along with any possible metacharacters among them. Escaping with quotes is cleaner.
 * You can't mix the methods, but you can concatenate them.

Double-quote escaping, using insource:/"regexp"/, is an easy way to search for many kinds of strings, but you can't backslash-escape anything inside a double-quoted escape: a backslash there is matched literally, which is probably not what you wanted.

Backslash-escaping, using insource:/regexp/ alone, allows escaping the " and / delimiters, but requires taking metacharacters into account and escaping any that occur:
 * To match a / delimiter character, use \/.
 * To match a " delimiter character, use \".

The simplest algorithm to create a basic string-finding expression using insource:/"regexp"/ need not take metacharacters into account, except for the " and / characters:
 * 1) Write the string out. (The /" delimiters "/ are not shown.)
 * 2) Replace each " with "\"" (stop the previous double quote, concatenate an escaped quote, restart the quote).
 * 3) Replace each / with "\/" (stop, concatenate, restart).
 * 4) The result shows concatenation of the two methods.
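The four steps above can be sketched as a small function. The function name is invented, and this models only the described replace-and-concatenate algorithm:

```python
# Sketch of the quoting algorithm: inside insource:/"..."/ only the
# " and / delimiters need handling. Each is replaced by: close the
# quote, concatenate a backslash-escaped copy, reopen the quote.
def escape_for_quoted_regexp(literal):
    body = literal.replace('"', '"\\""').replace("/", '"\\/"')
    return 'insource:/"' + body + '"/'

print(escape_for_quoted_regexp('a"b'))  # insource:/"a"\""b"/
print(escape_for_quoted_regexp("a/b"))  # insource:/"a"\/"b"/
```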

The square-bracket notation for creating your own character class also escapes its metacharacters. To target a literal right square bracket in your character-class pattern, backslash-escape it; otherwise it is interpreted as the closing delimiter of the character class. A right square bracket in the first position of a character class is also treated as a literal. Inside the delimiting square brackets of a character class, the dash character also has a special meaning (range), but it too can be included literally in the class in the same way as the right square bracket. For example, both [-\].] and [\]\-.] target a character that is either a dash, a right square bracket, or a dot.

For general examples using metacharacters:
 * insource:/2\+2=4/ matches "2+2=4", with zero spaces between the characters.
 * insource:/2 ?\+ ?2 ?= ?4/ matches with zero or one space in between. The equals sign = is not a metacharacter, but the plus sign + is.

There are some notable differences from standard regex metacharacters:
 * The dot . metacharacter stands for any character, including a newline, so .* matches across lines.
 * The number sign # has a special meaning and must be escaped.
 * The ^ and $ anchors are not needed. Where "grep" is "global per line, regular expression, print each line", each insource:// is "global per document, regular expression, list each document in the search results".
 * Angle brackets support a multi-digit numeric range, like [0-9] does, but without regard to the number of character positions or the range in each position, so <9-10> works, and even <1-111> works.

Advanced example
For example, metacharacters can find uses of a template called Val that have, inside the template call, an unnamed parameter containing a possibly signed three- to four-digit number, possibly surrounded by space characters, AND, on the same page, inside a Val template call, a named argument fmt=commas with any allowable spaces around it (it could be the same template call or a separate one):



It is fast because it uses two filters, so every page the regexp crawls has the highest possible potential to match. It searches the entire wiki (assuming your search domain is set to all namespaces), because it specifies no namespace or prefix.