Help:CirrusSearch

Historically, MediaWiki used and extended the Lucene search engine; now, with CirrusSearch, it wraps Lucene with Elasticsearch. CirrusSearch became available in July 2013, and the Wikimedia Foundation migrated from MWSearch to CirrusSearch by late 2014 because CirrusSearch makes several key advances.
 * Faster updates to the search indexes: changes to articles are reflected in search results much more quickly.
 * Search index for templates: a template's wording and its subtemplate names are attributed to each page that transcludes it.
 * Better support for searching in different languages.
 * Regular expression searches.

There are seven additional search parameters, and new syntax and arguments in the other four parameters and in the search words and phrases. To document all this exquisite control, we cover the prerequisite concepts of an "indexed search" and of "page ranking". Because of its renewed importance in regexp searches, we also reintroduce the idea of one search term creating the search domain for another term in the same query.

If your question is not answered here, please ask on the talk page.

Full text search
A "full text search" is an "indexed search". All pages are stored in the wiki database, and all the words in them are stored in the search database, which is an index to the full text of the wiki. Each word in all visible content is indexed to the list of pages where it is found, so a search for a word is as fast as looking up a single record. Furthermore, for any change in wording, the search index is updated within seconds.
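The mechanism can be sketched as an inverted index: each word maps to the set of pages containing it, so a lookup is a single record fetch regardless of wiki size. (The page names, tokenization, and functions here are simplified, hypothetical stand-ins for what Elasticsearch actually maintains.)

```python
from collections import defaultdict

def build_index(pages):
    """Build an inverted index: word -> set of page titles containing it."""
    index = defaultdict(set)
    for title, text in pages.items():
        for word in text.lower().split():
            index[word].add(title)
    return index

def search(index, word):
    """One lookup per word, independent of how many pages exist."""
    return index.get(word.lower(), set())

# Hypothetical miniature wiki.
pages = {
    "Lucene": "an open source search engine",
    "CirrusSearch": "wraps the Elasticsearch search engine",
}
index = build_index(pages)
print(search(index, "engine"))  # both pages contain "engine"
```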

There are actually many indexes on the "full text" of the wiki, to facilitate the many types of searches needed. The full wikitext is indexed into many special-purpose indexes, each parsing the wikitext in whatever way optimizes its use.

For example, "auxiliary" text includes hatnotes, captions, the ToC, and any wikitext classed by an HTML attribute class=searchaux. "Lead-in" text is the wikitext between the top of the page and the first heading. The "category" text indexes the listings at the bottom. Auxiliary text is weighted less, and lead-in text is weighted more. Templates are now indexed: if the transcluded words of a template change, then all the pages that transclude it are updated. (This can take a long while, depending on the job queue.) Not just the visible words, but also the subtemplates, are indexed per page using them. Documents such as PDFs stored in the File/Media namespace are now indexed; thousands of formats are recognized.

There is support for dozens of languages currently, but inclusion of all languages is desired. You can find a list of currently supported languages at elasticsearch.org; see their documentation on contributing to submit requests or patches.

In any case the end-user query is optimized, and post-processing is run on the results. Various search parameters and CirrusSearch syntax take advantage of the indexes and the post-processing. The resulting titles are weighted by relevance, and then processed 20 at a time: snippets are garnered, with the search terms highlighted in bold text.

Search results will often be accompanied by various preliminary reports, including Did you mean (spelling correction) and, when no results would otherwise be found, Showing results for (query correction) search instead for (your query).

Search features also include
 * Sorting navigation suggestions by the number of incoming links.
 * Starting a query with the tilde character ~ to disable navigation and suggestions in a way that also preserves page ranking.
 * Normalizing (or "folding") characters, such as accented characters, into keyboard characters.
 * Highlighting matched words and phrases in bold on the search results page.

Words, phrases, and modifiers
The basic search term is a word, or a "phrase in quotes". Search recognizes a "word" to be
 * a string of digits
 * a string of letters
 * subwords of letters or digits, such as in txt2regex
 * subwords inside a compoundName using camelCase
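A rough sketch of how such subword splitting might work (the exact tokenizer rules belong to Elasticsearch; this regex is only an approximation):

```python
import re

def subwords(token):
    """Split a token at letter/digit transitions and at lowercase-to-
    uppercase (camelCase) boundaries, lowercasing the resulting subwords."""
    parts = re.findall(r'[0-9]+|[A-Z]+(?![a-z])|[A-Z]?[a-z]+', token)
    return [p.lower() for p in parts]

print(subwords("txt2regex"))  # ['txt', '2', 'regex']
print(subwords("camelCase"))  # ['camel', 'case']
```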

A given word or phrase matches all content rendered on the page, while the insource parameter matches all wikitext. A search parameter is given a word or phrase, but can interpret it in its own way and search for it in its own index.

Spacing between words, phrases, parameters, and input to parameters can include generous instances of whitespace and greyspace characters. "Greyspace characters" are all the non-alphanumeric characters ~!@#$%^&*_+-={}|[]\:";'<>?,./ . A mixed string of greyspace and whitespace characters is "greyspace", and acts as one big word boundary. Greyspace is how indexes are made and how queries are interpreted. The few exceptions are that a non-spaced colon : character in a string of letters can be treated as a letter, and a non-spaced comma , character in a string of digits can be treated as a digit. Greyspace characters are otherwise ignored unless, due to query syntax, they act as modifier characters.
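The boundary behavior described above can be sketched as a tokenizer (an approximation; the real analyzer's rules are more involved):

```python
import re

def grey_tokens(text):
    """Split on greyspace/whitespace, but keep a non-spaced colon between
    letters, and a non-spaced comma between digits, inside the word."""
    return re.findall(r'[A-Za-z]+(?::[A-Za-z]+)*|[0-9]+(?:,[0-9]+)*', text)

print(grey_tokens("Help:CirrusSearch, 1,234 -- cat"))
# ['Help:CirrusSearch', '1,234', 'cat']
```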

The modifiers are ~ * ? - " ! . Depending on their placement in the syntax, they can apply to a word or phrase, a parameter, or an entire query. The word and phrase modifiers are the wildcard, proximity, and fuzzy searches. Each parameter can have its own modifiers, but in general:
 * A fuzzy-word or fuzzy-phrase search can suffix a tilde ~ character (and a number telling the degree).
 * A tilde ~ character prefixing a query guarantees search results instead of any possible navigation.
 * A wildcard character inside a word can be a question ? mark for one character or an asterisk * character for more.
 * Truth-logic AND and OR is available, but not around parameters.
 * A - or ! can prefix a term to invert its usual meaning from "match" to "exclude".
 * Quotes around words mark an "exact phrase" search. For parameters they are also needed to delimit multi-word input.
 * Stemming is automatic but can be turned off using an "exact phrase".

Greyspace characters can also act as a modifier. A string of words joined_by_greyspace(characters) is also a "phrase search", as are the camelCase and txt2number cases. The tolerances are progressive: each one contains and widens the tolerances of the previous one. When the common words in the periphery of a grey_space or camelCase phrase are ignored, such as the three common words in the_invisible_hand_of_a , nothing is reported. When an unknown word on the periphery is ignored, it can be any word, and a preliminary "search instead" report is issued. Parameters, such as insource, only accept an "exact phrase".
 * An "exact phrase" will only tolerate greyspace.
 * A greyspace_phrase will additionally tolerate stemming and the ignoring of common words.
 * camelCase will additionally match all lowercase.
 * A word search will additionally find the words anywhere on the page.

Note how the "exact phrase" search interpreted the non-spaced colon : character as a letter, but not the underscore _ character. In other words, given , an "exact phrase" search will not match "in", "this", or "word"; it will only find the one "word" with the colons intact. The same is true for input to an insource parameter. A similar event occurs with the comma , character inside a number. With these two exceptions in mind, plus the idea of words as subwords, described in detail next, we can say that "words" are just "letters", "numbers", or a combination of the two, and case does not matter.

The common word search employs the space character and is aggressive with stemming; when the same words are joined by greyspace characters or camelCase, the search is aggressive with phrases and subwords.

Subwords are defined by a change of letter-case, or by a transition to or from a number. When common words like "of" or "the" are included in a greyspace-phrase, they are ignored, so as to match more aggressively. A greyspace_phrase search term, a camelCase term, or a txt2number term match the signified words interchangeably; you can use any form. Thus camelcase matches camelCase because Search is not case sensitive, and camelCase matches camelcase because the subword search is more aggressive. By comparison, the "exact phrase" search is greyspace oriented and ignores numeric and letter-case transitions, and stemming.

From the table we can surmise that: parser_function -"parser function" returns only  or.

Making inquiries with numbers, we would find that
 * Plan9 or Plan_9 finds any of:,  ,  ,  ,
 * "plan9" only find  (case insensitive)
 * Plan*9 finds  or.

The wildcard * character matches within a rendered word. After one or more letters, it matches a string of letters and digits. After one or more digits, it matches a sequence of zero or more numbers; capital letters, ordinal suffixes (st, nd, rd), or time abbreviations (am or pm); or parts of decimal numbers. For the wildcard character:
 * The comma is considered part of one number, but the decimal point is considered a greyspace character, and will delimit two numbers.
 * Inside an "exact phrase" it matches extensions of a word that could have stems or compound words.
 * It may sometimes serve as an alternative to stemming or regex; or it may serve as an auxiliary, for example, "word1 word2*" finds stemming or compound words for word2, but not word1.
 * Wildcards work for word, phrase, and insource searches, but not in the intitle parameter.
 * ? can represent one letter or number; *? is also accepted, but ?* is not recognized.
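The wildcard semantics above can be sketched by translating a term into a regular expression (an approximation: the real engine works over indexed terms, and the exact character classes here are assumptions):

```python
import re

def wildcard_to_regex(term):
    """Translate a search term with * and ? into an anchored regex:
    ? is one letter or digit, * is a run of letters and digits.
    Greyspace characters are deliberately not matched by the wildcards."""
    out = []
    for ch in term:
        if ch == '*':
            out.append('[A-Za-z0-9]*')
        elif ch == '?':
            out.append('[A-Za-z0-9]')
        else:
            out.append(re.escape(ch))
    return re.compile('^' + ''.join(out) + '$', re.IGNORECASE)

print(bool(wildcard_to_regex("cow*").match("coworker")))  # True
print(bool(wildcard_to_regex("word*").match("word-s")))   # False: '-' is greyspace
```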

Putting a tilde ~ character after a word or phrase activates a fuzzy search. For a phrase it is termed a proximity search, because extra words are allowed to fit into the otherwise exact phrase. For example, "exact one two phrase"~2 matches. For a word it means extra or changed characters. For a phrase, a fuzzy search requires a whole number telling how many extra words to fit in; for a word, a fuzzy search takes a decimal fraction, defaulting to word~0.5 (word~.5), where at most two letters can be swapped, changed, or added, but never the first two letters. For a proximity phrase, a large number can be used, but that is an "expensive" (slow) search. For a word, word~.1 is most fuzzy, word~.9 is least fuzzy, and word~1 is not fuzzy at all.
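The word-fuzziness rule can be sketched with plain Levenshtein edit distance, assuming a Lucene-style similarity of 1 - distance/length (the real algorithm also protects the leading letters, which this sketch omits):

```python
def edit_distance(a, b):
    """Plain Levenshtein distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(term, candidate, similarity=0.5):
    """word~0.5 style: match when 1 - distance/length >= similarity."""
    dist = edit_distance(term.lower(), candidate.lower())
    return 1 - dist / min(len(term), len(candidate)) >= similarity

print(fuzzy_match("search", "serach"))  # True: two letters swapped
print(fuzzy_match("search", "xyzzy"))   # False
```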

For the closeness value necessary to match in reverse (right to left) order, count and discard all the extra words, then add twice the total count of remaining words minus one. (In other words, add twice the number of segments.) For the full proximity algorithm, see Elasticsearch.
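The rule can be restated as a tiny calculation (a sketch of the stated rule only, not the full Elasticsearch proximity algorithm):

```python
def reverse_order_slop(num_words, extra_words=0):
    """Closeness value needed to match a phrase in reverse order:
    keep the count of extra words, then add twice (remaining words - 1),
    i.e. twice the number of segments."""
    return extra_words + 2 * (num_words - 1)

print(reverse_order_slop(2))     # 2: a reversed two-word phrase needs ~2
print(reverse_order_slop(3, 1))  # 5
```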

An explicit AND is required between two phrases, because otherwise the two "inner" quotation marks are confused.

Quotes turn off stemming, but you can append a lone ~ to reactivate the stemming inside the phrase.

Insource
Insource can find any word rendered on a page, but for phrases there can be words in the wikitext between the words seen on the page. The insource parameter also treats the non-spaced colon like a letter. The exceptions happen around templates and parser functions, URLs, wikilinks, HTML tags, and comments.

For example: insource: "state state autocollapse" matches

Regular expressions search wikitext only, and they too are created with the insource parameter. The syntax for a regexp is insource:, no space, and then /regexp/. All the parameters except insource:/regexp/ and prefix:string generously accept space after their colon.

Both versions of insource are similar.
 * Neither version finds things "sourced" by a transclusion.
 * Neither version does stemmed, fuzzy, or proximity searches.
 * Both just scan sequences, want the fewest results, and work faster with other terms acting as filters.

Because wildcard searches do not match greyspace, a regexp is the only way to find an exact string of arbitrary characters, including, for example, a sequence of two spaces. The regexp version is an entirely different class of search tool, because a regexp search is not an indexed search. See below.

Prefix and namespace
One namespace can be specified at the beginning of a search. Two or more namespaces may be set from the search results page, Special:Search, in the Advanced pane of the search bar. Furthermore, this search domain "profile" can be set and remembered as a user preference there. Setting a namespace in the search box overrides all search bar settings or indications.

Enter a namespace name, or enter a colon : for the default namespace. Namespace aliases are accepted.

When the file namespace is involved, a namespace modifier has an effect; otherwise it is ignored.

You can now use an interwiki prefix as a namespace to search other projects.

The prefix syntax in its current form is relied upon for a great deal of functionality so it's been recreated as exactly as possible.

Note that the old rule of having to put prefix: at the end of the query still applies.

Prefix and namespace are used to set the initial search domain, but each is also a query. Like prefix, namespace can run alone; it will return the top twenty pages and show the total number of pages.

Filters
Filters are required to accompany a bare regex search.

Any word or phrase is a filter, because a filter returns a yes/no answer for every page in its given search domain. If it can run standalone, a filter is also a query.

Intitle and incategory
Word and phrase searches match in a title and in the category box at the bottom of the page. With these parameters you can select titles only or categories only.


 * cow*
 * Find articles whose title or text contains words that start with cow
 * intitle:foo
 * Find articles whose title contains foo. Stemming is enabled for foo.
 * intitle:"fine line"
 * Find articles whose title contains fine then line. Stemming is disabled.
 * intitle:foo bar
 * Find articles whose title contains foo and whose title or text contains bar.
 * -intitle:foo bar
 * Find articles whose title does not contain foo and whose title or text contains bar.
 * intitle: foo bar
 * Syntax error, devolves into searching for articles whose title or text contains intitle:, foo, and bar.
 * incategory:Music
 * Find articles that are in Category:Music
 * incategory:"music history"
 * Find articles that are in Category:Music_history
 * incategory:"musicals" incategory:"1920"
 * Find articles that are in both Category:Musicals and Category:1920
 * incategory:Felis_silvestris_catus|Dogs
 * Find articles that are either in category:Felis_silvestris_catus or in Category:Dogs
 * -incategory:"musicals" incategory:"1920"
 * Find articles that are not in Category:Musicals but are in Category:1920

Intitle and incategory are old search parameters. Incategory no longer searches any subcategory automatically, but you can now add multiple category pagenames manually. To get the search parameter [//wikitech.wikimedia.org/wiki/Nova_Resource:Catgraph/Deepcat deepcat], which automatically adds up to 70 subcategories onto an incategory parameter, incategory:category1|category2|...|category70 , you can add a line to your user-customized javascript.

Linksto
Linksto finds wikilinks to a given name, not links to content. The input is the canonical, case-sensitive, page name. It must match the title line of the content page exactly, before any title modifications of the letter-case. (It must match its { {FULLPAGENAME}}, e.g. .)

Linksto does not find redirects. It only finds [ [wikilinks]], not internal URL links. It does find wikilinks made by a template.

To find all wikilinks to a "Help:Cirrus Search", if "Help:Searching" and "H:S" are redirects to it:
 * 1) linksto: "Help:Cirrus Search"
 * 2) linksto: Help:Searching
 * 3) linksto: H:S

finds articles that mention "CirrusSearch" but not in a wikilink.

Hastemplate
You can specify template usage with hastemplate. Input the canonical pagename to find all usage of the template; using any of its redirect pagenames finds just that naming. Namespace aliases are accepted, capitalization is entirely ignored, and redirects are found, all in one name-search. The namespace defaults to Template. (Compare boost-template: no default namespace; linksto: no namespace aliases, case-sensitive, no redirects; intitle: no redirects.)

Hastemplate finds secondary (or meta-template) usage on a page: it searches the post-expansion inclusion. This is the same philosophy as for words and phrases from a template, but here it applies to templates from a template. The page will be listed as having that content even though the content is not seen in the wikitext.


 * , finds "Template:Quality image" usage in your default search domain (namespaces).
 * , finds mainspace usage of a "Contents/TOCnavbar" template in the Portal namespace.

Page weighting
Weighting determines snippets, suggestions, and page relevance. The normal weight is one; additional weighting is given through multipliers.

If the query is just words, pages that match them in order are given a boost. If you add any explicit phrases to your search, or for certain other additions, this "prefer phrase" feature is not applied.

Morelike
Find articles about stinging insects. Find templates about regex searching for template usage on the wiki.
 * Find articles whose text is most similar to the text of the given articles.

The morelike query works by choosing a set of words in the input articles and running a query with the chosen words. You can tune the way it works by adding the following parameters to the search results URL. These settings can be made persistent by overriding  in Help:System message.
 * : Minimum number of documents (per shard) that need a term for it to be considered.
 * : Maximum number of documents (per shard) that have a term for it to be considered.
 * : Maximum number of terms to be considered.
 * : Minimum number of times the term must appear in the input document to be considered. For small fields this value should be 1.
 * : Minimum length of a term to be considered. Defaults to 0.
 * : The maximum word length above which words will be ignored. Defaults to unbounded (0).
 * (comma separated list of values): These are the fields to use. Allowed fields are,  ,  ,  ,   and.
 * ( | ): use only the field data. Defaults to : the system will extract the content of the   field to build the query.
 * : The percentage of terms to match on. Defaults to 0.3 (30 percent).
 * Example:

Prefer-recent
You can give recently edited articles a boost in the search results. The term goes anywhere in the query. It defaults to treating the last 160 days as recent. If you're interested in the last week, use 7 instead: all articles older than seven days are then boosted half as much, all articles older than 14 days are boosted half as much again, and so on. The boost is more than the usual multiplier; it is exponential, and the factor in the exponent is the time since the last edit.
 * prefer-recent: anywhere in the query.
 * prefer-recent:recent,boost

It takes a comma-separated pair of numbers defining "recent" and the boost. The default behavior comes from adding a bare "prefer-recent:" to the beginning of your search: this scales 60% of the score exponentially with the time since the last edit, with a half-life of 160 days. This can be modified like this: prefer-recent:proportion_of_score_to_scale,half_life_in_days . The half-life works pretty well even if very small; it has been tested around .0001 days, which is 8.64 seconds.
 * Proportion_of_score_to_scale must be a number between 0 and 1 inclusive.
 * Half_life_in_days must be greater than 0 and may include a decimal point.
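Under the stated defaults (60% of the score, 160-day half-life), the multiplier applied to a page's score might be modeled like this (a sketch of the described behavior, not the actual scoring code):

```python
def prefer_recent_multiplier(days_since_edit, proportion=0.6, half_life=160):
    """Scale `proportion` of the score exponentially with article age:
    that part of the score halves every `half_life` days; the remaining
    (1 - proportion) of the score is left untouched."""
    decay = 2 ** (-days_since_edit / half_life)
    return (1 - proportion) + proportion * decay

print(round(prefer_recent_multiplier(0), 3))    # 1.0: a just-edited page keeps its full score
print(round(prefer_recent_multiplier(160), 3))  # 0.7: the scaled 60% has halved to 30%
```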

This will eventually be on by default for Wikinews, but there is no reason why you can't activate it in any of your searches.

Boost-templates
You can boost pages' scores based on what templates they contain. This can be done directly in the search via , or you can set the default for all searches via the new  message; the in-query version replaces the contents of the message if both are specified. The syntax is a bit funky but was chosen for simplicity. Some examples:


 * Find files in the China category, sorting quality images first.


 * Find files in the China category, sorting quality images first and low-quality images last.


 * Find files about popcorn, sorting quality images first and low-quality images last. Remember that through the use of the  message this can be reduced to just.

Don't try to add decimal points to the percentages. They don't work and search scoring is such that they are unlikely to matter much.

A word of warning: if you add really big or really small percentages, they can poison the full-text scoring. Think, for example, what would happen if enwiki boosted featured articles by a million percent. Searches for terms mentioned in featured articles would then find the featured articles before exact title matches of the terms. Phrase matching would be similarly blown away, so a search like  would find a featured article with those words scattered throughout it instead of the article for Brave New World.

Sorry for the inconsistent  in the name. Sorry again but the quotes are required on this one. Sorry also for the funky syntax. Sorry we don't try to emulate the template transclusion syntax like we do with.

Regular expression searches
A basic indexed search finds words rendered visible on a page. Hyphenation, punctuation marks, bracketing, slashes, and other math and computing symbols are merely boundaries for the words; it is not possible to include them in an indexed search.

A regexp query that takes too long probably blocks other regexp searches while it runs. Regexp searches return much, much faster when you limit the regexp search domain to the results of one or more index-based searches.

Warning: Do not run a bare insource:/regexp/ search. It will probably time out after 20 seconds anyway, while blocking responsible users.

An "exact string" regexp search is the most basic regexp search; it simply "quotes" the entire regexp, or "backslash-escapes" all non-alphanumeric characters in the string. All regexp searches also require that the user develop a simple filter to generate the search domain for the regexp engine to search:

The last example works from a link on a page, but { {FULLPAGENAME}} doesn't function in the search box. For example
 * ' [[Special:Search/insource:/regex/ prefix:| finds the term regex'' on this page ]].

Any search with no namespace specified (or prefix specified) searches your default search domain, settable on any search-results page, i.e. settable at Special:Search. The default search domain is commonly reset by power users to All namespaces, i.e. the entire wiki, but if this occurs for a bare regexp search, then on a large wiki it will probably incur an HTML timeout before completing the search.

A regexp search actually scours each page in the search domain character by character. By contrast, an indexed search actually queries a few records from a database maintained separately from the wiki database, and provides nearly instant results. So when using an insource:// (a regexp of any kind), consider adding other search terms that will limit the regexp search domain as much as possible. There are many search terms that use an index and so instantly provide a more refined search domain for the /regexp/. In order of general effectiveness:


 * insource:"" with quotation marks, duplicating the regexp except without the slashes or escape characters, is ideal.
 * intitle, incategory, and linksto are excellent filters.
 * hastemplate: is a very good filter.
 * "word1 word2 word3", with or without the quotation marks, are good.
 * namespace: is practically useless, but may enable a slow regexp search to complete.

The prefix operator is especially useful with a { {FULLPAGENAME}} in a search template, a search link, or an input box, because it automatically searches any subpages. To develop a new regexp, or refine a complex regexp, use  on a page with a sample of the target data.

Search terms that do not increase the efficiency of a regexp search are the page-scoring operators: morelike, boost-template, and prefer-recent.

Metacharacters
This section covers how to escape the metacharacters used in regexp searches. For the actual meaning of the metacharacters, see the explanation of the syntax.

The use of an exact string requires a regexp, but a regexp term obligates the search to limit itself: always add other terms, and never search a bare regexp. Start by noting the number of pages in a previous search before committing to an exact-string search. Querying with an exact string requires a filtered search domain.

For example, when refining a search with an exact string, adding it as a single regexp term lets you see the limited number of pages the regexp must crawl.
 * To search a namespace, gauge the number of pages with a single term that is a namespace; this will list the number of pages in that namespace.
 * When starting out to find again what you may have seen, like "wiki-link" or "(trans[in]clusion)", start with namespace and insource filters.
 * Refining an ongoing search process with what you want to see, like "2 + 2 = 4" or "site.org", is ideally the best use of a regexp.

You can start out intending an exact string search, but keep in mind
 * A regexp only searches the wikitext, not the rendered text, so there are some differences around the markup, and even the number of space characters must match precisely.
 * You are obligated to supply an accompanying filter.
 * You must learn how to escape regex metacharacters.

There are two ways to escape metacharacters; both are useful at times, and they are sometimes concatenated side by side in the escaping of a string.
 * Backslash-escape one of them: \char. The insource:/regexp/ uses slashes to delimit the regexp, so /reg/exp/ is ambiguous, and you must write /reg\/exp/.
 * Put a string of them in double quotes: "string". Because escaping a character can't hurt, you can quote any characters along with any possible metacharacters among them. Escaping with quotes is cleaner.
 * You can't mix the methods, but you can concatenate them.

Double-quotes escaping using insource:/"regexp"/ is an easy way to search for many kinds of strings, but you can't backslash-escape anything inside a double-quoted escape.
 * instead of
 * is as good as
 * But  always.
 * And .  It finds the   literally, which is not the   you probably wanted.

Backslash-escape using insource:/regexp/ allows escaping the " and / delimiters, but requires taking into account metacharacters, and escaping any:
 * To match a  delimiter character use.
 * To match a  delimiter character use.
 * The metacharacters would be.
 * The equivalent expression is.

The simplest algorithm to create the basic string-finding expression using insource:/"regexp"/ need not take metacharacters into account, except for the " and / characters:
 * 1) Write   out. (The /" delimiters "/ are not shown.)
 * 2) Replace   with   (previous double-quote: stop, concatenate, quote restart).
 * 3) Replace   with   (stop, concatenate, start).
 * 4) You get , showing concatenation of the two methods.
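Assuming the elided replacements are the quote-breakout forms described above (stop the quote, concatenate a backslash-escaped delimiter, restart the quote), the algorithm can be sketched as:

```python
def exact_string_regexp(s):
    """Wrap a literal string in the /"..."/ form, breaking out of the
    quotes to backslash-escape any embedded " or / delimiter."""
    body = s.replace('"', '"\\""').replace('/', '"\\/"')
    return '/"' + body + '"/'

print(exact_string_regexp('reg/exp'))  # /"reg"\/"exp"/
```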

The square-bracket notation for creating your own character class also escapes its metacharacters. To target a literal right square bracket in your character-class pattern, it must be backslash-escaped; otherwise it is interpreted as the closing delimiter of the character-class definition. A right square bracket in the first position of a character class is also treated literally. Inside the delimiting square brackets of a character class, the dash character also has a special meaning (range), but it too can be included literally in the class, the same way as the right square bracket. For example, both of these patterns target a character that is either a dash, a right square bracket, or a dot:  or.
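For illustration, here is such a character class written in Python's re (CirrusSearch uses Lucene regexp syntax, which behaves similarly for this case; the pattern is an assumed reconstruction of the elided examples):

```python
import re

# A class matching a dash, a right square bracket, or a dot: the ] and -
# are backslash-escaped so they are taken literally inside the class.
pattern = re.compile(r'[\]\-.]')

print([c for c in 'a-b].c' if pattern.match(c)])  # ['-', ']', '.']
```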

For general examples using metacharacters
 * matches "2 + 2 = 4", with zero spaces between the characters.
 * match with zero or one space in between. The equals sign = is not a metacharacter, but the plus sign + is.

There are some notable differences from standard regex metacharacters:
 * The dot . metacharacter stands for any character including a newline, so .* matches across lines.
 * The number sign # has a special meaning, and must be escaped.
 * The ^ and $ anchors are not needed. Where "grep" is "global per line, regular expression, print each line", each insource:// is "global per document, regular expression, search-results-list each document".
 * A numeric range in angle brackets works like [0-9] does, but supports multi-digit numbers without regard to the number of character positions or the range in each position, so <9-10> works, and even <1-111> works.

Advanced example
For example, use metacharacters to find the usage of a template called Val having, inside the template call, an unnamed parameter containing a possibly signed, three-to-four-digit number, possibly surrounded by space characters, AND, on the same page, inside a Val template call, a named argument "fmt=commas" with any allowable spaces around it (it could be the same template call, or a separate one):



It is fast because it uses two filters, so that every page the regexp crawls has the highest possible potential. Assuming your search domain is set to All, it searches the entire wiki, because the query gives no namespace or prefix.