User:OrenBochman/Search

=Overview=

A quick review of the above is summarized as follows:

MediaWiki does not appear to have native search capabilities. It can be searched via external components (which index the content and then serve searches) through three extensions:
# Sphinx Search - for small sites (last updated 2010)
# Lucene Search - Lucene-based search for large sites
# EzMwLucene - easy Lucene search; an unadapted package

MWSearch does not perform searches itself; rather, it provides the integration layer with Lucene-search.

=More Info: Search Tools=

=Brainstorming Some Search Problems=

How does search currently work?
Solution:
* Contact rainman (Robert Stojnić, rainman-sr), who developed Extension:Lucene-search and maintained the search servers.
* Consult his thesis.


* Consult the unit tests.
* Consult the API.
* Consult search-related bugs.
* Write a spec.

What is the approach to Wikipedia ranking?

* My ideas currently do not involve changing the ranking algorithm.

Problem: Lucene-search processes the wikitext source and not the rendered HTML output.
Solution:
# Index the rendered HTML (as placed into the parser cache).
# Strip unwanted tags while indexing.
# Boost elements such as the following (see the sketch below):
#* headers
#* interwiki links
#* external links
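
As a rough illustration of the boosting idea, here is a minimal sketch written against the stock Lucene 8 API (not the actual lucene-search code); the field names and boost values are assumptions for this example:

<syntaxhighlight lang="java">
import java.util.Map;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class BoostedWikiIndex {
  public static void main(String[] args) throws Exception {
    StandardAnalyzer analyzer = new StandardAnalyzer();
    Directory dir = new ByteBuffersDirectory();

    // One field per page element we want to weight separately.
    try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
      Document doc = new Document();
      doc.add(new TextField("title", "Apache Lucene", Field.Store.YES));
      doc.add(new TextField("heading", "Search library history", Field.Store.NO));
      doc.add(new TextField("body", "Lucene is a search library ...", Field.Store.NO));
      doc.add(new TextField("external_link", "lucene.apache.org", Field.Store.NO));
      writer.addDocument(doc);
    }

    // Query-time boosts: titles and headings count more than body text.
    Map<String, Float> boosts = Map.of(
        "title", 4.0f, "heading", 2.0f, "body", 1.0f, "external_link", 0.5f);
    MultiFieldQueryParser parser = new MultiFieldQueryParser(
        new String[] {"title", "heading", "body", "external_link"}, analyzer, boosts);
    Query query = parser.parse("lucene");

    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
    for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
      System.out.println(searcher.doc(hit.doc).get("title") + " score=" + hit.score);
    }
  }
}
</syntaxhighlight>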

Problem: The HTML output also contains CSS, scripts, and comments.
Either index these too, or run a filter to remove them. Some strategies follow (also interesting if one wants to compress output for integration into the DB or cache):
# Discard all markup.
#* A markup filter/tokenizer could be used to discard markup (see the analyzer sketch below).
#* The Apache Tika project (which began under Lucene) can do this.
#* Other ready-made solutions exist.
# Keep all markup.
#* Write a markup analyzer that would be used to compress the page and reduce storage requirements.
# Selective processing.
#* A table/template map extension could be used in a strategy to identify structured information for deeper indexing.
#* This is the most promising option: it can detect and filter out unapproved markup (JavaScript, CSS, broken XHTML).
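
For the markup-filter strategy, Lucene's analyzers-common module already ships an <code>HTMLStripCharFilter</code>. A minimal sketch of wiring it into an analyzer (the class name here is mine):

<syntaxhighlight lang="java">
import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.charfilter.HTMLStripCharFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

/** Analyzer that discards HTML/CSS/script markup before tokenizing. */
public class HtmlStrippingAnalyzer extends Analyzer {
  @Override
  protected Reader initReader(String fieldName, Reader reader) {
    // HTMLStripCharFilter removes tags, script/style bodies and comments,
    // leaving only the visible text for the tokenizer.
    return new HTMLStripCharFilter(reader);
  }

  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    Tokenizer source = new StandardTokenizer();
    return new TokenStreamComponents(source, new LowerCaseFilter(source));
  }
}
</syntaxhighlight>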

Problem: Indexing offline vs. online

# Real-time "only" - slowly build the index in the background.
# Offline "only" - use a dedicated machine/cloud to dump and index offline.
# Dual - each time the linguistic component becomes significantly better (or there is a bug fix) it would be desirable to upgrade the index. How this would be done depends largely on the architecture of the analyzer. One possible approach would be triggered by:
#* production of new linguistic/entity data, or a new software milestone;
#* offline analysis from a dump (XML or HTML);
#* online processing of updates, newest to oldest, with a Poisson wait-time prediction model (sketched below).
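
A minimal sketch of how a Poisson wait-time model could prioritize online re-indexing; the class and the numbers are hypothetical, not part of any existing component:

<syntaxhighlight lang="java">
import java.util.List;

/**
 * Toy priority model for online re-indexing: if a page's edits arrive as a
 * Poisson process, re-crawl first the pages most likely to have a pending edit.
 */
public class PoissonReindexPriority {

  /** Estimated edits per second from a page's recent edit timestamps. */
  static double editRate(List<Long> editTimesSec) {
    long span = editTimesSec.get(editTimesSec.size() - 1) - editTimesSec.get(0);
    return span > 0 ? (editTimesSec.size() - 1) / (double) span : 0.0;
  }

  /** P(at least one edit within the horizon) = 1 - e^(-lambda * t). */
  static double editProbability(double lambda, double horizonSec) {
    return 1.0 - Math.exp(-lambda * horizonSec);
  }

  public static void main(String[] args) {
    List<Long> edits = List.of(0L, 3_600L, 5_400L, 7_200L); // seconds
    double lambda = editRate(edits);
    System.out.printf("lambda=%.6f/s, P(edit in 1h)=%.3f%n",
        lambda, editProbability(lambda, 3_600));
  }
}
</syntaxhighlight>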

Problem: Lucene's best analyzers are language-specific.

# An N-gram analyzer is language-independent.
# A new multilingual analyzer with a language detector can be produced by extracting features from the query and checking them against a model prepared offline (see the detector sketch below). The model would contain lexical features such as:
#* the alphabet;
#* bigram/trigram distributions;
#* stop lists - collections of common word/POS/language sets (or lemma/language pairs);
#* normalized frequency statistics based on sampling full text from different languages.
# A lightweight model would be glyph-based.
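
A toy sketch of the trigram-distribution idea, assuming the per-language profiles are trained offline from text samples (all names are hypothetical):

<syntaxhighlight lang="java">
import java.util.HashMap;
import java.util.Map;

/** Toy character-trigram language detector. */
public class TrigramLanguageDetector {
  private final Map<String, Map<String, Double>> profiles = new HashMap<>();

  /** Build a normalized trigram-frequency profile from sample text. */
  public void train(String language, String sample) {
    Map<String, Double> profile = new HashMap<>();
    String text = " " + sample.toLowerCase() + " ";
    for (int i = 0; i + 3 <= text.length(); i++) {
      profile.merge(text.substring(i, i + 3), 1.0, Double::sum);
    }
    double total = profile.values().stream().mapToDouble(Double::doubleValue).sum();
    profile.replaceAll((k, v) -> v / total);
    profiles.put(language, profile);
  }

  /** Score each trained language by the summed frequency of the query's trigrams. */
  public String detect(String query) {
    String text = " " + query.toLowerCase() + " ";
    String best = null;
    double bestScore = -1;
    for (Map.Entry<String, Map<String, Double>> e : profiles.entrySet()) {
      double score = 0;
      for (int i = 0; i + 3 <= text.length(); i++) {
        score += e.getValue().getOrDefault(text.substring(i, i + 3), 0.0);
      }
      if (score > bestScore) { bestScore = score; best = e.getKey(); }
    }
    return best;
  }

  public static void main(String[] args) {
    TrigramLanguageDetector d = new TrigramLanguageDetector();
    d.train("en", "the quick brown fox jumps over the lazy dog and the cat");
    d.train("de", "der schnelle braune fuchs springt über den faulen hund und die katze");
    System.out.println(d.detect("the fox and the dog")); // prints: en
  }
}
</syntaxhighlight>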

Problem: Search is not aware of morphological language variation.

# In languages with rich morphology (e.g., Hebrew, Arabic) this will reduce the effectiveness of search.
# Index Wiktionary so as to produce data for a "lemma analyzer":
#* dumb lemma - a bag of forms with one representative;
#* smart lemma - a list of forms ordered by frequency;
#* quantum lemma - forms organized by morphological state and frequency.
# Lemma-based indexing (see the filter sketch below).
# Run a semantic disambiguation algorithm to tag ambiguous forms.

Other benefits:
# Lemma-based compression (arithmetic coding based on the smart lemma).
# Indexing all lemmas.
# Smart resolution of disambiguation pages.
# An algorithm to translate English to Simple English.
# Excellent language detection for search.

Metrics:
# Extract the amount of information contributed by a user:
#* since inception;
#* in the final version.
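
A minimal sketch of the "dumb lemma" strategy as a Lucene <code>TokenFilter</code>; the Wiktionary-derived lemma table is assumed to be built and loaded elsewhere:

<syntaxhighlight lang="java">
import java.io.IOException;
import java.util.Map;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

/**
 * "Dumb lemma" filter: each surface form is replaced by a single
 * representative lemma from a Wiktionary-derived table.
 */
public final class LemmaFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final Map<String, String> lemmaTable;

  public LemmaFilter(TokenStream input, Map<String, String> lemmaTable) {
    super(input);
    this.lemmaTable = lemmaTable;
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    // Replace the token text with its lemma, if we know one.
    String lemma = lemmaTable.get(termAtt.toString());
    if (lemma != null) {
      termAtt.setEmpty().append(lemma);
    }
    return true;
  }
}
</syntaxhighlight>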

How can search be made more interactive via facets?

# Solr, instead of raw Lucene, could provide faceted search involving categories (see the query sketch below).
# The single most impressive change to search could come via facets.
# Facets can be generated via categories (though they work best with multiple shallow hierarchies).
# Facets can be generated via template analysis.
# Facets can be generated via semantic extensions (explore).
# A focus on culture (local, wiki), sentiment, importance, or popularity (edits, views, reverts) may be refreshing.
# Facets can also be generated using named-entity and relational analysis.
# Facets may carry substantial processing cost if done wrong.
# A cluster-map interface might be popular.
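
A minimal SolrJ-style sketch of faceting on categories; the core name <code>wiki</code> and the <code>category</code> field are assumptions for this example:

<syntaxhighlight lang="java">
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.FacetField;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CategoryFacetSearch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient solr =
        new HttpSolrClient.Builder("http://localhost:8983/solr/wiki").build()) {
      SolrQuery query = new SolrQuery("napoleon");
      query.setFacet(true);               // turn faceting on
      query.addFacetField("category");    // facet over the page-category field
      query.setFacetMinCount(1);          // hide empty buckets

      QueryResponse rsp = solr.query(query);
      FacetField categories = rsp.getFacetField("category");
      for (FacetField.Count bucket : categories.getValues()) {
        System.out.println(bucket.getName() + " (" + bucket.getCount() + ")");
      }
    }
  }
}
</syntaxhighlight>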

How can data be used to make search resolve ambiguity?

* ''The Art of War'' prescribes the following advice: "know the enemy and know yourself and you shall emerge victorious in 1000 ''searches''" (italics are mine).
* Google calls it "I'm Feeling Lucky".

Ambiguity can come from the lexical form of the query or from the result domain, e.g. when the top result of a search is an exact match that is a disambiguation page. In either case the search engine should be able to make a good (measured) guess as to what the user meant.

Instrumenting Links

* If we wanted to collect intelligence, we could instrument all links to jump to a redirect page which logs the click and then fetches the required page.
* It would be interesting to have these stats for all pages.
* It would be really interesting to have these stats for disambiguation/redirect pages.
* Some of this may be available from the site logs (are there any?).

Use case 1. General browsing-history stats are available for disambiguation pages.
Here is a resolution heuristic (sketched below):
# Use the intelligence vector to jump to the most popular target (the 80% solution) - call it the "I hate disambiguation" preference.
# Use the intelligence vector to produce document term-vector projections of source vs. target, to match the most related source and target pages (this requires indexing the source).
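
A toy sketch of heuristic 1, assuming per-target click counts collected by the instrumented links above (all names are hypothetical):

<syntaxhighlight lang="java">
import java.util.Map;
import java.util.Optional;

/** "I hate disambiguation": jump straight to the dominant click target. */
public class DisambiguationResolver {

  /** Returns the target that receives at least `threshold` of all clicks, if any. */
  static Optional<String> popularTarget(Map<String, Long> clicksPerTarget, double threshold) {
    long total = clicksPerTarget.values().stream().mapToLong(Long::longValue).sum();
    return clicksPerTarget.entrySet().stream()
        .filter(e -> total > 0 && e.getValue() >= threshold * total)
        .map(Map.Entry::getKey)
        .findFirst();
  }

  public static void main(String[] args) {
    Map<String, Long> clicks = Map.of(
        "Mercury_(planet)", 850L, "Mercury_(element)", 100L, "Mercury_(mythology)", 50L);
    // 85% of readers want the planet, so skip the disambiguation page.
    System.out.println(popularTarget(clicks, 0.8).orElse("show disambiguation page"));
  }
}
</syntaxhighlight>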

Use case 2. Crowd-source local interest.
Search patterns are often affected by television etc. This calls for analyzing search data and producing an intelligence vector, refreshed every N <= 15 minutes.
# Use this intelligence vector, when significant for the search term, to steer results toward the current interest.

Use case 3. User-specific browsing history is also available.

# Use the vectors as above, but with a memory weighted by time, to fetch personalised search results (see the decay sketch below).
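
A toy sketch of such a time-weighted memory using exponential decay; the decay constant is an arbitrary choice for illustration:

<syntaxhighlight lang="java">
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Time-decayed interest profile for personalised ranking. */
public class DecayedInterestProfile {
  record Visit(String topic, long ageSeconds) {}

  /** Weight each visited topic by e^(-age/tau); recent visits dominate. */
  static Map<String, Double> interestVector(List<Visit> history, double tauSeconds) {
    return history.stream().collect(Collectors.groupingBy(
        Visit::topic,
        Collectors.summingDouble(v -> Math.exp(-v.ageSeconds() / tauSeconds))));
  }

  public static void main(String[] args) {
    List<Visit> history = List.of(
        new Visit("physics", 60),          // one minute ago
        new Visit("physics", 86_400),      // yesterday
        new Visit("football", 604_800));   // last week
    // One-day decay constant: older visits contribute exponentially less.
    System.out.println(interestVector(history, 86_400.0));
  }
}
</syntaxhighlight>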

How can search be made more relevant via intelligence?

# Use the current page (AKA the referrer).
# Use browsing history.
# Use search history.
# Use the user profile.
# Provide an API for serving ads/fundraising.

Developer/Admin Information

* MediaWiki manual
* Extensions

Search Options
Highlights:
* Search extensions
* Extension:MWSearch
* Lucene Search
* Extension:EzMwLucene
* Extension:SphinxSearch
