User:OrenBochman/Search/NGSpec


 * The ultimate goal is to make searching simple and satisfactory.

Secondary goals are:
 * Improve precision and recall.
 * Evaluate components by the knowledge & intelligence they can expose.
 * Use infrastructure effectively.
 * Low edit-to-index latency.

=Features=

Standard Features

 * Result Highlighting
 * Did you mean?
 * Spell checker
 * Auto-Complete Suggestions
 * Automatic Query Expansion ReSearcher
 * More Like This
 * Faceting

Ranking

 * Links - PageRank
 * Confidence - author rank
 * External links

Wiki Specific Features

 * Wiki Code search
 * Edit History Search
 * Index both source and output
 * Links & Anchor text
 * Images
 * Index tables
 * Index disambiguation pages

Media Support

 * Indexing of uploaded documents
 * PDF
 * Excel
 * Word
 * Indexing of image metadata on Commons
 * Geographical search based on GPS data & geographical named entities
 * Timeline search based on time mereology

Performance & Scalability

 * Based on Apache Solr
 * Modify the index for BitTorrent protocol updates (via an edit ziph distribution field)
 * Easy installation and management - leverage Solr
 * BitTorrent index distribution

UI & Admin UI

 * User JS front end
 * MW admin front end
 * Sysop front end - JMX
 * Search admin front end

Search Analytics

 * Search CTR
 * Zero-hit queries
 * Top Queries
 * Slow Queries
 * User Click Ranking
 * Paging Depth
 * Top Facets

Crowdsourceable components

 * Lexical
 * Ontological
 * Use Categories
 * Use Interwiki
 * Data Based Learning
 * Search Analytics
 * Data Mining

=Brainstorm Some Search Problems=

LSDEAMON vs. Apache Solr
As search evolves it might be prudent to migrate to Apache Solr as a stand-alone search server instead of the LSDEAMON.

Pros

 * Reduces the code base to MediaWiki-specific features
 * Free feature upgrades - integrated with Lucene releases
 * Tested/supported on a large user base


 * Monitoring via JMX
 * Can communicate directly with PHP via JSON.


 * Existing features supported:
 * Fast vector highlighting
 * Spell checking
 * More Like This
 * Two-word phrase indexing via shingles
 * The text field can hold aggregate copies of multiple fields, just for searching, to reduce queries (similar to the OAIRepository)
 * Sharding (splitting the index) by hashing the title
 * Shard replication
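The shard-by-title idea above can be sketched with a stable hash; this is a minimal illustration under assumed conventions (MD5, 4-byte prefix), not Solr's actual document routing:

```python
import hashlib

def shard_for_title(title: str, n_shards: int) -> int:
    """Map a page title to a shard number via a stable hash,
    so indexers and searchers agree on where a document lives."""
    digest = hashlib.md5(title.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % n_shards
```

Because the hash depends only on the title, re-indexing a page always hits the same shard.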

Can support many more features from the matrix above.


 * Clustering of search results
 * via Carrot2 for the top 1,000 results.
 * via Mahout for millions of results.

Cons

 * May require a new MWSearch front end.
 * Typical development risks.
 * Untested in the cloud.
 * Less familiarity with development/deployment.

Problem: Lucene search processes Wikimedia source text and not the output HTML.
Solution:
 * 1) Index the output HTML (placed into the cache)
 * 2) Strip unwanted tags (while indexing)
 * 3) Boost elements such as:
 * Headers
 * Interwikis
 * External links

Problem: HTML output also contains CSS, scripts, and comments.
Either index these too, or run a filter to remove them. Some strategies (also interesting if one wants to compress output for integration into the DB or cache):
 * 1) Discard all markup.
 * A markup filter/tokenizer could be used to discard markup.
 * The Tika project can do this.
 * Other ready-made solutions exist.
 * 2) Keep all markup.
 * Write a markup analyzer that would also be used to compress the page to reduce storage requirements.
 * 3) Selective processing.
 * A table_template_map extension could be used in a strategy to identify structured information for deeper indexing.
 * This is the most promising: it can detect/filter out unapproved markup (JavaScript, CSS, broken XHTML).
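The "discard all markup" strategy can be sketched with the standard library alone; this is a hypothetical filter (not the Tika implementation) that drops tags and the entire contents of script and style elements:

```python
from html.parser import HTMLParser

class MarkupFilter(HTMLParser):
    """Strip tags; drop the contents of <script> and <style> entirely.
    HTML comments are discarded by default (handle_comment is a no-op)."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.depth = 0      # nesting level inside skipped elements
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0:
            self.chunks.append(data)

def strip_markup(html: str) -> str:
    f = MarkupFilter()
    f.feed(html)
    return "".join(f.chunks)
```

A real deployment would likely use Tika or a Lucene tokenizer instead, but the shape of the work is the same.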

Problem: Indexing offline and online

 * 1) Solr can access the DB directly...?
 * 2) Real-time "only" - slowly build the index in the background.
 * 3) Offline "only" - use a dedicated machine/cloud to dump and index offline.
 * 4) Dual - each time the linguistic component becomes significantly better (or there is a bug fix) it would be desirable to upgrade search. How this would be done depends much on the architecture of the analyzer. One possible approach:
 * 5) Production of linguistic/entity data or a new software milestone.
 * 6) Offline analysis from a dump (XML or HTML).
 * 7) Online processing of newest-to-oldest updates (with a Poisson wait-time prediction model).
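The Poisson wait-time idea in item 7 can be sketched as follows; this is an illustrative model (edit rate estimated from past timestamps, expected wait = 1/rate), not an existing component:

```python
def edit_rate(edit_times: list) -> float:
    """Estimate a page's Poisson edit rate (edits per unit time)
    from the timestamps of its past edits."""
    if len(edit_times) < 2:
        return 0.0
    span = max(edit_times) - min(edit_times)
    return (len(edit_times) - 1) / span if span > 0 else 0.0

def expected_wait(edit_times: list) -> float:
    """Expected time until the next edit (1/lambda); infinite if unknown.
    Pages with short expected waits would be re-indexed first."""
    rate = edit_rate(edit_times)
    return 1.0 / rate if rate > 0 else float("inf")
```

An update scheduler could then sort pages by expected wait to prioritize re-indexing.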

Problem: Analysis and Language

 * 1) An N-gram analyzer is language independent.
 * 2) A new multilingual analyzer with a language detector can be produced by:
 * 3) Extract features from the query and check them against a model prepared offline.
 * 4) The model would contain lexical features such as:
 * 5) Alphabet.
 * 6) Bi/trigram distribution.
 * 7) Stop lists; a collection of common word/POS/language sets (or lemma/language).
 * 8) Normalized frequency statistics based on sampling full text from different languages.
 * 9) A light model would be glyph based.
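The bigram-distribution idea in items 2-8 can be sketched as follows: compare a query's character-bigram profile against profiles prepared offline. The sample texts and the cosine measure here are illustrative assumptions:

```python
import math
from collections import Counter

def bigram_profile(text: str) -> Counter:
    """Character-bigram frequency profile of a text."""
    text = text.lower()
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bigram profiles."""
    dot = sum(a[g] * b[g] for g in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def detect(query: str, models: dict) -> str:
    """Pick the language whose offline profile best matches the query."""
    profile = bigram_profile(query)
    return max(models, key=lambda lang: similarity(profile, models[lang]))
```

In production the offline profiles would be built from full-text samples of each wiki, per item 8.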

Problem: Search is not aware of morphological language variation

 * 1) In languages with rich morphology this will reduce the effectiveness of search (e.g. Hebrew, Arabic, Hungarian, Swahili).
 * 2) Text-mine en.Wiktionary and xx.Wiktionary for the data of a "lemma analyzer". (Store it in a table based on the Apertium morphological dictionary format.)
 * 3) Index xx.Wikipedia for frequency data and use a row/column algorithm to fill in the gaps of the morphological dictionary table.
 * 4) Dumb lemma (bag with a representative).
 * 5) Smart lemma (list ordered by frequency).
 * 6) Quantum lemma (organized by morphological state and frequency).
 * 7) Lemma-based indexing.
 * 8) Run a semantic disambiguation algorithm to tag ambiguous terms.
 * Other benefits:
 * 1) Lemma-based compression (arithmetic coding based on smart lemmas).
 * 2) Indexing all lemmas.
 * 3) Smart resolution of disambiguation pages.
 * 4) An algorithm to translate English to Simple English.
 * 5) Excellent language detection for search.
 * Metrics:
 * 1) Exact amount of information contributed by a user:
 * 2) since inception.
 * 3) in the final version.
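Items 4 and 5 above (dumb vs. smart lemmas) can be illustrated with a toy table; the English entries are made-up examples, not mined Wiktionary data:

```python
# Dumb lemma: each surface form maps to a single representative.
DUMB_LEMMA = {"ran": "run", "running": "run", "runs": "run"}

# Smart lemma: the representative's surface forms, ordered by corpus frequency.
SMART_LEMMA = {"run": ["run", "runs", "running", "ran"]}

def lemma_of(token: str) -> str:
    """Normalize a token to its lemma for lemma-based indexing."""
    return DUMB_LEMMA.get(token, token)

def expansion_terms(token: str, k: int = 2) -> list:
    """Query expansion: the lemma's k most frequent surface forms."""
    return SMART_LEMMA.get(lemma_of(token), [token])[:k]
```

Indexing under the lemma makes "ran" and "running" match the same postings list; the smart-lemma ordering lets expansion prefer common forms.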

Cache Analytics

 * Rank pages/links by cache hits (Hadoop)
 * Score links in disambiguation pages
 * Score redirect pages
 * Normalize on interwiki links

Cross Language



 * 1) Phonetic compiler.
 * 2) Index the sound of proper names.
 * 3) Transliteration plugin.
 * 4) IPA-to-N-gram mapping transliterator (database-driven).
 * 5) Allow domain experts to write rules-based transliteration from IPA to their script/language.
 * 6) Allow exceptions (say, old Hungarian names).
 * 7) Search for "Yasser Arafat" or "Marwan Bargutti" and match the original (Arabic script).
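The rules-based transliteration of item 5 might look like this sketch: a domain expert supplies ordered rewrite rules from IPA to a target script, applied longest-match-first. The rule set here is a toy IPA-to-Latin example:

```python
# Toy rules: IPA sequence -> Latin rendering (longest match wins).
RULES = [("tʃ", "ch"), ("ʃ", "sh"), ("ɑ", "a"), ("j", "y")]

def transliterate(ipa: str, rules: list) -> str:
    """Rewrite an IPA string using expert-supplied rules."""
    rules = sorted(rules, key=lambda r: -len(r[0]))  # longest match first
    out, i = [], 0
    while i < len(ipa):
        for src, dst in rules:
            if ipa.startswith(src, i):
                out.append(dst)
                i += len(src)
                break
        else:  # no rule matched: pass the character through
            out.append(ipa[i])
            i += 1
    return "".join(out)
```

Exceptions (item 6) would simply be whole-word rules checked before the character rules.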


 * 1) Lexical compiler - compiles machine-readable lexicons/thesauri for the lexical analysis chain.
 * 2) Ontological compiler - compiles machine-readable lexicons for the ontological analysis chain.
 * 3) Human intervention layer - allow a human to override the lexicon.
 * 4) Wiki compressor utility (build a compression utility for a wiki).

Lexical Chain

 * 1) Language detection
 * 2) *Document - Apache Tika (extend to all wiki languages)
 * 3) *Query - Apache Tika
 * 4) *Lexeme -
 * 5) Produce machine lexicons (consumable by analyzers, machine translation, and spell checkers).
 * 6) Produce a thesaurus (semantic interface); bootstrap with WordNet (a pTaylor diSeme expansion).


 * 1) Disambiguators:
 * 2) Probabilistic POS tagger (morphological ambiguity).
 * 3) Semantic (word sense ambiguity).
 * 4) Cross-language disambiguator (disambiguate by looking across languages).
 * 5) Disambiguation simplifier (replace poor word choices).

Semantic Chain

 * 1) Titles/Disambiguation/Redirect "Proper Nouns"
 * 2) Category/Clustering
 * 3) Link (Detect|Annotate role)
 * 4) Named Entity detection
 * 5) Annotation for Ontological Indexing.
 * 6) Mereology.
 * 7) *Equal>>DirectPartOf>>PartOf.
 * 8) *Disjoint/OverLap.
 * 9) Time Ontology (Partial).
 * 10) Instant.
 * 11) Interval>>ProperInterval>>DateTimeInterval
 * 12) DateTime
 * 13) Interval Before/After/Contains/OverLaps.
 * 14) Instant Before/After


 * Lexical-semantic interface (cross back and forth to disambiguate based on new knowledge).
 * POS of W1 based on recognising that a proper noun is TIME_ONT/INSTANT.
 * Recognise that ALON is a name and not a tree based on the verb...

Solution 2 - Specialized Language Support
Integrate new resources for language analysis as they become available.


 * 1) Contrib locations for:
 * 2) lucene
 * 3) * https://svn.apache.org/repos/asf/lucene/dev/tags/
 * 4) * https://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_3_5_0/lucene/contrib/
 * 5) * https://svn.apache.org/repos/asf/lucene/dev/trunk/lucene/contrib/; and for branch_3x (to be released next as v3.6), see
 * 6) * https://svn.apache.org/repos/asf/lucene/dev/branches/branch_3x/lucene/contrib/
 * 7) solr
 * 8) * https://svn.apache.org/repos/asf/lucene/solr/dev/tags/
 * 9) * https://svn.apache.org/repos/asf/lucene/solr/dev/tags/lucene_solr_3_5_0/lucene/contrib/
 * 10) * https://svn.apache.org/repos/asf/lucene/solr/dev/trunk/lucene/contrib/; and for branch_3x (to be released next as v3.6), see
 * 11) * https://svn.apache.org/repos/asf/lucene/solr/dev/branches/branch_3x/lucene/contrib/
 * 12) external resources
 * 13) Benchmarking
 * 14) TestSuite (check resource against N-Gram)
 * 15) Acceptance test
 * 16) Ranking suite based on "did you know..." glosses and their articles

How can search be made more interactive via Facets?

 * 1) SOLR instead of Lucene could provide faceted search involving categories.
 * 2) The single most impressive change to search could be via facets.
 * 3) Facets can be generated via categories (Though they work best in multiple shallow hierarchies).
 * 4) Facets can be generated via template analysis.
 * 5) Facets can be generated via semantic extensions. (explore)
 * 6) A focus on culture (local, wiki), sentiment, importance, popularity (edit, view, revert) may be refreshing.
 * 7) Facets can also be generated using named entity and relational analysis.
 * 8) Facets may have substantial processing cost if done wrong.
 * 9) A Cluster map interface might be popular.
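Item 3 above (facets from categories) amounts to counting category labels over a result set, which a faceting engine does per query; a hypothetical sketch with an assumed page representation:

```python
from collections import Counter

def facet_counts(result_pages: list) -> Counter:
    """Count how many pages in a result set carry each category,
    yielding the numbers shown next to each facet in the UI."""
    counts = Counter()
    for page in result_pages:
        counts.update(page.get("categories", []))
    return counts
```

Users then narrow the result set by clicking a facet, and the counts are recomputed over the narrowed set.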

How Can Search Resolve Unexpected Title Ambiguity

 * The Art of War prescribes the following advice: "know the enemy and know yourself and you shall emerge victorious in 1000 searches". (Italics are mine.)
 * Google called it "I'm feeling lucky".

Ambiguity can come from:
 * The Lexical form of the query (bank - river, money)
 * From the result domain - the top search result is an exact match of a disambiguation page.

In either case the search engine should be able to make a good (measured) guess as to what the user meant and give them the desired result.

The following data is available:
 * Squid cache access is sampled at 1 in 1,000.
 * All edits are logged too.

Instrumenting Links

 * If we wanted to collect intelligence we could instrument all links to jump through a redirect page which logs the click and then fetches the required page.
 * It would be interesting to have these stats for all pages.
 * It would be really interesting to have these stats for disambiguation/redirect pages.
 * Some of this may be available from the site logs (are there any?).

Use case 1. General browsing history stats available for disambiguation pages
Here is a resolution heuristic:
 * 1) Use the intelligence vector to jump to the most popular target (an 80% solution) - call it the "I hate disambiguation" preference.
 * 2) Use the intelligence vector to produce document term-vector projections of source vs. target, to match the most related source and target pages (this requires indexing the source).
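Heuristic 1 might be sketched like this, assuming click counts per disambiguation target are available; the 80% threshold is the figure from the text:

```python
def resolve_disambiguation(click_counts: dict, threshold: float = 0.8):
    """Return the dominant target if it receives at least `threshold`
    of recorded clicks; otherwise None (show the disambiguation page)."""
    total = sum(click_counts.values())
    if total == 0:
        return None
    target, clicks = max(click_counts.items(), key=lambda kv: kv[1])
    return target if clicks / total >= threshold else None
```

Returning None keeps the current behaviour for genuinely ambiguous titles, so the heuristic can only improve the common case.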

Use case 2. Crowdsource local interest
Search patterns are often affected by television etc. This calls for analyzing search data and producing the following intelligence vector. This would be produced every N<=15 minutes.
 * 1) Use the intelligence vector, if significant on the search term, to steer to the current interest.

Use case 3. User-specific browsing history is also available

 * 1) Use the vectors as above, but with a memory weighted by time, to fetch personalised search results.

How can search be made more relevant via Intelligence?

 * 1) Use current page (AKA referer)
 * 2) Use browsing history
 * 3) Use search history
 * 4) Use Profile
 * 5) API for serving ads/fundraising

How Can Search Be Made More Relevant via Metadata Extraction?
While a semantic wiki is one approach to metadata collection, Apache UIMA offers the possibility of extracting metadata from free text as well as from templates.


 * entity detection.

How To Test the Quality of Search Results?
Ideally one would like to have a list of queries + top results, highlights, etc. for different wikis, and test the various algorithms against it. Since data can change, one would like to use something that is stable over time.


 * 1) A generated Q&A corpus.
 * 2) A snapshot corpus.
 * 3) Real-world Q&A (less robust, since real-world wiki test results will change over time).
 * 4) Some queries are easy targets (a unique article) while others are harder (many results).
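Whichever corpus is used, the comparison itself can be a simple metric such as precision at k; a sketch (MRR or NDCG would be natural extensions):

```python
def precision_at_k(ranked: list, relevant: set, k: int) -> float:
    """Fraction of the top-k results judged relevant for a query."""
    if k <= 0:
        return 0.0
    return sum(1 for doc in ranked[:k] if doc in relevant) / k
```

Averaging this over the query list gives one number per algorithm, which makes the stable-corpus comparison straightforward.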

Personalised Results via ResponseTrackingFilter

 * Users' post-search actions should be tracked anonymously to test and evaluate the ranking against their needs.
 * Users should be able to opt in to personalised tracking based on their view/edit history.
 * This information should be integrated into the ranking algorithm as a component that can filter search.

External Links Checker
External links should be scanned once they are added. This will facilitate:
 * testing if a link is alive.
 * testing if the content has changed.

The links should also be aggregated for frequency counts.

PLSI Field for Cross-Language Search

 * Index a cross-language field with the top N=200 words from each language version of Wikipedia in it.
 * Then run the PLSI algorithm on it.
 * This will produce a matrix that associates phrases with cross-language meaning.
 * It should then be possible to use the output of this index to do cross-language search.

Payloads

 * Payloads allow storing and retrieving arbitrary data for each token.
 * Payloads can be used to boost at the term level (using function queries).
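Term-level boosting from payloads could work as in this sketch; the flags and weights are assumptions for illustration, not Lucene's API:

```python
# Assumed per-flag boost weights (tunable).
BOOSTS = {"isHeader": 2.0, "isLinkText": 1.5, "isCode": 0.5}

def payload_boost(payload: dict) -> float:
    """Multiply together the boosts of every flag set in the payload."""
    boost = 1.0
    for flag, weight in BOOSTS.items():
        if payload.get(flag):
            boost *= weight
    return boost

def score_term(tf: float, idf: float, payload: dict) -> float:
    """A tf-idf term score scaled by the payload-derived boost."""
    return tf * idf * payload_boost(payload)
```

A match inside a header thus outscores the same match in body text, without any extra index fields.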

What might go into payloads?


 * 1) HTML (logical) markup info that is stripped, e.g.:
 * 2) isHeader
 * 3) isEmphasized
 * 4) isCode
 * 5) Wiki markup:
 * 6) isLinkText
 * 7) isImageDesc
 * 8) TemplateNestingLevel
 * 9) Linguistic data:
 * 10) LangId
 * 11) LemmaId - ID of the base form
 * 12) MorphState - the lemma's morphological state
 * 13) ProbPosNN - probability it is a noun
 * 14) ProbPosVB - probability it is a verb
 * 15) ProbPosADJ - probability it is an adjective
 * 16) ProbPosADV - probability it is an adverb
 * 17) ProbPosPROP - probability it is a proper noun
 * 18) ProbPosUNKNOWN - probability it is other/unknown
 * 19) Semantic data:
 * 20) ContextBasedSeme (if disambiguated)
 * 21) LanguageIndependentSemeId
 * 22) isWikiTitle
 * 23) Reputation:
 * 24) Owner(ID,Rank)
 * 25) TokenReputation


 * some can be used for ranking.
 * some can be used for cross language search.
 * some can be used to improve precision.
 * some can be used to increase recall.