User:Gautham shankar/Gsoc

Identity

 * Name : Gautham Shankar G
 * Email : gautham.shankar@hiveusers.com
 * Project Title: Lucene Automatic Query Expansion From Wikipedia Text

Contact / Working Info

 * Timezone: UTC/GMT +5:30 hours
 * Typical working hours: 9 AM to 10 PM (3:30 AM to 4:30 PM UTC) (flexible)
 * IRC handle: gauthamshankar
 * Skype: gautham.shankar3
 * Phone: +919884837794

Project Summary
Query expansion is a method used to improve the performance of information retrieval systems. The following problems may exist when a user issues a search query:

 * Users typically formulate very short queries and are unlikely to take the trouble of constructing long and carefully stated queries.
 * The words used to describe a concept in the query are different from the words authors use to describe the same concept in their documents.

Hence the search results obtained are often unsatisfactory and may not contain the relevant information that the user is looking for. This project aims to solve the problem in two stages:


 * 1) Creation of a multilingual wordnet
    * A wordnet is a lexical database that has become a standard resource in NLP research.
    * In a wordnet, nouns, verbs, adjectives and adverbs are organized into sets of synonyms, called synsets, each of which conveys a concept.
    * These synsets are connected to other synsets by semantic relations (hyponymy, antonymy, etc.).
    * The wordnet can be built using the vast multilingual data in Wiktionary.
 * 2) Query expansion
    * The input query is expanded with relevant terms in order to encompass a wider range of context for a successful search.
    * The search query is mapped against the wordnet to obtain relevant expansion terms.
    * Integrating the wordnet with search will provide data on the effectiveness of the wordnet.
    * The query expansion is added as a Lucene filter.

Both stages are substantial, and most of the time during GSoC will go into building the wordnet. If time permits, the Lucene filter will be added.

While other wordnets such as DBpedia and EuroWordNet include multilingual support, EuroWordNet requires licenses for use, and DBpedia and other Wikipedia-based wordnets have been generated from wiki articles, mined based on the categories and content of the articles rather than using a linguistic approach. In the future, the wordnet can be automatically updated as new words are added to Wiktionary or when existing entries change. Completing the wordnet would provide a vast lexical database in machine-readable format for future NLP projects, and completing the entire project would greatly enhance the quality of the search results obtained by the user.

Deliverables
A framework to mine Wiktionary to create a wordnet

There are two approaches to creating a multilingual wordnet:

 * 1) Merge Model
    * A new ontology is constructed for the target language.
    * Relations between an existing English wordnet and this local wordnet are generated.
 * 2) Expand Model
    * The synsets in the English wordnet are translated (using bilingual dictionaries) into equivalent synsets in the other language.

For this project I find the Expand model more suitable, since translations of English words are automatically available in Wiktionary. As the synsets for the English words are generated, their counterparts in other languages can be generated simultaneously. To reduce the complexity involved, the wordnet will be built only on the noun and verb forms of a word, and the synsets will be semantically linked using hypernymy/hyponymy. For example,

 * hypernymy – {tree, tree diagram} is a kind of {plane figure, two-dimensional figure}
 * hyponymy – {tree} can be a {chestnut, chestnut tree}

The wordnet generation can first be tested for two languages, English being one of them. Some words are polysemous, so more than one synset is assigned; in some of these cases, a word is monosemous in at least one language, with a unique synset assigned. Thus the wordnet will contain two types of semantic links: language-independent links (between languages) and language-dependent links (within languages).
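As a rough sketch of the structures described above, the following plain-Java snippet models a synset with its two kinds of links. All names and fields here are illustrative assumptions for this proposal, not a final design:

```java
import java.util.*;

// Illustrative sketch only: a synset holds the synonymous words that convey
// one concept, plus language-dependent links (hypernyms within a language)
// and language-independent links (translations across languages).
class Synset {
    final String lang;                                   // e.g. "en", "de"
    final Set<String> members;                           // words in the synset
    final List<Synset> hypernyms = new ArrayList<>();    // language-dependent links
    final List<Synset> translations = new ArrayList<>(); // language-independent links

    Synset(String lang, String... words) {
        this.lang = lang;
        this.members = new LinkedHashSet<>(Arrays.asList(words));
    }
}

public class WordnetSketch {
    public static void main(String[] args) {
        Synset treeDiagram = new Synset("en", "tree", "tree diagram");
        Synset planeFigure = new Synset("en", "plane figure", "two-dimensional figure");
        Synset baum = new Synset("de", "Baum");

        // {tree, tree diagram} is a kind of {plane figure, two-dimensional figure}
        treeDiagram.hypernyms.add(planeFigure);
        // cross-language (language-independent) link, assumed from Wiktionary translations
        treeDiagram.translations.add(baum);

        System.out.println(treeDiagram.hypernyms.get(0).members);
    }
}
```

A monosemous word would map to exactly one such synset, while a polysemous word would map to several.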

Required deliverables

 * 1) Page Collection Module
    * Wiktionary dumps are downloaded.
    * The data is parsed to remove noise.
    * An effective storage mechanism is created for later retrieval.
 * 2) Dbase Module
    * Handles the final wordnet data storage.
    * A data structure is created to store the wordnet.
    * The wordnet will be in RDF/OWL format.
 * 3) Extraction Module
    * Extracts information for a particular word.
    * Synonyms are extracted to generate synsets.
    * Hypernyms/hyponyms are extracted to generate links.
    * Gets the translations for the word and adds them to the queue.
 * 4) Mapping Module
    * Establishes the two types of semantic links in the wordnet.
    * Data generated by the extractors are used.
 * 5) Extraction Manager
    * Coordinates the Extraction and Mapping modules.
    * Writes the final output into the Dbase.
    * Consistency checks are put in place in this module.
 * 6) Process Manager
    * Automates the task of fetching new words from the extractors and adding them to the queue.
    * Regulates the entire automatic wordnet creation process.
    * Consistency checks are written in this module.
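To illustrate the Dbase module's RDF output, the sketch below serializes one synset as N-Triples. The namespace and property names are placeholders chosen for illustration, not a committed vocabulary:

```java
import java.util.*;

// Hedged sketch of RDF serialization for the Dbase module. The namespace
// and the property names "inLanguage"/"containsWord" are placeholders.
public class RdfSketch {
    static final String NS = "http://example.org/wiktionary-wordnet/";

    // Build one N-Triples line: <subject> <predicate> object .
    static String triple(String s, String p, String o) {
        return "<" + NS + s + "> <" + NS + p + "> " + o + " .";
    }

    static List<String> synsetToTriples(String id, String lang, List<String> members) {
        List<String> out = new ArrayList<>();
        out.add(triple(id, "inLanguage", "\"" + lang + "\""));
        for (String w : members) {
            out.add(triple(id, "containsWord", "\"" + w + "\""));
        }
        return out;
    }

    public static void main(String[] args) {
        for (String t : synsetToTriples("synset/en/tree-n-1", "en",
                Arrays.asList("tree", "tree diagram"))) {
            System.out.println(t);
        }
    }
}
```

The real module would emit the semantic links (hypernymy/hyponymy, translations) as further triples between synset URIs, which is what makes the RDF/OWL form convenient for other NLP tools.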

If time permits

 * A Lucene filter for query expansion
    * The filter will use the wordnet to generate expansion terms.
    * The expansion terms can be filtered by creating semantic maps of the query.
    * The modified query is passed on to obtain the results.
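The expansion logic could look roughly like the following. This is a plain-Java sketch, not the actual Lucene TokenFilter API, and the hard-coded synonym map stands in for lookups against the generated wordnet:

```java
import java.util.*;

// Illustrative sketch of query expansion: each query term is kept and
// augmented with the members of its synset. The map below is a stand-in
// for the wordnet lookup; the real filter would consult the wordnet built
// in the first stage.
public class QueryExpansionSketch {
    static final Map<String, Set<String>> SYNSETS = new HashMap<>();
    static {
        SYNSETS.put("car", new LinkedHashSet<>(Arrays.asList("automobile", "motorcar")));
    }

    static List<String> expand(List<String> queryTerms) {
        List<String> expanded = new ArrayList<>();
        for (String term : queryTerms) {
            expanded.add(term);
            expanded.addAll(SYNSETS.getOrDefault(term, Collections.emptySet()));
        }
        return expanded;
    }

    public static void main(String[] args) {
        // The expanded query also matches documents that say "automobile".
        System.out.println(expand(Arrays.asList("red", "car")));
        // → [red, car, automobile, motorcar]
    }
}
```

The semantic-map step mentioned above would then prune expansion terms that do not fit the query's context before the modified query is handed to the search engine.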

Future Project Maintenance

 * 1) Synchronizing wordnet data with Wiktionary
    * As of now, the project will build the wordnet from Wiktionary dumps.
    * Over time, Wiktionary data are revised, which makes the data in the wordnet outdated.
    * A live extractor module needs to be written to continuously keep the Wiktionary and wordnet data in sync.
    * Until then, a monthly wordnet update needs to be released to include the revised content.
 * 2) Spam control
    * The addition of noise to the data would greatly affect the results produced by the wordnet.
    * Spam must be identified and removed from the Wiktionary dumps in order to build an effective wordnet.
 * 3) Algorithm updates
    * The field of computational linguistics is widely researched, with frequent updates to the techniques and algorithms used.
    * New algorithms and techniques should be tracked and tested on the wordnet for performance, and the code updated if necessary.

Community Bonding Period

 * Interact with the mentors and the community.
 * Discuss the deliverables with the mentor and finalize the approach to be taken to solve the problem.
 * Familiarize myself with the required algorithms and data structures for the project.

Coding Period
I have my university exams until 31st May and will start coding from 1st June.

Schedule for the first leg :


 * 1) 1st June to 16th June (Milestone 1, 2.2 weeks)
    * Page Collection Module
 * 2) 17th June to 23rd June (Milestone 2, 1 week)
    * Dbase Module
 * 3) 24th June to 30th June (Milestone 3, 1 week)
    * Extraction Module
 * 4) 1st July to 7th July (Milestone 4, 1 week)
    * Completion of coding and testing for the first leg of GSoC
    * Prepare documentation for the mid-term evaluation

Schedule for the second leg :
 * 5) 8th July to 14th July (Milestone 5, 1 week)
    * Feedback on performance from mentors
    * Mapping Module
 * 6) 15th July to 28th July (Milestone 6, 2 weeks)
    * Extraction Manager
 * 7) 29th July to 11th August (Milestone 7, 2 weeks)
    * Process Manager
 * 8) 12th August to 18th August (Milestone 8, 1 week)
    * Obtain the final wordnet results
    * Complete coding and testing for the second leg
    * Prepare documentation for the final evaluation
 * 9) 19th August to 29th August (Milestone 9, 1.5 weeks)
    * Make final changes, if any, to make the project presentable

About Me
I'm Gautham Shankar, pursuing my fourth year of a B.E. in computer science and engineering. I have a great passion for programming and problem solving. My first exposure to programming was C++, in which I created a basic MS Paint replica in high school. The thrill of watching my friends draw shapes and fill colors in something that I 'created' has left me hooked on programming ever since. What drives me is the joy of creation using a language, just like a painting created by an artist using a brush. Moving into college, I got interested in the World Wide Web and have since been fascinated by the huge volumes of data available and its potential when structured. This motivated me to take courses in Artificial Intelligence and Data Mining in order to create better art.

I'm fluent in C, C++ and Java. I'm also familiar with PHP and have built a product, hive, using PHP, MySQL and JavaScript; the search engine used in hive is Lucene. My interest in data mining led me to build a recommendation framework in Java using the heat diffusion principle. The project has been applied to the AOL dataset and gives effective query recommendations. I have been exposed to the concepts of corpus linguistics and WordNet but have not worked on practical implementations. This project will be my first effort in that direction.

Search is the gateway to harnessing the wealth of the Internet and any improvement in search would greatly affect the average Internet user. I believe this project will provide me the opportunity to do so.

Participation

 * I generally work from 9 AM to 10 PM.
 * I use email to communicate updates and progress.
 * The project can be hosted on GitHub so that my mentor can review code.
 * To discuss project issues in detail, I use Skype or GTalk.
 * When I'm convinced that I need help on a certain problem, I look into forums and blogs for people who have faced similar issues; if that does not yield results, I contact relevant people/mentors in the community through mailing lists or IRC to address my problems.

Past open source experience
I have experience working in data mining and have built a Recommendation Framework Using Webgraphs that implements the heat diffusion algorithm. The framework currently uses the AOL search dataset to recommend better queries for a given input query. It has been implemented in Java. Since it is a framework, it can be used to recommend different types of data; for example, the same framework could recommend movies as well as music. I'm currently working on an extension of this project that adds social network graphs so as to map similar people based on the content they search for. The AOL dataset is stored using the Webgraphs and Graphlabeler Java libraries. I have uploaded the project code on GitHub under the title webgraphs.

I have also built a web-based product, "hive", which is a networking platform for members of the power generation industry. It is an open forum where members can share their experiences and interact with one another to effectively run their machines and solve common problems, similar in spirit to an open source community. The product has been implemented using PHP (Zend framework), MySQL and JavaScript (including AJAX). Lucene is the search engine and is used to index and retrieve large volumes of machine history; phpBB is used for the forums. The code is available on GitHub under the title hive. The website is currently live at http://www.hiveusers.com.

My github link is https://github.com/gauthamshankar

I have extensively used open source technologies for all my projects, and given the opportunity I would like to contribute back to the community whose work I rely on so extensively.

Other Info
1. DBpedia - A Crystallization Point for the Web of Data http://mx1.websemanticsjournal.org/index.php/ps/article/download/164/162

2. Introduction to EuroWordNet http://www.springer.com/computer/ai/book/978-0-7923-5295-2

3. Web Query Expansion by WordNet http://www.sftw.umac.mo/~fstzgg/dexa2005.pdf
