User:Language portal/Language coverage matrix/GSoC 2013

This is a proposal for a GSoC project.

Identity
Name: Harsh Kothari
Email: harshkothari410@gmail.com
Project title: Language Coverage Matrix dashboard

Contact/working info
Timezone: UTC+5:30 (IST, India)
Typical working hours: 12:00 PM to 6:00 PM (IST) and 10:00 PM to 4:00 AM (IST)
IRC or IM networks/handle(s): harshkothari (freenode)
''I live in Ahmedabad, a metropolitan city with a 24/7 power supply and a good, uninterrupted Internet connection, so my online work will not be hampered in any way.''

Project Outline
The Language Coverage Matrix (LCM) dashboard would automate the collection of information about the language support provided by the Language Engineering team, e.g. key maps, web fonts, translation, the language selector, and i18n support for gender, plurals, and grammar rules. The LCM would display this information and provide visualization graphs of language coverage using various search criteria such as tools or languages. I will build this web-based dashboard using JavaScript libraries integrated with MySQL to manage the data. I find this project very useful for the Language Engineering team, since Wikipedia supports more than 300 languages. The tool will help them analyse which features are available for each individual language, so that the team can efficiently prioritize and add missing features, that is, features which are not currently available for a particular language. The overall impact of this project will be an enhanced user experience across the wikis.

This web-based dashboard will also help other products and communities by providing the same search and visualization facilities.

Document of Matrix Data
http://hexm.de/LangMatrix

Bug on Bugzilla
https://bugzilla.wikimedia.org/show_bug.cgi?id=46651

Thread on Mailing List
http://lists.wikimedia.org/pipermail/wikitech-l/2013-April/068882.html

Mentors
Runa Bhattacharjee and Alolita Sharma are my mentors.

Deliverables

 * A Python script - to save all the current data into the MySQL database.
 * Currently all the data is in a spreadsheet, so I will create a Python script that imports the spreadsheet data into the database.
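As a rough sketch of what the import script might look like (the spreadsheet columns and table layout here are my own assumptions, and sqlite3 is used in memory as a stand-in for MySQL so the example is self-contained):

```python
import csv
import io
import sqlite3

# Hypothetical subset of the spreadsheet: language code, name, and
# whether web fonts / key maps are supported.
SPREADSHEET = """code,name,webfonts,keymaps
gu,Gujarati,yes,yes
he,Hebrew,yes,no
"""

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL database
conn.execute(
    "CREATE TABLE languages (code TEXT PRIMARY KEY, name TEXT, "
    "webfonts INTEGER, keymaps INTEGER)"
)

# Read each spreadsheet row and insert it into the database,
# converting yes/no cells to 0/1 flags.
reader = csv.DictReader(io.StringIO(SPREADSHEET))
for row in reader:
    conn.execute(
        "INSERT INTO languages VALUES (?, ?, ?, ?)",
        (row["code"], row["name"],
         1 if row["webfonts"] == "yes" else 0,
         1 if row["keymaps"] == "yes" else 0),
    )
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM languages").fetchone()[0])
```

In the real script the spreadsheet would be read from an exported CSV file and the connection would go to MySQL; only the connection line would change.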


 * A form ( HTML + jQuery ) - to manually enter new data into the database.
 * Any new entry can easily be added manually through this form. This can be done only by an administrator.


 * PHP - all the integration.
 * Since this is a web-based application, all the server-side integration will be done in PHP.


 * jQuery + AJAX - to populate the dashboard according to the search/filter criteria.
 * JavaScript / jQuery + AJAX - data visualization as per the requirements.
 * Parent-child representation. It will be dynamic and implemented in JavaScript/jQuery. It may also include charts, graphs, or other interactive visualization methods, so that users can easily find and understand the information.


 * CSS - design.
 * A professional look-and-feel dashboard design covering all the facilities mentioned above.


 * An optimized search facility with an autocomplete feature
 * MySQL database
 * Table 1: details of all languages
 * Table 2: for administrators
 * New tables will be created or existing ones updated as required
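A minimal sketch of what the two tables might look like (all column names are my own assumptions; SQLite syntax is used here for illustration, while the actual deployment would use MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL database

# Table 1: per-language feature coverage (hypothetical columns).
conn.execute("""
CREATE TABLE language_details (
    code      TEXT PRIMARY KEY,   -- e.g. 'gu', 'he'
    name      TEXT NOT NULL,
    region    TEXT,               -- e.g. 'Asia'
    webfonts  INTEGER DEFAULT 0,  -- 0/1 flag per feature
    keymaps   INTEGER DEFAULT 0,
    gender    INTEGER DEFAULT 0,
    plurals   INTEGER DEFAULT 0,
    grammar   INTEGER DEFAULT 0
)""")

# Table 2: administrators allowed to edit the matrix through the form.
conn.execute("""
CREATE TABLE administrators (
    id            INTEGER PRIMARY KEY,
    username      TEXT UNIQUE NOT NULL,
    password_hash TEXT NOT NULL   -- never store plain passwords
)""")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```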

Use Cases

 * Homepage -> All languages or regions.
 * [ Clicking on ] Region -> All languages of that region
 * [ Clicking on ] Language -> All the details of that particular language ( example - key maps, web fonts, translation, language selector, i18n support for gender, plurals, grammar rules ) + Visualization Graph of the language.
 * Dropdown menu for Search / Checkbox for Filtering the Search
 * Search field with autocomplete / suggestion facility.
 * [ Select from ] Dropdown / [ Select from ] Checkbox -> The dashboard will be filled automatically as per the query. ( i.e. if the user wishes to see the list of languages that have grammar rules, a single click on 'grammar rules' will show all the languages that have grammar rules. )
 * Error-handling use case: [ Select from ] Dropdown / [ Select from ] Checkbox -> If no result is found for the query, the dashboard will show "No Result Found" and revert to the data displayed before the select/filter was applied.
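The "languages that have grammar rules" filter, including the "No Result Found" fallback, could look roughly like this on the server side (a pure-Python sketch over hard-coded sample rows; the real dashboard would query the MySQL database instead):

```python
# Each row: (language name, set of supported features) -- sample data only.
LANGUAGES = [
    ("Gujarati", {"webfonts", "keymaps", "plurals"}),
    ("Hebrew", {"webfonts", "grammar rules", "plurals"}),
    ("Russian", {"grammar rules", "plurals"}),
]

def filter_languages(feature):
    """Return the languages supporting `feature`.

    Mirrors the error-handling use case: if nothing matches, return
    "No Result Found" together with the previous data, so the
    dashboard can keep showing what was there before the filter.
    """
    matches = [name for name, features in LANGUAGES if feature in features]
    if not matches:
        return "No Result Found", LANGUAGES  # keep previous data visible
    return matches

print(filter_languages("grammar rules"))
print(filter_languages("nonexistent feature")[0])
```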

Some features that would be directly useful to MediaWiki developers and Wikimedia site maintainers

 * Direct integration with existing lists of languages: Names.php, langdb.yaml in jquery.uls, extra languages supported in translatewiki.net and incubator, etc.
 * Integration with a matrix of existing or planned Wikimedia projects, so it would be clear from the matrix whether there is a project in this language, whether the language tools extensions are installed in that project, and whether there is an incubator project in this language.
 * Understanding variants: does this language support variants in any way?
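Cross-checking the matrix against the existing language lists could start as simply as comparing sets of language codes. A sketch with hard-coded sample data standing in for Names.php, langdb.yaml, and the matrix itself (in practice these sets would be parsed from the real files):

```python
# Sample language codes standing in for the real sources.
NAMES_PHP = {"en", "gu", "he", "ru"}   # languages MediaWiki knows
LANGDB_YAML = {"en", "gu", "he"}       # languages jquery.uls knows
MATRIX = {"en", "gu"}                  # languages already in the LCM

def coverage_gaps(matrix, *sources):
    """Codes present in any upstream list but missing from the matrix."""
    known = set().union(*sources)
    return sorted(known - matrix)

print(coverage_gaps(MATRIX, NAMES_PHP, LANGDB_YAML))
```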

If time permits

 * I would add integration with other knowledge bases about languages, such as Ethnologue, CLDR and others, which would provide information such as the number of speakers, literacy levels, language contact, etc. This way it would be possible to see, in a way that is slightly more structured than what we have now, how well our projects are covering the different languages of the world.
 * I will create a MediaWiki extension providing the matrix support, the visualization graphs, and the filtering/search facility.
 * I will create a browser support matrix for TUX and other products, e.g. https://bugzilla.wikimedia.org/show_bug.cgi?id=45602

About you
I am Harsh Kothari, a final-year engineering student at L.D. College of Engineering. I am from the Gujarati Wikipedia community and have been contributing to MediaWiki for almost 8 months now. I have developed the MediaWiki extension TwitterCards. I am a promoter of the first MediaWiki group in India. I have localized and ported various gadgets to the Gujarati Wikipedia as well as other Indic wikis, e.g. HotCat, Reference Tooltips, and Popups.

Apart from MediaWiki, I am also an active contributor to several open source communities such as Mozilla, Fedora, etc. I am also an ambassador of the FOSS programme of the Government of India, promoting open source technology across Gujarat. Programming is my passion and I enjoy coding across various technologies. I have good knowledge of, as well as experience in, C, C++, Java, Python, PHP, and JavaScript. I have successfully completed an internship at the Physical Research Laboratory, where I worked on a project named Genetic Algorithm Based Digital Filter Design, which involved digital filtering based on artificial intelligence. For the last two years I have been one of the developers and organizers of voidmain, an online coding competition similar to Google Code Jam and Facebook Hacker Cup.

The proposed project, the Language Coverage Matrix dashboard, is an internationalization project. It will deliver a web-based dashboard that includes all the details of the languages supported by MediaWiki. It is a significant project for the Language Engineering team of Wikimedia, as the tool will help them analyse the details of the various features of individual languages.

Participation
In my opinion, IRC is the best way to communicate, so I am available on IRC all the time on channels such as #mediawiki, #mediawiki-i18n, #wikimedia-dev, and #wikimedia-labs. I am an active participant in discussions on different mailing lists such as wikitech-l, mediawiki-india, and mediawiki-i18n. I would appreciate it if all discussions related to my project were carried out on the above-mentioned mailing lists and on the wiki page.

I maintain a blog where I will post updates on the progress of my proposed project. I will also push all my work on this project to GitHub. Since my project aims at implementing new features, I will take regular feedback from the community on my interface designs through testing and prototypes, as well as through the mailing list.

Past Open Source Experience
I am involved with many open source activities in Ahmedabad. I am an active member of Google Developer Group Ahmedabad. I have created the MediaWiki extension TwitterCards, and I am also a small contributor to the MediaWiki extension EtherEditor, as well as to jquery.uls and jquery.ime. All my code is open source and is uploaded to GitHub. I have also worked on a library to get metadata from parsed raw description text. I have been invited as a delegate to share my knowledge on open source at various open source events, and I was a speaker at the MediaWiki Gadget Kitchen workshops held at Gnunify and Avenir.

Acknowledgment
I really want to thank my mentors Runa Bhattacharjee and Alolita Sharma for guiding me throughout, and very special thanks to Amir Aharoni for his valuable inputs on this proposal. Last but not least, special thanks to Sumana and Quim for polishing my proposal and for their valuable feedback.