User:OrenBochman

Top MediaWiki IA Flaws
As experienced by editors:
 * Most policy should be built into the software rather than enforced manually.
 * No click-stream analytics - development teams are working blind as to how users actually work (there are some feedback tools!).
 * A standard way to automate the wiki should be built in.
 * No role-based logic - UI elements and behaviour should be visible only when they are actionable (a non-admin can click on hundreds of things that won't work).
 * The UI and extensions are tightly coupled to the parser.
 * No UI widget framework - this means that extensions are either:
   * tag based,
   * single-page based,
   * UI-less, or
   * modifying the existing UI in complicated ways (steeper learning curve).
 * Parser
   * The parser is not really a parser but a set of transformations (this is getting fixed!).
   * There is direct access to the parser via hooks instead of an abstracted mechanism to protect it from bad extensions.
 * Watching is limited (it does not support time-based follow-up actions).
 * Talk pages are primitive and lack basic social features for interpersonal communication (so people roll their own inferior features).
   * Signatures should be automatic.
   * Discussions should be threaded (this actually exists, but it is built on top of talk pages).
 * No formal relations - friends/collaboration groups.
 * No avatars - identities are highly non social.
 * No private/alternative communication channels (IM, email, messages, VoIP).
 * No blogging, social bookmarking, or social games. (These are not considered part of Wikipedia's role, but they would be worth integrating to increase editor engagement by developing personal spaces.)
 * No browsing history widget
 * No editing history widget (only a special page)
 * Support for persistent quiz pages (kind of works).


 * History - consecutive edits by a single editor should be merged into one.

Conferences

 * http://i-semantics.tugraz.at/

SOLR
security:

Stuff

 * Google's Panda to Wikimania
 * Cooperate with:
   * Google on NLP
   * academia
   * Apertium
   * HFST

Wikipedia Corpus Tools
Goal: automate the extraction and cleanup of Wikipedia text in the various leading languages into a corpus (a minimal pipeline sketch follows the list). Deliverables:
 * 1) a framework for handling all languages.
 * 2) sentence chunkers.
 * 3) POS taggers.
 * 4) POS-tagged Wikipedia dumps.
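A minimal sketch of one stage of such a pipeline, assuming Apache OpenNLP (the notes name no toolkit) with per-language models such as en-sent.bin and en-pos-maxent.bin downloaded separately; the markup-stripping regexes are crude stand-ins for a real cleaner:

```java
import opennlp.tools.postag.POSModel;
import opennlp.tools.postag.POSTaggerME;
import opennlp.tools.sentdetect.SentenceDetectorME;
import opennlp.tools.sentdetect.SentenceModel;
import opennlp.tools.tokenize.WhitespaceTokenizer;

import java.io.FileInputStream;

public class CorpusPipeline {
    public static void main(String[] args) throws Exception {
        // Strip the most common wiki markup; a real cleaner would use the parser.
        String raw = "{{Infobox}}The [[Rhine|Rhine river]] is long. It flows north.";
        String text = raw
                .replaceAll("\\{\\{[^{}]*\\}\\}", "")                       // templates
                .replaceAll("\\[\\[(?:[^|\\]]*\\|)?([^\\]]*)\\]\\]", "$1"); // links

        // Swapping in per-language models is what makes this a framework.
        SentenceDetectorME chunker = new SentenceDetectorME(
                new SentenceModel(new FileInputStream("en-sent.bin")));
        POSTaggerME tagger = new POSTaggerME(
                new POSModel(new FileInputStream("en-pos-maxent.bin")));

        for (String sentence : chunker.sentDetect(text)) {
            String[] tokens = WhitespaceTokenizer.INSTANCE.tokenize(sentence);
            String[] tags = tagger.tag(tokens);
            StringBuilder line = new StringBuilder();
            for (int i = 0; i < tokens.length; i++) {
                line.append(tokens[i]).append('/').append(tags[i]).append(' ');
            }
            System.out.println(line.toString().trim()); // one tagged sentence per line
        }
    }
}
```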

Lucene Lemma Analyzers based on Morphology Extraction from Wikipedia Text

 * Part 1: use and expand induction software to process existing languages; the induced morphology can feed a lemma filter like the sketch below.
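As a sketch of where the extracted morphology would plug into Lucene, here is a hypothetical LemmaFilter (the class name and lemma-table format are assumptions, not an existing API) that rewrites each token to its lemma via a lookup table produced by the induction step:

```java
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

import java.io.IOException;
import java.util.Map;

public final class LemmaFilter extends TokenFilter {
    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
    private final Map<String, String> lemmaTable; // surface form -> lemma, from induction

    public LemmaFilter(TokenStream input, Map<String, String> lemmaTable) {
        super(input);
        this.lemmaTable = lemmaTable;
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!input.incrementToken()) {
            return false; // end of stream
        }
        String lemma = lemmaTable.get(termAtt.toString());
        if (lemma != null) {
            termAtt.setEmpty().append(lemma); // rewrite token to its lemma
        }
        return true;
    }
}
```

In a custom Analyzer this filter would be chained after the tokenizer, so indexed terms and query terms meet on the same lemma.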


 * 1) lemmas to word senses:
 * 2) existing works
 * 3) semantic frames - the verb "think" ("think about") takes a noun complement XXX. In Hungarian this is more explicit. This can be a powerful format for representing the knowledge in sentences, and could be used to convert text to relations (go, go to XXX, go from XXX to YYY); not many relations are needed - verbs of motion, events, ...
 * 4) logic frames - map simple sentences to a Prolog-like logic structure (a toy sketch follows).
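A toy illustration of item 4, assuming a single hand-written motion frame (the frame, class name, and example sentence are all invented for the sketch):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogicFrames {
    // One hand-written motion frame: "X goes from Y to Z" -> go(x, from(y), to(z)).
    private static final Pattern GO_FROM_TO =
            Pattern.compile("(\\w+) goes from (\\w+) to (\\w+)");

    public static String toFact(String sentence) {
        Matcher m = GO_FROM_TO.matcher(sentence);
        if (!m.find()) {
            return null; // sentence matches no known frame
        }
        return String.format("go(%s, from(%s), to(%s)).",
                m.group(1).toLowerCase(),
                m.group(2).toLowerCase(),
                m.group(3).toLowerCase());
    }

    public static void main(String[] args) {
        // -> go(anna, from(buda), to(pest)).
        System.out.println(toFact("Anna goes from Buda to Pest"));
    }
}
```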


 * Part 2: extract semantic frames from a (part-of-speech tagged) corpus.
 * Deliverables:
   * 1) the semantic networks used in Wikipedia.
   * 2) search and retrieve sample sentences for semantic frame patterns (a retrieval sketch follows).
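A minimal sketch of deliverable 2, assuming Penn Treebank tags and the token/TAG line format of the corpus pipeline above; the pattern and the corpus.tagged file name are placeholders:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class FrameSearch {
    // Matches "goes/VBZ from/IN Buda/NNP to/IN Pest/NNP" style spans in the
    // token/TAG lines written by the corpus pipeline (Penn tags assumed).
    private static final Pattern MOTION_FRAME =
            Pattern.compile("\\w+/VB[DZPG]? from/IN \\w+/NNP to/IN \\w+/NNP");

    public static void main(String[] args) throws Exception {
        // corpus.tagged is a placeholder name: one POS-tagged sentence per line.
        try (Stream<String> lines = Files.lines(Paths.get("corpus.tagged"))) {
            lines.filter(line -> MOTION_FRAME.matcher(line).find())
                 .forEach(System.out::println); // retrieved sample sentences
        }
    }
}
```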

Lucene - Automatic Query Expansion System
Use SVD or other methods to build cross-language word nets; a toy latent-semantic-analysis sketch follows.
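A toy sketch of the SVD route, assuming Apache Commons Math (not named in these notes) and an invented term-document matrix in which interlanguage-linked article pairs are folded into one document, so terms from both languages co-occur:

```java
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.SingularValueDecomposition;

public class LsaExpansion {
    public static void main(String[] args) {
        // Toy counts: rows are terms (en + de vocabulary mixed),
        // columns are interlanguage-linked article pairs folded into one doc.
        String[] terms = {"river", "fluss", "water", "parser", "compiler"};
        double[][] counts = {
                {3, 2, 0}, // river
                {2, 3, 0}, // fluss
                {1, 1, 0}, // water
                {0, 0, 4}, // parser
                {0, 0, 2}, // compiler
        };

        SingularValueDecomposition svd =
                new SingularValueDecomposition(new Array2DRowRealMatrix(counts));
        // Project terms into the top-k latent space: U_k * S_k.
        int k = 2;
        RealMatrix termSpace = svd.getU().getSubMatrix(0, terms.length - 1, 0, k - 1)
                .multiply(svd.getS().getSubMatrix(0, k - 1, 0, k - 1));

        // Terms with nearby latent vectors are expansion candidates,
        // even across languages ("river" ~ "fluss").
        double[] river = termSpace.getRow(0);
        for (int i = 1; i < terms.length; i++) {
            System.out.printf("cos(river, %s) = %.2f%n",
                    terms[i], cosine(river, termSpace.getRow(i)));
        }
    }

    private static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-12);
    }
}
```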

User Fingerprinting

 * 1) anonymous fingerprinting for:
   * free unregistered editor contributions.
   * sock-puppet detection (a hashing sketch follows).
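A minimal sketch of the idea; the attribute set and salting policy here are assumptions, not a worked-out design:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class AnonFingerprint {
    // Salted hash of request attributes: stable enough to group one anonymous
    // editor's contributions (and to flag accounts sharing a fingerprint)
    // without storing the raw IP.
    public static String fingerprint(String ip, String userAgent,
                                     String acceptLang, byte[] rotatingSalt)
            throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(rotatingSalt); // rotating the salt bounds how long links persist
        sha.update((ip + "|" + userAgent + "|" + acceptLang)
                .getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : sha.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```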


 * probably not a good GSoC concept

Lucene - NG Wiki Parser Filter
Integrate the cutting-edge parser as a Lucene filter to allow offline indexing of wiki source. Deliverable: an up-to-date Wikipedia parser. Problem: no specs. Problem: templates. This will probably be one of my own projects if I get to work full time.
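Until that parser exists, the shape of the integration can be sketched with Lucene's PatternReplaceCharFilter standing in for the real parser (the class name is invented, and the regexes handle only unnested templates and links):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.pattern.PatternReplaceCharFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

import java.io.Reader;
import java.util.regex.Pattern;

public class WikiSourceAnalyzer extends Analyzer {
    // Crude stand-ins for the real parser: unnested templates are dropped,
    // links keep their display text. Nested templates defeat these regexes.
    private static final Pattern TEMPLATE = Pattern.compile("\\{\\{[^{}]*\\}\\}");
    private static final Pattern LINK =
            Pattern.compile("\\[\\[(?:[^|\\]]*\\|)?([^\\]]*)\\]\\]");

    @Override
    protected Reader initReader(String fieldName, Reader reader) {
        // Char filters rewrite the raw wiki source before tokenization.
        return new PatternReplaceCharFilter(LINK, "$1",
                new PatternReplaceCharFilter(TEMPLATE, "", reader));
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new StandardTokenizer();
        return new TokenStreamComponents(source);
    }
}
```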

UIMA Content Extraction From Talk Pages
Use UIMA to automate content extraction from talk and user talk pages. This is to facilitate tracking of action on the various policies. Produce a Q&A system.
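A minimal sketch of a UIMA analysis engine for this, assuming posts can be located by their MediaWiki signatures; the annotator name, the regex, and the omitted custom type system are all assumptions:

```java
import org.apache.uima.analysis_component.JCasAnnotator_ImplBase;
import org.apache.uima.jcas.JCas;

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SignedPostAnnotator extends JCasAnnotator_ImplBase {
    // MediaWiki signatures look like "[[User:Name|Name]] ... 12:34, 5 May 2012 (UTC)".
    private static final Pattern SIGNATURE = Pattern.compile(
            "\\[\\[User:([^|\\]]+)[^\\]]*\\]\\].{0,80}?\\(UTC\\)");

    @Override
    public void process(JCas jcas) {
        Matcher m = SIGNATURE.matcher(jcas.getDocumentText());
        while (m.find()) {
            // A real annotator would create a SignedPost feature structure here
            // (custom type system omitted); this just logs author and offsets.
            System.out.printf("post by %s at [%d,%d]%n",
                    m.group(1), m.start(), m.end());
        }
    }
}
```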

This is on the fringe of content analytics.