User:YuviPanda/GSoC

How to interpret estimates
The given values are lower bounds; multiply by 3 for an upper bound, and by 2 for an average. Overshooting timelines is to be expected. Estimates will be adjusted as the project moves along.

Current implementation

 * Written in Perl (probably!)
 * http://en.wikipedia.org/wiki/User:WP_1.0_bot
 * Runs as a batch-processing bot

Rewrite specifications

 * Written in PHP
 * Backwards-compatible with the assessment templates currently in use
 * Should be 'good enough' to be deployed on enwiki
 * Feature parity with the current WP 1.0 Bot

Components

 * 1) Assessment data collector
 * 2) Updating assessment data whenever it changes
 * 3) Logging changes to assessments
 * 4) Importing initial data from the current bot
 * 5) Querying interface (assessment statistics + article lists)
 * 6) Arbitrary querying of assessment data
 * 7) Embedding arbitrary query results in different forms inside wiki articles (statistical table embedding)
 * 8) Creating, managing, and exporting 'interim collections'
 * 9) Usage of Extension:Collections (TBD)

WikiProjects
Maintains list of WikiProjects along with their template information.

Tasks

 * 1) Develop Project model, with DA code (est: 2 hours)
 * 2) Design a seamless interface to maintain the list of WikiProjects. Possibilities are:
    * a) Parse a wiki page, with the parser run as a cron job (preferred) (est: 6 hours)
    * b) A Special Page where WikiProject admins add new WikiProjects, with existing ones imported (est: 10 hours)
 * 3) Maintain the templates used by the WikiProjects in a form efficient for assessment detection. Possibilities are:
    * a) Keep the original data in the database, with a cron job generating a config file (YAML, serialized PHP, or JSON) that is loaded for assessment parsing (preferred) (est: 6 hours)
    * b) Hit the database every time assessments are parsed (ouch) (est: &infin; hours?)
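The preferred option above can be sketched roughly as follows (in Python for brevity — the actual bot would be PHP, and the row schema and field names here are purely illustrative assumptions):

```python
import json

# Hypothetical rows as they might come out of the project/template tables;
# the real schema is not settled yet.
TEMPLATE_ROWS = [
    {"project": "Military history", "template": "WPMILHIST",
     "quality_param": "class", "importance_param": "importance"},
    {"project": "India", "template": "WP India",
     "quality_param": "class", "importance_param": "importance"},
]

def generate_config(rows, path):
    """Cron-job step: flatten the DB rows into a template -> project
    lookup that the assessment parser can load without touching the DB."""
    config = {row["template"]: row for row in rows}
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

def load_config(path):
    """Parse-time step: one cheap file read instead of a DB hit per parse."""
    with open(path) as f:
        return json.load(f)

generate_config(TEMPLATE_ROWS, "templates.json")
config = load_config("templates.json")
print(config["WPMILHIST"]["project"])  # Military history
```

The point of the design is that option (b)'s per-parse database hit is replaced by a single file read, at the cost of the config being only as fresh as the last cron run.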

Assessments
Maintains assessments per WikiProject for each article. Only importance and quality assessments are tracked.

Tasks

 * 1) Develop Assessments model, with DA code (est: 2 hours)
 * 2) Write a 'parser' of sorts that can take wikitext or a preprocessed DOM and spit out all the assessments made by all WikiProjects in it (est: 6 hours)
 * 3) Detect which assessments have changed and update the database accordingly (est: 2 hours)
 * 4) Write out logs of changed assessments (est: 6 hours)
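Task 2's 'parser' might look something like this minimal sketch (Python for brevity; the real implementation would be PHP, driven by the generated template config rather than the hard-coded template names assumed here):

```python
import re

# Illustrative banner-template names; in practice these would come from
# the WikiProjects component's config file.
KNOWN_TEMPLATES = {"WPMILHIST", "WP India"}

# Matches simple {{Template|param=value|...}} invocations (no nesting).
TEMPLATE_RE = re.compile(r"\{\{\s*([^|{}]+?)\s*\|([^{}]*)\}\}")

def parse_assessments(wikitext):
    """Extract (project, quality, importance) from talk-page wikitext."""
    assessments = []
    for match in TEMPLATE_RE.finditer(wikitext):
        name, params = match.group(1), match.group(2)
        if name not in KNOWN_TEMPLATES:
            continue
        fields = dict(
            part.split("=", 1) for part in params.split("|") if "=" in part
        )
        assessments.append({
            "project": name,
            "quality": fields.get("class", "").strip() or None,
            "importance": fields.get("importance", "").strip() or None,
        })
    return assessments

text = "{{WPMILHIST|class=B|importance=High}} {{WP India|class=Start}}"
print(parse_assessments(text))
```

A regex is enough for this flat case; handling nested templates or redirected template names would need the preprocessed DOM mentioned in task 2.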

Logs
Records a log entry every time an assessment changes.

Tasks

 * 1) Develop logging model, with DA code (est: 2 hours)
 * 2) Write a Special Page extension to view and filter the log (est: 14 hours). Filter by:
    * Time of change
    * Type of change (importance/quality/other)
    * User making the change
    * Direction of change (improvement/deterioration)
    * Category/project of the article whose assessment changed
    * Article name
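The filter axes above amount to matching fields on a log entry. A minimal sketch (Python for brevity; the entry shape and field names are assumptions, not a settled schema):

```python
from datetime import datetime

# Hypothetical log entries; every field name here is illustrative only.
LOG = [
    {"article": "Taj Mahal", "project": "WP India", "user": "ExampleUser",
     "kind": "quality", "old": "Start", "new": "B",
     "time": datetime(2009, 4, 1)},
    {"article": "Panzer IV", "project": "WPMILHIST", "user": "OtherUser",
     "kind": "importance", "old": "High", "new": "Mid",
     "time": datetime(2009, 4, 2)},
]

def filter_log(entries, **criteria):
    """Each keyword is one filter axis (article, project, user, kind, ...);
    an entry is kept only if it matches every given criterion."""
    return [e for e in entries
            if all(e.get(k) == v for k, v in criteria.items())]

print(len(filter_log(LOG, project="WP India")))  # 1
```

The Special Page would translate its form inputs into exactly this kind of conjunction of criteria, executed as a database query rather than in memory.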

Query Engine
Set of core components that can execute arbitrary queries, producing both statistics and article lists.

Tasks

 * 1) Build a basic querying engine that can be extended in the future over other assessment backends (not just WikiProject-based assessments), with abstract, well-defined interfaces. The list of supported query operations would closely mirror LINQ's (est: 12 hours)
 * 2) Implement the querying engine for the WikiProject-based assessments (Component #1) (est: 12 hours)
 * 3) Implement a specific statistical engine for WikiProject-based assessments, with support for overall and per-project tables (est: 12 hours)
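To make the LINQ comparison concrete, here is a toy sketch of the kind of chainable, backend-agnostic interface task 1 describes (Python for brevity; the operation names and record shape are assumptions, not a settled design):

```python
# A minimal LINQ-ish query chain over an abstract assessment source.
# A real backend would translate these into SQL instead of iterating lists.
class Query:
    def __init__(self, source):
        self._source = list(source)

    def where(self, pred):
        return Query(r for r in self._source if pred(r))

    def order_by(self, key):
        return Query(sorted(self._source, key=key))

    def take(self, n):
        return Query(self._source[:n])

    def to_list(self):
        return self._source

ROWS = [
    {"article": "Taj Mahal", "quality": "B"},
    {"article": "Panzer IV", "quality": "Start"},
    {"article": "Agra", "quality": "B"},
]

b_class = (Query(ROWS)
           .where(lambda r: r["quality"] == "B")
           .order_by(lambda r: r["article"])
           .to_list())
print([r["article"] for r in b_class])  # ['Agra', 'Taj Mahal']
```

Because each operation returns a new Query, a WikiProject backend (task 2) and any future backend only need to implement the same small operation set for the statistics layer (task 3) to work unchanged.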

Querying Interface
User Interface to interactively query the assessments - both overall statistics and article lists.

Tasks

 * 1) Expose the query engine via a Special Page (est: 12 hours design + 12 hours implementation)
 * 2) Expose the statistical engine via a Special Page  (est: 12 hours design + 12 hours implementation).

Embedding Interface
Magic Words (or similar) that let you embed statistical tables inside wikipages. Customizable.

Tasks

 * 1) Build magic words to embed statistical tables/results in wikipages (est: 6 hours)
 * 2) Build magic words to embed query results in wikipages (est: 8 hours)
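The output of such a magic word would just be wikitext. A sketch of the statistical-table rendering (Python for brevity; the real rendering would live in the PHP extension behind the magic word, and the input shape is assumed):

```python
from collections import Counter

# Hypothetical assessment records feeding the statistics table.
ASSESSMENTS = [
    {"article": "Taj Mahal", "quality": "B"},
    {"article": "Agra", "quality": "B"},
    {"article": "Panzer IV", "quality": "Start"},
]

def stats_table(assessments):
    """Render article counts per quality class as a wikitext table,
    the way a statistics magic word might emit it into the page."""
    counts = Counter(a["quality"] for a in assessments)
    lines = ['{| class="wikitable"', "! Quality !! Articles"]
    for quality, n in sorted(counts.items()):
        lines.append("|-")
        lines.append(f"| {quality} || {n}")
    lines.append("|}")
    return "\n".join(lines)

print(stats_table(ASSESSMENTS))
```

Customization (which columns, which projects, overall vs per-project) would arrive as magic-word parameters and be passed through to the query engine.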