User talk:Anubhav iitr

--Qgil (talk) 00:43, 24 March 2013 (UTC)

some feedback
Thanks for your draft proposal. It would help you to link to some examples of your proficiency in web programming -- do you have past open source contributions or school projects that you could show us? Showing us the code is a good step. Also, have you tried playing with MediaWiki's code yet?

best, Sharihareswara (WMF) (talk) 17:28, 8 April 2013 (UTC)


 * You're welcome. I made a Facebook app based on Orkut Crushlist; you can check out the code here. Other than that, I have made some school projects which are currently deployed on a LAN. I was an intern, and will be a future employee, at Mygola, a 500 Startups-funded startup, where I developed a CRM tool for them. Unfortunately I can't show it working to you as of now :( . No, I don't have any open source contributions yet, but I feel GSoC is a good first step to bond with an open source community. Yes, I have gone through the MediaWiki codebase and submitted a patch for review for this bug.

--Anubhav iitr (talk) 07:12, 12 April 2013 (UTC)

Proposal Comments
Some quick initial comments:
 * Updating the UI to collect the corpus is going to be hard, much more work than one week. Getting a button added to the UI would need design review and approval from the administrators. Alternatively, you may be able to collect reverts from ClueBot NG or STiki, or possibly look at reverted edits by users who have been blocked for spam. You could also add a button to the page using JavaScript that tags the revision just before the revert; convincing a few administrators to use your JavaScript will be much easier than convincing them all that another button is needed in the interface.
 * Thanks for the suggestion. I guess I will use STiki; it labels edits as vandalism or innocent, so it would be easier to gather classified data.
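Once STiki-style labeled edits are in hand, the spam/ham classification at the heart of the proposal can be sketched as a tiny multinomial Naive Bayes in pure Python. This is only an illustration: the real filter's tokenization and features would be richer, and the training sentences below are made up.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase whitespace tokenization; the real filter would do more."""
    return text.lower().split()

class NaiveBayes:
    """Minimal multinomial Naive Bayes over spam/ham edit text."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.class_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.class_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def classify(self, text):
        total = sum(self.class_counts.values())
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        best_label, best_score = None, float("-inf")
        for label in ("spam", "ham"):
            score = math.log(self.class_counts[label] / total)  # log prior
            n = sum(self.word_counts[label].values())
            for w in tokenize(text):
                # Laplace-smoothed log likelihood
                score += math.log((self.word_counts[label][w] + 1) / (n + vocab))
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

The classifier just needs (text, label) pairs, which is exactly what a corpus of reverted-as-spam edits would provide.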


 * For the offline processing, you may want to focus on implementing the filter as a bot that reads all of the incoming edits and does the processing outside the WMF cluster. The data handling will need to be pretty mature before we can run it on the production servers. Running this on a wmflabs instance shouldn't be a problem.
 * That is exactly what I am doing. The filter will be a Python daemon, called from a PHP script in the SpamFilter extension, which will provide it with all the incoming edits. The filter will evaluate each edit as spam or ham, update the DB, and return the result.
 * So the difference is where the Python script actually runs. To get it on the Wikimedia cluster, it will need to be pretty mature and go through a rigorous review for performance and security before it can be deployed. This can take several weeks. If, instead, it's actually running on a wmflabs instance and just consuming a feed of recent changes (using IRC or the API), then there are almost no security or performance requirements. So I'd recommend starting with that, with the goal of having it run on the cluster (either from a hook, or as a job runner) during the second half of the program.
 * After talking with Anubhav, the goal will not be to run this on the WMF cluster this summer, but just to develop the extension; the WMF can evaluate its usefulness on WMF sites when it's done. So the above comments about the cluster are irrelevant. CSteipp (talk) 17:49, 25 April 2013 (UTC)
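As an illustration of the labs-side approach discussed above, a bot could poll recent changes through the action API's list=recentchanges module. This is a rough sketch: the endpoint URL is just an example wiki, and extract_edits only parses a response payload (no network call is made here); the field names follow the public API.

```python
import urllib.parse

# Any wiki's api.php endpoint works; this one is only an example.
API = "https://en.wikipedia.org/w/api.php"

def build_rc_url(rccontinue=None, limit=50):
    """Build a recentchanges query URL; rccontinue resumes a poll."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rctype": "edit",
        "rcprop": "title|ids|user|timestamp|comment",
        "rclimit": str(limit),
        "format": "json",
    }
    if rccontinue:
        params["rccontinue"] = rccontinue
    return API + "?" + urllib.parse.urlencode(params)

def extract_edits(payload):
    """Pull (title, revid, user) tuples out of one decoded API response."""
    changes = payload.get("query", {}).get("recentchanges", [])
    return [(c["title"], c["revid"], c["user"]) for c in changes]
```

A polling loop would fetch build_rc_url(), feed each extracted edit to the classifier, and follow the API's continuation token until caught up.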


 * Why create the classifier in Python? As you're developing it from scratch, it may be better to write it in PHP, unless Python has some advantage for the task.
 * Well, I am thinking of developing this as a bot (I will discuss that with Chris Steipp), so it will not really be part of the MediaWiki codebase. As such, PHP provides no real advantage. I saw a question on Stack Overflow concerning the same, where people suggested Python. Moreover, PHP does not have built-in support for multi-threading.
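To illustrate the multi-threading point, here is a minimal sketch of a daemon-style worker pool draining a queue of incoming edits. The classify() function is a placeholder stand-in for the real filter, not the actual model.

```python
import queue
import threading

def classify(edit_text):
    # Placeholder scoring; the real filter would run the trained model.
    return "spam" if "http://" in edit_text else "ham"

def worker(edits, results):
    while True:
        edit = edits.get()
        if edit is None:          # sentinel: shut this worker down
            edits.task_done()
            break
        results.put((edit, classify(edit)))
        edits.task_done()

def run_pool(texts, n_workers=4):
    edits, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(edits, results), daemon=True)
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for text in texts:
        edits.put(text)
    for _ in threads:
        edits.put(None)           # one sentinel per worker
    edits.join()                  # wait until every queued item is processed
    return dict(results.get() for _ in range(len(texts)))
```

In the daemon, the producer side of the queue would be the recent-changes feed rather than a fixed list.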


 * You may find some revision metadata interesting, too. The variables recorded by AbuseFilter are: user_editcount, user_name, user_groups, article_article_id, article_namespace, article_text, article_prefixedtext, article_recent_contributors, action, summary, minor_edit, old_wikitext, new_wikitext, edit_diff, new_size, old_size, edit_delta, added_lines, removed_lines, added_links, all_links, old_links, tor_exit_node, timestamp. Some values which could be interesting: aggregated user_editcount, article_namespace, summary, minor_edit, tokenized added_lines, time since last edit...
 * Can you provide me with an analysis of AbuseFilter showing how some of these variables relate to spamming? I will take a look at AbuseFilter in a while.
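As a sketch of how a few of the variables listed above might become classifier features: the variable names mirror that list, but the feature choices here are illustrative assumptions, not AbuseFilter's own analysis.

```python
def extract_features(vars):
    """Turn a dict of AbuseFilter-style variables into numeric features.

    Illustrative only: which features actually predict spam is exactly
    what the trained classifier would have to determine.
    """
    old, new = vars["old_wikitext"], vars["new_wikitext"]
    added_text = "\n".join(vars.get("added_lines", []))
    return {
        "user_editcount": vars.get("user_editcount", 0),
        "is_minor": int(vars.get("minor_edit", False)),
        "edit_delta": len(new) - len(old),
        "added_link_count": added_text.count("http://") + added_text.count("https://"),
        "empty_summary": int(not vars.get("summary", "").strip()),
    }
```

Low edit count, a large positive delta, added external links, and an empty summary together would be a plausible spam signal for the model to weigh.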


 * The alpha/special/whitespace characters should be configurable/depend on the language. Perhaps Unicode properties could be used.
 * That's a good suggestion. I will keep that in mind.
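One possible way to make the character classes language-neutral, as suggested above, is to use stdlib Unicode general categories instead of hard-coded ASCII ranges. A sketch, under the assumption that general categories are a good-enough proxy for alpha/digit/whitespace:

```python
import unicodedata

def char_class(ch):
    """Classify one character by its Unicode general category."""
    cat = unicodedata.category(ch)
    if cat.startswith("L"):                    # Letter, any script
        return "alpha"
    if cat.startswith("N"):                    # Number
        return "digit"
    if cat.startswith("Z") or ch in "\t\n\r":  # Separator / whitespace
        return "space"
    return "special"

def class_ratios(text):
    """Fraction of each character class in the text."""
    counts = {"alpha": 0, "digit": 0, "space": 0, "special": 0}
    for ch in text:
        counts[char_class(ch)] += 1
    total = len(text) or 1
    return {k: v / total for k, v in counts.items()}
```

This treats Devanagari or Cyrillic letters the same as Latin ones, so the feature works on non-English wikis without per-language configuration.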


 * What's the reasoning behind the short words % attribute? It seems more likely that a problematic edit contains a 25-character "word".
 * Looking up words in a dictionary may be interesting (% of the words found in the language dictionary) as an alternative method.
 * Will do that
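The two word-level features discussed above (short-word percentage and dictionary lookup) could be computed along these lines. The tiny word set is a hypothetical stand-in for a real per-language word list.

```python
# Hypothetical stand-in for a real per-language dictionary file.
DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def word_features(text, short_len=3, dictionary=DICTIONARY):
    """Share of short words and of dictionary words in the edit text."""
    words = text.lower().split()
    if not words:
        return {"short_word_pct": 0.0, "dict_word_pct": 0.0}
    short = sum(1 for w in words if len(w) <= short_len)
    known = sum(1 for w in words if w in dictionary)
    return {
        "short_word_pct": short / len(words),
        "dict_word_pct": known / len(words),
    }
```

A very low dict_word_pct flags gibberish and keyword-stuffed spam, which addresses the concern above that word length alone cuts both ways.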

Platonides (talk) 21:44, 25 April 2013 (UTC)

Going over your proposal again:
 * I noticed Aug 3-17 for the online integration. I'm not sure if you were planning on it or not, but it might be best to build that on top of the existing AbuseFilter extension. The extension has hooks that allow other extensions to supply variables, so a variable indicating whether the filter thinks an edit is spam could simply be added. That will save you from re-implementing things like logging (which would need to include deletion and suppression of entries, and other nasty issues like that), and the tagging/warning/blocking logic.
 * I have changed the proposal accordingly. I will study the AbuseFilter code before the GSoC timeline starts so I understand how to use it best.


 * Also, just to clarify: the week of Aug 24th, you're planning to use the existing JobQueue infrastructure in MediaWiki, correct? I think it should be able to do all of what you need.
 * No. Actually, as you can see in the model I am proposing, the filter would be a separate bot, so it won't be part of the MediaWiki codebase, and I guess I won't be able to use the MediaWiki job scheduler. Since the classifier is a separate bot, I was thinking of developing it in Python and writing the daemon script in it. I will be using the Advanced Python Scheduler (APScheduler) for the job queue.
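For what it's worth, the repeating-job pattern that APScheduler provides for the daemon (periodic retraining, flushing results, and so on) can be sketched with the stdlib sched module. This is a toy illustration of the pattern, not the proposed daemon itself.

```python
import sched
import time

def schedule_repeating(scheduler, interval, times, job, runs):
    """Run job() `times` times, `interval` seconds apart, collecting results."""
    def wrapper():
        runs.append(job())
        if len(runs) < times:
            scheduler.enter(interval, 1, wrapper)  # reschedule ourselves
    scheduler.enter(interval, 1, wrapper)

runs = []
s = sched.scheduler(time.time, time.sleep)
# In the daemon, job would be e.g. a retraining or DB-flush function.
schedule_repeating(s, interval=0.01, times=3, job=lambda: "retrain", runs=runs)
s.run()  # blocks until no jobs remain
```

APScheduler wraps the same idea in a nicer interface (cron-style triggers, background threads), which is why it suits a long-running daemon.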


 * Lastly, do you need to schedule time to come up to speed on MediaWiki in general, and to get your dev environment set up? Just make sure to give yourself time. A lot of people will be traveling to Wikimania the week of Aug 5th, so we want to make sure you're all set up before that to hit your Aug 3rd-10th milestone. CSteipp (talk) 22:03, 26 April 2013 (UTC)

Are you already on the Wikimedia mailing list for India?
Please join the Wikimedia India mailing list so you can keep up with what's happening in the Indian Wikimedia community! Sharihareswara (WMF) (talk) 15:18, 8 May 2013 (UTC)


 * Thanks, Sumanah, I have joined it.

GSoC / OPW IRC AllHands this week
Hi, you are invited to the GSoC / OPW IRC AllHands meeting on Wednesday, June 26, 2013 at 15:00 UTC (8:30pm IST, 8am PDT). We have done our best to find a time that works decently in as many timezones as possible. Please confirm at qgil@wikimedia.org so I can add you to the calendar invitation and have your preferred email for other occasions. If you can't make it, that's fine, but let me know as well. Thank you!--Qgil (talk) 18:01, 24 June 2013 (UTC)

Wrapping up GSoC
Congratulations on your PASS! Now please wrap up your GSoC project properly:


 * Update the related Bugzilla report(s) accordingly, filing reports for known bugs when appropriate.
 * Publish your wrap-up post at wikitech-l (as an email or a blog post) and then add the URL to Mentorship_programs/status.

Take a break and celebrate. You deserve it! We hope to see you sticking around, extending your project or joining new tasks. If you need advice please check with your mentors or myself. I will be happy to help you in whatever I can!--Qgil (talk) 21:11, 1 October 2013 (UTC)

Notice: Admin activity review
Hello Anubhav iitr,

I hope that this message finds you well.

I am writing to inform you that you may lose your adminship (and other advanced permissions) on mediawiki.org because of inactivity.

A policy regarding the removal of advanced permissions (e.g.: administrator, bureaucrat, interface-admin, etc.) was adopted by community consensus in 2013. While initially that policy did not apply to this site, the mediawiki.org community decided in August 2020 to opt-in.

You are being notified because we have identified that your account meets the inactivity criteria stated in the policy: no edits and no administrative log actions for the last 2 years.


 * If you want to keep your advanced permissions, you should inform the community (at Project:Current issues) that the stewards have sent you this information about your inactivity. A community notice about this process has also been posted on said page. If the community has a discussion about it and then wants you to keep your advanced permissions, please contact the stewards at the stewards' noticeboard, and link to the discussion of the local community where they express their wish for you to continue to maintain your advanced permissions.
 * If you wish to resign your advanced permissions, you may do so by filing a request for removal on Meta-Wiki.
 * If there is no response at all within one month of this notification, the stewards will proceed to remove your advanced permissions without further notice.

In ambiguous cases, stewards will evaluate the responses and will refer a decision back to the local community for their comment and review.

If you have any questions, please let me know, or feel free to message us at the stewards' noticeboard. If you feel we've made a mistake and your account is active, we'd appreciate it if you let us know, and please accept our apologies.

Best regards, --MarcoAurelio (talk) (via MassMessage) 22:13, 9 January 2021 (UTC)