Talk:Requests for comment/CAPTCHA

Issue: image classification CAPTCHAs need a secret corpus

Image classification CAPTCHAs like ASIRRA, by definition, require the user to classify images in a way which is difficult to do by a computer. Unfortunately, this implies that verifying the correctness of the classification also cannot be easily done by a computer. Therefore, in order to work, any such scheme needs a large corpus of human-classified images. (If the corpus is too small, spammers can just learn it.)

Now, as it happens, the WMF does have a large corpus of human-classified images: Commons and its category system. Unfortunately, because this corpus is public, anyone could, in principle at least, just download it and apply existing image recognition tools to compile a reverse index mapping images to categories. Worse, they may not even need to — instead, they can use existing public image search engines like TinEye or Google Image Search to find the images' description pages on Commons, from which they can then extract the categories or whatever other information they need.

Now, granted, TinEye's and Google's coverage of Commons images is not currently perfect, but that's not really a state which we want to persist in the future. Furthermore, based on a quick test, at least Google's coverage is actually pretty good: out of ten images selected using the random file link on Commons, Google more or less correctly identified eight thumbnails, while TinEye found matches for four out of ten. (For two of the images, which happened to be locator maps, both Google and TinEye returned other maps of the same region using the same base map. Otherwise, Google mostly returned exact matches to Commons or Wikipedia, whereas TinEye mostly found copies from other sites.)

ASIRRA gets around this problem by using a proprietary image database contributed to Microsoft by Petfinder.com; only a small fraction of this database is publicly viewable, making database-cloning attacks infeasible. In principle, the WMF could do the same, either by relying on a third-party database or by collecting its own. However, both methods have their problems: using a third party would introduce an external dependency, something which the WMF has been unwilling to do in the past, while spending precious volunteer effort to compile a massive secret image classification database would seem perverse, given that the same effort could be spent on improving our public image categorization. --Ilmari Karonen (talk) 13:27, 4 September 2012 (UTC)
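
To make the attack above concrete: once the classified corpus is public, "solving" the challenge reduces to a lookup. A minimal sketch, assuming the attacker has downloaded the images and their published categories; the directory layout, function names and exact-hash matching are illustrative only, since a real attacker would use perceptual hashing or a public image search engine as described above.

```python
import hashlib
import os

def fingerprint(path):
    """Exact-match fingerprint of an image file. A real attacker would use a
    perceptual hash (or an image search API) so rescaled thumbnails still
    match, but exact hashing keeps the sketch self-contained."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_reverse_index(corpus_dir, categories_by_name):
    """Map image fingerprints to the categories published for each image.

    `categories_by_name` maps file names to category lists, as could be
    scraped from the public description pages of a categorized corpus."""
    index = {}
    for name in os.listdir(corpus_dir):
        index[fingerprint(os.path.join(corpus_dir, name))] = categories_by_name.get(name, [])
    return index

def solve_challenge(index, challenge_image_path):
    """Answer an image-classification challenge by lookup, without doing
    any actual image recognition."""
    return index.get(fingerprint(challenge_image_path), [])
```

No image recognition is involved at any point, which is exactly why such a corpus would have to stay secret.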

honeypot

Note, we already have a display:none'd honeypot text box on edit pages.
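
For context, the honeypot idea is a form field hidden from humans (e.g. via display:none) that naive bots fill in anyway, so any submission with that field populated can be rejected. A minimal sketch of the server-side check; the field name is made up and this is not the actual Extension:SimpleAntiSpam code.

```python
# Illustrative only: the honeypot field name is hypothetical.
HONEYPOT_FIELD = "wpAntispam"  # rendered with display:none, so humans never fill it

def is_probable_bot(form_data):
    """A submission with the hidden field filled in almost certainly came
    from a bot that blindly populates every input it finds."""
    return form_data.get(HONEYPOT_FIELD, "").strip() != ""

# A generic form-filling bot puts text in every field and is caught;
# a human (or a bot written specifically for this site) leaves it empty.
print(is_probable_bot({"wpTextbox1": "spam", HONEYPOT_FIELD: "spam"}))  # True
print(is_probable_bot({"wpTextbox1": "legitimate edit"}))               # False
```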

I imagine most spammers targeting Wikimedia are targeting it as a high-profile target, so they are making their bots specific to us (as opposed to trying to hit every site on the internet). That tends to make the honeypot approach ineffective (if I understand correctly what you mean by honeypot). Bawolff (talk) 22:37, 10 January 2013 (UTC)

That's Extension:SimpleAntiSpam, and yes, as far as I know very few wikis find it useful. --Nemo 23:07, 10 January 2013 (UTC)

FancyCaptcha and effectiveness

The effectiveness of captchas will be tested a bit by m:Research:Account creation UX/CAPTCHA, but (sadly?) it's not the main object of the test. The only valid evidence comes from the wikis which try different things and monitor the results over longer periods: it's enough to read their experiences, e.g. on Manual talk:Combating spam and Extension talk:ConfirmEdit, to see that only extreme captchas like Asirra or custom solutions like QuestyCaptcha work against bots while remaining manageable for people. Of course it's possible that nobody out there is configuring FancyCaptcha correctly (it's quite hard), but the images which are difficult for bots are even more difficult for humans. --Nemo 07:20, 11 January 2013 (UTC)
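
For comparison, a QuestyCaptcha-style check amounts to a small, wiki-specific question-and-answer list. A minimal sketch of the idea, with made-up questions and matching rules rather than the extension's actual configuration or behaviour:

```python
# Hypothetical wiki-specific questions; a real deployment keeps its own list
# in the site configuration and rotates it once answers leak.
import random

QUESTIONS = {
    "What is the name of the software this wiki runs on?": {"mediawiki"},
    "Please type the last word of this sentence": {"sentence"},
}

def new_challenge():
    """Pick a random question to show alongside the edit/registration form."""
    return random.choice(list(QUESTIONS))

def check_answer(question, answer):
    """Accept any expected answer, ignoring case and surrounding whitespace."""
    return answer.strip().lower() in QUESTIONS[question]

print(check_answer("What is the name of the software this wiki runs on?", " MediaWiki "))  # True
```

Its strength is that the questions are local to the wiki, so there is no global corpus for spammers to learn; its weakness, per the honeypot discussion above, is that a spammer targeting this site specifically can answer the questions once by hand and script the rest.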

The ideal solution

https://xkcd.com/810/ Anomie (talk) 13:57, 11 January 2013 (UTC)