Topic on Talk:Article feedback

Is this a positive or a negative?

WereSpielChequers (talkcontribs)

My concern about article rating systems is that, like templating, they risk diverting some editors and potential editors away from improving articles and towards critiquing them, as well as driving away some existing editors who don't like being judged badly.

We have major problems in that our editing community is not recruiting as many new editors as it used to, nor are we persuading as many of them to stick around. One theory is that it is the templating & deletionist culture that has soured the community and made this a less attractive place to spend time editing. If so then another tool to divert people away from improving the Pedia and towards annoying other editors is a step in the wrong direction.

I may be wrong in this, and it could be something completely different that is making editing Wikipedia a less attractive hobby. But if this trial goes ahead, I would like to see two sets of statistics collected:

  1. Number of edits to the 100,000 articles in the trial compared to an equally random control sample of 100,000 articles not in the trial.
  2. Some sort of retention analysis of the authors of the articles in the trial.


The first of these is relatively straightforward, and I would hope that if the control sample gets more edits than the test sample then the trial will be ended and the Article feedback tool removed.
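One straightforward way to run that comparison is a permutation test on per-article edit counts collected over the trial period. A rough sketch in Python follows; the edit counts below are invented for illustration, and in practice each list would hold 100,000 entries pulled from the revision history:

```python
import random
import statistics

def mean_diff(a, b):
    return statistics.mean(a) - statistics.mean(b)

def permutation_test(test_counts, control_counts, n_iter=10_000, seed=0):
    """Two-sided permutation test: how often does a random relabelling of
    the pooled articles produce a mean difference at least as extreme as
    the observed one?"""
    rng = random.Random(seed)
    observed = mean_diff(test_counts, control_counts)
    pooled = list(test_counts) + list(control_counts)
    n = len(test_counts)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        if abs(mean_diff(pooled[:n], pooled[n:])) >= abs(observed):
            extreme += 1
    return observed, extreme / n_iter

# Invented per-article edit counts during the trial period:
test_sample = [3, 0, 1, 4, 2, 0, 5, 1, 2, 3]
control_sample = [4, 1, 2, 5, 3, 1, 6, 2, 3, 4]
diff, p_value = permutation_test(test_sample, control_sample)
# diff is negative here: the test articles averaged one edit fewer.
```

If the control sample consistently came out ahead with a small p-value, that would be exactly the signal described above for ending the trial.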

The second of these is more difficult to quantify. If this does have a negative effect on editors, it is quite likely that some editors will tolerate a test on 3% of the articles they've contributed to, but would be lost if it were rolled out to all of them. I'm particularly concerned about editors who are not quite fluent in English: I rather suspect that we have a number of editors who can partly justify their hobby of writing Wikipedia as a free way to get feedback on their written English. If such editors found that they were getting less constructive feedback from collaborative editing and more negative feedback from this tool, we would risk losing them.

Of course it is possible that something completely different caused the decline in editor numbers, or that this might even recruit more editors than it loses us. But it is important that if we trial this we measure the right thing to tell us whether it is positive or negative for the project.

History2007~mediawikiwiki (talkcontribs)

In a complex environment such as Wikipedia, guessing what causes editor attrition is pure guesswork. There is hardly any scientific basis for assuming that this will cause attrition based on simple data analysis. Rational reasoning, however, may give hints: e.g. if an expert writes something and many joker IPs rate it as low, he may, of course, get fed up and leave. However, this tool is the beginning of reliability analysis in Wikipedia, and in 3-5 years it will lead to great results if nurtured. History2007 23:56, 13 May 2011 (UTC)

This post was posted by History2007~mediawikiwiki, but signed as History2007.

Howief (WMF) (talkcontribs)

The issue of cannibalization of edits for ratings is something we need to watch very closely. We are hoping this feature will actually increase participation by serving as an on-ramp for editing. The idea is that it will take readers from "doing nothing" (reading) to "doing something," and that once users have done something, they will be more likely to participate in other ways, namely editing.

The other possibility, as WereSpielChequers mentions, is that users who would otherwise have edited will now rate instead, since it's an easier thing to do. I think it's a good idea to track the edit patterns of the 100k group of test articles, though I think comparing them against a control group of 100k randomly selected articles could be a little problematic. The main concern is that the 100k control group may have different editing characteristics from the test group (even though they theoretically shouldn't). We could look at the before/after behavior of the 100k test group to see if there are any meaningful differences in editing volume. We could also look at the before/after for both the test and control groups.
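Looking at before/after for both groups is essentially a difference-in-differences comparison, which controls for site-wide trends that affect both groups equally. A minimal sketch, with per-article monthly edit counts invented purely for illustration:

```python
from statistics import mean

def diff_in_diff(test_before, test_after, control_before, control_after):
    """(Change in the test group) minus (change in the control group).
    A negative value would suggest the feedback tool coincided with a
    relative drop in editing on the test articles."""
    return ((mean(test_after) - mean(test_before))
            - (mean(control_after) - mean(control_before)))

# Invented monthly edit counts per article:
test_before, test_after = [4, 2, 3, 5], [3, 1, 2, 4]
control_before, control_after = [4, 3, 2, 5], [4, 3, 2, 5]
effect = diff_in_diff(test_before, test_after, control_before, control_after)
# effect is negative here: the test group dropped by one edit per article
# on average while the control group was flat.
```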

I'm not sure how we would measure the effect low ratings could have on editors of a particular article. Maybe we can identify a sample of articles that receive poor ratings and look at their edit histories?

This post was posted by Howief (WMF), but signed as Howief.

History2007~mediawikiwiki (talkcontribs)

There is a real limit to what can be determined for sure from looking at a few pages and a few editors. I do not know what the exact effect will be, but I do know that it will be hard for me to buy any hard conclusions given that we unfortunately do not have access to what goes on in the heads of 100,000 editors. The closest (and of course most difficult) path would be to interview 500 of them. Then we would have a rough answer - maybe. Anything less than that would be pure speculation in my view, either way. History2007 13:45, 17 May 2011 (UTC)

This post was posted by History2007~mediawikiwiki, but signed as History2007.

WereSpielChequers (talkcontribs)

Hi Howie, I agree it would be great if rating an article made it easier for readers to interact with the pedia and, as a result, more of them edited. But we need to test for the possibility that this feature diverts potential editors away from improving the pedia. 100,000 is a very large sample; provided it is a genuinely random one, it should be possible to create a control sample and compare the edit counts, perhaps omitting a few anomalous articles that have recently been newsworthy (Paul Revere springs to mind).

Comparing the 100,000 to the average for the pedia will almost certainly show that they are edited less, if only from the inevitable selection bias of omitting the new articles created during the trial. So to be fair the comparison needs to be with a control sample, or perhaps against the same articles in a previous period - though you'd need to weight the two according to total edits across the pedia in the two periods.
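The weighting described here amounts to comparing an article's share of all edits across the pedia in each period, rather than its raw counts. A small sketch with hypothetical numbers:

```python
def relative_editing_change(article_p1, article_p2, site_p1, site_p2):
    """Ratio of the article's share of site-wide edits in period 2 to its
    share in period 1. Values below 1.0 mean the article lost ground
    relative to the wiki as a whole, even if its raw edit count rose."""
    share_p1 = article_p1 / site_p1
    share_p2 = article_p2 / site_p2
    return share_p2 / share_p1

# Hypothetical: 10 edits out of 1,000 site-wide before the trial,
# 12 edits out of 1,500 site-wide during it.
change = relative_editing_change(10, 12, 1_000, 1_500)
# change comes out to about 0.8: the article's edit share fell roughly
# 20% despite receiving more raw edits.
```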

Of course there is the possibility that the effects will be longterm - either positively with readers spending months rating hundreds of articles and then starting editing, or negative with readers spending some time rating articles then giving up on wikipedia because rating an article badly doesn't necessarily result in someone coming along and improving it.

I'm pretty sure that the easy-to-use templating tools have diverted some potential editors from improving a few articles to critiquing larger numbers of them, and if this tool exacerbated that unhealthy trend I believe it would be very damaging for the community and the project. But I would emphasise the if: I would be delighted if the 100k control test showed that the assessment tool was simply getting more readers to interact with the pedia without diverting any away from improving it.

Rich Farmbrough (talkcontribs)

"guessing what causes editor attrition is pure guesswork." That is WereSpielChequers' point: here is a random sample of 100k articles that is easy to compare against another, similar control set. We can easily measure the activity of the authors of the two sets before and after the trial has been running for a month.

He7d3r (talkcontribs)

"driving away some existing editors who don't like getting judged badly."

Before contributing to a wiki, every editor has to agree that their writing can be edited, used, and redistributed at will, and for other people to "edit" their existing text, they will probably need to judge it beforehand (e.g. thinking "this phrase is not good" or "this should be rewritten in a better way"). So I don't think the editors of a public wiki should expect that the content they submit is not going to be judged. If someone doesn't like this possibility, they are probably not going to edit anyway. I'm not sure the ArticleFeedback tool will make this worse. It may even improve things by letting the authors receive some feedback about their work. This is particularly important for Wikibooks editors, since each wikibook usually has no more than two authors. I'm inferring this based on what I see on Portuguese Wikibooks. E.g.:

The authors should not fool themselves into thinking that because they are the only authors of a text, everyone has to like it. The tool would let them know what the readers think about the text (and if some aspect of the text is poorly rated, they can try to reword the explanations, or to add more references, examples and so on...).

WereSpielChequers (talkcontribs)

My spelling and possibly even my poor grammar have improved as a result of my time on Wikipedia. But those improvements have come from specific feedback from other editors, either correcting my work or leaving a message on my talkpage. The problem with the article assessment tool is that it gives general and anonymous feedback: general, so you don't know what improvements the assessor wants, and anonymous, so you can't even discuss it with them. If someone assesses your article poorly you can try adding more references, rewording, or adding examples. But with this assessment tool, how are you supposed to know that their concern is that they'd rather you'd written it in American English, or that they disagree with you as to whether dates should be CE/BCE or AD/BC, or that your article about a mountain in South America covers the biology, mountaineering and vulcanology aspects but omits any mention of it being the abode of the gods of the local pre-Columbian civilisation? My experience is that not all negative feedback requires changes to the article; the more specific the feedback the better, and sometimes you need to discuss and clarify people's concerns.

Currently editors get all sorts of feedback ranging from a typo fix to a complete rewrite, but if another editor simply blanks the page, blanks a section or reverts a contribution then unless they justify their edit in their edit summary we are liable to treat that as vandalism. The risk of the article assessment tool is not that it gives feedback, but that it could give vague unusable feedback of the sort that we are used to treating as vandalism.

Knowing what the readers think about the text would be great, but is this tool giving us specific actionable feedback of the sort that articles benefit from or vague unfocussed criticism that serves no more useful purpose than adding a cleanup tag without giving an explanation on the talkpage?

Wasbeer (talkcontribs)

The problem is that we want to give text input, not a 1-5 star rating. This feedback should, of course, be visible to the public. We already have a place where anyone can place feedback on an article. It is called a talkpage.

But the decision to NOT move this tool to the talkpage and develop it into something useful while it is there has already been made.

I think this is a very unfortunate decision. I think I realize what the implications are, and I think it would be a great idea to move the AFT to the talkpages and display the feedback there, especially at this stage of its development.

As History2007 points out it might take multiple years for the AFT to achieve its potentially "great results", and some people are even sceptical about the greatness of those results.

The AFT is currently still shown to people who create a new stub, I hope the new deployment comes soon so that problem will be fixed.

He7d3r (talkcontribs)

"The AFT is currently still shown to people who create a new stub, I hope the new deployment comes soon so that problem will be fixed."

Until bug 29212 is fixed, I think it is possible to request that the stub categories be added to the extension's blacklist of categories ($wgArticleFeedbackBlacklistCategories), so that the tool stops being added to articles tagged as stubs.
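For a wiki administrator that would be a small change in LocalSettings.php. The variable name is the one cited above from the extension; the category names below are purely illustrative assumptions and would have to match the wiki's actual stub categories:

```php
// LocalSettings.php: keep the Article Feedback widget off listed categories.
// $wgArticleFeedbackBlacklistCategories is the ArticleFeedback extension's
// setting; the category names here are hypothetical examples.
$wgArticleFeedbackBlacklistCategories = array(
    'Article_Feedback_Blacklist',
    'Stubs',
);
```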

Wasbeer (talkcontribs)

True, but that is unfortunately no solution for new users. I know I have to include a stub template if I write a stub, but most Wikipedia users do not. So they will see the AFT until someone comes along and gives the stub the right template.
