Topic on Talk:Article feedback

Three requests for Fabrice (new project lead)

Wittylama (talkcontribs)

Hi Fabrice, welcome onboard! I'd like to make three requests, if I may.

1) Like Brandon said in the "Gaming the system" thread - go with the qualitative feedback option. In terms of recruiting new editors, giving people an easy way to give constructive feedback in words (not just star-ratings) will identify those potential new editors who are happy and able to give good editorial advice, so they can be engaged by the existing community to contribute directly. Currently, the star-rating system doesn't help us separate the more literate reviewers from the people who just like clicking things. Furthermore, giving feedback in words is actually useful for the existing editing community. As it stands, the tool is only (potentially) useful as a recruitment device for new users; the actual reviews that are given are not being used by the community, which only drives resentment towards the tool "cluttering" the space.

2) I've said for a while that the quantitative reviews (the rating scales) will only really become useful when they are expressed over time. Show a 4-coloured line graph (one line for each of the criteria) over time and then editors can quickly see whether the article is improving, declining, or has an unexplained spike that could imply someone trying to game the system. As it stands, having a single number (even if the older reviews "expire") does not tell us anything that the talkpage assessments don't already tell us - and more often than not they don't even do that.
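To make the idea concrete, here is a minimal sketch of how such a timeline could be derived from raw ratings. All names and the sample data are hypothetical (the real AFT schema may differ): timestamped per-criterion ratings are bucketed by month and averaged, giving one data series per criterion for a line graph.

```python
from collections import defaultdict
from datetime import date

# Hypothetical raw data: (date of rating, criterion, score 1-5).
ratings = [
    (date(2011, 8, 3), "trustworthy", 4),
    (date(2011, 8, 19), "well-written", 3),
    (date(2011, 9, 2), "trustworthy", 2),
    (date(2011, 9, 25), "trustworthy", 5),
    (date(2011, 9, 30), "well-written", 4),
]

def monthly_averages(ratings):
    """Average each criterion per (year, month) bucket -> points for a line graph."""
    buckets = defaultdict(list)
    for day, criterion, score in ratings:
        buckets[(criterion, day.year, day.month)].append(score)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

series = monthly_averages(ratings)
# e.g. series[("trustworthy", 2011, 9)] == 3.5  (average of 2 and 5)
```

A sudden jump between adjacent monthly points is exactly the kind of "unexplained spike" described above.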

3) Hide it from logged-in users :-) A large proportion of the people complaining about it are asking for aesthetic improvements so it does not take up so much room, or at least is centered. Since the tool is only asking for reviews from the readers (not editors), why not just auto-hide it for logged-in users? The qualitative and/or quantitative feedback it generates can be made visible somewhere else (e.g. on the talkpage alongside the article's quality "class" and "importance" assessments).

WhatamIdoing (talkcontribs)
identify those potential new-editors who are happy and able to give good editorial advice can be engaged by the existing community

Can you explain how you expect this to work? Imagine that I rate the page and say "Needs more pictures". AFT duly records for the data dumps my ratings, my comment, and whether or not I'm logged in. No one gets to see my username or IP. So how exactly will you "engage" me?

Wittylama (talkcontribs)

I'm not exactly sure how it would work, but the software's not built yet so I think we're allowed to imagine things at this point :-) My main concern with the AFT software is the dichotomy between its ostensible reason for existence (provide useful feedback to editors) and its actual reason for existence (convert readers to becoming editors), so I'm trying to think of ways that it can do both at the same time. My guess is that if someone actually leaves a thoughtful and literate comment, then that's the person we'd like to be able to "convert" to being an editor. It's much more useful to spend our time working with her than trying to teach the Wikipedia MoS to someone who leaves a comment like "great article".

Thinking out loud... the software could possibly include a checkbox for people giving comments saying "I am happy to be contacted in response to my comment", which then lets them leave their email address. Wikipedians who want to could then click "email the person who wrote this comment", which would hide the commenter's email address just like the current "email this user" system does. From there it's a matter of designing tools to encourage the commenter to engage in the talkpage discussion directly... and the next thing you know they're doing Featured Article peer reviews! :-)

Fabrice Florin (talkcontribs)

Dear Wittylama and WhatamIdoing, thank you so much for your kind welcome and good recommendations!

We are just getting started on incremental designs for the next version of the Article Feedback Tool, so your comments are extremely helpful at this early stage. Our latest wireframes are shown in this new slide show from Oct. 11th, as stated in the previous thread. I expect to post another set of wireframes in the next 24 hours, and keep iterating every few days, based on feedback from the Wikimedia community.

I agree with Wittylama's request that we emphasize qualitative feedback over ratings, for all the reasons he stated. As I pointed out earlier on this talk page, our current direction is to de-emphasize the ratings in the next version of the AFT, and to invite readers instead to offer specific suggestions for improvement (so their feedback can be more constructive and useful to editors). To that end, we're looking at services like GetSatisfaction.com for inspiration.

Your second point is also well taken, that a single aggregated rating is not nearly as useful as a graph showing how that rating has evolved over time. Given that we are de-emphasizing ratings, this timeline feature may not be implemented immediately, but is definitely something I think we should do.

I will respond to your third point in the next sub-section, to address WhatamIdoing's observations.

Thanks again for all your invaluable feedback. I really appreciate the opportunity to develop this project together, as a community!

He7d3r (talkcontribs)
Example from Extension:ReaderFeedback

FYI: notice that another MediaWiki extension provides this kind of graph of ratings over time. Take a look at the image on the right.

Maybe some code could be reused when adding such a feature to the new version of ArticleFeedback.

He7d3r (talkcontribs)

PS: Since Extension:LiquidThreads changes the order of threads on a talk page and also lets users read new comments on Special:NewMessages, it would be better to provide a link when saying "previous thread" (there is a "Link to" button on each thread for this), to make sure other people know which thread you are referring to. ;-)

Fabrice Florin (talkcontribs)

Dear Helder, thank you so much for your good insights. I really like your idea of using a line graph to track ratings over time. We will aim to include this feature in phase 2 of the AFT, in a 'Ratings detail' panel on the feedback page.

Right now, we are focusing on other forms of feedback that de-emphasize ratings in favour of free-form text and less judgmental forms of input (e.g. checkboxes for suggested improvements), based on the general recommendations of the community. I will post an updated set of slides and wireframes at the end of the day.

Thanks as well for your kind tip about including links to other threads in my comments; I will do this in the future. I also love how Wikipedians like you go out of their way to help newbies like me. It's very gracious of you :)

He7d3r (talkcontribs)

You're welcome! (but I'm more like a "wikibookian", since pt.wikibooks is my "homewiki" ;-)

WhatamIdoing (talkcontribs)
Since the tool is only asking for reviews from the readers (not editors)
  • Unlike some of our self-identified editors, the WMF believes (based on data, by the way) that most of our editors (especially occasional editors) are also legitimate readers, and your proposal doesn't make sense for people who do both. Power users can figure out how to turn it off manually, but these occasional folks are much less likely to be able to figure out how to turn it on.
  • In my spot-check, about 3% of the ratings are from logged-in users, which I suspect is a response rate higher than these logged-in users' reading rates. This higher level of use indicates that editors are accepting of the tool being visible.
  • If the tool needs improvements, and you hide it from the people most willing and able to identify and squawk about those needed improvements, then how will we learn what isn't working for people?
Wittylama (talkcontribs)

Whilst it is certainly true that there is a crossover between reader and editor, and that editors are certainly allowed to use the AFT, editors are not the intended target for this tool. Jorm (Brandon) says so directly in the "gaming the system" thread: "I do not believe that there is any intention to integration with the current system of article assessment. Article assessment is done by people who are devoted to it or are (supposedly) subject-matter experts. This tool is aimed at readers." Whilst it's just a suggestion and certainly not necessarily the "only answer", turning the tool off (or collapsing it by default?) for logged-in users would enable the tool to be even more clearly targeted at readers without offending the editors who find it intrusive. This is similar to the "Mood Bar" tool which is only visible to people who have very recently created a user account - it is designed for a specific group and would be intrusive for others. And, just like the Mood Bar, the actual feedback it generates can be displayed elsewhere (e.g. talkpage) in a format designed to be of most benefit to those who use that place - the editors.

Fabrice Florin (talkcontribs)

WhatamIdoing makes some very reasonable points that there are distinct benefits in encouraging editors to use the AFT as well, not just readers.

In fact, we are considering sorting the feedback page for each article so that editor comments are listed first, letting us hear from these more experienced and committed users above and beyond reader comments.

And if we follow Jorm's idea of using this tool to create a "work list", it would become even more important to have editorial participation.

Lastly, we want to make sure that we create more opportunities for readers and editors to collaborate, a goal which could be harmed by segregating these two user groups. We hope this tool can be valuable for everyone!

But we will strive to make it more compact, to address editor concerns. And it is already possible for an editor to disable that function if they do not want it.

Thanks for this great discussion, which is truly helpful! You are asking the right questions and it will make our work a lot easier as a result.

Jason Quinn (talkcontribs)

Could you explain what you mean by, "In my spot-check, about 3% of the ratings are from logged-in users, which I suspect is a response rate higher than these logged-in users' reading rates. This higher level of use indicates that editors are accepting of the tool being visible." I don't follow you.

WhatamIdoing (talkcontribs)

I don't know what confuses you, so here are some random comments that might be helpful:

  • 3% of ratings are from logged-in users (in the sample set I checked): The data's freely available, if you want to check for yourself.
  • It's generally alleged that less than 1% of page views come from logged-in users (especially after removing page "views" by bots, high-speed AWB-using editors, etc., who are obviously not reading the pages).
  • The discrepancy between 3% of ratings vs <1% of readings means that logged-in users are providing page ratings at (at least) three times the rate of non-logged-in users.
  • If users don't like a tool (any tool), they don't use it. The fact that the logged-in users are using the tool indicates that they (as a whole, not every single one of them) do accept the tool.
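The arithmetic behind that comparison can be made explicit. The counts below are hypothetical placeholders, chosen only to match the quoted 3% and ~1% shares; the real ratio would be at least this large, since the true logged-in view share is said to be *under* 1%.

```python
# Hypothetical counts, chosen only to match the quoted shares.
total_ratings = 10_000
logged_in_ratings = 300          # 3% of ratings come from logged-in users

total_views = 1_000_000
logged_in_views = 10_000         # ~1% of page views come from logged-in users

# Ratings submitted per page view, for each group:
logged_in_rate = logged_in_ratings / logged_in_views
anonymous_rate = (total_ratings - logged_in_ratings) / (total_views - logged_in_views)

ratio = logged_in_rate / anonymous_rate
# With these placeholder numbers, logged-in users rate at roughly 3x the
# per-view rate of anonymous users, which is the "at least three times" claim.
print(round(ratio, 2))
```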
Reply to "Three requests for Fabrice (new project lead)"