Thread:Talk:Article feedback/more effective to engage casual readers on talk page?/reply (2)

Actually, what you're saying reinforces my point, and you seem to overlook that I began by saying I think having user ratings is a good idea. But if the ratings don't really matter as quality ratings, then why shouldn't the rubrics reflect more accurately what we could expect to learn from even the most casual users? To reiterate my concerns about each:
 * Complete. I would be very interested in knowing whether users got the information they needed or expected, and asking them that question directly is a more targeted way to collect the data; instead, they are asked to evaluate the article against an ideal of completeness. The user, and only the user, knows whether she got what she came looking for; she can provide precise and accurate data in response to that question. She doesn't necessarily have the body of knowledge to evaluate whether the article is "complete," because she may have come there to learn about the subject from scratch. The question "Does the article need major improvements?" also gets at both completeness and an overall impression of quality.
 * Well-written. This invites an aesthetic response that's irrelevant to an encyclopedia. "Well-written" generally means "stylistically elegant, pleasurable to read." I want to know whether the user found the article clear and easy to read, and perhaps even interesting.
 * Objective. Objectivity is not even a WP goal! In fact, the NPOV policy explicitly tells us not to confuse neutrality with objectivity. So why are we asking this at all?
 * Trustworthy. Either the information is verified by RS, or it isn't. I'm not sure what we're asking with this one. A question such as "Was the article accurate and balanced?" might get at these last two qualities.

"Would you recommend this article?" serves a general expressive function analogous to "like". If the goal is to engage readers more, and perhaps even encourage active participation, then a link at the bottom of the Ratings tool for providing verbal responses (which would take them to the talk page) does that better than the current dead-end process. And if the questions were fair measures of the goals of the encyclopedia and how it's used, I wouldn't mind the box being on the article page.

With customer-service questions, the collective data could yield stats along the lines of:
 * 57% of [number of respondents] readers got the information they needed from this article.
 * 35% thought the article was clear and interesting to read.
 * 42% thought the article was accurate and balanced.
 * 62% would recommend the article.
 * 18% thought it needed major improvements.

I recognize that would mean a complete redesign of the tool. But if the data could be analyzed by percentage, it would be much more useful to both editors and users.