Talk:Article feedback/Public Policy Pilot/Workgroup

Other projects

What about testing the extension on smaller wikis? Some Wiktionaries use a JavaScript-based tool to gather feedback; I suppose they would be interested. --Nemo 06:55, 15 September 2010 (UTC)

Ok, now I'll add the link: wikt:en:Wiktionary:Feedback (see also interwiki); and don't forget strategy:Special:RatedPages (there are lots of comments on the wiki about it). I don't understand why you're developing a new feature with a pilot on (a small part of) Wikipedia while there are several other projects that eagerly need such a feature, and in fact are already using something similar (but much cruder). --Nemo 07:01, 24 September 2010 (UTC)
Hey Federico!
I totally missed this given the flurry I've been trapped in over the past couple of days, and for that I apologize.
The answer is this: part of the reason we are doing this on such a small article subset is to ensure that the technology actually works and to catch immediate problems. While I don't see any moral or political reasons not to enable it in other places, the extension is slated for a series of rather rapid, iterative changes (hopefully improvements). So my advice is to wait a bit; I'm about to start on the design for phase 2 (some of the feedback we've already gotten dovetails with what was expected, and we're going ahead and implementing it).
In the meantime, I'd love it if you joined the workgroup and gave some ideas. You're a smart guy and can see around corners a lot.
I know that's not the answer you were looking for, but I hope that helps. --Jorm (WMF) 19:16, 24 September 2010 (UTC)

Assorted comments

This feature has a lot of potential, but the current implementation sucks.

A bit of background

First, we need to establish that rating articles/entries is not a new idea. The English Wiktionary, for example, has been doing this for years. You can look at wikt:User:Conrad.Irwin/feedback.js for the code that the English Wiktionary uses. A few key points about the English Wiktionary's implementation (a rough sketch of the approach follows the list):

  • because it's implemented in JavaScript, it only works for users who load and run JavaScript on this domain;
  • it only displays for anonymous users;
  • it displays in the sidebar;
  • it currently appears to work only in the (now antiquated) Monobook skin;
  • it uses a number of simple metrics for articles with a simple one-click interface; the options to choose from are: [this entry is] "good," "bad," "messy," "mistake in definition," "confusing," "could not find the word I want," "incomplete," "entry has inaccurate information," "definition is too complicated," and finally "if you have time, leave us a note."
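
For readers who haven't opened that script, the general shape of the approach is roughly the following. This is only a minimal sketch, assuming a modern MediaWiki environment; the option list is abbreviated, and the collection endpoint is a hypothetical placeholder, not the Wiktionary script's actual storage mechanism.

    // Minimal sketch of a one-click sidebar feedback tool in the spirit of
    // wikt:User:Conrad.Irwin/feedback.js. Endpoint and option list are assumptions.
    mw.loader.using( [ 'mediawiki.util' ] ).then( function () {
        // Only show the tool to anonymous readers, as the Wiktionary script does.
        if ( mw.config.get( 'wgUserName' ) !== null ) {
            return;
        }
        var options = [ 'good', 'bad', 'messy', 'confusing', 'incomplete' ];
        options.forEach( function ( label ) {
            // One link per option in the sidebar "toolbox" portlet.
            var link = mw.util.addPortletLink(
                'p-tb', '#', 'This entry is ' + label, 't-feedback-' + label
            );
            link.addEventListener( 'click', function ( e ) {
                e.preventDefault();
                // '/feedback-endpoint' is a hypothetical collection URL.
                fetch( '/feedback-endpoint?page=' +
                    encodeURIComponent( mw.config.get( 'wgPageName' ) ) +
                    '&vote=' + encodeURIComponent( label ) );
                mw.notify( 'Thanks for your feedback!' );
            } );
        } );
    } );

The one-click nature is the key point: a single tap records a vote, with no form to fill in.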

Current ArticleAssessment implementation

The current implementation of ArticleAssessment has a few niceties:

  • it's implemented in PHP with a proper database backend;
  • it has a nice UI for rating an article (the stars are pretty).

But the main issues I see with it are:

  • it's enormous — the entire "view results" box shouldn't be shown at all until the user clicks something;
  • the metrics are terrible;
  • it's located at the bottom of lengthy articles, making it unlikely that anyone will see it; those who do see it will likely not want to participate because it looks complicated (as opposed to the one-click system that the English Wiktionary uses).

Room for improvement

My suggestions:

  • look at how a site like ted.com uses user feedback; the Wikipedias have hundreds of awesome articles that nobody knows about and they aren't sorted by anything useful currently; this tool could be adapted to create useful metrics, e.g., [this article is] informative, interesting, sloppy, boring, unintelligible, confusing (math articles, anyone?), biased
  • once you have ratings from users, you can generate all sorts of nifty tools; you can have the most interesting articles listed in a dynamic report; or you can have "select a random informative, well-sourced history article"; this is actually something that would be useful;
  • I understand and appreciate the desire to be unobtrusive, but the rating system needs to be more visible somehow; the sidebar is a good place to look at (esp. if you can reasonably collapse some of the interwiki links on long articles); it might also be possible to put an unobtrusive icon near the top of the page (the central focal point for nearly any article); mashable.com has been using a blue box at the top of articles—that's a bit much, I think;
  • further simplify the interface, but allow for more in-depth comments if the user wants to provide them.

Hope that helps, --MZMcBride 22:54, 24 September 2010 (UTC)

Response to MZMcBride's Comments

A couple of responses, so that the design rationale is better understood:

First, we decided specifically against allowing user comments with ratings. My opinion was (and I still hold it) that such comments will be either a) of little value or b) better as comments on the corresponding Discussion page. Allowing them would have left us with two directions:

  1. They aren't stored anywhere except some random table. They would quickly become outdated or useless, and they would require additional development to make them visible, likely resulting in yet another tab ("View Rating Comments" or some such);
  2. We inject the comments as new items on the Discussion page (either standard Talk or LiquidThreads). They are visible, to be sure, but since they would be left by users who do not normally engage in Discussion pages, any responses would either go unseen or confuse the user. (Users who understand Discussion pages already know to leave comments there.)

Further, comments in such a form are likely (at this stage) to be about the tool and not the article.

I agree that, from the viewpoint of a Wiktionary, comments left at rating time would be valuable, but Wiktionary entries do not spawn the same types of discussions that Wikipedia entries do, and this tool is targeted at encyclopedic content.

Second, the placement of the ratings box. The placement of the box at the bottom of the article is not by chance; it is very specifically by design. Placing it above, within the article space (or even in the sidebar), does not help to ensure that the article has actually been read. If the article is 7 screens long and the tool is located on the first screen (say, below the language links), then users will be encouraged to rate the article before they have read it completely.

I agree that the current placement is sub-optimal; I'd prefer it to appear before the reference list. However, to keep the impact low, we decided to place it as low as possible.

We are not mashable, nor are we Netflix or even Yelp. They have entirely different motivations for their ratings tools (they boil down to generation of clicks, which generates ad revenue [with the exception of Netflix, whose rating system is interestingly outside of scope]).

The tool's exact level of unobtrusiveness is deliberate as well, for a couple of reasons:

  1. Community Acceptance. It was determined early on that a "loud" ratings box would be received negatively by the community. My initial design had the View Ratings box and the Rate this Page box completely decoupled, with the View box at the top of the article (which is where I expect it will eventually live, should the tool be accepted). We decided that this was too much for the community to accept in one dollop, so I decided to visually connect them (I personally see the tool as two "tools" with discrete purposes - purposes that are, for all intents and purposes, at odds with one another). The vulnerability of the system to information cascade and anchoring is why results are hidden at the outset.
  2. It's Not the Point. The point of the article is the article, not the ratings box. Sure, the ratings are another aspect of an article, but they are not the article itself (just as the History is not, nor the Discussion - even though I believe those are just as important). I personally view the ratings histogram to be another vector (hah) within the History.

Regarding the display of the "View Results" pane at the outset: it must be obvious to the user that they can see the results for the article. A primary goal was to make the tool as minimal to use as possible (and a planned design is even more minimal than this one).

I cannot speak to the choice of metrics except to say:

  1. They are configurable. We can change them at any time (pending translations, of course).
  2. They are an experiment. I personally believe that we can approximate the expected values of three of them using analytics; the outlier is "Neutrality," which we may find is entirely useless on a metric scale (but may still be useful as a type of "honeypot" for reader venom). One of the answers we hope to get out of the workgroup is a better set of metrics. (The workgroup goes beyond metrics as well: I want to get better formulas for "stale" and "expired", for instance.)

This tool, too, is effectively implemented in JavaScript. The design decision behind that was one of performance: we want to reduce calls to the server database as much as possible. There are two questions we have to ask each time:

  1. Should the tool be displayed? (handled in PHP)
  2. Does the tool need to display existing ratings? (this is the big one, and handled via JavaScript)

As a result, simply injecting the HTML as the page gets rendered would have been easier but would also have been more of a burden. A full-scale roll-out would clearly be implemented in PHP, but for now it's done client-side.
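
To make that split concrete, here is a minimal sketch (not the extension's actual code) of the client-side half: PHP decides whether to ship the widget at all, and existing ratings are fetched only when the reader opens the results pane. The element id and the API module/parameter names are placeholders assumed for illustration.

    // Rough sketch of the PHP/JavaScript split described above.
    // 'view-results-toggle' and 'articleratings' are hypothetical names.
    mw.loader.using( [ 'mediawiki.api' ] ).then( function () {
        var api = new mw.Api();
        var loaded = false;
        var toggle = document.getElementById( 'view-results-toggle' );

        if ( !toggle ) {
            return;   // question 1: PHP decided not to render the tool here
        }
        toggle.addEventListener( 'click', function () {
            if ( loaded ) {
                return;   // only hit the database once per page view
            }
            loaded = true;
            // Question 2: fetch existing ratings lazily, client-side.
            api.get( {
                action: 'query',
                list: 'articleratings',                   // placeholder module name
                arpageid: mw.config.get( 'wgArticleId' )
            } ).then( function ( data ) {
                console.log( 'ratings payload', data );   // render into the hidden pane
            } );
        } );
    } );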

As far as graphs and histograms go, that's on the plan. We did not have sufficient design or development time to include them (though in the early design comps there are indications as to where they should go).

There's a lot of stuff that is "in the plan" that didn't make it into this revision, by the way. There is a roadmap, and I'm currently working to get a framework written for it. For example, the concept of "expired" ratings isn't in the current version, and I'm keen to get it into the design. Also, the idea of "self-identified experts" - we'd like to track that. Even if we don't apply weight to self-identified experts, it makes for an interesting line in the histogram. There are even more aspects that lie further in The Deep (ways to tie this into discussion systems, or viewing user rating histories and the like).

This became a book. Sorry. --Jorm (WMF) 05:33, 25 September 2010 (UTC)

Thanks for a 95% constructive and helpful comment, Mz. ;-) I'll add to Brandon's note that the design of the system reflects its primary intentions. These are explained in Article feedback/Public Policy Pilot/FAQs, but specifically, the quantitative assessment of change-over-time across defined quality dimensions is one of the objectives for this deployment. That's a lot easier when you're dealing with a four-variable / five-point scale where the vast majority of ratings submit complete data for all variables, as opposed to a tagging system with forced prioritization, where your objective is to highlight predominant characteristics (you're surfacing that a video is "inspiring", but you end up with very little information about how many people think it's "long-winded").

That is not to say that we didn't discuss tagging systems -- we did, and I thank you for bringing up TED; I had only seen the output side of it, and your post inspired me to look at the input side. It's a very cool system, and I agree that a system like this could be very useful for precisely the kind of purposes you describe: surfacing articles with specific characteristics. Another direction to explore is the system employed by Newstrust, which offers a similar initial rating system to ours, and expands to allow for additional input for those who would like to provide it. --Eloquence 07:45, 29 September 2010 (UTC)

Workgroup open

Who is welcome to join the workgroup? Thorncrag 05:47, 27 September 2010 (UTC)

Hi. I already answered your question in the blog comments. guillom 13:50, 27 September 2010 (UTC)
Oh, sorry; I did not see that the last time I checked. Thorncrag 20:15, 27 September 2010 (UTC)

Colour of the Stars

Good morning. I've come to the test from an article in the German Signpost. Not sure whether this is the right place for feedback, but here goes: I do not like the colour of the rating stars, for three - partly cross-cultural - reasons:

  • In Central Europe, the Red Star is perceived as a symbol of Communism in general, and the Russian Army in particular. Neither of these are very friendly connotations for large sections of the user population here. Now I realize that your star is a bit more bulky than a pentagram, but a five-pointed red star is a five-pointed red star.
  • When teachers grade term papers and the like over here, red is used to mark errors. The more red in your paper, the worse it is. This runs exactly counter to the meaning implied here.
  • Red means stop, green means go. Again: red flags as markers of quality are trouble signs, not good things.

Would you consider changing the colour of the stars? To a dark green or a blue possibly? --Minderbinder 05:51, 29 September 2010 (UTC)

Colour significances vary. In the US, red is currently the symbol of the (right-of-center) Republican party. DGG 01:09, 30 September 2010 (UTC)
How about using something completely neutral, such as Cscr-featured.png (the featured article star) or a tick sign? --Kudpung 23:44, 9 December 2010 (UTC)

Feature Ideas

  • Provide the ability to generate a graph of how the ratings have changed over time. The point of the software is to see how good an article is at any given time, but it would be super useful to actually see if the article was deemed to have improved over time. -- Witty lama.

Shimgray's comments

See http://www.generalist.org.uk/blog/2010/article-ratings/ . guillom 03:20, 1 October 2010 (UTC)

Comments from en-wiki

These are copied over from an ill-conceived discussion page on English Wikipedia. Sorry for the confusion, Nifboy and Peregrine Fisher. -Sage

Comments from Fetchcomms

I haven't seen anything too objectionable, and I'm not familiar with the technical side, but the one thing that bothers me is its placement. Can we move the feedback box after the categories? I think it makes the page flow a bit better. Also, is there a way to turn on the feedback tool for articles in a more "secure" way than just a category? I don't know what exactly (maybe a MediaWiki-space listing of pages that need the feedback tool, or some special page to configure it, or something else), but that might be more useful for keeping people from inadvertently removing the category. Anyway, it seems to have worked fine for me so far. Fetchcomms 02:20, 8 October 2010 (UTC)

Suggestions

I'd like to make a few suggestions:

  • That the "Your Feedback" section be linked to a "Your Feedback" which is listed first in the "Interaction" sidebar.
  • The feedback should have a "Graphics" option for rating the usefulness of images and a "Layout" option for rating how well the graphics have been layed out, and how well they have been positioned. The "Graphics" option could also have a subsection for Caption quality.
  • There should be a "Grammer" option for rating the grammer of the article.
  • There should be a comment textbox at the bottom of the form.
  • The forms should be collapsible.

Smallman12q 00:27, 13 December 2010 (UTC)

If Smalman12q's suggestions are implemented, I recommend spelling Grammar correctly. EncMstr 09:36, 29 December 2010 (UTC)

Feedback on feedback

Just saw this for the first time and it works fairly well. At first I couldn't figure out which end (left or right) was the high end for the ratings, but once I put the pointer over the circles and the stars showed up, it made sense. The results box should probably be hidden until after the input is submitted.

I wonder to what uses you'll put the data. If you answer "Just general feedback and then editors will develop more uses as they go along," then you are doing this wrong. Data collection should be designed with specific uses or questions in mind. As set up now, the survey will probably only answer questions like "Do readers like this article?" without much information on which readers do or don't like the article or why they do or don't like the article. In other words, the scores on "Well sourced," "complete," etc. will always come back equal, or perhaps in a fixed pattern, e.g. readability always lower. You might also be interested in who likes or dislikes the article - e.g. by age, sex, or participation on Wikipedia, or why they were reading the article (because of a news story, school assignment, work, ....)

And do remember that you don't have to have the same survey for everybody - different boxes can pop up each time the article is shown - so that readers don't have to answer, say, 20 questions. Readers might only answer 3 questions per survey, but the overall readership might be answering 20 questions in total. 68.45.215.63 13:15, 19 January 2011 (UTC)
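
A small sketch of that rotation idea, with an illustrative question pool (not the pilot's actual metrics): each page view draws a few questions at random, so an individual reader answers three while the readership as a whole covers the full set.

    // Draw n questions at random from a larger pool, one draw per page view.
    function pickQuestions( pool, n ) {
        var copy = pool.slice();
        var picked = [];
        while ( picked.length < n && copy.length > 0 ) {
            var i = Math.floor( Math.random() * copy.length );
            picked.push( copy.splice( i, 1 )[ 0 ] );
        }
        return picked;
    }

    // Illustrative pool mixing quality metrics and reader-context questions.
    var questionPool = [
        'Well-sourced', 'Complete', 'Neutral', 'Readable',
        'Why are you reading this article?', 'How much of it did you read?'
    ];
    console.log( pickQuestions( questionPool, 3 ) );   // e.g. three of the six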

Article Feedback does not work on secure pages

Using the HTTPS proxy, the Article Feedback tool does not work correctly. For example, [1] shows "An error has occurred. Please try again later" and neither the Show results link nor the Feedback link works. Dtrebbien 18:16, 22 January 2011 (UTC)

Connecting the Likert scale to reality

A major problem with the rating system IMO is that it's too subjective. What's three stars for me might be one star for you. It's also possible that over time people will start to game the ratings as they do at other popular sites, jamming in 0s or 5s to have the most impact on certain aspects of the article (e.g., this article is completely biased because I disagree with it). It's also hard to deal with newer articles. A short article gets a low rating for completeness... but is it neutral? It could be 5 stars for neutrality because what's in there is neutral. It could be 1 star for neutrality because it doesn't fairly represent all sides of the debate. Same thing with sources. Is a new article written from two sources well sourced because everything is sourced, or poorly sourced because it needs far more expansion?

The answer is NOT better jargon. People will say we need to use more common sense language, to appeal to the average reader. Others will say we need to use the Wikipedia definitions like Verifiability and NPOV to ensure maximum accuracy. Both miss the point. You still open the system up to gaming, and wildly conflicting interpretations of what "3 stars" means.

A better system would be to make room for a literal description of what the ratings mean. It wouldn't have to be cluttered. In fact, the scale would be easier to read if it were all presented down the left-hand side. This would make room for verbal descriptions on the right-hand side:

  • Well-Sourced: (x)(x)(x)()()  : Inconsistent use of sources: some parts sourced, some not.
  • Neutral: (x)(x)(x)(x)()  : Neutral in tone, but issues in how the debate is presented.
  • Complete: (x)(x)(x)()()  : Another section of detail is needed.
  • Readable: (x)(x)()()()  : Difficult to read.

As you mouse over the different ratings from 0 to 5, the literal descriptions would automatically change (a rough sketch of this behaviour follows). For example, mousing over "0" for well-sourced would say "Mostly unsourced and dubious statements", whereas "5" for well-sourced would say "All statements appear supported by reliable sources". Clicking on the star would "lock in" the literal description (e.g., 4 stars says "mostly sourced, but some might not be reliable"). You could always click on a new star rating to lock in a different rating and description.
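
A rough sketch of that behaviour for a single rating row; the element id, class name, and description strings are illustrative assumptions, not part of the current tool.

    // Mouse-over previews a description, click locks it in, mouse-out restores it.
    var sourcedDescriptions = [
        'Mostly unsourced and dubious statements',                    // 0 stars
        'Almost entirely unsourced',
        'Some sources, but many unsupported statements',
        'Inconsistent use of sources: some parts sourced, some not',
        'Mostly sourced, but some sources might not be reliable',
        'All statements appear supported by reliable sources'         // 5 stars
    ];
    var label = document.getElementById( 'sourced-description' );     // hypothetical id
    var locked = 3;                                                    // rating clicked so far

    if ( label ) {
        document.querySelectorAll( '.sourced-star' ).forEach( function ( star, i ) {
            // Hovering previews the description for that rating...
            star.addEventListener( 'mouseover', function () {
                label.textContent = sourcedDescriptions[ i ];
            } );
            // ...clicking locks it in...
            star.addEventListener( 'click', function () {
                locked = i;
            } );
            // ...and moving away restores the locked description.
            star.addEventListener( 'mouseout', function () {
                label.textContent = sourcedDescriptions[ locked ];
            } );
        } );
    }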

The benefit of this new design would flow both ways. Readers would make fewer blatant errors when rating, and it would be easier to find ratings made in bad faith. Editors would gain more consistent feedback with a clearer interpretation. And readers would start to understand the standards of a good article much better... instead of just going off their own feelings about whether they liked it or agreed with it.

I hope you reconsider the current design because I think it presents several problems if applied on a large scale. Bigwikifan 18:09, 22 February 2011 (UTC)

useless crap. Please. Go write some articles.

  • Please. Please. Please. If you do not feel competent to write articles, please go somewhere else to while away your online time; do not clog (and crappify) Wikipedia with self-referential crap. Wikipedia is an encyclopedia, not Facebook. If you want an encyclopedia, you do not want this initiative (or whatever it is). GlitchCraft 14:38, 6 March 2011 (UTC)
I'm sure none of us felt competent to write articles when we first started editing on Wikipedia. Everyone needs to start somewhere. The system can also be (more) useful for highlighting articles in need of serious work which we are not already aware of through WikiProject article assessment. If a grammar rating were implemented, it would be highly useful for alerting users who like to clean up bad grammar. Jolly Janner 01:32, 11 March 2011 (UTC)
Articles with systemic bad grammar are easy to find; if anyone believes that it would be useful to do so, contact me on en: and I'll see what I can do. Rich Farmbrough 23:53, 24 April 2011 (UTC)

"trustworthy" and "objective" will not accumulate meaningful results.[edit]

People are going to rate the articles in these two categories based on how dissonant or consonant they are with their pre-established beliefs, quite irrespective of the fidelity or neutrality or balance of the information in the article. That is just human nature. The results will be near 50-50 for more controversial articles, and near 100% for more "boring" articles. And that is really what these will end up measuring: how controversial vs. how boring the subject matter is. And that information really is of no use to anyone trying to improve the articles. 174.102.197.120 12:59, 18 March 2011 (UTC)

Oh, and I mention this because I think we should replace these two categories with ones that will give more meaningful results. 174.102.197.120 13:14, 25 March 2011 (UTC)
I would say that it is (vacuously) measuring how people perceive the articles rather than whether they are objectively trustworthy or objective. Rich Farmbrough

Any chance we can get it applied across WikiProject Poland?

Any chance we can have the tag applied across WikiProject Poland pages? Is there a bot-generated list of articles tagged (specifically, one by WikiProject) to aid in managing feedback, and a method for tracking improvement? Ajh1492 12:48, 25 March 2011 (UTC)

en:User:Femto bot will generate such a list if you ask it nicely. Rich Farmbrough 23:53, 24 April 2011 (UTC)

Shakespeare Authorship Question?

This has been added to the SAQ article, which does not meet the stated requirements for inclusion (a lot of edits anticipated in the coming months or an undeveloped article). In addition, it's impossible to tell who added it from the edit history. Can anyone tell me anything about this? Tom Reedy 20:15, 25 March 2011 (UTC)

This feedback device will just prove to be a magnet for POV pushers in this particular case. Paul Barlow 20:28, 25 March 2011 (UTC)

Update: I've removed it from the SAQ page. The selection criteria appear to be completely random. --GuillaumeTell 21:44, 25 March 2011 (UTC)
Please see the following thread for a summary of the Article Feedback tool, its goals, and the current trial. For research purposes, we've selected 3,000 articles to collect data on. The 3,000 were determined by selecting a random set of articles within 3 different article length-bands, as the early research showed that articles of different lengths showed different distributions of ratings. Howief 22:51, 25 March 2011 (UTC)

A link to project page

I read the Sharon Osbourne article and noticed the "Rate this page" box at the bottom (it's gone now, but can be found on all pages in w:Category:Article Feedback Pilot). May I request that that box include a link to the project page for the ratings project, so anyone interested in it can read more about it? --Bensin 05:43, 3 April 2011 (UTC)

I noticed the same thing on w:Industrialisation. It would be nice to have a link to the project page so that editors and users can find more information on the project. Else, like me, they will have to make a specific search to find any information on it. Polyamorph 18:12, 5 April 2011 (UTC)
I agree with what the above editors have suggested. I just ran into this myself and found it very difficult to track down the project page to figure out what this thing is. If there is a good reason why the "Rate this page" box shouldn't link back to the project it is part of, then let's hear it. -Thibbs 14:04, 10 April 2011 (UTC)
Still no response; is this a dead project? If so, I suggest it needs to be shut down and cleaned up. w:Stanislao Lista has the template with no link to the project. JeepdaySock 15:46, 23 May 2011 (UTC)

comments aren't tied to a specific page version, subjective/vague, "I am highly knowledgeable...", not being discussed where it's being used

The comments aren't tied to a specific page version. They also are completely subjective and rather vague. Then there's the "I am highly knowledgeable about this topic." Also, although Article "feedback" is only being used/tested on the English Wikipedia, it's not being discussed on the English Wikipedia. Here are my concerns with those issues.

  • comments aren't tied to a specific page version: What happens when a page is revised (either incrementally or substantially)? How long do comments stick around for? If comments stick around for a long time then they aren't likely to be very useful compared to say some sort of cleanup template which can simply be removed after the cleanup is performed (or after someone is bold and determines that cleanup has been performed). If comments only stick around for a short time, then they aren't going to be very useful because you aren't likely to get enough traffic to really form consensus on how the article should be rated.
  • completely subjective and rather vague: Where's the metric that we're supposed to be using to make these value judgements? With what rubric am I supposed to judge? "Completeness" to one person may simply be non-stub status, while to another person it might include everything that has ever been said even remotely connected to the topic. How detailed are we going to get, and how detailed should we get? This idea of completeness already varies from page to page and topic to topic. Theoretically, something more could always be said about something (or has been said and simply not referenced here). Wikipedia is not a primary source; it's an aggregate of second-hand sources. It's an incredible starting point for research or learning about something, but it is not a primary source.
  • I am highly knowledgeable about this topic. One of the five pillars of Wikipedia is that it contains neutral, verifiable material, yet we have no verification of whether or not someone is really knowledgeable about any given topic. The options are: "I have a relevant college/university degree", "It is part of my profession", "It is a deep personal passion", "The source of my knowledge is not listed here". Wow, really? Good heavens. I love Wikipedia, I think its premise is sound, and I think it's an incredible source of knowledge, but I believe that this is because of verification -- I'm certainly not about to trust what some random faceless person says without any reference point whatsoever, and I'm most certainly not willing to assign any greater weight to a faceless person's opinion simply on their say-so. Honestly, have you all forgotten Essjay? Verification, people.
  • tested only on enwp but not discussed there: This just sort of feels a little sneaky to me, like someone isn't giving full disclosure. Was this just to avoid the reviewed article fallout? Something as potentially major as this should be discussed where it's being used. That's just my opinion and this is a minor point compared to the previous points.

Banaticus 00:25, 13 April 2011 (UTC)

Hello! I'll try to address your points in order.
1) I'm not sure what you mean by "comments" - do you mean ratings? Or the knowledge metrics? If so, then they are, indeed, tied to specific revisions. However, for display to users, there is a "fuzzy window" where the rating is considered to be relatively applicable. Over time, with enough edits, individual ratings will expire and drop off the aggregate (which is then a moving average). I'm not sure of the exact numbers for when this happens, however. You'd need to ask Trevor Parscal. (A toy sketch of this idea appears after this reply.)
2) Yes, the metrics are vague. This is actually intentional: the tool is not designed to capture pure metrics, and scales of this type are always user-subjective. However, when enough opinions are added to the pot, an average tends to make its way out. What this data actually is and what it represents is something we are still learning about, which is the point of the pilot. We want to see if quality can be measured over time, and if so, what that looks like.
3) Yes, Wikipedia cares more about verifiability than reputation when it comes to factual representations within the encyclopedia. However, we are not working with encyclopedic facts; we are working with the opinions of other people about that encyclopedia. It then becomes interesting to know if the ratings being given are set by people who consider themselves to be knowledgeable about the topic at hand. This does not mean that their opinions carry any more or less weight - they don't - but it allows us to have additional degrees by which we can slice the data we receive.
4) I'm not certain what you mean by this, to be honest. There was some discussion on the English Wikipedia that I remember back when we started the pilot (at least six months ago), and there has been coverage in the Signpost and on various mailing lists. The feature has not been put to an RFC on the English Wikipedia, that's true - but it is a pilot program: we are testing the feasibility of whether or not it even works before we talk about whether or not it would be a good idea to enable on every page.
Hope that helps answer your questions.--Jorm (WMF) 02:16, 13 April 2011 (UTC)
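
A toy sketch of the expiring-ratings idea described in point 1 above, with made-up numbers: only ratings left within the last N revisions count toward the displayed average, so older ratings drop off as the article is edited. The window size and data shape are assumptions, not the extension's actual formula.

    // Average only the ratings that fall inside the revision window.
    function rollingAverage( ratings, currentRevision, windowSize ) {
        var live = ratings.filter( function ( r ) {
            return currentRevision - r.revision < windowSize;
        } );
        if ( live.length === 0 ) {
            return null;   // no ratings recent enough to display
        }
        var sum = live.reduce( function ( acc, r ) {
            return acc + r.value;
        }, 0 );
        return sum / live.length;
    }

    // Example: two ratings survive a 30-revision window, one has expired.
    console.log( rollingAverage(
        [ { revision: 400, value: 4 }, { revision: 425, value: 3 },
          { revision: 390, value: 5 } ],
        428, 30
    ) );   // 3.5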
Article_feedback/Public_Policy_Pilot/Early_Data -- I think we can safely say that, as far as technical implementation goes, it works and works just fine. As far as whether it should be rolled out to more pages, etc., I think consensus should be sought on that. Banaticus 01:17, 15 April 2011 (UTC)

"Understandable"[edit]

It would be nice if there were a fifth item in the template -- "Understandable." (Yes, I know "comprehensible" is a better word, but some people may not know what it means.) One of the biggest issues many people have with Wikipedia is the incomprehensibility of many articles, even on elementary subjects. It's often hard for knowledgeable editors to know if they have explained a subject in a manner the non-knowledgeable can understand. A gauge of comprehensibility would be helpful. -- 174.116.177.235 01:49, 15 April 2011 (UTC)

Yes indeed! This is a meaningful suggestion for the devs! As a contributor on biological subjects, I often wonder how common readers take it. I can find enough feedback on whether the article is factual or how good my style is, but the comprehensibility for laymen cannot really be gauged by any fellow contributor. I would love this question above any of those being considered now. And well, I am not a native speaker of English (and that's so typical for :en), which only adds to the need.
Thanks if you even give the idea a moment's thought. --Reo On 06:17, 31 May 2011 (UTC)

Easy/visible way to get rid of article feedback boxes?

I'm not sure if this is the right place to post this, but it does not appear that there is an easy/visible way to permanently block the article feedback boxes on pages. If I am incorrect, I apologize. On the English Wikipedia, I used my skin's CSS file to block the boxes, but I doubt that every user would know how to do this. I also doubt that I'm the only user who doesn't want to see this box on any articles. --Rockfang 23:49, 15 April 2011 (UTC)
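
For what it's worth, a minimal sketch of the kind of per-user hiding Rockfang describes, done from a personal common.js page instead of a skin CSS file; the container id below is an assumption and should be checked against the page markup before relying on it.

    // Hide the feedback widget for this user only.
    // '#mw-articlefeedback' is an assumed container id - verify it in the page HTML.
    mw.loader.using( [ 'mediawiki.util' ] ).then( function () {
        mw.util.addCSS( '#mw-articlefeedback { display: none; }' );
    } );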

Trustworthiness vs sources

I just did the feedback on Butterfly and didn't notice the hover text until after clicking the "Trustworthy" item. The label says "Trustworthy" and the hover text says "Do you feel this page has sufficient citations and that those citations come from trustworthy sources?". These are not the same thing. Trustworthy means: do I believe the stuff that I read on this page? Sourcing can help with that, but really, there are many highly trustworthy articles with skimpy sourcing, and lots of heavily sourced articles (basically any politically contentious article) that have basically no credibility because of the underlying POV editing. Wikipedia itself has a credibility problem as long as it confuses the two. Trustworthiness comes from a sense that the article's editors know the subject and are working together to produce a complete and unbiased article. Untrustworthiness comes from the sense that editors are trying to push a POV, and that the state of the article at any moment is just the instantaneous location of a moving front.

Anyway, I think the questionnaire designers should decide what they want that question to ask, and fix either the hover text or the rating label. If you want to know whether the article is well-sourced, change "Trustworthy" to "Well-sourced". If you want to know whether the article is trustworthy, change the hover text to "Do you feel that this article is accurate, that its editors know what they are doing, and that they are being honest with you?". The latter, in my opinion, is much more important than the former. It's also a different question from objectivity. There are various articles that are obviously quite opinionated yet still very trustworthy, since the author's POV is not disguising itself and they are obviously knowledgeable. 69.111.194.167 07:58, 26 April 2011 (UTC)

Mechanism

IS the "pilot" category the mechanism for including articles? For I saw there are only 4,200+ articles in the cat, and I would have estimated more are in the article feedback group. Or is the category now redundant? Rich Farmbrough 22:49, 20 May 2011 (UTC).Reply

Showing up on redirect

Hey, one of the feedback boxes showed up on the redirect w:en:Farrah Abraham. I don't know why this mechanism would have stayed after the redirect was implemented on the page. Sadads 11:43, 22 May 2011 (UTC)

One of the devs picked this problem up at en:WP village pump, so it should be fixed. Doesn't seem like anyone involved with the project reads this page. Rich Farmbrough 10:18, 23 May 2011 (UTC).

Notification of related talk on en:wiki

I would like to bring to your attention that new sections are popping up in the Village pumps, like the following two:

Basically, the suggestions in those two sections can be summed up in one sentence (quoted from there):

Conclusion: the feature needs a "What is this?" link and a "turn this off" button as well as a good place to turn it on again.

Interestingly, that conclusion is in agreement with two points already addressed here as well, in:

So if it looks like I am pushing for it :], then actually, I am :], well, just a bit ;). I too think there should be at least one link pointing somewhere to find out more about the feature. -- Reo On (en:User:Reo_On) 22:18, 22 May 2011 (UTC)

New sections on :en:
What can you find there? Basically repeated questions, like where to find the discussion on the topic, and questions about the timing of the feature's deployment.
And that it doesn't make sense to engage the creator of the page with the tool.
Eloquence* communicated back well - most issues are already addressed. --Reo On 07:08, 31 May 2011 (UTC)

Explanations on talk pages needed

I just noticed this box for the first time on the Jose Guerena shooting page and managed to work my way here. I think at the very least there should be some explanation of this on the talk page, which hasn't even been started yet. The article obviously isn't perfect, but why would it have a rating box on this page? What does it mean? This should be explained on talk pages, rather than having people running all over trying to figure it out. Thanks. 71.163.192.179 00:01, 27 May 2011 (UTC)

I wholeheartedly agree. It seems like all other items on Wikipedia have a link to explain the rationale, while this seems to pop out of nowhere and one has to search to find out. There needs to be a link either on the talk page or in the Rating box, like a hyperlink saying "What is this?" that links here. AngryApathy 18:05, 27 May 2011 (UTC)

Showing up on disambiguation pages?

I'm just wondering if there's any way to turn off the article feedback box for pages with the {{Disambiguation}} template on them, e.g., w:en:Saint Regis. Unless this is intentionally there, I don't know how to rate the trustworthiness of a disambiguation page. Fetchcomms 18:46, 31 May 2011 (UTC)

Lesser-used pages (en:wp)

I raised this at en:Village pump and was directed here:

The Rate this Page template seems to have appeared on all pages - I missed the proposal or discussion. I find it irritating. I am editing stubs of villages in the Gard, e.g. en:Logrian-Florian, a page viewed about 50-70 times a month.

Couldn't the template be made less intrusive and more relevant by a little conditional coding? For stubs, it should be smaller than the text content, and the Complete / Well written questions just make Wikipedia look ridiculous. I support the template as is for B, GA and FA, but it should be contextualised for Starts, Stubs and C.

Any thoughts? --ClemRutter (talk) 08:45, 28 August 2011 (UTC) --ClemRutter 10:01, 28 August 2011 (UTC)