Article feedback/UX Research

General Findings

 * Readers (and editors) choose to come to Wikipedia (when given the option in a search engine) to get a quick and concise understanding of a particular topic or to find a particular piece of information. They find the general quality of Wikipedia articles to be "good enough", more often "impressive", and in almost all cases serving their needs. They come, they consume, they leave. This efficiency is part of why they keep coming back. (For more, see Perception of Quality and Reading Wikipedia)


 * Readers and editors alike qualify ratings with comments. On sites such as Amazon, Yelp, IGN, IMDb, and YouTube, these comments primarily serve to help users make a decision. In the case of Wikipedia, the greatest value identified for these comments is to 1) give feedback to editors to improve the quality of the article, 2) provide context with which to understand and absorb the aggregate ratings, and 3) connect to others through shared or differing experiences or opinions. (For more, see Rating and Review Sites and Wikipedia "Authors" + Feedback)


 * Wikipedia users have an intuitive grasp of an article's quality. Through their regular and repeated use of the site and other online written work, they have a mental model of a good quality article, can quickly judge an article against it, and can readily find flaws or points for improvement. (For more, see "Good" Articles vs. "Bad" Articles)

General Recommendations

 * Socialize the tool: both to provide a richer rating and reviewing experience and to continue to develop the online community. For the rating experience: include the option to leave comments, criticism, and praise; integrate the tool with the discussion page; give users the ability to see who has rated an article, what else they've rated, and where they're from, perhaps seeding some more formalized elements of a user page; and give users the ability to see and communicate with the primary contributors to a given article. On developing the community: though Wikipedia isn't a social network, it is indeed an online community that can benefit from enabling social interactions to take place through, say, rating, reviewing, or editing. For readers, Wikipedia is like a library full of information but without any visible people in it; this can be just one way to de-anonymize the community.


 * Consider readers' habits, behavior, and language if they continue to be the primary target audience for the feature. Allow users to give feedback that doesn't require a thorough reading; balance their motivation with the payoff, either by giving more incentive to leave ratings and feedback or by extending the consumption of ratings to include more historical, demographic, and social information; and ask questions in the language readers use.


 * Pursue two onboarding paths: (1) editing and contributing, and (2) partaking in discussion pages or otherwise engaging with the community.


 * Future directions: recommendation engines (for both content and work), reputation systems (for contributions, articles, and contributors), lists (beyond the watchlist), structure on user pages, and means to more readily communicate between and with users.

Target Audience

 * 75% Readers / 25% Editors
 * 50/50 male/female; age distribution (0-20, 21-30, 31-40, 41+)
 * 50% consume ratings / 50% consume + contribute, any threshold – Amazon, Facebook, Yelp, Netflix, etc.

Recruiting
We recruited from three sources:
 * Student list for the two Berkeley Public Policy Initiative Classes
 * Recent SF Donor Email list (CiviCRM)
 * Craigslist

For further description, see here.

Perception of Quality
Overall, all of the participants think that the quality of [English] Wikipedia articles is generally high, its content trustworthy, and its contributors educated and "with it" or organized. Wikipedia is not, however, infallible. The difference between 'good' articles and 'not so good or low quality' articles is intuitively noticeable, but is particularly marked by out-of-date or inadequate information, poor English writing and grammar, and a lack of visual aids. Participants reported that even with the 'not so good' articles, when they had no knowledge of the topic, the Wikipedia article was still a good place to start. Poor writing and lack of neutrality are noticed but do not send users elsewhere; rather, these flaws are "glossed over". Time-sensitive articles (a new figure in the public eye, someone's recent death) and niche articles (more obscure artists, deep history) were generally identified as examples of the low end of Wikipedia article quality.


 * "It's really good. Some [topics] like new products might have less information, but the ones like battery or glasses always have enough information." 
 * – Cindy W.


 * "For the most part, I'm pretty impressed. At least for the pages I look at. The people that are contributing seem to really have it together." 
 * – Tom H.


 * "In general it's pretty good." 
 * – Logan H. (editor)


 * "[The quality] varies. One's that have been around for a long time are excellent. the ones that are just starting out on a new topic or on a long tail niche topic tend to be a little more..." 
 * – Jason M.


 * "I think the quality is great: I love it because it gives me all of the information I'm looking for. It goes in order, it's so easy to read." 
 * – Jennifer G.

6/8 users enter Wikipedia from Google. 2/8 frequently enter Wikipedia from a search engine, but also sometimes start at wikipedia.org or with other bookmarks. In the former case, when participants see a Wikipedia article in their search results, they almost always choose it. Participants explained that Wikipedia was their go-to source when looking for a particular piece of information or when initially encountering an unknown topic, even in areas where they have more experience or expertise. It is not just confidence in and familiarity with the Wikipedia brand, offerings, and experience that keeps bringing such participants back but, as they noted, the lack of an alternative: "It's good enough" and "it's better than everything else out there".


 * "It's more trusted [than other sources]. My experience with Wikipedia has boosted my confidence level in the site.......I have built up a confidence level, but there are still pros and cons." 
 * – Gigi L.


 * "For the kind of stuff I'm looking for, it's good enough. If it's something that really really requires fact checking, then I'll follow up on it." 
 * – Tom H.


 * "If I need some basic fact, I go to Google and pick Wikipedia from the top. I also have [an offline version] on my phone." 
 * – Jason M.

"Good" Articles vs. "Bad" Articles
Readers initially explained that a good article - above any specific criteria - is one that helps them understand the topic they are trying to learn about OR an article that contains the information they were looking for. Editors initially had a set of criteria for judgment and evaluation. Both readers and editors went on to identify characteristics of good Wikipedia articles, many of which are shared with other online written materials, but which also include formatting specific to Wikipedia (i.e. "internal linking"; References and External Links sections; infoboxes and other templates). Readers and editors assume that good articles have been around longer and have more readers and contributors, while bad articles are more obscure or have fewer contributors.


 * [On a good article] "It's an intuitive feeling - when you read the article, does it expand your thinking or knowledge?" 
 * – Gigi L.


 * "If it's something that I don't know very much about, it's like 'how much better do I understand it after I've read this piece?' And with some of the physics ones, I still don't understand what any of that means. Most of the time, it's like, yeah, that's what I was looking for and I got it and then can move on." 
 * – Tom H.


 * "It makes my opinion more objective. The awareness is what is most beneficial for me." 
 * – Owen S.


 * "I look at so much content online, I can quickly scan something and know if it's good content." 
 * – Jason M.

When asked to explain the characteristics of a good quality Wikipedia article, interview participants cited the following:
 * Well-written (also: good use of English, Good grammar)
 * Clear and well organized (also: broken into sections of an appropriate length)
 * Concise, with the right amount of depth (also: first paragraph; the length of the article matching the topic's notability)
 * Citation of References within the article body (also: right proportion, not too many references for a small amount of information)
 * {Diverse, credible, relevant} sources and links to specific {articles, books, journals}.
 * Links to other Wikipedia articles ("internal links") (also: links to delve deeper or to go into more niche subjects)
 * Inclusion and integration of photos (also: relevant photos, appropriate photos, galleries, good captions)
 * Presence of Infoboxes (also: other templates, see Template Warnings below)


 * "If I read the first paragraph and it's well written, if it has pictures (if not, it's usually half-baked). A lot of sections means it's been re-edited 100s of times. Polish, up to date information." 
 * – Jason M.


 * "The quality of the writing is what matters; good grammar, well structured, good structure for the article, and I like lots of references which adds lots of validity to the article. I never read the references, I just like that the little numbers are there." 
 * – Owen S.


 * "[On good articles] A lot of credible sources, wiki links within the article itself, unbiased and very neutral....written clearly, no confusing language. What I like a lot about the good articles is that they don't go too in depth. If you want to go further in depth you can click on the wikilinks. Oh and tables or template on the side, and images." 
 * – Logan H. (editor)


 * "Some things are nice and concise and everything you need to know is in those 4 or 5 paragraphs and if you need to find more than you can use the references to find something out......that's what a good article is. The length meets the notability of the article. Then the quality of the writing in those paragraphs, if they're concise and to the point and sections are broken up to a certain length. " 
 * – Tom M. (editor)


 * "Having references." 
 * – Tom M. (editor)

When asked to explain the characteristics of a low quality Wikipedia article, interview participants cited the following:
 * Opinionated (also: clearly one person's point of view; frequent use of air-typing gestures here)
 * Missing information (also: short)
 * Inability to find what participants are looking for (usually noted as inappropriate level of detail, technical knowledge or precision - too much or too little)
 * Lack of sources (also: lack of place to explore further)
 * Lack of coherence or consistency, poor use of the English language (authors identified as ESL)
 * Articles that don't warrant being in an encyclopedia
 * Broken links


 * [On bad articles] "Opinion interjected in the article. Sometimes Eastern Europeans write about a celebrity like it's a fawning biopic of the person, which is not good quality. It means to me that no one is reading it, editing it, or bothering to know it exists." 
 * – Jason M.


 * "[When it's] very short. There's a style [in good articles], it's a very sort of objective style. A poorly written article that isn't just tiny smacks of someone sitting down to tell the world 'this is how it is!' It sounds like it's coming from one person who is possessed of an opinion and writing that opinion on the internet....rather than an article that is giving information freely on all sides." 
 * – Owen S.


 * "Articles that I think are bad are about something no one knows about, a minor topic without too many sources about it. You can't tell if it's backed up by facts. Or is it someone that decided to make the article and smash the keyboard and expect someone to fix and no one ever does." 
 * – Logan H. (editor)


 * "It just didn't have enough information. It's usually people that aren't as noticed. And sometimes I see words that are red." 
 * – Cindy W.

Going beyond particular criteria, participants also identified "bad" articles as articles that might not be appropriate for an encyclopedia, topics with newly minted reasons for inclusion in an encyclopedia, and topics where they were looking for something other than encyclopedic knowledge (i.e. where Wikipedia might not be their first choice, depending on the level of knowledge they were looking for and their expertise).


 * "[Giving example of a bad article] A biography of someone that recently came in the public eye. There's very little content and it seems to be there just as a placeholder." 
 * – Jason M.


 * "[Giving example of a bad article] They are more obscure. Like a random video game from 1992. The goal of the Wikipedia isn't to throw everything out on the table, its to give a summary. It has to have enough information about it to warrant a detailed article." 
 * – Logan H. (editor)


 * "[Giving example of a bad article] It's hard to generalize. When I'm looking for the meaning of a tax term, I might not use wikipedia and maybe a financial dictionary because it gives you a more precise definition. It's like expertise, but they are able to give you......if I want a very precise and concise meaning, I might not use wiki - it's too long. " 
 * – Gigi L.

Reading Wikipedia
We identified behavioral patterns of readers' interactions with Wikipedia that are relevant to our creation of an article feedback tool (and other features or interventions). When reading Wikipedia, users are typically looking for a particular piece of information or for a quick, introductory understanding of a topic new to them. In the former case, they skim the article until they find the section that will answer their question. In the latter case, they reap the benefits of the first paragraph, then possibly the historical background, and occasionally follow links from the external links or references section. It should be noted, though, that in the interview setting subjects frequently right-clicked these links and opened them, but rarely returned to them.


 * "If there's a country or a place I haven't heard of, that's how I [end up on] Wikipedia." 
 * – Jennifer G.


 * "For this topic, my purpose is that I'm trying to do an intake on bee pollen as a supplement. I want to know how medically it helps to enhance my body. So I have all of these unknown answers and question. I won't read every single word, I'll skim it and stop at the point where they talk about medical use." 
 * – Gigi L.


 * "When I browse something, it's very much like "i need to find something". When I want to know something, I don't want to be stopped by anything. I don't want anything to get in the way of me finding that [piece of information]." 
 * – Owen S.


 * "Most of the time I'm just kind of consuming it, I'm not really participating." 
 * – Jason M.


 * "Usually if I'm not that interested, I'll skip over to the next topic." 
 * – Cindy W.


 * Looking for info, broad understanding
 * How would I know if something is not there
 * Links and references or other Google hits to expound further, if they even want to. Rarely.
 * Consuming
 * That's why I go there, to learn
 * End of article
 * Look at references, right click to open, did not read in the lab setting
 * Top part

In short: the first paragraph, maybe the background and historical information, and occasionally the references/links. 2/8 participants said they do regularly read an article to its end, where the "end" of the article was identified as the "see also"/"references"/"external links" trilogy.


 * "I also look at the first paragraph. It annoys me when there's something that should be later in the article...when it goes too deep." 
 * – Jason M.


 * "The first paragraph should be the main thing." 
 * – Cindy W.


 * "They always have a beginning here, like the introduction. Then history or background information and then more in depth information. Then at the bottom there are external links." 
 * – Cindy W.

Existing Practices of Quality Assessment
Though the majority of participants explained they implicitly infer the quality of an article, a few did describe some practices of explicit assessment.

Template Warnings
4/8 of our participants brought up Wikipedia's use of "template warnings" (automatically formatted boxes identifying an article or section as a stub, as having disputed content, questionable neutrality, etc.) and "citation needed" warnings. None of these participants had created such a warning/flag/box themselves, but their inclusion was universally noted and appreciated by readers and editors alike.


 * "For the most part I think it's great....to at least have some information. The best thing is even if it's not a high quality article, all of those tags saying....."this is not a..." "disputed source...." etc" 
 * – Owen S.


 * "Before I read the article, I look for tags and check out the discussion page. To see if it's a legit article before I get too invested in it." 
 * – Logan H. (new editor)


 * "These overt things (neutrality is disputed box)....sometimes I pay attention, other times I do ignore it if I'm trying to read something." 
 * – Jason M.


 * "Some subjects need huge 25 paragraph things. Other times, I see all these little flags - notability guidelines or whatever. Sometimes it only needs to be a few paragraph if it's not as notable." 
 * – Tom M. (editor)

Discussion Pages
3/8 of our participants had, in the past, used the discussion page to get a baseline assessment of an article, clarify a fact, look for contentious subject matter, or hear different sides of a disputed fact. It should be noted that, with one exception, this behavior was rare or had happened only once.


 * "That's why I like the discussion page. You can see why people are arguing what they're arguing. People seem educated. Even if they're throwing comments." 
 * – Logan H. (editor)


 * "Occasionally I've gone to a discussion page and I'll see what people have written about. Sometimes it's long and will have a section with the topic I've noticed."
 * – Jason M.

The exception was Logan H. After starting to edit and learning about the "Discussion/Talk" pages, he had gotten into the habit of looking at these pages, in addition to looking for tags, before or after reading the first introductory paragraph. He explained that he looks at the Wikipedia assessment class (Stub, Start, C, B, GA, A, FA) and for any warnings before diving into the article.


 * "Before I read the article, I look for tags and check out the discussion page. To make sure it's a legit article before I get too invested in it."
 * – Logan H. (editor)

Consuming Ratings, Reviews, etc.
All of our participants regularly visit sites that have some form of ratings and reviews, as well as sites dedicated to ratings and reviews. Examples brought up include, but are not limited to: Amazon, Yelp, YouTube, IMDb, Rotten Tomatoes, IGN, OpenTable, and Vimeo.

We found that ratings and reviews hold the greatest value for participants when they are looking to inform a decision. For example: using Yelp to decide between "Italian restaurants in the Mission", using Amazon to see if others had the experience one was hoping for with a piece of electronic equipment, using Rotten Tomatoes to decide which movie to see, or using IGN to decide whether or not to buy a particular video game or album. Second to decision making, ratings and reviews were cited as valuable for getting and giving feedback (positive and critical) to "talented, hard workers" or "the people that took the time to take and upload a video", or "when I go to a terrible restaurant and feel a need to warn everyone". Finally, ratings and reviews had value as entertainment - both in expressing a personal opinion and in reading the opinions of others, for example sharing feelings about or theories of a movie on IMDb. Simply put, people like to say what they think and to hear what others think.


 * Making a Decision:


 * "I go to yelp to find the best bank. I go there and read the comments." 
 * – Cindy W.


 * "If I'm buying an electronic or smart phone, that's the time I will look at the review. I don't look at just 1-2-3-4 rating. I like the short paragraph then I can read between the lines to see why they come up with that idea. From what they write, I know how to judge." 
 * – Gigi L.


 * "When I'm deciding to buy a game, usually I just like to see what people are saying about it." 
 * – Logan H.


 * Getting and Giving Feedback:


 * "If whoever took the time to put up a video or post something, than I'm sure they're anxious to get feedback from people that are reading it or viewing it. So I'm willing to give that to them. It just takes a second." 
 * – Jason M.


 * "After I watch something [on YouTube], I say you're the best." 
 * – Cindy W.


 * Entertainment:


 * "Sometimes I'll read the reviews for half an hour because I want to see what people are saying about it." 
 * – Owen S.


 * "I just want to see what people have to say." 
 * – Jennifer G.

Participants and users of these sites expressed many reservations about consuming (and creating, see below) both ratings and reviews. The shortcomings they described were: lack of knowledge about a reviewer's criteria for judgment; general subjectivity or lack of objectivity; reviews from strangers; friends of owners or incentives offered for positive reviews; people wanting different things; one person's opinion bubbling up (fanatics, crusaders, the disgruntled); and insufficient numbers of ratings or reviewers. Users took these reviews with a grain of salt, often needing to read multiple reviews before finding use and value.


 * Subjectivity:

(also, lack of personal information or shared interests)


 * "I wouldn't trust my own reviews because they are so subjective......Everyone's got an opinion. I don't have time to hear them all. I don't really care that much. That stuff is not that important or interesting to me." 
 * – Tom H.


 * "I don't like to do rating...I don't trust ratings that much because - like yelp - you don't know how are the background of the user, the demographics, their education, career, you don't know. So it's hard to judge if their opinion would be suitable for you." 
 * – Gigi L.


 * "People aren't looking for the same thing in movies. There are different opinions. I take everything with a grain of salt. I read the first page of reviews." 
 * – Logan H. (editor)


 * One bad review:


 * "Some crusader had deleted the whole thing, so I clicked on his name to see what he had done." 
 * – Tom H.


 * "There's always the danger that if there are only a few reviews and if you have one or two irrationally disgruntled people it can sway things one way or the other. (talks about showing a distribution)." 
 * – Jason M.


 * Improper incentives:


 * "One bad review ruins everything - someone who is easily upset and likes to tell the world about it. Sometimes it's a genuinely bad experience because the company just happened to screw up that one time or the worst is when you have a competitor who pays someone to write bad reviews. Nefarious." 
 * – Owen S.


 * "I like Wikipedia because it's plain text and nothing flashes" 
 * – Claudia, 64, Database Administrator

In consuming, ratings alone were less satisfying to participants than ratings coupled with comments, explanations, or full-blown reviews. Even in cases where individuals said they didn't like or trust reviews, when coupled with ratings those reviews were used to qualify the ratings. On the flip side, reviews without some type of aggregate rating raised the threshold for consumption: readers have to read more material to get a quick take. Notably, even participants who said they didn't use ratings or read reviews did, in practice, do so (on sites such as IGN, Rotten Tomatoes, and IMDb).


Creating Ratings, Reviews, etc.
6/8 of our participants had either written a review or contributed a rating on sites such as Amazon, YouTube, Yelp, and IMDb. They cited motivations for creating reviews similar to those for consuming them: namely, they wanted to share their relevant experience; express their opinion (in many cases to promote or defame the product/restaurant/video/movie); and give feedback to the creator of the content and partake in the surrounding discussion.


 * "I wanted to add my own voice to say 'hey, for all you people out there who are looking to avoid paying $30 for a pair of headphones, this is the real thing.' I wanted to contribute my experience to the people that were looking for the same thing I was." 
 * – Owen S.


 * "I left a comment after I read all of the comments (which took like 20 min). It was people saying I like this scene or what did this mean. I liked seeing that other people had the same feelings towards the movie that I did." 
 * – Jennifer G.


 * "I rate constantly. I'm a rater. I just tend to do that. If whoever took the time to put up a video or post something, than I'm sure they're anxious to get feedback from people that are reading it or viewing it. So I'm willing to give that to them. It just takes a second." 
 * – Jason M.

Most participants cited a high time and effort threshold as the reason they didn't contribute ratings or reviews to sites - including being required to write a review to accompany a rating, or needing an account/login to contribute either. Aside from lack of time or motivation, participants discussed the subjectivity and lack of impact of writing a review.


 * Effort:


 * "I'm not that interested in [writing] reviews, it feels like work to me, it's not fun." 
 * – Tom H.


 * "With Amazon you actually have to write a review too, you can't just rate. As long as I don't have to login, I'll rate it. If I have to write a review, I'll weigh to see if it's worth it." 
 * – Jason M.


 * "I just rate them. I think writing is too much effort sometimes. You have to use your account to rate." 
 * – Cindy W.


 * "I use my cousins account to leave reviews. I'm too lazy to make my own account. It seems like too much effort." 
 * – Cindy W.


 * Subjectivity/Lack of Impact/Opinion:


 * "Yeah, I have an opinion, but who cares besides me." 
 * – Tom H.


 * "I don't like to do rating. It's a bunch of strangers, how do you know? They're not friends. If I have relations, I know how she and he has come up with their decision. With strangers - especially with the internet - everyone can say something. I'm more cautious, lets put it that way." 
 * – Gigi L.

We noticed that of the 6/8 participants who had contributed a rating or review online, none had gone back to see it since submitting. Votes/ratings on comments are thus of greater use to future readers than to the comment authors.


 * "I left a comment, it wasn't formatted right. I wrote some comment and then I forgot about it." 
 * – Jason M.


 * "Once I'm done, I'm done. Unless there's a reason for me to keep going [back]." 
 * – Tom H.


 * Strong feelings and extreme opinions:

The creation of ratings and reviews is motivated by strong feelings at the extremes (awesome restaurant experiences, favorite movies, products failing to meet or supremely exceeding expectations). Participants expressed a lack of desire and motivation to contribute when their feelings were more neutral or less charged.


 * "Crusaders" (participant miming banging on the keyboard)


 * "There are a lot of things I won't bother rating - if it's mediocre quality. But if it's fantastic or awful, then I'll rate it up or down. 1/3 i'll rate up, 1/3 i'll rate down, the 1/3 I won't bother." 
 * – Jason M.


 * "It's a love hate thing......I love that place, it's awesome and need to say something about it. Or that was such a bad experience that I feel an obligation to warn everybody." 
 * – Tom H.


 * "That's why I go [to IMDb]. The people feel the same about the movie....or maybe they don't and they hate it. But the people saying things on there have strong good feelings about the movie and none of my friends or boyfriend have even heard of it." 
 * – Jennifer G.


 * "If the place is really good or really bad, it will be beneficial for people reading the yelp page too." 
 * – Cindy W.


 * "I would definitely do this. On any article that I thought was important enough to evaluate. Almost any article. One that I thought really needed a lot of work, I'd feel more inspired to rate. I would do both positive and negative. If I felt more strongly about the article." 
 * – Tom M. (editor)


 * "For me, If I really liked the article, I'd rate it. But I don't know if I'd rate it if I'm neutral or didn't like it." 
 * – Cindy W.

Reputation
Reputation plays a huge role in how people consume ratings and reviews (on any UGC site), specifically in lending credibility to particular reviews and in helping participants filter through large volumes of reviews/comments/opinions. Authors/contributors with a familiar or trusted body of work, those frequently encountered, and those associated with some larger trusted organization (for example, a newspaper) are generally given more time and weight. Conversely, when absorbing a particular review or rating, a contributor's other contributions are used to qualify its trustworthiness, reliability, or value.


 * "I don't know specific names because they use aliases. I recognize aliases and know what kind of reviewers like what kind of games. There's this one guy that loves shooter and strategy games but doesn't like sports games. I recognize a couple of the big testers, but they're always popping up on this website so I think they're reliable." 
 * – Logan H. (editor)


 * "He cites one of these Wikipedia things. He probably knows what he's talking about. I have the worst time finding these specific regulations. I can almost never find those. " 
 * – Logan H. (editor)


 * "You can see how many reviews someone has written. I would look if there was some particular reason or interest in a person's opinion, I might want to see what they had said on other books." 
 * – Tom M. (editor)

This sense of reputation translates to Wikipedia (i.e. people wonder about the authors of an article, where else they've contributed, and about their other activity on the website and elsewhere), but only 1/8 of our participants was able to navigate the website or find information to help qualify such judgments. The remaining majority might attempt to find information that helps them form an opinion about reputation, but otherwise proceed on the site without it.


 * "Some crusader had deleted the whole thing [his first and only edit], so I clicked on his name to see what he had done." 
 * – Tom H.


 * "I've read enough of the articles here to know who's been around for a while and who I agree with." 
 * – Logan H. (editor)


 * "If I started noticing the same name over and over again that I think were on topic areas that I find interesting, than I might say, so and so is pretty impressive." 
 * – Jason M.


 * "Who uses, edits, and cites Wikipedia builds my confidence level." 
 * – Gigi L.

Expertise
When weighing opinion or fact, "expertise", like (and related to) reputation, is used to lend credibility. We encountered two types of expertise: knowledge expertise and Wikipedia expertise. Knowledge expertise, here, is a reputation validated by societal measures (a certain level of education, experience, or exposure). Wikipedia expertise means a higher understanding of or greater experience with the website itself.

The majority of participants in this study and in others carry an assumption that you have to be a 'knowledge expert' on a topic to contribute to Wikipedia, and, in turn, that it is mostly such experts who are contributing. This, in addition to consistent experience, lends reliability and credibility to the information and articles of Wikipedia. For the participants that have attempted to edit, this process is only the start of becoming a 'Wikipedia expert', a title associated with cruising through discussion pages, citing Wikipedia scripture (policy?), or regularly monitoring articles.


 * Knowledge expert:


 * "I would guess that the Publishers' review is more objective. At the same time if someone is well recognizable, I assume they're reliable too. Sometimes I'm only going to look at the first one or the first few. I would trust the Publishers Weekly more." 
 * – Tom M. (editor)


 * Wikipedia expert:


 * "(Do you contribute on the talk pages?) Normally I let people who have some sort of expertise do that. I had a bad experience with my first question on talk pages. I'll stay quiet for now." 
 * – Logan H. (editor)


 * Knowledge + Wikipedia Expert:


 * "If [I] were a regular user and I logged in a lot, I would maybe have a sense of who these people are. If I were some kind of expert on this topic......that would affect my decision." 
 * – Tom H.

Sharing, Social, and other Motivational Implications
Though Wikipedia is not a social networking site, it is an online community. Participants carry expectations and look for experiences that mirror their social practices in their offline and online lives. For the majority of our participants, Wikipedia readers and editors are anonymous and the site is void of explicit social experience. Though these behaviors and practices are lightly supported (discussion pages, WikiProjects, user pages, watchlists), most of our participants are either not aware of them or seldom use them. A few participants mentioned trying to find such information or experience, but were unable to do so. Even our editor participants cited difficulty in navigating pages beyond the article and discussion pages.


 * On other sites:


 * "[On Amazon reviews] You can connect users to other users by way of experience....base your experience off of another human being, rather than product marketing language." 
 * – Owen S.


 * "I like seeing what other people's opinions are. And sometimes when I have questions, a lot of people have already answered those questions" 
 * – Jennifer G.


 * On Wikipedia:


 * "I'm glad I made an account. [You can] keep track of what you're looking at and talk to people." 
 * – Logan H. (editor)


 * "I'm amazed that people are willing to do that work for almost no recognition. I don't remember a single person, I don't remember what they've done. I'm just not into that world. If it were up in front, maybe I would, but if you have to dig for it....." 
 * – Jason M.


 * "Do you know how many people edited this article? It doesn't say. Is it supposed to say?" 
 * – Cindy W.


 * "Who wrote this entry? Where do I see that? Like a byline, but there have been several people that have worked on it...." 
 * – Tom H.


 * "I assume there are editors that come across different types of content and will check things. I'd want to send my feedback on the grammar or things to those guys. And there are probably other people that are fans of this topic, but I don't know if they're the ones that are doing this research and writing it." 
 * – Jason M.


 * "I have a hard time finding my way around when I'm going beyond articles to user pages and such." 
 * – Logan H. (editor)

Wikipedia "Authors" + Feedback
In discussing hypothetical ratings and reviews of Wikipedia articles, the readers and editors we interviewed most wanted their ratings, feedback, or reviews to be seen by or sent to the editors contributing to the article under consideration. Editors, too, found the most potential value in feedback as a method to improve the article. Additionally, a few participants mentioned ratings/reviews/feedback also going to 'fans' of the article or topic. While the majority of our participants thought hypothetical rating and review feedback would be of value to readers, the type of feedback that they were able to readily provide centered mostly on "ways to improve" the article. Over half of our participants expressed little need for this information themselves.


 * Readers:


 * "It would be great to say ' grammar needs clean up' or 'there are big sections missing from here'. There are a lot of types of feedback: easy to fix, clean up the grammar, spelling. Nice to have, but more work would be getting pictures that you have the rights to post, that would help a lot. Occasionally you notice there is something that's just completely missing in an article, which requires more work - research and authoring something from scratch." 
 * – Jason M.


 * "I assume there are editors that come across different types of content and will check things. I'd want to send my feedback on the grammar or things to those guys. And there are probably other people that are fans of this topic, but I don't know if they're the ones that are doing this research and writing it." 
 * – Jason M.


 * "It's nice to leave feedback. I imagine anyone monitoring this page (editors, people that feel strongly about it). If something is low - neutrality - they can work on it." 
 * – Jason M.


 * "I want to know if the person that actually edits this would read it. Maybe it could send a message to their email. " 
 * – Cindy W.


 * "I'd want it to send to the main people that wrote most of the things. Or I could put the [section name in the] subject, and it'll send to the person that edited that part." 
 * – Cindy W.


 * Editors:


 * "What I write on talk pages reflects my edits. I'll do something and then explain why. Sometimes (with a grin) I'll bring something up on the talk page and then someone will do something for me." 
 * – Logan H. (editor)


 * "You could go back and see what you could do better as the author." 
 * – Tom M. (editor)

Overall
After using the current (as of 12/1/2010) article feedback tool, 5/8 participants said that it's something they think they would use if it existed on all Wikipedia pages – especially if it was a topic they felt strongly about or the article was much above or below average. Almost all of our participants said they saw the tool as something that was not necessary, but something that could be potentially valuable (to them and/or the extended community).

As it is, its current benefits include:
 * Having a low effort threshold (not requiring a login and offering short, easy-to-evaluate criteria)
 * Employing an intuitive and recognizable rating method (5-star system)
 * Providing a new, more interactive Wikipedia experience (rating) while enhancing the current (reading and editing) experience

But the tool has major room for improvement in its current state, and more importantly, room to extend its functionality and provide a richer experience. The current shortcomings can be summarized as:
 * Lack of comments and other social and demographic information on raters and ratings.
 * Not integrating seamlessly into a Wikipedia reader's normal behavior (See below: "If I were doing this for real....")
 * Placement (also: visibility)
 * Unclear and lackluster data visualization, particularly noted for its lack of dynamic display and distribution data.

Praise

 * "It (rating articles) would be so easy. I'd do it all the time." 
 * – Jennifer G.


 * "It's nice to leave feedback. I imagine anyone monitoring this page (editors, people that feel strongly about it). If something is low - like neutrality - they can work on it. I love it and I would pay attention to those numbers if they're there." 
 * – Jason M.


 * "Yeah I'd [rate an article] if I felt strongly. Or if I read the whole thing and had a good handle on it." 
 * – Logan H. (editor)


 * "That's a really easy way for me to express my opinion. I don't have to write a long yelp review or change an entry. If I get to do it this way, then I would definitely do it. Again, if I sit there and read the article....not casually if I had just skimmed it." 
 * – Tom H.


 * "Yeah I would definitely do this. On any article that I thought was important enough to evaluate. One that I thought really needed a lot of work, I'd feel more inspired to rate. I would do both positive and negative. If I felt more strongly about the article.." 
 * – Tom H.

Lukewarm

 * "If I had time, sometimes I would and sometimes I wouldn't. Maybe. But now I see the button you can edit it and it will affect the changes right away" 
 * – Gigi L.


 * "I see this as being more interactive. Normally you just read and there's not much interaction, so it does offer more that way." 
 * – Gigi L.


 * "It's still a star 1-2-3-4-5 short rating system without any explanation or elaboration. It seems to me I won't take it that seriously into consideration." 
 * – Gigi L.


 * "Usually I don't look at ratings that much, except for entertainment purposes. Or like [on Yelp] when I'm deciding to go to a restaurant. But with Wikipedia, I would still go there since the information is right there and it's the first link on google. And I kind of trust Wikipedia to show the authentic kind of materials. So I don't think this is that necessary, but a good option to show what do other people think." 
 * – Cindy W.

Need for Improvement

 * "It's so boring. I want to see the ratings and how many people rate it. Or read it. This way you can actually see what other people think." 
 * – Cindy W.


 * "If I'm sort of idly consuming information, I probably wouldn't rate it. I'll just consume what I wanted to know from the article and then move on. On an average day, I would not use the tool to rate an article." 
 * – Owen W.

Feedback while Reading to Rate (aka freeform feedback)
Participants were asked to pick an article (from the list of articles that currently include the rating tool) and to use the rating tool to rate it. The majority of participants did not immediately find the rating tool (though all eventually did without help) and tended to dive into the article first. As they read the article, they offered feedback without any knowledge of the criteria provided by WMF to evaluate articles. This freeform feedback centered on layout issues (templates, photos, captions, sections), links (lots of links, broken links), the first paragraph of the article, the length of the article (also gauged by a scroll up and down on loading of the page), and existing warnings (citation needed, neutrality disputed box) already on the article page.


 * "[It's] easy to identify flaws" 
 * – Tom M. (editor)


 * "I also look at the first paragraph. It annoys me when there's something that should be later in the article...when it goes too deep. This should be linked....nitpicky....and it's spelled differently here. Single sentence paragraphs, makes me feel like there's something missing, something wrong. This whole section looks a little weird. Looks like all these different people dumped in a sentence and no one thought about how they could be made into a paragraph." 
 * – Jason M.


 * "That's an immediate red flag (neutrality is disputed box). For probably not a neutral article - it's probably very US centric." 
 * – Logan H (editor)


 * "Lots of links to other articles - to me that means its probably a pretty....{controversial, complex} article. You need to separate it into components so it's not a big block of text. I like that it has these pictures. I don't like when they have military pictures and they don't tell what war it's from. Like, are those up to date masks (for chemical warfare)? I think the images should be integrated into parts of the article. This is cool, but it's not relating to anything within the article itself." 
 * – Logan H. (editor)


 * "The categories are simple, you know, it's like an outline. I like this (template). I always look at this, first, when I go to a page. I rarely use it, because I start at the top. I didn't go back for the other stuff. I was just looking for that one thing and I found it." 
 * – Tom H.

"If I were doing this for real...."
One of the most common remarks upon using the article feedback tool was that participants would read the article more completely (than they did during the study AND than they normally do when looking at information on Wikipedia) in order to rate it. Whether or not they actually would is not something we can immediately determine, but we can infer from this desire that their satisfaction and confidence in their ratings would increase with a more thorough reading, which is not typical of most readers.


 * "If I actually read the entire thing, I'd probably have a more.....I would rate it differently. I would read the other parts [to rate]. But generally I just look at the first paragraph to get the meaning." 
 * – Cindy W.


 * "I don't think I gave enough attention to the page to rate it fairly....If I was actually going to rate it, I would be more conscientious or thorough about it." 
 * – Tom H.


 * "Normally I'm trying to find a particular piece of information....." 
 * – Owen W.

Location
All of our users found the article feedback tool without instructions, but many were surprised to find it at the bottom and often expected it at the top or side, explaining that they (or others) don't always make it to the bottom of an article. For most users, the perceived end of an article is the "see also", "external links", or "references" section. Only one of our users (an editor who had previously encountered the tool) expected the tool to be at the bottom of the page, and even he thought it should also be present on the discussion page.


 * "I do go to the end. I stop when I see these three (mousing over external links, references, see also)." 
 * – Jennifer G.


 * "This is the tool down here? I thought it would be a box on the side like this. When I saw this, it threw me a bit." 
 * – Tom M. (editor)


 * "I think some people won't go all the way down here and won't see this. You should put it [in the sidebar] or [at the top]. I don't always go to external links, but I don't go down to the very bottom of the page." 
 * – Cindy W.


 * "Oh here we go! I've found the article feedback tool......it is at the bottom. I expected it to be along the top or the side because not everyone reads to the bottom of articles. If it was small, I could see it being along the top. But this is big, this feedback thing." 
 * – Owen W.


 * "Oh duh, it's down here at the bottom. I may not have made it to the bottom of this page." 
 * – Tom H.


 * "This box should be in the discussion page. Or in both. If it has a page rating from users and review board, you can see if there is any discrepancy. If there was correspondence to the FA, GA rating, that would be good. I'm not seeing the correlation. I saw a lot of 3s, but this is start class, which is very low." 
 * – Logan H. (editor)

Criteria/Terms
The reactions to the criteria and terms presented for evaluation were generally positive. Participants thought the criteria were appropriate for the content typically found on Wikipedia and that the terms were, for the most part, understandable. "Neutral" was the most confusing metric; "Readable" was the most understandable and easiest to rate. Participants had their own personal rankings of importance or likability, but there were no other discernible patterns. Participants also frequently discussed the interrelatedness of the terms as it affected their rating (e.g. a lack of completeness affects the neutrality of the article), but this did not prevent the completion and submission of their ratings.


 * "I don't know what neutral means. What do you mean by fair representation? The mouseover doesn't help. Is the description the same on all pages, or just this one? [checks another page]. It's the same." 
 * – Cindy W.


 * "Neutral....I don't know, might not rate that one. I'll skip it, it's not something I feel comfortable clicking on not having read it in depth. Everything else I can do having scanned it." 
 * – Jason M.


 * "The two things I think anyone can comfortably rate: well-sourced and readable. Good English, I could rate it as such. Neutral or complete, I'm not an expert on the subject so I'd probably just stay away." 
 * – Logan H. (editor)

Criteria not covered
It should be noted, however, that much of the feedback naturally given by participants is NOT covered by these criteria. Particularly: relevance of images, presence or accuracy of internal linking, the length of the article reflecting notability, and the correct depth of information (breaking off into other sections, articles, or external links, where appropriate).

Completeness
The primary complaint about the "completeness" metric was that, as readers most often there to learn about a topic, participants found it difficult to ascertain whether the topic's coverage was missing something or complete. The heuristics for completeness were the presence and breadth of sections (sometimes getting specific: a first paragraph, background, history, external links, and references) and the overall length of the article.


 * "I'm here to learn. Well, I don't know if something's not there. How would I know?" 
 * – Tom H.


 * "Completeness. I'll put a 3, as in maybe. Not enough information. But there are so many sources, maybe there is better sources for these little couple sentences. Might be complete, but I'm not sure." 
 * – Cindy W.


 * "It didn't have a big chunk of information for each (section)." 
 * – Jennifer G.

Well-Sourced
The reasoning behind participants' ratings for well-sourced was twofold: 1) the number and diversity of references in the references and external links sections and 2) less prevalently, the use of sourcing within the article body. It should be noted that more was not always considered better when it comes to references and being "well-sourced"; 3/8 participants complained about over-sourcing and referencing, describing it as excessive and sometimes out of balance with the information. Participants rarely went farther than opening a reference link in a new tab and briefly checking it out (scrolling up and down the page).


 * Reference section:


 * "99% of the time I don't go through each of these links, but just seeing a lot of them there is a good sign. It's well cited." 
 * – Jason M.


 * "I don't know if it's well sourced since I didn't go to these sites yet. I don't know if these sources are very good in general, but they do have a lot of information. There are too many to look at. I will look at one. It's very well sourced." 
 * – Cindy W.


 * Article body:


 * "I think they use too many references for this couple of sentences, paragraphs." 
 * – Cindy W.


 * "There should be enough diverse sources spread through the article and [you] say - oh this actually does pertain. I hate it when there are 10 sources for one sentence, or when one source isn't overused too much." 
 * – Logan H. (editor)

Neutrality
Neutral was by far the most difficult criterion to understand. Those that did readily understand it as "unbiased" expressed difficulty in rating an article as such without a more thorough reading or more knowledge of the subject. Methods described to complete the neutrality rating included writing tone, general trust in the unbiased nature of Wikipedia, the presence or absence of neutrality flags, and (when applicable) particular sections or list items representing different points of view.


 * "It took me a second to figure out what that (neutral) meant." 
 * – Tom H.


 * "As for neutral, I still don't get what it means. Does it mean my feelings are neutral to this article?" 
 * – Cindy W.

Readability
The "readable" metric was the easiest for our participants to complete. Methods included reading the first paragraph, bits of different paragraphs, or other sections.


 * "Readable, I like this one." 
 * – Cindy Wu


 * "Complete and readable are the most important to me." 
 * – Jason M.

Comments/Explanation
All (all 8!) of our participants mentioned that they'd like the option to leave comments. A majority of these mentions were strong suggestions to include comments to fill out the rating experience. For the most part, comments were brought up as a method to explain their personal rating, especially when it was a low rating or when one particular thing led to it. In addition to explaining their own ratings, participants wanted to see the comments attached to ratings left by others, both out of interest in hearing what others have to say and to help qualify the rating. Providing ratings without comments, explanations, or reviews made those ratings less useful, less interesting, and less trustworthy. A few (3: 2 editors, 1 reader) of our participants brought up that they could either just do this on the discussion page or edit the article themselves to fix the issue they were commenting on.


 * Wanting the option to leave comments:


 * "[I'd like to] leave some kind of text in addition to all the ratings. Or additional comments [could be] optional." 
 * – Jason M.


 * "I'd like to see some comments. Some people could say that "I thought this line was interesting." Or if their information is wrong. That they should change it, but they could also change it themselves, right?" 
 * – Cindy W.


 * "There's no place for comments. I might want to elaborate on something if I had a serious objection to some of these things...like if I gave only one star or no stars. If everything in the article was fantastic, but there was one thing that was factually wrong or a broken link, I would want to have an opportunity to explain what the issue was." 
 * – Tom H.


 * "I was expecting comments. I would be interested in leaving comments if someone else commented on something and I wanted to add my information as a response to theirs. Sort of as a discussion." 
 * – Owen S.


 * Aggregate ratings without comments:


 * "It's still a star 1-2-3-4-5 short rating system without any explanation or elaboration. It seems to me I won't take it that seriously into consideration." 
 * – Gigi L.


 * "If there was more information, like people arguing, but for this one, there's too little information to actually use this." 
 * – Cindy W.


 * Comments Vs. Discussion/Talk Page:
 * "Whoever does that (rates an article) could leave a comment, they might want to tell why they thought it wasn't readable. But people can do that through the talk page." 
 * – Tom M. (editor)

Rating Summary/Aggregate
Occasionally before, but most often after, submitting their ratings, participants noticed or found the rating summary/aggregate box. Initially this was confusing or unexpected. After a closer look, this part of the display and interface proved interesting (seeing others' ratings and how personal ratings compare to aggregates; see "How'd I do?" below) but ultimately not satisfying. Participants explained that it was illegible (both in the presentation of its content and the content itself) and deficient (lacking comments and more demographic and social information). That said, our subjects expressed the greatest joy and fun upon comparing their results to the average results displayed.

Only one user (an editor) went on to look at the rating summary on further articles he viewed; he also wanted to know the correlation between these numbers and Wikipedia's existing evaluation system (GA, FA, A, B, C, Start, Stub). A few participants expressed interest in looking at these numbers before reading further articles, but none went on to do so in our study.


 * "Oh interesting! I thought this was the second part of the rating. I didn't look at it well enough to know that these are the exact same categories. I thought it was some other kind of rating on another dimension." 
 * – Jason M.


 * "This one needs a lot for me - where is this information coming from? Is this what other people are thinking? It's hard to read." 
 * – Jennifer G.


 * "[This] doesn't really mean too much. I'd rather see it over here (mouses on discussion page). It's a general consensus right above all of the discussion. Can you check this when you haven't rated? That's helpful if you can check without having to rate yourself." 
 * – Logan H.


 * "I assume this is a summary." 
 * – Tom M. (editor)


 * "If there was more information, like people arguing, but for this one, there's too little information to actually use this." 
 * – Cindy W.


 * "When I first looked at this I was like: what is this? What does this mean (pointing to aggregates) 10? 5? Something like that? I think it should have a number showing or easy to understand. Like how much the little increments means. Maybe you could make it a bit more noticeable." 
 * – Cindy W.


 * "I like this option. I get to see what other people think of this article. It's a little plain. Maybe a map seeing where they came from. And you get to see who else read this article. It will be more interesting to see it." 
 * – Cindy W.

How'd I do?
In looking at the aggregate rating summary, participants most enjoyed the "how'd I do?"-like comparison of their ratings to those of the collective.


 * "Oh, looks like a lot of people agreed with me actually. A lot of people have rated this page. I kind of feel pretty cool, since everyone seemed to be in agreement with me. That's reassuring to know that I didn't botch the rating." 
 * – Logan H. (editor)


 * "See, I looked at that (rating summary) instantly. How'd I do compared to everyone else? I look at this and I think, what do they want?" 
 * – Tom H.


 * "I like knowing where my opinion stands in the sea of other opinions. My perspective isn't completely skewed from the rest of the people rating this article" 
 * – Owen W.

Rerating
6/8 of our participants brought up the idea of re-rating, re-submitting their ratings, or submitting more than one rating on a given article. They were pleased to see that vandal behavior was prevented, but when trying to "update" their own ratings they received no textual or visual cues that their ratings had been re-submitted. With no confirmation (other than the unchanged denominator of total ratings), most assumed nothing had happened and that re-submitting was not possible.


 * "What if I want to change my feedback? [Tries] I can't tell if anything changed. I guess I can only rate once." 
 * – Cindy W.


 * "You can change your rating?.....oh yeah it doesn't change, that's good so people can't rate twice." 
 * – Logan H. (editor)


 * "If someone improved it. I don't know when this changes here (no change in the aggregate)." 
 * – Jason M.

5 Point (Likert) Scale
Participants showed the same patterns and behaviors with the 5-point star scale used in the feedback tool as are seen in other web experiences and in general 5-point survey responses. It is important to note that users treated the 5-point scale as both ordinal (like Likert scales) and interval (for more, see Level of measurement). Users develop personal thresholds; sometimes blur or confuse their affinity for the topic with their rating; use the middle when they are neutral, don't know, are indifferent, or have no opinion; and so on.


 * Consuming:


 * "The average rating for a decent place is 3.5. If it's 4, it's better than average. If it's more, it must be phenomenal. There's always a certain threshold. If there's no mention of the number, it's always a little sketchy." 
 * – Jason M.


 * "If I see a movie over 90, I know it's an awesome movie. Percentages are universal." 
 * – Logan H. (editor)


 * "It's still a star 1-2-3-4-5 short rating system without any explanation or elaboration. It seems to me I won't take it that seriously into consideration." 
 * – Gigi L.


 * Creating:


 * "Oh my ratings for this (Wikipedia article of her favorite movie) are going to be so good! 5 stars, 5 stars, 5 stars." 
 * – Jennifer G.


 * "If there's some kind of granularity that needs to be explained for ratings, I sometimes read what it has to say. Otherwise I rate based on what I think 1-5 means." 
 * – Jason M.


 * "This is not my domain, this topic, so I would hesitate to say I have that expertise in rating and can back up why I'm comfortable in rating. If it's something I'm really sure, then it's different. So I say, OK neutral." 
 * – Gigi L.


 * "It's either giving good positive feedback or praise. And giving more critical feedback if it's not good overall. To keep the quality, overall." 
 * – Tom M. (editor)


 * "Giving a 5 is a big gushy praise. It's a relatively simple article. It's not trying to explain relativity." 
 * – Tom M. (editor)


 * "[On YouTube] After I watch something, I say you're the best. I liked the stars better, but I don't like the thumbs up, thumbs down. Sometimes I want to give 3 stars." 
 * – Cindy W.


 * "I might have given it a lesser rating, but sometimes I think, well, the person worked very hard on this since they have to add a lot of resources and references, so I just give them a star." 
 * – Cindy W.
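
The ordinal-versus-interval distinction noted above has a practical consequence for how aggregate scores are computed and displayed. As a minimal sketch (the ratings below are invented for illustration, not study data), the same set of five-star ratings reads differently depending on whether it is summarized with the mean (interval treatment) or the median/mode (ordinal, Likert-style treatment):

```python
# Hypothetical sketch: one set of five-star ratings, summarized two ways.
# Treated as interval data, the mean is meaningful; treated as ordinal
# (Likert-style) data, the median or mode is the safer summary.
from statistics import mean, median, mode

ratings = [5, 5, 4, 3, 3, 3, 2, 1, 1]  # invented example ratings

print(mean(ratings))    # interval reading: arithmetic average -> 3.0
print(median(ratings))  # ordinal reading: middle rating -> 3
print(mode(ratings))    # ordinal reading: most common rating -> 3
```

On this toy data the summaries agree, but skewed distributions (e.g. mostly 5s with a few 1s) pull the mean away from the median, which is one reason participants asked for distribution data rather than a single aggregate number.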

Comments

 * Allow users to leave additional comments. Consider expert comments and review possibilities moving forward.
 * Allow users to link a rating or comment with a particular piece of text in the article.
 * Allow users to look at the body of ratings and comments a given user has made.
 * Allow users to filter comments and ratings (by user, date, location, reader/editor, type {readability, neutrality, etc}, etc.)
 * Allow comments to be sorted by helpfulness, critique, or praise.
 * Explicitly prompt users for additional comments when their rating is especially low or high.
 * Integrate comments left with and for ratings into the discussion page (see below).
 * Display aggregate rating on article page and create a new tab for ratings that could include distribution, histograms, comments, lists of public rating contributors, etc.
 * Revive the namespace previously used for comments not appropriate for the discussion page. (See w:Talk:Quantum mechanics/Comments)

Two Onramping Paths

 * Editing: The article feedback tool provides a great opportunity to extend an invitation to edit (the one we all love to talk about). Participants naturally launched from rating an article toward fixing, themselves, the mistakes they were flagging in their ratings.


 * "I'd like to see some comments. Some people could say that "I thought this line was interesting." Or if their information is wrong. That they should change it, but they could also change it themselves, right?" 
 * – Cindy W


 * "If I had time, sometimes I would and sometimes I wouldn't. Maybe. But now I see the button you can edit it. And it will affect the changes right away." 
 * – Gigi L


 * Discussion/Talk: A separate path, with equal or higher onramping potential, is introducing users to the type of critical discussion that happens on discussion pages or that could be started with comments from user-generated ratings.


 * "I look at this and I think, what do they want? I'd probably argue with them....because honestly, I'm not sure I'd want to see a bigger page than this. I think this article is a great example of a very clear, simple, gives enough information, gives enough reference and citations to go beyond, didn't take me long to read. I got everything that was there. I would actually argue with these guys." 
 * – Tom M

Integration with the Discussion Page

 * Place comments left in rating tool in its own unique section on the discussion page
 * Show the rating summary/aggregate on the article page as well as on the discussion page
 * Prompt rating contributors to leave comments, potentially also engaging in discussions on similar issues on the talk page.
 * Make a template for the article rating for the header template section of a discussion page (not really recommended)
 * In lieu of allowing additional text input in the rating tool, provide a link or guidance through to the discussion page
 * Allow users to leave comments via the rating tool. In the interface chosen to display these comments, allow users to explore or expand further in the discussion page explicitly.
 * With a ratings summary or aggregate box on the discussion page, provide a means for users to rate or leave comments (with or without a rating?) directly from there.
 * Allow comments to be sorted by helpfulness, critique, or praise.
 * Keep in mind that Wikipedia is not a forum, and consider how comment features affect newbies
 * LiquidThreads!

Correlations with Wikipedia Review Assessment (FA, GA, Etc)

 * Use fancy statistical modeling or machine learning to find an accurate correlation between user generated ratings, expert ratings and reviews, and existing Wikipedia assessments.
 * Provide a correlation or equality legend for rating scale and existing Wikipedia assessment scale
 * Merge two ratings scales into one, more intuitive scale
 * Develop plan to migrate existing Wikipedia Assessment to a fuller expert review system + develop rating tool on a separate and parallel path.
 * Co-locate both types of ratings, rankings, reviews, assessments.
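One way to start on the first bullet, without any fancy machine learning, is a rank correlation between assessment classes and mean user ratings. The sketch below is purely illustrative: the class-to-rank mapping, the toy data, and the hand-rolled Spearman correlation are all assumptions, not anything from the tool itself.

```python
# Illustrative sketch: how well do user-generated ratings track the
# existing Wikipedia assessment scale? Assessment classes are mapped to
# an ordinal rank (an assumed mapping), then compared to mean user
# ratings with a Spearman rank correlation, stdlib only.

ASSESSMENT_RANK = {"Stub": 1, "Start": 2, "C": 3, "B": 4, "GA": 5, "FA": 6}

def rank(values):
    """Return 1-based ranks for a list of numbers, averaging ties."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        j = i
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank across a tie group
        for k in range(i, j + 1):
            ranks[indexed[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Pearson correlation of the ranks, i.e. Spearman's rho."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Toy data: (assessment class, mean user rating on a 1-5 scale)
articles = [("FA", 4.6), ("GA", 4.1), ("B", 3.8), ("C", 3.1), ("Start", 2.4), ("Stub", 1.9)]
xs = [ASSESSMENT_RANK[c] for c, _ in articles]
ys = [r for _, r in articles]
print(round(spearman(xs, ys), 2))
```

A consistently high rho would support merging the two scales; a low one would argue for keeping them co-located but separate.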

Location

 * Use the same real estate for both the rating input and the rating output/summary/aggregate. Conduct further work to determine which should be displayed by default.
 * Decouple the rating input from the rating output/summary aggregate. Conduct further work and experimentation to determine which belongs where. Consider duplicate locations.
 * Experiment with placing the rating tool at the top of the page, in the left sidebar, in a yet-to-be-created right column, and/or at the perceived end of an article to see how placement affects perception and use.
 * Co-locate user generated ratings with existing Wikipedia assessments if they remain separate (i.e. in the discussion page).
 * Have a rating summary or link in the article body or navigation that leads to a Rating tab containing more detailed information about an article's rating and the interface for submitting a personal rating.

Use Reader Language for Rating Criteria

 * Use the most universal language to describe metrics.
 * Neutral → Unbiased or Objective
 * Readable → Well-written
 * Completeness
 * Well-sourced

Rating Aggregate/Summary as a Social or Community Experience

 * Rethink or finalize naming of Rating Aggregate/Summary to be more comprehensive and intuitive
 * Provide richer information (public or anonymized) in the rating summary - including but not limited to geographic location, gender, editor/reader, contribution history, comments, etc.
 * Provide a rating histogram, distribution, etc. Within these, allow users to interact with the ratings via social, reputation, and expertise metrics/filters (i.e. filter ratings to show those completed by editors of the page, or show the average rating from female users or certain classes of users...)
 * Allow users to further explore rating contributors to a particular article (i.e. when one clicks on "465 ratings total", one could see a list of those contributors that have rated (which also requires people explicitly allowing their ratings to be public))
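The filterable summary described above can be prototyped with very little machinery. In this sketch, each rating record carries the score plus rater metadata; the field names (`score`, `is_editor`, `country`) are an assumed schema, chosen only for illustration.

```python
from collections import Counter

# Assumed record shape for a single rating: a score plus rater metadata
# that social/reputation filters could run over.
ratings = [
    {"score": 5, "is_editor": True,  "country": "US"},
    {"score": 4, "is_editor": False, "country": "DE"},
    {"score": 4, "is_editor": True,  "country": "US"},
    {"score": 2, "is_editor": False, "country": "FR"},
]

def histogram(records, predicate=lambda r: True):
    """Score distribution over the records matching the filter."""
    return Counter(r["score"] for r in records if predicate(r))

def average(records, predicate=lambda r: True):
    """Mean score over the filtered records, or None if empty."""
    scores = [r["score"] for r in records if predicate(r)]
    return sum(scores) / len(scores) if scores else None

print(histogram(ratings))                          # overall distribution
print(average(ratings, lambda r: r["is_editor"]))  # editors only
```

The same `predicate` hook covers every filter mentioned above (editors of the page, geography, classes of users) without changing the aggregation code.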

Expertise + Reputation + Recommendation
Discussions of expertise, reputation, and taste/recommendations arose naturally in talking about rating and reviewing.


 * Whether through rating, editing, or contribution history, start building reputation metrics into Wikipedia alongside article ratings.
 * Allow users to rate, review contributors
 * Allow users to rate, review contributions
 * Consider inviting users into the process of identifying experts.
 * Collect contributor and contribution data to build an algorithmic reputation metric.
 * Collect data on rating and contribution history to build a recommendation engine. See http://office.wikimedia.org/wiki/User_talk:Parul_Vora/Article_Assessment#Recommender_Systems.
 * Use reputation metrics to build a base and groups of subject-matter experts and Wikipedia experts.
 * Use ratings and comments to intelligently suggest work.
 * Use reputation metrics to invite certain types of editors to contribute to an article. See http://office.wikimedia.org/wiki/User_talk:Parul_Vora/Article_Assessment#Article_Quality.

Lighter-weight Scale

 * 3-point Likert scale, with explicit qualification: for example, 'Needs Improvement', 'Neutral', 'Great Job'
 * Flags for grammar, broken links, need for photos and other criteria not covered
 * Percentage completion - universal across languages, objective (less emotional), also best for "corrective" action
 * Letter grades
 * Agree Disagree
 * Bipolar vs. unipolar (right now, people saw the scale as mixed)
 * For more (on questionnaires and rating scales), see the work of Krosnick

Consider Including Criteria Not Covered
Participants freely offered constructive criticism on WP articles that isn't necessarily covered by our rating criteria. Because this feedback came so naturally, we might want to consider some means of capturing it in future versions of the tool. This includes, but is not limited to:
 * Linking (Internal Linking, External Links, Broken Links)
 * Photos (number, selection, placement within the article, captions)
 * Formatted content (tables, templates, etc)
 * Grammar

Prioritize Feedback to Editors and Would-be Editors While Still Delivering to Readers
Given readers' general perception of quality on Wikipedia, their intuitive means of judging said quality, and their inclinations about sharing these judgments, the greatest value of consuming user-generated ratings/feedback/comments lies with editors. We can choose to take advantage of this potential or to define a different approach that provides greater value to readers.


 * Include methods of sharing information, feedback, etc. in ratings that are useful to editors and potential editors. Design the input/interface to best help editors while still providing this information to readers.
 * Create or define a means to identify editors, contributors, fans or other parties interested in "watching" feedback, rating, or comments on this page.
 * Identify potential editors for a particular article or topic. Hypothetical recommender system (see above).
 * Find and use appropriate communication channels (email, talk pages?, developed watchlist, new lists) to share rating and feedback information directly with editors and fans.
 * Use word analysis on comments to identify most discussed or commented items (see digg labs, yelp, etc).
 * Allow users to vote up and down given comments or suggestions to help editors identify what the most important items might be.
 * Use comments to generate "To do" lists for articles. Share this on the article page.
 * Include some of the comments or "to dos" in the ratings display on the article page, so that readers are aware of them while consuming, as a means to encourage further participation through editing, and so that they know one doesn't have to be an expert to edit (i.e. some tasks are small, easy, and only require web/computer knowledge).
 * So much more.
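The word-analysis bullet above can be approximated with simple term counting. This is a rough sketch, not a proposal for the real pipeline: the stopword list and regex tokenizer are simplistic placeholders, and the sample comments are invented.

```python
import re
from collections import Counter

# Minimal stopword list, purely illustrative.
STOPWORDS = {"the", "a", "an", "is", "are", "this", "it", "to", "and", "of", "in", "i"}

def top_terms(comments, n=3):
    """Count content words across comments to surface the most-discussed items."""
    words = Counter()
    for comment in comments:
        for token in re.findall(r"[a-z']+", comment.lower()):
            if token not in STOPWORDS:
                words[token] += 1
    return words.most_common(n)

# Invented sample comments on one article.
comments = [
    "The photos are outdated and the links are broken",
    "Broken links in the references section",
    "Needs more photos of the interior",
]
print(top_terms(comments))
```

Even this crude version would surface "photos", "links", and "broken" as candidates for an auto-generated "to do" list, which voting up/down could then re-rank.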

Visibility of Editors and other Community Members
There is value in the anonymity of Wikipedia. But this anonymity comes at the cost of community development. Readers (and the new editors we talked with) have no real connection to or knowledge of the people contributing to Wikipedia. A rating tool is just one way to give visibility to both readers and editors, catalyze connections between these people and the information they seek and share, improve our generally broken newbie experience, increase participation, etc.


 * Allow users to see individuals (when permitted, public, logged in) that have rated, left feedback, etc.
 * LiquidThreads!
 * Provide a list of contributors to a given article, ordered by contribution level, contribution ratings, or other metrics.
 * Allow users to filter/view ratings by editors of the article and further investigate individuals (talk pages, formatted profiles)
 * Give users the option, when leaving praise or criticism, to have this go directly to all editors, primary editors, etc.
 * And more.

Integration with Template Warnings

 * When appropriate, allow certain quantities and types of feedback to generate template warnings, "citation needed" notices, and other warnings similar to those that currently exist on Wikipedia.
 * Connect users with Wikipedia experts to help create template warnings, etc.
 * Standardize Template Warnings. Focus on the message and invitation.