Article feedback/Public Policy Pilot/Feedback Survey Phase2


The second phase of the survey is meant to address the variance in the quality of ratings among different users and to gather quantitative feedback about the use and value of the tool in its current state:

1. Please tell us about your experience with the topic of the article you rated:

  • I don't have any prior experience.
  • I am generally interested in this topic.
  • I have studied/am studying this topic or a closely related topic.
  • This topic is part of my professional life.
  • Other (open text box)

2. Have you ever contributed to Wikipedia before?

  • No
  • Yes (If Yes, please describe.)

3. Have you contributed to the article that you rated?

  • Yes
  • No

4. Please let us know why you rated this article today (check all that apply):

  • I wanted to contribute to the overall rating of the article
  • I hope that my rating will positively affect the development of the article
  • I wanted to contribute to Wikipedia
  • I like sharing my opinion
  • I didn't provide ratings today, but wanted to give feedback on the feature
  • Other

5. How useful did you find the averages of article ratings?

  • Very Useful
  • Useful
  • Not useful
  • Useless
  • No opinion

6. Question about satisfaction (wording pending the UX study)

7. Please let us know if you have any additional comments.

  • Open text box


Use of this Survey

We will be using the results of the survey to inform our design and development decisions moving forward.

1 We will use the distribution of experience levels to see whether there is any correlation between a user's level of experience/expertise on a given topic and the quality of their ratings (Do users with more experience give more accurate ratings?)

2 We will use the classification of reader and editor (and perhaps reader and various levels of editor) to look at differences in behavior before, during, and after use of the tool (Do readers and editors have similar or different use cases and behavior patterns with this tool, both in rating and in viewing the aggregate ratings?). We will also look for any correlation with the quality of the ratings, as above (Do editors rate more? Do editors give more accurate ratings?)

3 This question might not be necessary with the data we have.

4 The distribution of motivations will be used to determine how to present and prioritize existing or potential features in this feedback system, including but not limited to: feedback input (here, star ratings on 4 metrics); feedback summary (here, a bar graph for each metric); and feedback output (potentially reviews, comments, messages to editors, lists, updates to profile pages, and so on). Responses to this question will also highlight areas where further research and design thinking needs to take place (for example, if we see a significant bias in motivations toward "opinion sharing", we will further explore iterations on the interface or new interface elements that lead to a richer opinion-sharing experience).

5 We will start to evaluate the value that the aggregate ratings have for {readers, editors, expert, non-expert, registered, non-registered} users. If we see unexpectedly high or low results, follow-up qualitative research will be conducted.

7 Our users love open text fields. So do we.