Wikimedia Apps/Team/iOS/Breaking Down the Wall

Making Wikipedia Legible
This year the iOS team is working toward breaking down the wall between our editorial community and processes and the average reader. Our focus is on three potential experimental user experiences for the article view in the app, each intended to help readers gain an understanding of how Wikipedia works without losing trust in Wikipedia.

The intent of this work is to help readers better understand key elements of the Wikimedia content process, to empower them to participate, and to improve their ability to discern between trustworthy and untrustworthy content on the platform, all without eroding brand trust.

The Public Trust
Summarize from the Perspectives doc and design deck.

The relationship between the two, and the valley of distrust

Content trust and system understanding are related, but not linearly.

Discernment in a time of Disinformation
Focus on the concept of "discerning" which information is good or bad, treating it as a skill to subtly nudge and develop in users.

Learning From Wikipedia, Learning about Wikipedia, Learning about Learning

It's good for us to be understood, and it's good for the user to be a more informed consumer.

Program Goals

 * Run experiments on English Wikipedia to determine if our product theory has merit
 * Build prototype UIs and support APIs based on collaborative design and development processes
 * Use variant testing methods to expose these prototypes to iOS app users
 * Measure and report findings in order to guide future investment in this strategic area

Product Theory
Through clear design and enriched presentation of article metadata we can:

 1. Increase users' understanding of Wikipedia as a participatory and active system
 2. Improve their discernment of how trustworthy the content they are reading is
 3. Improve their overall confidence in Wikipedia as a source

Methodology
For this program we'll deliver limited experimental versions of potential new experiences, presenting user surveys before and after the user is exposed to each experimental feature. We'll use these survey results, along with engagement metrics for those features, to determine whether any of the product theories are supported.

Baseline Survey
Because of the interrelation between system understanding, quality discernment, and trust, we are asking about user attitudes in all three areas as part of the survey. More complete information about the content of the survey can be found here. After a short delay to allow for some basic reading and assessment of the article, the user will be presented with the option to take a short survey. The survey will be presented as a full-screen call to action using the existing app announcements system.

Targeting Articles
In order to reduce the number of variables and potential confounding factors, and to limit the cost and scope of prototype development, we will only survey and variant-test users on a limited set of high-volume articles. This allows us to "fix" one variable (the content) while still randomly sampling users. Article selection will be based on two factors, both derived from pageview counts: "evergreen" articles, which have high pageview ranks over a longer period (a year or more), and "highly topical" articles, which have high pageview ranks in the recent past. This gives us two known but different contexts in which to compare user reactions and results, and represents two of the most common consumption patterns on Wikipedia.
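As a sketch of how the two article pools could be derived from pageview counts, the following Python function classifies articles as "evergreen" or "topical" from a window of daily view data. The window sizes, rank cutoffs, and function names here are illustrative assumptions, not the team's actual selection criteria.

```python
from collections import defaultdict

def classify_articles(daily_views, recent_days=7, top_n=10):
    """Split articles into 'evergreen' (high rank over the full window)
    and 'topical' (high rank only in the recent window) pools.

    daily_views: list of dicts mapping article title -> pageviews,
    ordered oldest to newest, one entry per day.
    """
    total = defaultdict(int)
    recent = defaultdict(int)
    for i, day in enumerate(daily_views):
        for title, views in day.items():
            total[title] += views
            # only the last `recent_days` entries count toward "topical"
            if i >= len(daily_views) - recent_days:
                recent[title] += views
    top_total = {t for t, _ in sorted(total.items(), key=lambda kv: -kv[1])[:top_n]}
    top_recent = {t for t, _ in sorted(recent.items(), key=lambda kv: -kv[1])[:top_n]}
    evergreen = top_total
    topical = top_recent - top_total  # recently hot but not a long-term leader
    return evergreen, topical
```

In practice the input could come from the Wikimedia pageview statistics; the sketch only shows the classification step.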

Exposure
In all cases the experimental UI will be exposed to users in the context of a targeted article. The experiments may vary slightly in how they are first shown to users, but, again to limit the number of variables, we won't present calls to action outside the articles or use other means of raising awareness of these features. This may limit or slow data collection, but it allows us to focus on the core product theory first.
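One common way to run this kind of variant test is deterministic hash-based bucketing, so a given user sees the same experimental UI consistently across sessions. A minimal sketch, assuming a stable per-user identifier; the bucketing scheme and variant names are illustrative, not the app's actual mechanism:

```python
import hashlib

VARIANTS = ("control", "source_navigator", "article_inspector", "living_document")

def assign_variant(user_id: str, variants=VARIANTS):
    """Deterministically assign a user to one experiment arm.

    Hashing the user ID keeps assignment stable without storing any
    server-side state, and spreads users roughly evenly across arms.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

The same call on the same identifier always returns the same arm, which is what lets pre- and post-exposure survey responses be tied to a single consistent experience.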

Post Survey and Measurement
After a user has exited an experimental feature, they will again be prompted to take our survey. In addition, we'll use engagement analytics (for opted-in users) to track interactions. Together, these two data sets of self-reported qualitative responses and quantified engagement metrics provide a measure of both user interest and impact for each prototype, and we'll use these measurements to analyze whether and how our product theories are supported.
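For the quantitative side, comparing the share of respondents who agree with a trust or understanding statement before versus after exposure could use a standard two-proportion z-test. This is an illustrative analysis sketch only; the document does not specify the actual statistical plan.

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test.

    E.g. successes_a/n_a = pre-exposure respondents agreeing with a
    statement, successes_b/n_b = post-exposure respondents agreeing.
    Returns the z statistic and a two-sided p-value.
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # pooled proportion under the null hypothesis of no difference
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

A shift from 40% to 60% agreement on 100 respondents each side, for example, comes out clearly significant, while identical proportions give z = 0.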

Planned Experiments
Based on an extensive review of existing research in this area, the team used a design sprint process to conceive three initial experimental experiences. Each user story is given below along with an initial design proposal. Much more detail can be found in the Phabricator tickets linked below.

Source Navigator
Development task: https://phabricator.wikimedia.org/T241254

Story: As a reader of Wikipedia, I would like to know more about the citations used in the articles that I read so that I can learn more about how Wikipedia is built using reliable sources and explore similar articles or resources.

Concept: Provide richer and more consumable information about the citations in an article, in order to highlight the importance of sourcing and citations for verifiability, and therefore for the trustworthiness of content.

Initial Mock:

Article Inspector
Development task: https://phabricator.wikimedia.org/T241255

Story: As a reader of Wikipedia, I would like to be able to learn more about a specific area of the article that I am reading, without having to stray too far from my current reading experience, so that I can better understand how articles are written.

Concept: Building on the "Who Wrote That" tool, developed in response to this Community Wishlist item, this experience allows the user to highlight specific text in an article and learn about its editing history, making the evolution and origins of specific content legible.

Initial Mock:

Article as a Living Document
Development task: https://phabricator.wikimedia.org/T241253

Story: As a reader of Wikipedia, I would like to see how an article has changed over time so that I can better understand how Wikipedia articles evolve, without being overwhelmed by too much granular information.

Concept: By showing a more visual presentation of key events in an article's editing history, users can better consume the changes being made to an article, gain more insight into how it has evolved, and come away with a clearer understanding that articles are evolving collaborative efforts. We think this is particularly relevant for highly controversial or current-events articles which are undergoing frequent change.

Initial Mock: