Wikimedia Apps/iOS Suggested edits project

Background
Suggested edits presents opportunities for small but vital contributions to Wikipedia. The aim is to raise awareness of the various ways people can edit Wikipedia while making quality contributions easier and more accessible.

Suggested edits was initially introduced on Android, and was later incorporated into the Growth experiments for the newcomer homepage. The iOS app was initially created as a reading-focused app, and over the last year we focused on adding communication features. This year's annual plan created the perfect opportunity to bring Suggested edits to the iOS app.

Objective
By July 2024 we aim to release a Suggested edits task that:

 * Increases unreverted mobile contributions from iOS by 10%
 * Results in 20,000 articles enhanced using Suggested edits

Hypothesis 1: Alt text Suggested Edit
We believe we can achieve the objective above by releasing a suggested edit task focused on adding alt text to images.

Presently, 50% of images on Wikipedia have captions, 10% have alt text, and only 3% have effective alt text. Users of the Wikipedia apps in low-bandwidth environments can go into Settings and choose not to load images. If an image has alt text, a user in such an environment can still read the alt text and get an idea of what the image shows.
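
As a rough illustration of that fallback behavior, here is a minimal Swift sketch; the types and the image-loading setting are hypothetical stand-ins, not the app's actual code:

```swift
import Foundation

// Hypothetical model of an article image; the alt text would come from the
// |alt= parameter in the image's wikitext. Names are illustrative only.
struct ArticleImage {
    let url: URL
    let altText: String?
}

// When image loading is disabled (e.g. in a low-bandwidth environment),
// surface the alt text instead of fetching the image.
func placeholderText(for image: ArticleImage, imageLoadingDisabled: Bool) -> String? {
    guard imageLoadingDisabled else { return nil } // image is shown normally
    return image.altText // nil means the reader gets no description at all
}
```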

The iOS Wikipedia app was an Apple Editor's Choice in 2017 as a result of its accessibility features. Accessibility is an important factor in our design and development process on the Apps team, so a task to fill that gap with quality alt text is a good fit for our team.

Because this is a new suggested edit type, it is important to start with a proof of concept that allows us to evaluate whether the task will be effective.

Fundamental requirements for proof of concept

 * Entry point in Settings
 * Prominent guidance for writing good alt text
 * Ability for users to get context about the image from the article
 * Ability for users to access relevant metadata (can take the user to the web)
 * Detection of which images do not have alt text (see the sketch after this list)
 * Ability to store the responses we receive, so we can evaluate whether they are good alt text
 * Ability for users to give feedback about the feature overall
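
For the detection requirement, here is a minimal sketch assuming images appear in standard [[File:...]] wikitext syntax; the function and the simplified regex are illustrative, not the app's actual parser:

```swift
import Foundation

// Minimal sketch of alt-text detection over raw wikitext, assuming images
// use the standard [[File:...]] syntax. Names are illustrative only.
func imagesMissingAltText(in wikitext: String) -> [String] {
    // Match non-nested [[File:...]] or [[Image:...]] blocks (simplified).
    let pattern = #"\[\[(?:File|Image):([^\[\]]+)\]\]"#
    let regex = try! NSRegularExpression(pattern: pattern)
    let fullRange = NSRange(wikitext.startIndex..., in: wikitext)

    var missing: [String] = []
    regex.enumerateMatches(in: wikitext, range: fullRange) { match, _, _ in
        guard let match = match,
              let range = Range(match.range(at: 1), in: wikitext) else { return }
        // The first |-separated segment is the file name; the rest are parameters.
        let segments = wikitext[range].split(separator: "|")
        let hasAlt = segments.dropFirst().contains {
            $0.trimmingCharacters(in: .whitespaces).hasPrefix("alt=")
        }
        if !hasAlt, let fileName = segments.first {
            missing.append(String(fileName))
        }
    }
    return missing
}

// Example: only the second image lacks an |alt= parameter.
let sample = """
[[File:Cat.jpg|thumb|alt=A grey cat on a windowsill|A cat]]
[[File:Dog.jpg|thumb|A dog]]
"""
print(imagesMissingAltText(in: sample)) // ["Dog.jpg"]
```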

Nice to haves

 * Positive reinforcement
 * Exposing to users how many tasks or edits they've completed
 * Ability to publish alt text to Wikipedias (see the sketch after this list)
 * Limits on the number of tasks that can be completed in a given day or session if edits are published to Wikipedia
 * Easily discoverable entry point
 * Preventing users from copying and pasting the image caption
 * Prompting users to provide feedback about the feature
 * Suggested alt text (think Machine-Assisted Article Descriptions)
 * Ability to play back what was written in the preview
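
To make the publishing nice-to-have concrete, here is a hedged sketch of the wikitext transformation involved; actually saving the edit would go through the MediaWiki edit API, which is not shown, and the function name is hypothetical:

```swift
import Foundation

// Insert an |alt= parameter right after the file name in a [[File:...]]
// block. Named parameters in image syntax are order-independent, so the
// placement is valid wikitext.
func addingAltText(_ alt: String, toImageNamed name: String, in wikitext: String) -> String {
    let target = "[[File:\(name)"
    guard let range = wikitext.range(of: target) else { return wikitext }
    return wikitext.replacingCharacters(in: range, with: "\(target)|alt=\(alt)")
}

let before = "[[File:Dog.jpg|thumb|A dog]]"
print(addingAltText("A brown dog running on grass", toImageNamed: "Dog.jpg", in: before))
// [[File:Dog.jpg|alt=A brown dog running on grass|thumb|A dog]]
```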

Short term:

 * As a participant at the GLAM Conference in Uruguay, I want to test an alt-text Suggested edits task, so I can get a concrete idea of the concept and provide meaningful feedback.
 * As the Director of Product, I want concrete proof that an alt-text Suggested edits task would increase edits on iOS without creating a burden for patrollers.
 * As an accessibility specialist, I want to evaluate the quality of alt text submitted through the Suggested edits tool, so that I can advise whether the feature would be helpful or harmful to low-vision users.

After proof of concept:

 * As a user with limited data, I want to read alt text, so that even when images are not loaded I am aware of what is in the image.
 * As a user navigating articles with a screen reader, I want alt text to be available, so that I have the same additional context about an article as users who are not using screen readers.

Consultation Strategy

 * November 2023: During the 2023 GLAM conference we will have attendees test the proof of concept
 * December 2023: Partner with accessibility specialists to evaluate submissions through the proof of concept
 * January 2024: Decide whether to build the full-featured version of an alt text suggested edit task or pivot to another suggested edit task type

Decision Matrix

 * If less than 45% of edits are scored a 3 or higher, we will pivot to a different suggested edit. If 45%-70% of edits are scored a 3 or higher, we will improve guidance or use AI to better assist users. If more than 70% of edits are scored a 3 or higher, we will scale the feature (see the sketch after this list).
 * If the average number of edits per unique user is under 3, we will pivot to a different suggested edit. If it is 3-6, we will consider interventions to reduce friction. If it is 7 or higher, we can scale.
 * If users return to edit through the feature on a second day, we should proceed with improvements and scaling.
 * If fewer than 55% of users are satisfied with the feature, we will not scale it without making changes.
 * If we do not have at least 50 people try the feature, we will do direct outreach to gain more edits.
 * If more than 30% of users find the task too difficult, we will create an intervention to reduce difficulty before scaling. If 80% or more find it too difficult, we will consider abandoning the task, depending on supplementary responses.
 * If the skip rate is 20% higher than for image captions on Android, we will consider pivoting to a different suggested edit, unless evidence points to an intervention that could reduce this rate.
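
To keep the thresholds unambiguous, here is a small Swift sketch of the first two rules; the names are illustrative and the bands follow the matrix above:

```swift
enum Decision {
    case pivot, intervene, scale
}

// Quality gate: share of edits scored 3 or higher by reviewers.
func qualityDecision(percentScoredThreeOrHigher p: Double) -> Decision {
    switch p {
    case ..<45: return .pivot      // under 45%: pivot to a different task
    case ..<71: return .intervene  // 45-70%: improve guidance or add AI assistance
    default:    return .scale      // above 70%: scale the feature
    }
}

// Engagement gate: average edits per unique user.
func engagementDecision(averageEditsPerUser e: Double) -> Decision {
    switch e {
    case ..<3:  return .pivot      // under 3: pivot to a different task
    case ..<7:  return .intervene  // 3-6: reduce friction
    default:    return .scale      // 7 or higher: scale
    }
}
```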

Updates
Usability test summary (November 2023)

Unmoderated usability tests were performed with this protocol and prototype on userlytics.com with 5 participants.

Pool of participants

 * Age: 21-37
 * Countries: Philippines, United Kingdom, Brazil, United States, Mexico
 * Devices: iPhone 6s, iPhone 8, iPhone 11, iPhone SE (2nd gen), iPhone 13 Pro Max
 * Sex: 1x female, 4x male

Findings

 * 1 user read through the onboarding screens but was still surprised that the task was about describing images.
 * 1 user did not notice the image information and had a hard time differentiating between article and image information
 * 1 user wanted to edit the alt text on the image info page
 * 1 user did not initially see the CTA to add alt text due to its position on the page (below the fold)
 * ‘View examples’ is not tappable in the prototype, though one of the test tasks involves it; this wasn't very clear for 1 user.
 * 3 of 5 users were confused about the preview screen – they believed they had published after the initial submission on the editing screen
 * 1 user suggested being able to type the text directly on the first screen and then publish, rather than going to a secondary screen, seeing a preview, and then having the text be published