Wikimedia Apps/Team/Android/Add an image MVP

Objective
The Android, Structured Data, and Growth teams aim to offer "Add an Image" as a “structured task”. More about the motivations for pursuing this project can be found on the main page created by the Growth team. In order to roll out Add an Image and have the output of the task show up on wiki, a "minimum viable product" (MVP) for the Wikipedia Android app will be created. The MVP will enhance the algorithm provided by the research team and answer questions about user behavior, to further explore the concerns raised by the community.

The most important thing about this MVP is that it will not save any edits to Wikipedia. Rather, it will only be used to gather data, improve our algorithm, and improve our design.

The Android app is where "suggested edits" originated, and our team has a framework to build new task types easily. The main pieces include:


 * The app will have a new task type that users know is only for helping us improve our algorithms and designs.
 * It will show users image matches, and they will select "Yes", "No", or "Skip".
 * We'll record the data on their selections to improve the algorithm, determine how to improve the interface, and think about what might be appropriate for the Growth team to build for the web platform later on.
 * No edits will happen to Wikipedia, making this a very low-risk project.

The Android team will be working on this in February and March 2021. Our hope is that the Growth team will learn enough to deploy the feature on mobile web. Based on the success and lessons of the Growth team's deployment, the Android team will refine the MVP and turn it into a feature that produces edits to Wikipedia.

Product Requirements

As a first step in the implementation of this project, the Android team will develop an MVP with the purpose of:


 * 1) Improving the Image Matching Algorithm developed by the research team by answering "how accurate is the algorithm?" We want to set confidence levels for the sources in the algorithm -- to be able to say that suggestions from Wikidata are X% accurate, suggestions from Commons categories are Y% accurate, and suggestions from other Wikipedias are Z% accurate.
 * 2) Learning about our users by evaluating:
    * The stickiness of Add an Image across editing tenure, Commons familiarity, and language
    * The difficulty of Add an Image as a task, and whether certain matches are harder than others
    * The implications of language preference on the ability to complete the task
    * The accuracy of users' judgments of the matches; because we’re not sure how accurate the users are, we want to receive multiple ratings on each image match (i.e. “voting”)
    * The optimal design and user workflow to encourage accurate matches and task retention
    * What, if any, measures need to be in place to discourage bad matches
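
As a rough illustration of the first goal, the per-source confidence levels (the X/Y/Z percentages above) can be estimated by tallying reviewer verdicts per suggestion source. This is a minimal sketch only: the source names and the `(source, accepted)` judgment format are hypothetical, not the MVP's actual data schema.

```python
from collections import defaultdict

def per_source_accuracy(judgments):
    """Fraction of suggestions that reviewers accepted, per suggestion source.

    `judgments` is an iterable of (source, accepted) pairs, e.g.
    ("wikidata", True). The field names are illustrative assumptions,
    not the schema actually used by the MVP.
    """
    totals = defaultdict(int)
    accepted = defaultdict(int)
    for source, ok in judgments:
        totals[source] += 1
        if ok:
            accepted[source] += 1
    # Accuracy estimate per source: accepted / total
    return {source: accepted[source] / totals[source] for source in totals}

sample = [
    ("wikidata", True), ("wikidata", True), ("wikidata", False),
    ("commons_category", True), ("commons_category", False),
]
rates = per_source_accuracy(sample)  # e.g. {"wikidata": 0.67, "commons_category": 0.5}
```

In practice each estimate would also carry a confidence interval that narrows as more judgments arrive, which is why the MVP gathers many ratings before setting the per-source levels.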

How to Follow Along
We have created T272872 as our Phabricator Epic to track the work of the MVP. We encourage your collaboration there or on our Talk Page.

There will also be periodic updates to this page as we make progress on the MVP.

2021 March 25 - User Testing Results and Analysis
The team released an update to production that included minor bug fixes for Talk Page and Watchlist. We also show non-main-namespace pages in-app through a mobile web treatment.

The Android team leveraged usertesting.com to gain a better understanding of what aspects of the Image Recommendations MVP worked well and what things should be improved prior to release in English, German, French, Portuguese, Russian, Persian, Turkish, Ukrainian, Arabic, Vietnamese, Cebuano, Hebrew, Hungarian, Swedish, Polish, Czech, Basque, Korean, Serbian, Armenian, Bangla and Spanish.

We completed the analysis in partnership with the Growth team. Below is the Android team analysis.

Analysis of tasks T277861
🥰 = Good — Participant had no issues
😡 = Bad — Participant had issues
🤔 = Not sure if good or bad — Participant might have had difficulties understanding the question, did not explicitly interact with it, or ignored the task completely

Onboarding and understanding of Suggested edits

Do participants understand the tooltip? 😡
Can participants explain the difference between tasks? 🥰
Do participants understand what the 'Train image algorithm' task is all about? 🥰
What do participants associate with the robot icon? 🥰

Train AI task - Onboarding and understanding
 * 2/5 discovered the tooltip but had issues understanding it.
 * 2/5 did not see the tooltip since it disappeared too quickly.
 * 1/5 discovered and understood the tooltip completely.
 * 5/5 were able to explain their understanding of the tasks in a sufficient way.
 * 5/5 were able to describe the task in their own words well.
 * 4/5 associated the robot icon with an algorithm, artificial intelligence (AI) or computer program
 * 1/5 didn’t know what it means

Do participants understand the two onboarding screens? 🥰
How do participants interact with onboarding tooltips? 🥰
Is the tooltip copy clear enough? How’s the timing and positioning of the tooltips on various devices / screen sizes? 🤔
Do participants know what to do after all these onboarding measures? 🥰

Train images task
 * 4/5 understand both onboarding screens.
 * 1/5 wasn’t reacting to the second onboarding screen (opt-in).
 * 3/5 understand the task due to the tooltips.
 * 1/5 mentioned that the tooltips are very helpful to understand the task.
 * 1/5 understands the task but did not pay attention to the tooltips.
 * 1/5 probably did not see or understand the tooltips.
 * 3/5 read and understand the tooltip copy.
 * 2/5 did not interact with the tooltips.
 * 2/5 had tooltip display issues on a smaller phone.
 * 1/5 likes that the tooltip mentions the impact (help readers understand a topic)
 * 5/5 understand what to do now.

Do participants interact with the prototype naturally? 🥰
Do participants know how to navigate to the file detail page? 🥰
How helpful is the meta information on the file detail page? 🥰
Do participants know how to enlarge / zoom an image? 🥰
Do participants know how to go back and forth between image suggestions? 🥰
Do participants understand the 'Not sure' options? 🥰
Do participants understand the 'No' options? 🥰
Do participants scroll or know how to reveal more of the article contents? 🥰
Do participants know how to access the FAQ? 🤔
How do participants interpret the element of positive reinforcement? 🥰
Do participants notice the element of positive reinforcement that has been added to the card? 🥰
 * 4/5 are mostly comfortable interacting with the UI and make educated decisions.
 * 3/5 do not navigate to the file page without being prompted.
 * 2/5 navigate between the article and file page intuitively and without issues.
 * 1/5 is intimidated to make decisions that affect Wikipedia articles, doesn’t know how to interact with the article (RS: possibly due to small screen size), and doesn’t use the file detail page intuitively.
 * 5/5 successfully navigated to the file detail page after being prompted.
 * 1/5 tapped the 'info i' icon in the feed view first.
 * 3/5 consider the information on the file page as helpful.
 * 2/5 mention that the author is helpful.
 * 2/5 mention that the date is helpful.
 * 1/5 mentions that licensing info is helpful.
 * 1/5 mentions that the image description is helpful.
 * 5/5 tapped the image and used a pinch to zoom gesture to zoom the image.
 * 2/5 tried to zoom the image directly from the feed experience.
 * 5/5 use swipe gestures to navigate back and forth between image suggestions.
 * 2/5 tapped the back button at the top left before using the swipe gesture.
 * 1/5 tapped the 'info i' button at the top right before using the swipe gesture.
 * 5/5 understand the 'Not sure' options.
 * 3/5 were selecting multiple reasons at once.
 * 5/5 understand the 'No' options.
 * 4/5 were successful in scrolling the article to reveal more information
 * 2/5 wanted to use the pull indicator at the top of the image suggestion to reveal the article below before they scrolled the article
 * 2/5 tried to tap the article title (1/5 scrolled afterwards)
 * 1/5 looked for a 'More' button to reveal more of the article’s content, then tapped the 'info i' button at the top right
 * 3/5 tap the 'info i' button at the top right to reveal the FAQ.
 * 1/5 explained that she would tap the back button and look for an FAQ there (RS: a possible way to success as there’s an FAQ section in the SE home screen)
 * 1/5 did not notice the 'info i' button at the top right
 * 5/5 understand what it is and identified the element as motivational, encouraging and/or daily goal
 * 1/5 wasn’t 100% sure about it but then identified it as a motivational element.
 * 5/5 participants identified the added progress indication in the card

3. Analysis of rating scale

1 = Not at all useful information
5 = Very useful information

4. Analysis of follow-up questions

1. How do you think the suggested images for articles are being found? And how would you rate the overall quality of the suggestions?
2. Was there anything that you found frustrating or confusing, that you would like to change about the way this tool works?
3. How easy or hard did you find this task of reviewing whether images suggested were a good match for articles?
4. Would you be interested in adding images to Wikipedia articles this way? Please explain why or why not.
 * 5/5 mentioned that the images presented were relevant.
 * 4/5 associated the image suggestions with an algorithm or computer program.
 * 2/5 mentioned that the suggestions are associated with keywords.
 * 1/5 mentioned these are random suggestions.
 * 3/5 replied that it’s easy to use.
 * 1/5 replied that it’s tedious and cumbersome.
 * 1/5 suggested to show more than 1 image choice per article.
 * 4/5 find it very easy to evaluate if it’s a good match for the article.
 * 1/5 think it’s hard and time consuming but well worth it.
 * 4/5 are interested in such a feature
 * 1/5 mentions he would not be interested
 * 1/5 mentions that she wants to know how accurate she is when reviewing images

5. Appendix

Detailed notes and evaluations per participant from usertesting.com.

Participant #1: Battybrit

Tasks

Task #1-5 (instructions)

Task #6 🤔

Tooltip disappeared before she was able to interact with it.

Task #7-8 (instructions)

Task #9 🥰

Understands the rough concept of it.

Task #10 🥰

Understands the concept of it.

Task #11 🥰

Associates the robot icon with AI.

Task #12 (instructions)

Task #13 🥰


 * Understands first onboarding screen.
 * Understands second onboarding screen.

Task #14 (instructions)

Task #15 🥰

Task #16 🥰

Task #17 🥰

Understands the tooltips and what to do now.

Task #18 🥰


 * Comfortable with using the interface
 * Does not use meta data intuitively
 * Notices the reach the goal message


 * 1) 🥰 Does not see a rodent on the image. Chooses low quality.
 * 2) 😡 Even though an aviator is shown, she chooses not relevant. Realizes that she got it wrong in the next one.
 * 3) 🤔 Not 100% sure if she’s correct, but chooses yes.
 * 4) 🥰 Is confident about the choice.
 * 5) 🥰 Makes informed decisions based on the image.
 * 6) 😡 Does not check meta data.
 * 7) 🥰 Not sure if a person is suited for a surname.
 * 8) 🥰 No, Not enough information
 * 9) 🥰 No, Not relevant
 * 10) 🥰 Not sure, Not enough information
 * 11) 🥰 Yes
 * 12) 🥰 No, Low quality
 * 13) 😡 No, Not enough information
 * 14) 🥰 Not sure, Not enough information
 * 15) 🥰 No, Not enough information
 * 16) 🥰 No, Not relevant
 * 17) 🥰 Not sure, Not enough information (different language)
 * 18) 🥰 Yes

Task #19 🤔


 * Taps the 'info i' button first.
 * Then figures out that she needs to tap the image itself.

Task #20 🥰


 * Mentions the info about the author, licensing and the date is helpful,

Task #21 🥰

Wants to zoom in the image by pinching (but it’s not yet implemented)

Task #22 🥰

Easily swipes to go back to the previous and next suggestions.

Task #23 🥰

Explains all options well.

Task #24 😡 (instructions)

Didn’t know that she could select more than one reason.

Task #25 🥰

Explains all options well.

Task #26 🤔


 * Would click a button to expand the article (RS: Not sure which button she’s referring to as there is none).
 * Then further explains that she would tap the article’s title.

Task #27 🥰

Would tap the 'info i' button to reveal more information

Task #28 🥰


 * Wasn’t 100% sure about it
 * Identifies it as a motivational element

Task #29 (instructions)

Task #30 🥰

Had no issues identifying what’s changed and what the additional number and bar means.

Task #31 (instructions)

Task #32-36 (see rating scale)

Task #37 (open format)

Answer/rating from other people that are using the feature to validate/correct her own choice.

Final thoughts

None

Follow-up questions
 * 1) I think they are being found by tags on the internet? maybe keywords? I would rate overall quality as good, some obviously were off but that makes sense since many places have similar names or multiple meanings.
 * 2) I didn't find anything confusing but at times I was disappointed with myself because I didn't know if one fit into a yes or no. I like to be accurate and sometimes didn't trust my own judgement.
 * 3) I found some very easy and clearly related or not related but some were tricky because I didn't know the subject well.
 * 4) I would be interested but as mentioned verbally, I'd like to know how accurate I am. I just would not want to be adding wrongly and would like to know my accuracy rate.

Participant #2: brad.s

Tasks

Task #1-5 (instructions)

Task #6 🥰

Understands the tooltip at the bottom.

Task #7-8 (instructions)

Task #9 🥰

Explains the 'Train image algorithm' task well and knows where images are positioned (on Desktop Wikipedia). Also knows what the others mean.

Task #10 🥰

Mostly understands the task.

Task #11 🥰

Associates robots with algorithms, even though he thinks that it doesn’t mean anything.

Task #12 (instructions)

Task #13 🥰


 * Understands first onboarding dialog.
 * Opts in to displaying his name on the algorithm training page.

Task #14 (instructions)

Task #15 🥰

Understands the task perfectly due to the tooltip. Says the tooltips are very helpful to understand the task.

Task #16 🥰

Likes that the tooltip mentions the impact that his choice has (help readers understand the topic)

Task #17 🥰

Mentions tooltips were very helpful. Similar to Google Maps feature (e.g. “does this shop have toilet facilities?”)

Task #18 🥰


 * Navigates intuitively between article and file details screen
 * Doesn’t consider 'Suggestion reason'
 * Selects multiple items in 'No' reason
 * Selected 'Not sure' only once


 * 1) 🥰 No, Not enough information — doesn’t check for the details of the image though
 * 2) 🥰 Yes, because of Filename
 * 3) 🥰 No, Not enough information — checks out the detail page of the image
 * 4) 😡 Yes (RS: I don’t think it’s suitable)
 * 5) 🤔 Yes
 * 6) 🥰 No, Not enough information
 * 7) 🥰 No, Not relevant
 * 8) 🥰 Yes
 * 9) 🥰 No, Not enough information, Don’t know the subject (RS: Article excerpt’s too short)
 * 10) 🥰 No, Not relevant, Don’t know this subject
 * 11) 🥰 Yes
 * 12) 🥰 No, Not relevant, Not enough information, Don’t know this subject
 * 13) 🥰 Yes
 * 14) 🥰 No, Other
 * 15) 🥰 No, Not relevant
 * 16) 🥰 No, Not relevant, Other
 * 17) 🥰 Not sure, Not enough information
 * 18) 🥰 Yes

Task #19 🥰

Intuitively taps on image card.

Task #20 🥰

Task #21 🥰

Zooms in the image on the detail page not in the main feed (RS: There are multiple paths to success).

Task #22 🥰

Easily swipes left and right between image suggestions.

Task #23 🥰

Explains all 'Not sure' options well.

Task #24 (instructions)

Task #25 🥰

Explains all 'No' options well.

Task #26 🥰


 * Knows how to scroll
 * Wants to use the pull indicator

Task #27 🥰

Would press the 'info i' icon to reveal more information

Task #28 🥰

Identified what the counter is at the bottom (encouragement).

Task #29 (instructions)

Task #30 🥰

Had no issues identifying what the counter is.

Task #31 (instructions)

Task #32-36 (see rating scale)

Task #37 (open format)

Nothing too specific (audio broke up in the beginning)

Final thoughts

—

Follow-up questions
 * 1) using AI. the suggestions were very good and the majority were accurate.
 * 2) N/A
 * 3) very easy and user intuitive
 * 4) yes, as it is so easy and simple to do

Participant #3: 147qb

 * Uses quite a small phone
 * Uses accessibility setting that increases the font size in the app

Tasks
Task #1-5 (instructions)

Task #6 🤔

Reads it out loud and finds it funny; not sure if he understood the tooltip though.

Task #7-8 (instructions)

Task #9 🥰

More or less describes the tasks correctly.

Task #10 🥰

Understands the task, doesn’t mention AI though.

Task #11 😡

Doesn’t know what it means.

Task #12 (instructions)

Task #13 🥰


 * Understands the first onboarding screen.
 * Identifies a copy error “We would you like (...)”

Task #14 (instructions)

Task #15 🤔

Not sure if he understands completely.

Task #16 🤔

Sees the tooltip.

Task #17 🥰

Taps 'Yes' accidentally but it seems that he understood the task.

Task #18 😡


 * Assumes that he needs to know the article’s topic to make an educated decision about whether the image fits it or not.
 * Is intimidated by making decisions for articles that read like doctoral theses.
 * Does not interact with the article
 * Does not interact with the file detail page


 * 1) 😡 Not sure, Don’t know the subject
 * 2) 😡 Not sure, Not enough information
 * 3) 😡 Not sure, Not enough information
 * 4) 😡 Yes
 * 5) 😡 Yes
 * 6) 😡 Yes
 * 7) 😡 No, not enough information
 * 8) 😡 Not sure, Don’t know the subject
 * 9) 😡 No, Not enough information
 * 10) 😡 No, Don’t know the subject
 * 11) 😡 No, Don’t know the subject
 * 12) 😡 No, Don’t know the subject
 * 13) 😡 No, Don’t know the subject
 * 14) 😡 No, Low quality, Don’t know the subject
 * 15) 😡 No, Don’t know the subject
 * 16) 😡 No, Don’t know the subject
 * 17) 😡 No, Don’t know the subject
 * 18) 😡 Not sure, Not enough information
 * 19) 😡 No, Don’t know the subject
 * 20) 😡 No, Don’t know the subject

Task #19 🥰


 * Taps the image for more information
 * Mentions that he would use Google search to find more about the image.

Task #20 🤔

Task #21 🥰

Zooms the image on the detail page.

Task #22 🥰


 * Was looking for back functionality first
 * Swipes back and forth to navigate between suggestions.

Task #23 🥰


 * Understands the options presented
 * Understood that he can select multiple

Task #24 (instructions)

Task #25 🥰

Understands the options

Task #26 🥰

Realizes that he could scroll the article here

Task #27 😡

Didn’t see the 'info i' at the top right.

Task #28 🥰

More or less understands the concept

Task #29 (instructions)

Task #30 🥰

Notices the added counter

Task #31 (instructions)

Task #32-36 (see rating scale)

Task #37 (open format)

—

Final thoughts

—

Follow-up questions

 * 1) it's a random thing and I would say that the overall quality is good.
 * 2) it's pretty tedious, and it's pretty cumbersome. I guess it has to be because Wikipedia is something that is very serious, and I think it should be taken seriously.
 * 3) pretty hard but time consuming and I would have to say it was well worth it
 * 4) probably not is that I have a weak bladder and I have to use the men's room much much much too often to be a good at any of this. I am sorry take care

Participant #4: TestMaster888

 * Did not notice the 'Suggestion reason'

Tasks
Task #1-5 (instructions)

Task #6 🤔

Doesn’t see the blue tooltip.

Task #7-8 (instructions)

Task #9 🥰

Describes the tasks properly.

Task #10 🥰

Describes the task properly.

Task #11 🥰

Associates the robot with a computer program / algorithm.

Task #12 (instructions)

Task #13 🤔


 * Seems like he understands the first onboarding dialog
 * Doesn’t react to the second onboarding dialog

Task #14 (instructions)

Task #15 🥰


 * Describes what he sees (article and image)
 * Hasn’t really paid attention to the tooltips

Task #16 🤔

Hasn’t really read the tooltips.

Task #17 🥰

Knows what he needs to do now.

Task #18 🥰


 * Makes educated choices
 * Reads meta information of the image
 * Zooms the image intuitively
 * Navigates back and forth on detail page and article
 * Makes educated decisions


 * 1) 🥰 No, Not relevant
 * 2) 🥰 Yes
 * 3) 🥰 Not sure, Not enough information
 * 4) 🥰 No, Not relevant
 * 5) 🤔 Yes
 * 6) 🤔 Yes
 * 7) 🤔 Yes
 * 8) 🥰 Yes
 * 9) 🤔 Yes
 * 10) 🥰 No, Not relevant
 * 11) 🥰 Yes
 * 12) 🥰 Not sure, Not enough information
 * 13) 🥰 Yes
 * 14) 🥰 No, Not relevant, Low quality
 * 15) 🥰 Not sure, Not enough information
 * 16) 🥰 No, Low quality
 * 17) 🤔 Yes
 * 18) 🥰 Yes
 * 19) 🥰 Not sure, Not enough information
 * 20) 🥰 Yes

Task #19 🥰


 * Uses file detail page intuitively
 * Mentions that he’d Google the image

Task #20 🤔

Task #21 🥰

Used pinch to zoom intuitively

Task #22 🥰


 * Taps the button at the top right first
 * Then swipes between suggestions

Task #23 🥰

Explains the options well

Task #24 (instructions)

Task #25 🥰

Explains the options well

Task #26 🥰

Scrolls down with no issues

Task #27 🥰

Taps on the 'info i' icon at the top right

Task #28 🥰

Identifies it as an element of positive reinforcement

Task #29 (instructions)

Task #30 🥰

Sees that the counter has been added

Task #31 (instructions)

Task #32-36 (see rating scale)

Task #37 (open format)

—

Final thoughts

—

Follow-up questions

 * 1) The algorithm is programmed to look at keywords in the article and do an image search based on the keywords and try to return an image that best matches its search parameters. 65% the images I was presented were relevant.
 * 2) Nothing was confusing. though it might work better if the algorithm presented the human editor with more than 1 image choice.
 * 3) very easy.
 * 4) Yes it would be much better than scouring the web for images manually.

Participant #5

Tasks
Task #1-5 (instructions)

Task #6 😡


 * Describes the Explore Feed card
 * Reads the tooltip out loud but does not understand it

Task #7-8 (instructions)

Task #9 🥰

Understands the tasks

Task #10 🥰

Understands the task

Task #11 🥰

Associates the robot icon with an algorithm.

Task #12 (instructions)

Task #13 🥰

Understands both onboarding screens

Task #14 (instructions)

Task #15 🥰

Sees onboarding tooltip and knows what to do.

Task #16 🥰

Reads tooltips and understands what to do.

Task #17 🥰

Knows what to do now.

Task #18 🥰


 * Does not interact with the file detail page at all, probably due to time pressure (mentions that these tests should only be 15 minutes long)
 * Analyzes the information she sees thoroughly
 * Scrolls the article intuitively
 * “Would you add this image to the article?” is cut off → Display issues on smaller screens
 * Selects multiple answers right away


 * 1) 🥰 No, Not relevant
 * 2) 🥰 Yes
 * 3) 🥰 Not sure, Not enough information
 * 4) 🤔 Yes
 * 5) 🥰 Not sure, Not enough information
 * 6) 🥰 Not sure, Not enough information
 * 7) 🥰 Yes
 * 8) 🤔 Not sure, Other
 * 9) 🤔 Not sure, Not enough information
 * 10) 🥰 Not sure, Not enough information
 * 11) 🤔 Yes
 * 12) 🥰 Not sure, Other
 * 13) 🥰 Yes
 * 14) 🥰 No, Low quality
 * 15) 🥰 No, Not relevant
 * 16) 🥰 Not sure, Not enough information
 * 17) 🤔 Yes
 * 18) 🥰 Yes
 * 19) 🥰 Not sure, Not enough information, Don’t know this subject

Task #19 🥰

Taps on the image card

Task #20 🥰


 * Date taken
 * Read full image description
 * Author (to contact them)

Task #21 🥰


 * Tries zooming the image from the feed
 * Taps on the image and zooms in on detail page

Task #22 🥰

Uses swipe gestures

Task #23 🥰

Understands the options

Task #24 (instructions)

Task #25 🥰

Understands the options

Task #26 🥰


 * Tries to tap the article title
 * Also tries to scroll the article
 * Taps the info bubble as the last step

Task #27 🤔

Would tap the back button and look for a help page (RS: A possible way to success)

Task #28 🥰

Likely understands it

Task #29 (instructions)

Task #30 🥰

Sees it and likely understands it

Task #31 (instructions)

Task #32-36 (see rating scale)

Task #37 (open format)


 * Easy access to article
 * More image information

Final thoughts

2021 February 23 - Finalizing Designs ahead of Usability Testing
The Android team has created designs that are currently being turned into a prototype for usability testing prior to deployment.

Once the prototype is created for user testing we will update this page with a link that anyone following along with this project can use and provide us feedback on our talk page.

2021 February 1 - Designs, Product Decisions and APIs
This week the Platform Engineering Team began building the API needed for this project, with completion projected for early March, which is when we hope to deploy the MVP.

There were open Product questions, which the team's new Product Manager answered in T273055.

Initial Product Decisions


 * We will have one suggested image per article instead of multiple images
 * This iteration of the MVP will not include Image Captions
 * There are no language constraints for this task. As long as there is an article available in the language, we will surface it. We want to be deliberate in ensuring this task is completed in a variety of languages. For this MVP to be considered a success, we want the task completed in at least five different languages, including English, an Indic language, and a Latin-script language.
 * We will have a checkpoint two weeks after the launch of the feature to check whether the feature is working properly and whether modifications need to be made to ensure we get answers to our core questions. The checkpoint is not intended to introduce scope creep.
 * We aren't able to filter by article categories in this iteration of the MVP, but it could be a possibility in the future through the PET API
 * We will surface a survey each time a user says no to a match and sparingly surface a survey when a user clicks Not Sure or Skip
 * We need three annotations from 3000 different users on 3000 different matches. By having these three annotations, the tasks will self-grade.
 * We will know people like the task if they return to complete it on three distinct dates. We will compare frequency of return by date across user types to understand whether this task is stickier for more experienced users.
 * Once we pull the data, we will be able to compare the habits of English vs. non-English users. We cannot, and do not need to, show the same image to both non-English and English users; non-English users will have different articles and images. We will know if a task was hard due to language based on the survey responses when users click No or Not sure. We will check task retention to see how popular the task is by language.
 * In order to know if the task is easy or hard, we would like to be able to see how long it takes users to complete it. NOTE: This only works if we can see whether someone backgrounds the app. Of the people that got it right, how long did it take them?
 * In order to know if the task is easy or hard we should also track if they click to see more information about the task, in order to make a decision
 * We determined that it is not worth adding extra clicks to see which metadata is found helpful. Perhaps we allow people to swipe up for more information and it generally provides the metadata? We will need to see designs to compare this.
 * It is too hard, at least for this MVP, to track whether experienced users use this tool to find images that they then add to articles manually outside the tool, so we aren't going to track that.
 * In the designs, we want to track if someone skips or presses No on an image because the image is offensive, in order to learn how often NSFW or offensive material appears
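
The "three annotations per match" decision above amounts to simple majority voting: the per-match consensus can grade each reviewer, which is what makes the tasks self-grading. Below is a minimal sketch of that idea; the vote labels and the `(user, match, vote)` record format are illustrative assumptions, not the MVP's actual instrumentation.

```python
from collections import Counter

def consensus(votes):
    """Majority label among the votes for one match (e.g. 'yes'/'no'/'not sure');
    None if the top labels are tied. Labels are illustrative placeholders."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

def annotator_accuracy(ratings):
    """Grade each user against the per-match consensus.

    `ratings` is a list of (user, match_id, vote) records, a hypothetical
    shape for the MVP's collected annotations.
    """
    by_match = {}
    for _, match_id, vote in ratings:
        by_match.setdefault(match_id, []).append(vote)
    majority = {m: consensus(v) for m, v in by_match.items()}

    correct, total = Counter(), Counter()
    for user, match_id, vote in ratings:
        if majority[match_id] is None:
            continue  # tied votes: match is ungradable, skip it
        total[user] += 1
        if vote == majority[match_id]:
            correct[user] += 1
    return {user: correct[user] / total[user] for user in total}
```

With three annotations per match a tie is impossible when all three votes use the same label set, so nearly every match contributes to each reviewer's accuracy score.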

The Android Designer began work on mockups for the MVP and has started to receive feedback at T269594. The user stories the designer is creating mockups in response to include:

2.1. Discovery
When I am using the Wikipedia Android app, am logged in,

and discover a tooltip about a new edit feature,

I want to be educated about the task,

so I can consider trying it out.

2.2. Education
When I want to try out the image recommendations feature,

I want to be educated about the task,

so my expectations are set correctly.

2.3. Adding images
When I use the image recommendations feature,

I want to see articles without an image,

I want to be presented with a suitable image,

so I can select images to add to multiple articles in a row.

2.4. Positive reinforcement
When I use the image recommendations feature,

I want feedback/encouragement that what I am doing is right/helping,

so that I am motivated to do more.