Growth/Personalized first day/Structured tasks/Add an image

This page describes work on the "add an image" structured task, a type of structured task that the Growth team will offer through the newcomer homepage. The Android team is also considering a similar task for the Wikipedia Android app, using the same underlying components. Additionally, the Structured Data team is in the early stages of exploring something similar aimed at more experienced users, drawing on Structured Data on Commons. The discussion and updates on this page are relevant to the work of all these teams.

This page contains important assets, designs, open questions, and decisions.

Most incremental updates on progress will be posted on the general Growth team updates page, with some major or detailed updates posted here.

Current status

 * 2020-06-22: initial thinking on ideas for building a simple algorithm to recommend images
 * 2020-09-08: evaluated the first attempt at a matching algorithm in English, French, Arabic, Korean, Czech, and Vietnamese
 * 2020-09-30: evaluated a second attempt at a matching algorithm in English, French, Arabic, Korean, Czech, and Vietnamese
 * 2020-10-26: internal engineering discussion on the potential feasibility of an image suggestion service
 * 2020-12-15: ran the first round of user tests to begin to understand whether newcomers would succeed with this task
 * 2021-01-20: Platform Engineering team began building a proof-of-concept API for image suggestions
 * 2021-01-21: Android team began work on a minimum viable version for the purposes of learning
 * 2021-01-28: posted user test results
 * 2021-02-04: posted a summary of community discussions and coverage statistics

Summary
Structured tasks are meant to break editing tasks down into step-by-step workflows that make sense for newcomers and make sense on mobile devices. The Growth team believes that introducing these new kinds of editing workflows will allow more new people to begin participating on Wikipedia, some of whom will learn to make more substantial edits and get involved with their communities. After discussing the idea of structured tasks with communities, we decided to build the first structured task: "add a link".

Even while building that first task, we were thinking about what the next structured task might be, and we think "add an image" could be a good fit for newcomers. The idea is that a simple algorithm would recommend images from Commons for placement on articles that lack them. To start, it would use only existing connections found on Wikidata, and newcomers would use their own judgment to place or not place the image in the article.

We know there are many open questions about how this would work, and many potential reasons it might not go right. We therefore hope to hear from many community members and to have an ongoing discussion as we decide how to proceed.

Why images?
Looking for substantial contributions

When we first discussed structured tasks with community members, many pointed out that adding wikilinks is not an especially high-value type of edit. Community members raised ideas for how newcomers could make more substantial contributions. One idea is images. Wikimedia Commons contains 65 million images, yet on most Wikipedias more than 50% of articles have no image. We believe that many images from Commons could make the Wikipedias substantially better illustrated.

Newcomer interest

We know that many newcomers are interested in adding images to Wikipedia. "Adding an image" is a common response newcomers give in the welcome survey when asked why they created their account. We also see that one of the most common help panel questions, across all the wikis we work with, is about how to add images. Though many of these newcomers are probably bringing their own images that they want to add, this gives us a hint that images can be engaging and exciting. It makes sense given the image-heavy nature of other platforms newcomers participate in, such as Instagram and Facebook.

The difficulty of working with images

Many of the help panel questions about images reflect how difficult the process of adding them to articles is. Newcomers have to understand the difference between Wikipedia and Commons, the rules around copyright, and the technical parts of placing and captioning the image correctly. Finding an image on Commons for an unillustrated article requires even more skills, such as knowledge of Wikidata and categories.

The success of the "Wikipedia Pages Wanting Photos" campaign

Wikipedia Pages Wanting Photos (WPWP) was a surprising success: 600 users added images to 85,000 pages. They did this with the help of a couple of community tools that identify pages lacking images and suggest possible images via Wikidata. Though there are important lessons to learn about how to help newcomers succeed at adding images, this gives us confidence that users can be enthusiastic about adding images and can be supported by tools.

Putting it all together

Considering all of this information together, we think it may be possible to build an "add an image" structured task that is both fun for newcomers and productive for Wikipedians.

Algorithm
Our ability to build a structured task for adding images depends on whether we can build an algorithm that produces good enough suggestions. We definitely do not want to encourage newcomers to add incorrect images to articles, creating work for the patrollers who would have to clean up after them. Therefore, one of the first things we worked on is seeing whether we can make a good algorithm.

Logic
We are working with the Wikimedia Research team, and so far we are testing an algorithm that prioritizes accuracy and human judgment. Rather than using any computer vision, which could produce unexpected results, it only draws together existing information from Wikidata, taking advantage of connections made by experienced contributors. These are the three main ways it proposes matches for unillustrated articles:


 * Look at the article's Wikidata item. If it has an image (P18), select that image.
 * Look at the article's Wikidata item. If it has an associated Commons category (P373), select an image from that category.
 * Look at Wikipedia articles in other languages on the same subject. Select a lead image from those articles.

The algorithm also contains logic to do things like exclude images that are likely icons, or that are present in an article as part of a navbox.
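A minimal sketch of that three-step priority, operating on a Wikidata entity dict of the shape returned by the wbgetentities API. This is an illustration, not the Research team's implementation; the `image_from_commons_category` and `lead_image_of` helpers are hypothetical stand-ins for lookups the sketch does not perform:

```python
# Sketch of the matching priority over a Wikidata entity dict.
# Hypothetical helpers: image_from_commons_category() and lead_image_of()
# stand in for Commons/Wikipedia lookups this sketch does not implement.

def first_claim_value(entity, prop):
    """Return the first value of a property claim, or None."""
    for claim in entity.get("claims", {}).get(prop, []):
        snak = claim.get("mainsnak", {})
        if snak.get("snaktype") == "value":
            return snak["datavalue"]["value"]
    return None

def suggest_image(entity, lead_image_of=None, image_from_commons_category=None):
    """Propose an image for an unillustrated article, in priority order."""
    # 1. Image statement (P18) on the article's Wikidata item.
    image = first_claim_value(entity, "P18")
    if image:
        return image
    # 2. An image drawn from the item's Commons category (P373).
    category = first_claim_value(entity, "P373")
    if category and image_from_commons_category:
        image = image_from_commons_category(category)
        if image:
            return image
    # 3. A lead image from the same article in another language.
    if lead_image_of:
        for sitelink in entity.get("sitelinks", {}):
            image = lead_image_of(sitelink)
            if image:
                return image
    return None
```

An entity that carries a P18 claim short-circuits at step 1; the category and interwiki fallbacks are only consulted when earlier steps produce nothing, mirroring the priority order above.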

Accuracy
As of December 2020, we have run two rounds of testing of the algorithm, each time looking at matches with articles in six languages: English, French, Arabic, Vietnamese, Czech, and Korean. The evaluations were done by our team's ambassadors, who are native speakers of those languages. In each round, we reviewed 50 matches in each language and sorted them into groups by match quality.

A question that runs throughout the work on an algorithm like this is: how accurate does it need to be? If 75% of matches are good, is that enough? Does it need to be 90% accurate? Or could it be as low as 50% accurate? This depends on how good the judgment of the newcomers using it is, and how much patience they have for weak matches. We'll learn more about this when we user test the algorithm with real newcomers.

In the first evaluation, the most important thing is that we found a lot of easy improvements to make to the algorithm, including types of articles and images to exclude. Even without those improvements, about 20-40% of matches were "2s", meaning great matches for the article (depending on the wiki). You can see the full results and notes from the first evaluation here.

For the second evaluation, many improvements were incorporated, and the accuracy increased. Between 50-70% of matches were "2s" (depending on the wiki). But increasing the accuracy can decrease the coverage, i.e. the number of articles for which we can make matches. Using conservative criteria, the algorithm may only be able to suggest tens of thousands of matches in a given wiki, even if that wiki has hundreds of thousands or millions of articles. We believe that kind of volume would be sufficient to build an initial version of this feature. You can see the full results and notes from the second evaluation here.

We are continuing to make improvements to the algorithm, and in December 2020, we are running a third evaluation, which you can follow along with here.

Coverage
The accuracy of the algorithm is clearly a very important component. Equally important is its "coverage" -- this refers to how many image matches it can make. Accuracy and coverage tend to be inversely related: the more accurate an algorithm, the fewer suggestions it will make (because it is only making suggestions when it is confident). We need to answer these questions: is the algorithm able to provide enough matches that it is worthwhile to build a feature with it? Would it be able to make a substantial impact on wikis? We looked at 22 Wikipedias to get a sense of the answers. The table is below these summary points:


 * The coverage numbers reflected in the table seem to be sufficient for a first version of an "add an image" feature. There are enough candidate matches in each wiki such that (a) users won't run out, and (b) a feature could make a substantial impact on how illustrated a wiki is.
 * Wikis range from 20% unillustrated (Serbian) to 69% unillustrated (Vietnamese).
 * We can find between 7,000 (Bengali) and 155,000 (English) unillustrated articles with match candidates. In general, this is a sufficient volume for a first version of the task, so that users have plenty of matches to do. In some of the sparser wikis, like Bengali, it might get into small numbers once users narrow to topics of interest. That said, Bengali only has about 100,000 total articles, so we would be proposing matches for 7% of them, which is substantial.
 * In terms of how big of an improvement in illustrations we could make to the wikis with this algorithm, the ceiling ranges from 1% (cebwiki) to 9% (trwiki). That is the overall percentage of additional articles that would wind up with illustrations if every match is good and is added to the wiki.
 * The wikis with the lowest percentage of unillustrated articles for which we can find matches are arzwiki and cebwiki, which both have a high volume of bot-created articles. This makes sense because many of those articles are of specific towns or species that wouldn't have images in Commons. But because those wikis have so many articles, there are still tens of thousands for which the algorithm has matches.
 * In the farther future, we hope that improvements to the image matching algorithm, or to MediaSearch, or to workflows for uploading/captioning/tagging images yield more candidate matches.

Open questions
Images are such an important and visible part of the Wikipedia experience. It is critical that we think hard about how a feature enabling the easy adding of images would work, what the potential pitfalls might be, and what the implications would be for community members. To that end, we have many open questions, and we want to hear additional ones that community members can bring up.


 * Will our algorithm be sufficiently accurate such that plenty of good matches are provided?
 * What metadata from Commons and the unillustrated article do newcomers need in order to make a decision about whether to add the image?
 * Will newcomers have sufficiently good judgment when looking at recommendations?
 * Will newcomers who don't read English be equally able to make good decisions, given that much of Commons metadata is in English?
 * Will newcomers be able to write good captions to go along with images that they place in the articles?
 * How much should newcomers judge images based on their "quality" as opposed to their "relevance"?
 * Will newcomers think this task is interesting? Fun? Difficult? Easy? Boring?
 * How exactly should we determine which articles have no images?
 * Where in the unillustrated article should the image be placed? Is it sufficient to put it at the top of the article?
 * How can we be mindful of potential bias in the recommendations? For instance, the algorithm may make many more matches for topics in Europe and North America.
 * Will such a workflow be a vector for vandalism? How can this be prevented?

Notes from community discussions 2021-02-04
Starting in December 2020, we invited community members to talk about the "add an image" idea in five languages (English, Bengali, Arabic, Vietnamese, Czech). The English discussion mostly took place on the discussion page here, with local language conversations on the other four Wikipedias. We heard from 28 community members, and this section summarizes some of the most common and interesting thoughts. These discussions are heavily influencing our next set of designs.


 * Overall: community members are generally cautiously optimistic about this idea. In other words, people seem to agree that it would be valuable to use algorithms to add images to Wikipedia, but that there are many potential pitfalls and ways this can go wrong, especially with newcomers.
 * Algorithm
 * Community members seemed to have confidence in the algorithm because it is only drawing on associations coded into Wikidata by experienced users, rather than some sort of unpredictable artificial intelligence.
 * Of the three sources for the algorithm (Wikidata P18, interwiki links, and Commons categories), people agreed that Commons categories are the weakest (and that Wikidata is the strongest). This has borne out in our testing, and we may exclude Commons categories from future iterations.
 * We got good advice on excluding certain kinds of pages from the feature: disambiguation pages, lists, years, and good and featured articles. We may also want to exclude biographies of living persons.
 * We should also exclude images that have a deletion template on Commons, or that have previously been removed from the Wikipedia page.
 * Newcomer judgment
 * Community members were generally concerned that newcomers would apply poor judgment and give the algorithm the benefit of the doubt. We know from our user tests that newcomers are capable of using good judgment, and we believe that the right design will encourage it.
 * In discussing the Wikipedia Pages Wanting Photos campaign (WPWP), we learned that while many newcomers were able to exhibit good judgment, some overzealous users can make many bad matches quickly, causing lots of work for patrollers. We may want to add some sort of validation to prevent users from adding images too fast, or from continuing to add images after being repeatedly reverted.
 * Most community members affirmed that "relevance" is more important than "quality" when it comes to whether an image belongs. In other words, if the only photo of a person is blurry, that is usually still better than having no image at all.  Newcomers need to be taught this norm as they do the task.
 * Our interface should convey that users should move slowly and take care, as opposed to trying to get as many matches done as they can.
 * We should teach users that images should be educational, not merely decorative.
 * User interface
 * Several people proposed that we show users several image candidates to choose from, instead of just one. This would make it more likely that good images are attached to articles.
 * Many community members recommended that we allow newcomers to choose topic areas of interest (especially geographies) for articles to work with. If newcomers choose areas where they have some knowledge, they may be able to make stronger choices.  Fortunately, this would automatically be part of any feature the Growth team builds, as we already allow users to choose between 64 topic areas when choosing suggested edit tasks.
 * Community members recommend that newcomers should see as much of the article context as possible, instead of just a preview. This will help them understand the gravity of the task and have plenty of information to use in making their judgments.
 * Placement in the article
 * We learned about Wikidata infoboxes. We learned that for wikis that use them, the preference is for images to be added to Wikidata, instead of to the article, so that they can show up via the Wikidata infobox.  In this vein, we will be researching how common these infoboxes are on various wikis.
 * In general, it sounds like a rule of "place an image under the templates and above the content" in an article will work most of the time.
 * Some community members advised us that even if placement in an article isn't perfect, other users will happily correct the placement, since the hard work of finding the right image will already be done.
 * Non-English users
 * Community members reminded us that some Commons metadata elements can be language agnostic, like captions and depicts statements. We looked at exactly how common that was in this section.
 * We heard the suggestion that even if users aren't fluent with English, they may still be able to use the metadata if they can read Latin characters. This is because to make many of the matches, the user is essentially just looking for the title of the article somewhere in the image metadata.
 * Someone also proposed the idea of using machine translation (e.g. Google Translate) to translate metadata to the local language for the purposes of this feature.
 * Captions
 * Community members (and Growth team members) are skeptical about the ability of newcomers to write appropriate captions.
 * We received advice to show users example captions, and guidelines tailored to the type of article being captioned.
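As a rough illustration of the "place an image under the templates and above the content" rule discussed above, the following sketch finds the offset just past an article's leading {{...}} template blocks (infoboxes, maintenance tags). It is a simplification for illustration only: real wikitext also involves comments, tables, and per-wiki conventions that an actual implementation would have to handle.

```python
# Sketch of the "under the templates, above the content" placement rule:
# scan past any balanced {{...}} blocks at the top of the wikitext and
# return the offset where an [[File:...]] could be inserted.
# Simplified: ignores HTML comments, tables, and other leading markup.

def image_insert_offset(wikitext):
    """Return the character offset just past the leading templates."""
    i, n = 0, len(wikitext)
    while True:
        # Skip whitespace between leading templates.
        while i < n and wikitext[i] in " \t\n":
            i += 1
        if not wikitext.startswith("{{", i):
            return i
        # Consume one balanced {{...}} block, counting nesting depth.
        depth = 0
        while i < n:
            if wikitext.startswith("{{", i):
                depth += 1
                i += 2
            elif wikitext.startswith("}}", i):
                depth -= 1
                i += 2
                if depth == 0:
                    break
            else:
                i += 1
```

Usage would look like `txt[:offset] + "[[File:Example.jpg|thumb]]\n" + txt[offset:]`, placing the image below any infobox or maintenance templates and above the first paragraph.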

Plan for user testing


Thinking about the open questions above, in addition to community input, we want to generate some quantitative and qualitative information to help us evaluate the feasibility of building an "add an image" feature. Though we have been evaluating the algorithm amongst staff and Wikimedians, it is important to see how newcomers react to it, and to see how they use their judgment when deciding on whether an image belongs in an article.

To that end, we are going to run tests with usertesting.com, in which people new to Wikipedia editing can go through potential image matches in a prototype and respond "Yes", "No", or "Unsure". We built a quick prototype for the test, backed with real matches from the current algorithm. The prototype just shows one match after another, all in a feed. The images are shown along with all the relevant metadata from Commons:


 * Filename
 * Size
 * Date
 * User
 * Description
 * Caption
 * Categories
 * Tags

Though this may not be what the workflow would be like for real users in the future, the prototype was made so that testers could go through lots of potential matches quickly, generating lots of information.

To try out the interactive prototype, use this link. Note that this prototype is primarily for viewing the matches from the algorithm -- we have not yet thought hard about the actual user experience. It does not actually create any edits. It contains 60 real matches proposed by the algorithm.

Here's what we'll be looking for in the test:


 * 1) Are participants able to confidently confirm matches based on the suggestions and data provided?
 * 2) How accurate are participants at evaluating suggestions? Do they think they are doing a better or worse job than they are actually doing?
 * 3) How do participants feel about the task of adding images to articles this way? Do they find it easy/hard, interesting/boring, rewarding/irrelevant?
 * 4) What information do participants find most valuable in helping them evaluate image and article matches?
 * 5) Are participants able to write good captions for images they deem a match using the data provided?

Concept A vs. B
In thinking about design for this task, we have a similar question as we faced for "add a link" with respect to Concept A and Concept B. In Concept A, users would complete the edit at the article, while in Concept B, they would do many edits in a row all from a feed. Concept A gives the user more context for the article and editing, while Concept B prioritizes efficiency.

In the interactive prototype above, we used Concept B, in which the users proceed through a feed of suggestions. We did that because in our user tests we wanted to see many examples of users interacting with suggestions. That's the sort of design that might work best for a platform like the Wikipedia Android app. For the Growth team's context, we're thinking more along the lines of Concept A, in which the user does the edit at the article. That's the direction we chose for "add a link", and we think that it could be appropriate for "add an image" for the same reasons.

Single vs. Multiple
Another important design question is whether to show the user a single proposed image match, or give them multiple images matches to choose from. When giving multiple matches, there's a greater chance that one of the matches is a good one. But it also may make users think they should choose one of them, even if none of them are good. It will also be a more complicated experience to design and build, especially for mobile devices. We have mocked up three potential workflows:


 * Single: in this design, the user is given only one proposed image match for the article, and they only have to accept or reject it. It is simple for the user.
 * Multiple: this design shows the user multiple potential matches, and they could compare them and choose the best one, or reject all of them. A concern would be if the user feels like they should add the best one to the article, even if it doesn't really belong.
 * Serial: this design offers multiple image matches, but the user looks at them one at a time, records a judgment, and then chooses a best one at the end if they indicated that more than one might match. This might help the user focus on one image at a time, but adds an extra step at the end.



User tests December 2020
Background

During December 2020, we used usertesting.com to conduct 15 tests of the mobile interactive prototype. The prototype contained only a rudimentary design, little context or onboarding, and was tested only in English with users who had little or no previous Wikipedia editing experience. We deliberately tested a rudimentary design earlier in the process so that we could gather lots of learnings. The primary questions we wanted to address with this test were around feasibility of the feature as a whole, not around the finer points of design:


 * 1) Are participants able to confidently confirm matches based on the suggestions and data provided?
 * 2) How accurate are participants at evaluating suggestions? And how does the actual aptitude compare to their perceived ability in evaluating suggestions?
 * 3) How do participants feel about the task of adding images to articles this way? Do they find it easy/hard, interesting/boring, rewarding/irrelevant?
 * 4) What metadata do participants find most valuable in helping them evaluate image and article matches?
 * 5) Are participants able to write good captions for images they deem a match using the data provided?

In the test, we asked participants to annotate at least 20 article-image matches while talking out loud. When they tapped yes, the prototype asked them to write a caption to go along with the image in the article. Overall, we gathered 399 annotations.

Summary

We think that these user tests confirm that we could successfully build an "add an image" feature, but it will only work if we design it right. Many of the testers understood the task well, took it seriously, and made good decisions -- this gives us confidence that this is an idea worth pursuing. On the other hand, many other users were confused about the point of the task, did not evaluate as critically, and made weak decisions -- but for those confused users, it was easy for us to see ways to improve the design to give them the appropriate context and convey the seriousness of the task.

Observations

''To see the full set of findings, feel free to browse the slides. The most important points are written below the slides.''
 * General understanding of the task matching images to Wikipedia articles was reasonably good, given the minimal context provided for the tool and limited knowledge of Commons and Wikipedia editing. There are opportunities to boost understanding once the tool is redesigned in a Wikipedia UX.
 * The general pattern we noticed was: a user would look at an article's title and first couple sentences, then look at the image to see if it could plausibly match (e.g. this is an article about a church and this is an image of a church). Then they would look for the article's title somewhere in the image metadata, either in the filename, description, caption, or categories.  If they found it, they would confirm the match.
 * Each image matching task could be done quickly by someone unfamiliar with editing. On average, it took 34 seconds to review an image.
 * All participants said they would be interested in doing such a task, with a majority rating it as easy or very easy.
 * Perceived quality of the images and suggestions was mixed. Many participants focused on the image composition and other aesthetic factors, which affected their perception of the suggestion accuracy.
 * Only a few pieces of image metadata from Commons were critical for image matching: filename, description, caption, categories.
 * Many participants would, at times, incorrectly try to match an image to its own metadata, rather than to the article (e.g. "Does this filename seem right for the image?"). Layout and visual hierarchy changes to better focus on the article context for the suggested image should be explored.
 * “Streaks” of good matches made some participants more complacent with accepting more images -- if many in a row were "Yes", they stopped evaluating as critically.
 * Users did a poor job of adding captions. They frequently would write their explanation for why they matched the image, e.g. "This is a high quality photo of the guy in the article." This is something we believe can be improved with design and explanation for the user.

Metrics


 * Members of our team annotated all the image matches that were shown to users in the test, and we recorded the answers the users gave. In this way, we developed some statistics on how good of a job the users did.
 * Of the 399 suggestions users encountered, they tapped "Yes" 192 times (48%).
 * Of those, 33 were not good matches, and might be reverted were they to be added to articles in reality. This is 17%, and we call this the "likely revert rate".
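The two headline rates follow directly from the counts above; a quick check, using only figures stated in this section:

```python
# Counts from the December 2020 user test (15 sessions, 399 annotations).
suggestions = 399   # image matches shown to participants
accepted = 192      # times participants tapped "Yes"
bad_accepts = 33    # accepted matches our team judged not good

acceptance_rate = accepted / suggestions      # ~0.48, i.e. 48%
likely_revert_rate = bad_accepts / accepted   # ~0.17, the "likely revert rate"

print(f"accepted {acceptance_rate:.0%}, likely reverts {likely_revert_rate:.0%}")
```

Note that the likely revert rate is computed over accepted matches only, not over all 399 suggestions shown.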

Takeaways


 * The "likely revert rate" of 17% is a really important number, and we want this to be as low as possible. On the one hand, this number is close to or lower than the average revert rate for newcomer edits in Wikipedia (English is 36%, Arabic is 26%, French is 22%, Vietnamese is 11%).  On the other hand, images are higher impact and higher visibility than small changes or words in an article.  Taking into account the kinds of changes we would make to the workflow we tested (which was optimized for volume, not quality), we think that this revert rate would come down significantly.
 * We think that this task would work much better in a workflow that takes the user to the full article, as opposed to quickly shows them one suggestion after another in the feed. By taking them to the full article, the user would see much more context to decide if the image matches and see where it would go in the article.  We think they would absorb the importance of the task: that they will actually be adding an image to a Wikipedia article.  Rather than going for speed, we think the user would be more careful when adding images.  This is the same decision we came to for "add a link" when we decided to build the "Concept A" workflow.
 * We also think outcomes will be improved with onboarding, explanation, and examples. This is especially true for captions.  We think if we show users some examples of good captions, they'll realize how to write them appropriately.  We could also prompt them to use the Commons description or caption as a starting point.
 * Our team has lately been discussing whether it would be better to adopt a "collaborative decision" framework, in which an image would not be added to an article until two users confirm it, rather than just one. This would increase the accuracy, but raises questions around whether such a workflow aligns with Wikipedia values, and which user gets credit for the edit.

Metadata
The user tests showed us that image metadata from Commons (e.g. filename, description, caption, etc.) is critical for a user to confidently make a match. For instance, though the user can see that the article is about a church, and that the photo is of a church, the metadata allowed them to tell if it is the church discussed in the article. In the user tests, we saw that these items of metadata were most important: filename, description, caption, categories. Items that were not useful included size, upload date, and uploading username.

Given that metadata is a critical part of making a strong decision, we have been thinking about whether users will need to have metadata in their own language in order to do this task, especially in light of the fact that the majority of Commons metadata is in English. For 22 wikis, we looked at the percentage of the image matches from the algorithm that have metadata elements in the local language. In other words, for the images that can be matched to unillustrated articles in Arabic Wikipedia, how many of them have Arabic descriptions, captions, and depicts? The table is below these summary points:


 * In general, local language metadata coverage is very low. English is the exception.
 * For all wikis except English, fewer than 7% of image matches have local language descriptions (English is at 52%).
 * For all wikis except English, fewer than 0.5% of image matches have local language captions (English is at 3.6%).
 * For depicts statements, the wikis range between 3% (Serbian) and 10% (Swedish) coverage for their image matches.
 * The low coverage of local language descriptions and captions means that in most wikis, there are very few images we could suggest to users with local language metadata. Some of the larger wikis have a few thousand candidates with local language descriptions.  But no non-English wikis have over 1,000 candidates with local language captions.
 * Though depicts coverage is higher, we expect that depicts statements don’t usually contain sufficient detail to positively make a match. For instance, a depicts statement applied to a photo of St. Paul’s Church in Chicago is much more likely to be “church”, than “St. Paul’s Church in Chicago”.
 * We may want to prioritize image suggestions with local language metadata in our user interfaces, but until other features are built to increase the coverage, relying on local languages is not a viable option for these features in non-English wikis.

Given that local-language metadata has low coverage, our current idea is to offer the image matching task only to those users who can read English, which we could determine by asking the user a quick question before they begin the task. This unfortunately limits how many users could participate. It's a similar situation to the Content Translation tool, in that users need to know the language of both the source wiki and the destination wiki in order to move content from one to the other. We also believe there will be sufficient numbers of these users, based on results from the Growth team's welcome survey, which asks newcomers which languages they know. Depending on the wiki, between 20% and 50% of newcomers select English.
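One way the idea of prioritizing suggestions with local-language metadata could look in practice: a sketch that sorts candidate matches so the scarce candidates carrying local-language descriptions and captions surface first, without discarding the far more numerous English-only ones. The candidate dict shape here is assumed purely for illustration.

```python
# Hypothetical sketch: rank candidate image matches so that images with
# local-language metadata appear first. The candidate structure
# (per-language "descriptions" and "captions" dicts) is assumed.

def metadata_score(candidate, lang):
    """Score a candidate by how much local-language metadata it carries."""
    score = 0
    if lang in candidate.get("descriptions", {}):
        score += 2   # descriptions are the most useful for judging a match
    if lang in candidate.get("captions", {}):
        score += 1
    return score

def prioritize(candidates, lang):
    """Stable sort: richer local metadata first, original order otherwise."""
    return sorted(candidates, key=lambda c: -metadata_score(c, lang))

candidates = [
    {"file": "A.jpg", "descriptions": {"en": "a church"}},
    {"file": "B.jpg", "descriptions": {"ar": "كنيسة"}, "captions": {"ar": "كنيسة"}},
]
# prioritize(candidates, "ar") surfaces B.jpg first
```

Because the sort is stable, wikis with almost no local-language metadata simply see the original ordering, so this would degrade gracefully rather than filter anything out.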

Android MVP
After lots of community discussion, many internal discussions, and the user test results from above, we believe that this "add an image" idea has enough potential to continue to pursue. Community members have been generally positive, but also cautionary -- we also know that there are still many concerns and reasons the idea might not work as expected. The next step we want to take in order to learn more is to build a "minimum viable product" (MVP) for the Wikipedia Android app. The most important thing about this MVP is that it will not save any edits to Wikipedia. Rather, it will only be used to gather data, improve our algorithm, and improve our design.

The Android app is where "suggested edits" originated, and that team has a framework to build new task types easily. These are the main pieces:


 * The app will have a new task type that users know is only for helping us improve our algorithms and designs.
 * It will show users image matches, and they will select "Yes", "No", or "Skip".
 * We'll record the data on their selections to improve the algorithm, determine how to improve the interface, and think about what might be appropriate for the Growth team to build for the web platform later on.
 * No edits will happen to Wikipedia, making this a very low-risk project.

The Android team will be working on this in February and March 2021, hopefully allowing the Growth team to begin learning quickly.

Engineering
This section contains links on how to follow along with technical aspects of this project:


 * Work on the "proof of concept" API by the Platform Engineering team, built to back the Android MVP
 * Phabricator tasks around the Android team's MVP
 * Phabricator tasks and evaluations of the image matching algorithm