Growth/Personalized first day/Structured tasks/vi

This page describes the Growth team's work on the "structured tasks" project, a project related to the "newcomer tasks" and "newcomer homepage" projects. This page contains major assets, designs, open questions, and decisions. Most incremental updates will be posted on the general Growth team updates page, with some larger or more detailed updates posted here.

Current status

 * 2020-05-01: initial planning and notes
 * 2020-05-17: begin community discussion
 * 2020-05-29: initial wireframes
 * 2020-08-24: week of planning meetings
 * 2020-09-08: call for community discussion on latest designs
 * 2020-10-21: user testing of desktop designs

Summary
The Growth team deployed the "newcomer tasks" project in November 2019, which gives newcomers a feed of suggested articles to edit on their newcomer homepage. As of April 2020, the suggested articles are sourced only from articles that have maintenance templates applied by experienced editors, which do not give newcomers particular direction on which sentences, words, or sections specifically need attention. Despite this lack of direction, we are happy to see that newcomers have been making productive suggested edits.

Although maintenance templates supply a diverse range of edits for newcomers, they may be too broad and open-ended to help newcomers succeed. And on mobile devices, the visual or wikitext editing interfaces may overwhelm newcomers on small screens.

Therefore, we want to test an idea called "structured tasks". This means breaking an editing workflow down into a series of steps that newcomers can complete easily. Following the successful examples set by the work of the Android and Language teams, we think newcomers will find it easier to make these kinds of edits on mobile devices, helping more newcomers make more edits. Newcomers would be able to access these structured tasks as part of the newcomer tasks project.

Editing is a complicated task
Through the Growth team's experience, we believe that a newcomer's first moments on the wiki can quickly determine whether they want to stay or leave. We also believe that newcomers are more likely to stay when they can quickly make an edit and have a positive experience. But contributing to Wikipedia -- almost any kind of contribution -- is complicated, which makes it hard for newcomers to succeed quickly. For instance, there are about a dozen steps required to do something as simple as adding a sentence to an article:


 * 1) Search for the right article.
 * 2) Check whether the information you want to add is already in the article.
 * 3) Choose the section where you want to add the sentence.
 * 4) Click to begin editing.
 * 5) Type the sentence in the right place.
 * 6) Click the citation button.
 * 7) Go back to the source to get the link or citation information.
 * 8) Fill in and save the citation.
 * 9) Click "publish changes".
 * 10) Fill in an edit summary.
 * 11) Publish.

A newcomer looking at the visual or wikitext editor for the first time doesn't know what those steps are, what order to do them in, or which buttons to click to accomplish them. In other words, their experience is not "structured". They may feel overwhelmed and leave. Or they may proceed by trial and error, make mistakes, and receive negative feedback from experienced users. That's what this project is about: how can we help newcomers walk through this workflow in the right order?

Building on learnings from other teams
Adding structure to editing workflows has long been part of the Wikimedia projects. Here are some examples:


 * HotCat: lets users choose categories to add to an article with just a few clicks, instead of editing wikitext by hand.
 * Commons Upload Wizard: breaks the process of uploading media to Commons into a series of simple steps.
 * Citoid: available in the visual editor, this tool breaks the process of adding a citation into steps that include algorithms to automatically generate citation text and templates.

Most recently, the idea of "structured tasks" has worked well in the Wikipedia Android app and in the Content Translation tool. Their work has inspired ours.

With its "suggested edits" project, the Android team broke the process of adding an article description to a Wikipedia article down into the easy step of typing into a dialog box. They then did the same for translating article descriptions across languages. To accomplish the same task without a structured workflow, a user would have to go to Wikidata and go through many separate steps. The team learned that this approach works: many Android users have made hundreds of these small contributions.

The Language team built the Content Translation tool, which does several things to structure the process of translating an article. It offers a side-by-side interface built for translations, it breaks the translation down into sections, and it automatically applies machine translation algorithms. Though Wikipedians could translate articles before the existence of the tool, the number of manual steps required made it very difficult. This tool is successful, with hundreds of thousands of translations completed. We learned that when translating an article is broken down into steps, with rote parts (e.g. running machine translation) taken care of automatically, more articles get translated.

The Growth team is thinking about applying these same principles to content edits in articles, like adding links, adding images, adding references, and adding sentences.

Sketching a structured task
The best way to explain how we're thinking about structured tasks may be through showing a quick sketch. The first structured task we've thought about is "add a (wiki)link". But the same ideas could apply to structured tasks for "add an image", "add a reference", or even "add a fact".

In the newcomer tasks feature, lots of newcomers complete "add a (wiki)link" tasks -- in which they add internal blue links in articles that don't have many. This seems like a simple editing task to get started. But we think that many newcomers may not understand how to go through the steps of adding a link and may not know which words to make into links. We're imagining a workflow that walks them through it, step-by-step, with the assistance of an algorithm that can guess which words or phrases might make the best links.

In the sketch below, the newcomer arrives on an article, and is given a suggestion of a word that might make a good (wiki)link. If they agree that it should be made a link, they are walked through the steps of making the link. This will hopefully teach them to add links on their own in the future -- and perhaps they'll enjoy continuing to receive these algorithmic link suggestions. Regarding the algorithm, the WMF Research team has done some preliminary work that makes us confident that such an algorithm is possible.
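To make the idea concrete, here is a deliberately simplified, hypothetical sketch of how a link-suggestion algorithm might propose candidates: matching phrases in a paragraph against a set of known article titles. The actual algorithm developed by the WMF Research team is machine-learning based and far more sophisticated; the function and data below are illustrative only.

```python
# Hypothetical sketch of naive wikilink candidate generation: flag phrases
# that match existing article titles. The real algorithm is ML-based; this
# only illustrates the shape of the problem.
def suggest_links(paragraph, article_titles, max_ngram=3):
    """Return (phrase, start_word, end_word) spans matching known titles."""
    words = paragraph.split()
    suggestions = []
    taken = set()  # word indices already consumed by a longer match
    # Prefer longer phrases first, so "New York City" beats "New York".
    for n in range(max_ngram, 0, -1):
        for i in range(len(words) - n + 1):
            if any(j in taken for j in range(i, i + n)):
                continue
            phrase = " ".join(words[i:i + n]).strip(".,;:()")
            if phrase.lower() in article_titles:
                suggestions.append((phrase, i, i + n))
                taken.update(range(i, i + n))
    return suggestions

titles = {"machine learning", "wikipedia", "python"}
text = "She studied machine learning before contributing to Wikipedia."
print(suggest_links(text, titles))
# → [('machine learning', 2, 4), ('Wikipedia', 7, 8)]
```

Even this naive version shows why human review matters: string matching cannot tell whether a mention is contextually worth linking, which is exactly the judgment the structured task asks the newcomer to apply.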



In thinking further about this, we sketched a second idea. Instead of aiming to teach the newcomer to add links using the visual editor, this next workflow lets the user quickly confirm or reject recommendations from the algorithm, directly editing the article. While it does not teach them how to add links via the editor, it might help a newcomer edit at high volume, and might be a better fit for a user trying to be productive with simple tasks while on the go. Or it might be a good fit for users who are only interested in very simple edits, similar to how the Android app has many editors who only want to write title descriptions.



In thinking about structured tasks, this looks like it might be a big question: should workflows be aimed more toward teaching newcomers to use the traditional tools, or more toward enabling newcomers to do easy edits at higher volume?

Why this idea is prioritized
We think that quickly making productive edits is what leads to newcomer success. Once they've done some edits, the rest of the wiki experience quickly becomes richer. Newcomers can then see their impact, get thanked, ask informed questions to their mentors, create their userpage, etc. Therefore, we want lots of newcomers to make their first edits as soon as possible. We have already seen from the newcomer tasks project that many newcomers are looking for easy tasks to do. But we also have observed these things:


 * Only about 25% of the newcomers who click on a suggestion actually edit it.
 * Only about 25% of those who do a suggested edit do another one.
 * There are a handful of newcomers who really thrive on suggested edits, doing dozens of them every day. This shows the potential for newcomers to accomplish a lot of wiki work.
 * In live user tests, when newcomers are told to copyedit an article or add links to an article, they frequently want to know exactly which sentence or words need their attention. In other words, attempting to edit the full article is too open-ended.

Taking these points along with the experiences described above of the Android and Content Translation teams, we think we could increase the number of newcomers editing and continuing to edit by structuring some of the content editing workflows in Wikipedia.

Opportunities with structured tasks
When we break down editing workflows into steps, we call them "structured tasks". Here are some of the possible benefits we think could come from structured tasks:


 * Make it easy for newcomers to make meaningful contributions.
 * Develop editing workflows that make sense for mobile. Mobile design principles tell us that users should see one step at a time, not a complicated workspace.
 * Let newcomers increase their skills incrementally. They could successfully take on more challenging types of tasks.
 * Let people find an editing experience that fits them. By giving newcomers a feed of structured tasks, they could find the type of tasks that they prefer.
 * Perhaps similar workflows could be opened to experienced editors in the future.

Concerns and downsides of structured tasks
Whenever we add new ways for people to edit Wikipedia, many things can go wrong:


 * Making editing too quick and easy could attract vandals, or users who don't take much care with their edits.
 * Giving newcomers simple workflows could keep them from learning the traditional editing tools, which are needed to do the most impactful wiki work.
 * Structured tasks may not cope well with differences between languages and the quirks of wikitext, and could cause many kinds of bugs.
 * Structured task algorithms may not be accurate enough, mistakenly encouraging newcomers to make edits they shouldn't.

Community discussion
In May 2020, we conducted discussions with community members in six languages (English, French, Korean, Arabic, Vietnamese, Czech) about the above ideas for structured tasks. The English discussion mostly took place on the discussion page here, with other conversations on English Wikipedia, and local language conversations on the other five Wikipedias. We heard from 35 community members, and this section summarizes some of the most popular and interesting thoughts. These discussions heavily influenced our next set of designs.


 * Community members were generally positive about the potential for structured tasks to help newcomers start editing. But it was also a widely expressed view that it's important for newcomers to be introduced to the conventional source and visual editors during the process.  Community members want to make sure that newcomers are not siloed in a separate editing experience, and that they can find their way to more valuable edits.
 * The Czech community talked about ideas for how the structured tasks could take place inside the visual editor, so that newcomers can start getting used to being in the editor. Perhaps the editing tools that are not needed for the structured task could be grayed out.
 * Community members asked why we are choosing "add a link" as our first structured task, as opposed to higher-value types of edits. We talked about how this task is one of the easiest for us to build, which will help us prototype and learn from structured tasks sooner, and how it is a comparatively low-risk task, with fewer opportunities for newcomers to damage articles.
 * Several communities mentioned that spelling corrections would be a particularly valuable task, and we talked about technical options for how to generate lists of potential spelling mistakes. See these notes for more details.
 * We also talked about whether reverting vandalism is a good fit for newcomers. It doesn't seem like the answer is clear, and this will have to be discussed more in the future.
 * An idea that was mentioned multiple times is how to "step newcomers up" to progressively more challenging tasks, perhaps while giving them rewards for successfully completing easier ones.

Types of tasks
There are many different editing workflows that have the potential to become structured. We began to list workflows when we first designed the newcomer tasks workflow here, and we have since narrowed down to a shorter list of task types that seem best suited to being structured. The table below contains that short list, ranked in a potential priority order.

Prioritizing "add a link"
The Growth team currently (May 2020) wants to prioritize the "add a link" workflow over the other ones listed in the table above. Although other workflows, such as "copyedit", seem to be more valuable, there are a set of reasons we would want to start first with "add a link":


 * In the near term, the most important thing we would want to do first is to prove the concept that "structured tasks" can work. Therefore, we would want to build the simplest one, so that we can deploy to users and gain learnings, without having to invest too much in the first version. If the first version goes well, then we would have the confidence to invest in types of tasks that are more difficult to build.
 * "Add a link" seems to be the simplest for us to build because there already exists an algorithm built by the WMF Research team that seems to do a good job of suggesting wikilinks (see the Algorithm section).
 * Adding a wikilink doesn't usually require the newcomer to type anything of their own, which we think will make it particularly simple for us to design and build -- and for the newcomer to accomplish.
 * Adding a wikilink seems to be a low-risk edit. In other words, the content of an article can't be as compromised through adding links incorrectly as it could through adding references or images incorrectly.

Notes on "copyedit"
In conversations with community members on this project's discussion page, many people brought up the question of how to make a structured task around copyediting. Correcting spelling, grammar, punctuation, and tone seemed to everyone to be a clearly useful task that should be prioritized. The Growth team initially shied away from this workflow because of scaling concerns: even if we were able to find or develop an algorithm that could reliably find copyedits in one language, would we be able to do that in dozens of other languages?

We began to learn more about this by talking with User:Beland, who developed the "moss" script for English Wikipedia's Typo Team. We wanted to understand how the process works, and what it might look like to do something similar in other languages. In short, it sounds like the most promising avenue is through existing open-source spellcheckers and dictionaries. Two examples are the aspell and hunspell libraries. Below are our notes from learning about "moss" with User:Beland.


 * Prospects for doing something similar in other languages:
   * A process like this should theoretically work in other languages, given that other languages also have Wiktionaries and open-source spellcheckers.
   * But it would not be possible to deploy in a new language without native speakers validating it. There would likely need to be customization for many languages.
   * Likely more challenges for languages without word segmentation (e.g. Japanese).
   * Likely more challenges for agglutinative languages.
   * Different projects have differing manuals of style, which may cause issues.
   * If an algorithm is performing poorly, it should always be possible to change its thresholds so that it identifies fewer potential errors, but with higher confidence.
 * How does moss work?
   * Download the dump files of all of English Wikipedia every two weeks.
   * In order to cut down on false positives, remove templates, everything inside quotation marks, etc. It only works on the main text of the article: the things written "in Wikipedia's voice".
   * Check that every word is in English Wiktionary.
   * Uses Python NLTK (natural language toolkit) for word segmentation.
   * Looks at edit distance to classify misspellings, e.g. "T1" is one edit distance (95% precision). Also classifies "TS" whitespace errors.
   * Also includes an English open-source spellchecker to narrow the search space so that the algorithm can run faster.
   * User:Beland has also started trying to add grammar rules (e.g. identifying passive voice), but that's more experimental, and much more difficult than spelling.
   * At the end of the process, it produces a list of articles and likely typos. The user opens the article and searches for the likely typo.
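The edit-distance classification step described above can be sketched in a few lines of Python. This is not the actual "moss" code (which works over full dumps and uses NLTK plus a spellchecker to narrow the search space); it is an illustration of the core idea: flag words missing from the dictionary that sit within edit distance 1 of a known word.

```python
# Illustrative sketch of the typo-detection idea (not the actual "moss"
# script): flag words absent from a dictionary, and classify a miss as a
# likely typo ("T1") when it is within edit distance 1 of a known word.
def edit_distance(a, b):
    """Classic Levenshtein distance via a rolling dynamic-programming row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def find_likely_typos(words, dictionary):
    """Return (word, candidate corrections) pairs for probable misspellings."""
    typos = []
    for w in words:
        if w.lower() in dictionary:
            continue
        candidates = [d for d in dictionary if edit_distance(w.lower(), d) == 1]
        if candidates:
            typos.append((w, candidates))
    return typos

dictionary = {"the", "quick", "brown", "fox", "jumps"}
print(find_likely_typos(["quik", "brown", "foxx"], dictionary))
# → [('quik', ['quick']), ('foxx', ['fox'])]
```

In practice the dictionary would be the full Wiktionary word list, and scanning every dictionary entry per unknown word would be far too slow -- which is why moss narrows the search space with a spellchecker first.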

Many copyedit requests also come from editors whose native language is not English, asking for help polishing their English. See WikiProject Guild of Copy Editors.

Design
While the "structured task sketch" section above contains some quick initial sketches to demonstrate the idea behind structured tasks, this section contains our current design thinking. The full set of thinking around designs for the "add a link" structured task contains background, user stories, and initial design concepts.

Comparative review
When we design a feature, we look into similar features in other software platforms outside of the Wikimedia world. These are some highlights from comparative reviews done in preparation for Android’s suggested edits feature, which remain relevant for our project.


 * Task types – tasks divided into five main types: Creating, Rating, Translating, Verifying content created by others (human or machine), and Fixing content created by others.
 * Visual design & layout – incentivizing features (stats, leaderboards, etc) and onboarding is often very visually rich, compared to pared back, simple forms to complete short edits. Gratifying animations often compensate for lack of actual reward.
 * Incentives – Most products offered intangible incentives grouped into: Awards and ranking (badges) for achieving set milestones, Personal pride and gratification (stats), or Unlocking features (access rights)
 * Users motivations – those with more altruistic motivations (e.g., help others learn) are more likely to be incentivized by intangible incentives than those with self-interested motivations (e.g., career/financial benefits)
 * Personalization/Customization – was used in some way on most apps reviewed. The most common customization was via surveys during account creation or before a task; and geolocalization used for system-based personalization.
 * Guidance – Almost all products reviewed had at least basic guidance prior to task completion, most commonly introductory ‘tours’. In-context help was also provided in the form of instructional copy, tooltips, step-by-step flows,  as well as offering feedback mechanisms (ask questions, submit feedback)

Initial wireframes
After organizing our thoughts and doing background research, the first visuals in the design process are "wireframes". These are simply meant to experiment with and display some of the ideas we think could work well in a structured task workflow.

Mobile mockups: August 2020

Our team discussed the wireframes from the previous section. We considered what would be best for the newcomers, taking into account the preferences expressed by community members, and thinking about engineering constraints. In August 2020, we took the next step of creating mockups, meant to show in more detail what the feature might look like. These mockups (or similar versions) will be used in team discussions, community discussions, and user tests. One of the most important things we thought about with these mockups is the concern we heard consistently from community members during the discussion: structured tasks may be a good way to introduce newcomers to editing, but we also want to make sure they can find and use the traditional editing interfaces if they are interested.

We have mockups for two different design concepts. We're not necessarily aiming to choose one design concept or the other. Rather, the two concepts are meant to demonstrate different approaches. Our final designs may contain the best elements from both concepts:


 * Concept A: the structured task edit takes place in the Visual Editor. The user can see the whole article, and switch out of "recommendation mode" into source or visual editor mode.  Less focused on adding the links, but easier access to the visual and source editors.
 * Concept B: the structured task edit takes place in its own new area. The user is shown only the paragraph of the article that needs their attention, and can go edit the article if they choose.  Fewer distractions from adding links, but more distant access to the visual and source editors.

Please note that the focus in this set of mockups is on the user flow and experience, not on the words and language. Our team will go through a process to determine the best way to write the words in the feature and to explain to the user whether a link should be added.



Static mockups

To view these design concepts, we recommend viewing the full set of slides below.



Interactive prototypes

You can also try out the "interactive prototypes" that we're using for live user tests. These prototypes, for Concept A and for Concept B, show what it might feel like to use "add a link" on mobile. They work on desktop browsers and Android devices, but not iPhones. Note that not everything is clickable -- only the parts of the design that are important for the workflow.

Essential questions

In discussing these designs, our team is hoping for input on a set of essential questions:


 * 1) Should the edit happen at the article (more context)?  Or in a dedicated experience for this type of edit (more focus, but bigger jump to go use the editor)?
 * 2) What if someone wants to edit the link target or text?  Should we prevent it or let them go to a standard editor?  Is this the opportunity to teach them about the visual editor?
 * 3) We know it’s essential for us to support newcomers discovering traditional editing tools. But when do we do that? Do we do it during the structured task experience with reminders that the user can go to the editor? Or periodically at completion milestones, like after they finish a certain number of structured tasks?
 * 4) Is "bot" the right term here? What are some other options: "algorithm", "computer", "auto-", "machine", etc.? What might better help convey that machine recommendations are fallible and that human input matters?

Mobile user testing: September 2020
Background

During the week of September 7, 2020, we used usertesting.com to conduct 10 tests of the mobile interactive prototypes, 5 tests each of Concepts A and B, all in English. By comparing how users interact with the two different approaches at this early stage, we wanted to better understand whether one or the other is better at providing users with good understanding and ability to successfully complete structured tasks, and to set them up for other kinds of editing afterward. Specific questions we wanted to answer were:


 * Do users understand how they are improving an article by adding wikilinks?
 * Do users seem like they will want to cruise through a feed of link edits?
 * Do users understand that they're being given algorithmic suggestions?
 * Do users make better considerations on machine-suggested links when they have the full context of the article (like in Concept A)?
 * Do users complete tasks more confidently and quickly in a focused UI (like in Concept B)?
 * Do users feel like they can progress to other, non-structured tasks?

Key findings


 * The users generally were able to exhibit good judgment for adding links. They understood that AI is fallible and that they have to think critically about the suggestions.
 * While general understanding of what the task would be ("adding links") was low at first, they understood it well once they actually started doing the task. Understanding in Concept B was marginally lower.
 * Concept B was not better at providing focus. The isolation of excerpts in many cases was mistaken for the whole article. There were also many misunderstandings in Concept B about whether the user would be seeing more suggestions for the same term, for the same article, or for different articles.
 * Concept A better conveyed expectations on task length than Concept B. But the additional context of a whole article did not appear to be the primary factor of why.
 * As participants proceed through several tasks, they become more focused on the specific link text and destination, and less on the article context. This seemed like it could lead to users making weak decisions, and this is a design challenge. This was true for both Concepts A and B.
 * Almost every user intuitively knew they could exit from the suggestions and edit the article themselves by tapping the edit pencil.
 * All users liked the option to view their edits once they finished, either to verify or admire them.
 * “AI” was well understood as a concept and term. People knew the link suggestions came from AI, and generally preferred that term over other suggestions. This does not mean that the term will translate well to other languages.
 * Copy and onboarding needs to be succinct and accessible in multiple points. Reading our instructions is important, but users tended not to read closely. This is a design challenge.

Outcome


 * We want to build Concept A for mobile, while absorbing some of the best parts of Concept B's design. These are the reasons why:
   * User tests did not show advantages to Concept B.
   * Concept A gives more exposure to the rest of the editing experience.
   * Concept A will be more easily adapted to an “entry point in reading experience”: in addition to users being able to find tasks in a feed on their homepage, perhaps we could let them check to see if suggestions are available on articles as they read them.
   * Concept A was generally preferred by community members who commented on the designs, with the reason being that it seemed like it would help users understand how editing works in a broader sense.
 * We still need to design and test for desktop.

Ideas

The team had these ideas from watching the user tests:


 * Should we consider a “sandbox” version of the feature that lets users do a dry run through an article for which we know the “right” and “wrong” answers, and can then teach them along the way?
 * Where and when should we put the clear door toward other kinds of editing?  Should we have an explicit moment at the end of the flow that actively invites them to copyedit or take on a different type of task?
 * It’s hard to explain the rules of adding a link before they try the task, because they don't have context. How might we show them the task a little bit, before they read the rules?
 * Perhaps we could onboard the users in stages?  First they learn a few of the rules, then they do some links, then we teach them a few more pointers, then they do more links?
 * Should users have a cooling-off period after doing lots of suggestions really fast, where we wait for patrollers to catch up, so we can see if the user has been reverted?

Desktop mockups: October 2020
After designing, testing, and deciding on Concept A for mobile users, we moved on to thinking about desktop users. We again have the same question around Concepts A and B. The links below open interactive prototypes of each, which we are using for user testing.


 * Concept A: the structured task takes place at the article, in the editor, using some of the existing visual editor components. This gives users greater exposure to the editing context and may make it more likely that they explore other kinds of editing tasks.
 * Concept B: the structured task takes place on the newcomer homepage, essentially embedding the compact mobile experience into the page. Because the user doesn't have to leave the page, this may encourage them to complete more edits. They could also see their impact statistics increase as they edit.

We are user testing these designs during the week of October 23. See below for mockups showing the main interaction in each concept.

Link recommendation algorithm
See this page for an explanation of the link recommendation algorithm and for statistics around its accuracy. In short, we believe that users will experience an accuracy of around 75%, meaning that 75% of the suggestions they get should be added. It is possible to tune this number, but the higher the accuracy is, the fewer candidate links we will be able to recommend. After the feature is deployed, we can look at revert rates to get a sense of how to tune that parameter.
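The precision/coverage trade-off described above can be illustrated with a small sketch: raising the confidence threshold yields fewer but more reliable suggestions. The scores below are invented example data, not the real model's output.

```python
# Hypothetical illustration of tuning a link-recommendation confidence
# threshold: higher thresholds trade coverage (fewer candidate links)
# for precision (a larger share of good suggestions).
def filter_suggestions(scored_links, threshold):
    """Keep only suggestions whose model confidence meets the threshold."""
    return [(phrase, score) for phrase, score in scored_links
            if score >= threshold]

suggestions = [("machine learning", 0.95), ("Paris", 0.80),
               ("apple", 0.55), ("run", 0.30)]

# A lower threshold surfaces more candidate links...
print(len(filter_suggestions(suggestions, 0.5)))  # → 3
# ...while a higher threshold keeps only the most confident ones.
print(len(filter_suggestions(suggestions, 0.9)))  # → 1
```

Revert rates after deployment would indicate whether the threshold is set too low (many bad suggestions accepted and later reverted) or too high (too few tasks available).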

Link recommendation service backend
To follow along with engineering progress on the backend "add link" service, please see this page on Wikitech.