Topic on Talk:Growth/Personalized first day/Structured tasks

MMiller (WMF) (talkcontribs)

This is a thread to talk about the newest set of mockups, shown in a presentation linked from the project page. There are four essential questions that the Growth team is thinking about as we work with these mockups, listed below. We hope community members weigh in on any of these questions. You are also welcome to just say what you do and don't like about the designs, ask questions, or give ideas of your own.

  1. Should the edit happen at the article (more context)?  Or in a dedicated experience for this type of edit (more focus, but bigger jump to go use the editor)?
  2. What if someone wants to edit the link target or text?  Should we prevent it or let them go to a standard editor? Is this the opportunity to teach them about the visual editor?
  3. We know it’s essential for us to support newcomers discovering traditional editing tools. But when do we do that? Do we do it during the structured task experience with reminders that the user can go to the editor? Or periodically at completion milestones, like after they finish a certain number of structured tasks?
  4. Is "bot" the right term here? What are some other options? "Algorithm", "Computer", "Auto-", "Machine", etc.? What might better convey that machine recommendations are fallible, and the importance of human input?
Zoozaz1 (talkcontribs)

One thing I think could enhance the user experience is integrating the topics into the category system. I would suggest adding an unobtrusive search bar for users to search for other topics, populated with all the categories on Wikipedia (or those containing a certain number of articles), so that if, say, a linguistics expert comes along, or an expert on any more obscure topic, they can more easily participate, be engaged, and be more likely to edit.

I generally like concept A better, as it has the simplicity of both but also leaves room for complexity and an introduction to broader editing. Some of the concepts in B, though, could be integrated into A, like the summary screen. I also think it could be clearer to the user that the pencil icon is meant to edit the link suggestions.

Regarding question 5, I think that is the ultimate goal of the experience: making editors comfortable and confident enough to edit on their own, so editing the article in ways not considered by the program should be easy and encouraged. With 7, I don't have specific suggestions, but generally I think it would be a good idea to introduce it slowly, giving newcomers more tools and traditional ways of doing things, so that when they eventually get to an experience without the tasks/bot it feels easy and natural. Of course, that would be optional for the user, but easing it in would be a great way to get newcomers understanding Wikipedia.

Just as another suggestion: with copyediting there is a chance a user will change the page from, for example, British to American English, and it should be specified somewhere that you shouldn't change it from one to the other.

MMiller (WMF) (talkcontribs)

Thanks for checking out the designs, @Zoozaz1. I like your idea about topics. We've considered having a free-text field, just as you say. One model for that is the way that Citation Hunt works, which searches categories.

I also agree that both design concepts can be made to seem more like the other one, which gives us some flexibility.

Sadads (talkcontribs)

@Zoozaz1 Personally, I am not a big fan of the category system for newcomer workflows like this -- it's very hit or miss in what it covers, and what we discovered really quickly with Citation Hunt is that newcomers work much better from larger, more inclusive categories (e.g. WikiProjects or custom sets). I have been involved in #1lib1ref in my professional capacity, and in organizing in general with my volunteer hat on -- and my impression is that category navigation tooling is rarely "inclusive" enough for the kinds of "relatable topics" that folks are looking for. Some topics need Wikidata, or WikiProjects, or whole category trees, or broader undefined sets that only machine learning can create.

Zoozaz1 (talkcontribs)

That's a fair point. I was thinking more of just giving newcomers the option to use it (and maybe collapsing it at the start) in case they are deeply interested in a specific category or aren't interested in the other listed categories, so it would be a sort of option of last resort.

MMiller (WMF) (talkcontribs)

@John Broughton @Sdkb @NickK @Nick Moyes @Galendalia @Barkeep49 @Pelagic @Czar @LittlePuppers @HLHJ -- thank you all for participating so helpfully in our previous discussion about structured tasks (the summary from that conversation is here). We took your thoughts seriously in making the next set of designs, and I wanted to call you all back to this page to check out our progress and let us know your reactions. We'll be making some engineering decisions in about three weeks, and hope to have as much community input as we can get! The new materials are in this section, and include static mockups, interactive prototypes, and questions that we're thinking about. Thank you!

NickK (talkcontribs)

Thanks @MMiller (WMF): for the ping! I am strongly in favour of concept A. I can list at least three problems with concept B:

  • creating yet another editing mode would make the experience more confusing (vs. the same editing mode with hints in concept A)
  • step B-08 is very un-wiki: while any mistake on a wiki can be easily fixed, correcting this one becomes difficult
  • in addition, B-08 means a newbie will likely start by having an edit conflict with themselves. Edit conflicts are already frustrating, creating favourable conditions to start with one (an AI edit in parallel with a regular visual edit) is really, really bad.
MMiller (WMF) (talkcontribs)

Hi @NickK -- it's been a long time since you posted this comment, but we've made some progress and I wanted to get back to you. We ran user tests on both Concept A and B to decide which to build. The summary of the findings is here, and we decided to build Concept A (while incorporating a couple of the good parts of Concept B). About your ideas on edit conflicts: I think that's a good point. When the user switches out of "suggestions mode", we will probably want to prompt them to either publish what they've done so far or explicitly discard the work, before switching to the full editor.

Next, we'll be finalizing mobile designs and testing desktop designs. I'll be posting those things as we have them, and I'll ping you to take a look if you have time.

Sdkb (talkcontribs)

Thanks for the ping, MMiller! My initial thought is that context can be pretty important; it's much harder to tell whether a link is appropriate or not when e.g. you can't see if it or something similar has been linked above.

MMiller (WMF) (talkcontribs)

Thanks for the quick response, @Sdkb. Having the context of the whole article probably enriches the experience for the newcomer, and maybe helps them understand, "I am editing Wikipedia right now." For the specific concern around seeing the link has been made above, we're able to program that into the algorithm: only suggest the link if it's the first occurrence. But yes, I think the broader point about context makes sense.
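
A minimal sketch of the "first occurrence only" filter described above (illustrative only: the function and data shapes here are hypothetical, and the production algorithm works on parsed wikitext with a trained model rather than plain-string matching):

```python
import re

def first_occurrence_suggestions(article_text, candidate_phrases):
    """Keep one suggestion per phrase, anchored at its first occurrence.

    Hypothetical sketch: a real link recommender scores candidates with
    a model and operates on parsed wikitext, not raw strings.
    """
    suggestions = []
    seen = set()
    for phrase in candidate_phrases:
        # Word-boundary match so "pan" does not match inside "pancake".
        m = re.search(r"\b" + re.escape(phrase) + r"\b", article_text)
        if m is None:
            continue  # phrase not present in the article
        key = phrase.lower()
        if key in seen:
            continue  # suggest only the first occurrence of each phrase
        seen.add(key)
        suggestions.append({"phrase": phrase, "offset": m.start()})
    return suggestions

text = "A pancake is a flat cake. The pancake is fried in a pan."
# Duplicate candidates collapse to a single suggestion at the first match.
print(first_occurrence_suggestions(text, ["pancake", "pan", "pancake"]))
```

The same check doubles as a guard against suggesting a term that an editor already linked by hand earlier in the article.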

Barkeep49 (talkcontribs)

My gut tells me the sooner we can get them into a real experience the better. So that would be my answer to Q1 & 3, but I would think some A/B testing is really the right answer to that.

MMiller (WMF) (talkcontribs)

Thanks, @Barkeep49. We are actually running user tests of both design concepts this week, which means we'll have videos of people new to editing using both of them. That may help us figure out which design concept to engineer with first. The option I think you're talking about, though, is building both designs, and giving each to half the users. I'll talk with our engineers about how easy it would be to do that. Perhaps some large portion of the work is shared between the two of them.

John Broughton (talkcontribs)

I think you need something in between the two options you present. So, a couple of principles: (1) you want to isolate the user from the full editing experience (for example, they shouldn't have to select the text to link, then go to a menu to tell VE that they want to create a link), (2) you want to provide explanatory material - which could well include what they would do in the "real world", and (3) you want what the user actually does to resemble what they would do in the real world.


Specifics:

(1) If you want to allow the user to go into full VE edit mode to fix something [why not?], then after the user clicks the (general edit) icon, and you confirm that he/she wants to do copyediting or whatever, save the edits that the user has done [do the full "Publish" process], then let the user do whatever he/she wants, and then provide a way for the user to continue with linking. Don't build a separate navigation system for jumping from linking to general editing and back to linking. (So A-13 leads directly to A-16.)

"B" has some explanatory material (B-02); "A" is lacking. But neither explains that in the "real world", you select the text to link, then go to a menu to tell VE that you want to create a link. A brief screencast would be ideal, but even just showing a couple of screenshots would do. And, of course, it's critical not to force the user to go through all of this when he/she comes back for another editing session.

(3) "A" isn't good, and "B" is worse, at mimicking the real linking process. The real process looks much more like A-10 than A-07.

MMiller (WMF) (talkcontribs)

I can tell you really looked closely and thought about these designs, @John Broughton. Thank you!

For (1), I think your idea to let the user publish their suggested edits before switching to the full editor makes a lot of sense. Especially when designing for mobile, the priority is to only ask users to do one thing at a time -- and your idea is in the spirit of reducing how many things the user is juggling (their "cognitive load"). I will definitely bring that up with our team's designer.

For (2) and (3), this is sort of core to our challenge here. Like you said, we want to isolate the user from the full editing experience, but we also want them to somehow be able to learn about the full editing experience and how to add a link the traditional way. I'm worried that if we were to explain the traditional method before sending users through the streamlined method, it would be confusing ("Why show this to me if I'm not about to use it?"). Perhaps a better way is to conclude the workflow with the option to learn the traditional method ("Learn how to do this task on your own with the Visual Editor!")

What do you think?

John Broughton (talkcontribs)

I don't think it's that confusing to tell a new user: "Normally, you'd start the linking process by selecting some text, then going to the menu and selecting the link icon (small screenshot). However, for this structured task, we've already selected the text and told the editing software that you're looking at creating an internal link."

If you're really concerned about throwing too much at the user, then make this optional (click on "How are links normally created?").

Also, as an aside, I disagree with calling this "streamlined". I think this should be thought of as "truncated" or "shortened". Streamlined implies something that is better in most aspects. (Who objects to "streamlining" a process?) But in this case, there are tradeoffs.

As another aside, if you really wanted to provide the user with something closer to the full experience, while providing guidance, then the user clicking to start the process, or the user clicking to go to the next suggestion, would result in the user seeing the software (a) select text, and then (b) mark it as a possible internal link. Then and only then would control of the screen be yielded to the user in order to do the next steps.

John Broughton (talkcontribs)

I also have several relatively minor points:

(1) The mockup keeps using the term “AI suggestions", but why not just “Suggestions”?

(2) The sample edit summary is way more detailed than any human would provide - something like "Added N internal links, using computer-generated suggestions".

(3) Regarding “Linked article is of poor quality” [reason not to link], the implication is that the user will check the quality of each suggested link before linking (how?). More importantly, that reason is directly contradicted by https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Linking, which says: “Do not be afraid to create links to potential articles that do not yet exist.”

(4) Why doesn’t the software (“AI”) handle “Already linked earlier in the article” [and how does the user know about such prior linking if he/she is only seeing part of the article]?

MMiller (WMF) (talkcontribs)

@John Broughton -- thanks for these thoughts! Here are my responses and questions back to you:

(1) Another WMF product manager who I showed these to actually asked the same thing, and I think it's a good point. Most software and apps don't tell you that suggestions come from AI -- they're just "suggestions". For instance, the Facebook or Instagram feeds aren't the "AI feeds"; they're just "feeds". But on the other hand, we talked about how transparency is a core value in our movement, and so we want users to know where information comes from and which work is being done by machines. Therefore, we're trying to figure out a way to convey that these suggestions come from machine learning, but without being cumbersome.

(2) Do you think that having the more detailed edit summary is a bad thing? We wouldn't want to do something that would end up being a burden for patrollers. Or maybe it conveys to newcomers an unrealistically detailed idea of what they should be putting into their own edit summaries later on? One thing that I'm noticing from our initial user tests is that users seem to like seeing a review of all the edits they've done before they publish -- it helps them confirm that they like what they edited. But that doesn't necessarily have to happen in the edit summary.

(3) We've actually iterated on that list of "rejection reasons" a bit since making that mockup. Here are the ones that we're working with now. What do you think of these?

  • Everyday concept that does not need explanation (e.g. "sky")
  • Already linked earlier in article
  • Selected text too narrow (e.g. "palm" instead of "palm tree")
  • Selected text too wide (e.g. "tall palm tree" instead of "palm tree")
  • Incorrect link destination (e.g. linking "sun" to "star")

(4) Yes, we intend to program the algorithm so that it only recommends a link for the first occurrence of the word in the article. I suppose then we should not need to include that as one of the "rejection reasons". On the other hand, maybe we should keep it in as a check to make sure the algorithm is in fact behaving as expected. I will bring it up with the researcher who works on the algorithm.

John Broughton (talkcontribs)

(1) If you simply disclose, somewhere in the background/contextual information (and you may already do this) that suggestions are AI-generated (though I prefer "machine-generated"), I think you've satisfied any need for disclosure. There is real value in simplification (avoiding distractions).

(2) It's not a question of a burden for patrollers, it's that you're showing new users something that (a) they should not do [the community absolutely does not want this level of detail in edit summaries], and (b) could be intimidating. By (b), I mean that a user could easily say "Wow, I have to do a lot of bureaucratic work - describing in detail ALL my changes - if I want to change something. I'm going to find something else to do that has less busywork involved."

(For patrollers, a tag that says "Structured task" or similar would be helpful, if not already in place; but tags are invisible to users until an edit is published.)

(3) I don't understand the last of these bullet points - if the user thinks the link is wrong, he/she can pick another one. Maybe you mean "Cannot find a good link destination"?

(4) Thank you for checking with the researcher working on the algorithm. However, I'm not sure that this addresses the point of whether the user can easily see/scan the prior parts of the article - or, perhaps more to the point, whether you're going to imply, to users who want to do everything right, that they always need to check the prior part of an article before creating a link, even when doing so is time-consuming.

As far as this being feedback on the algorithm, I would hope that you'd be tracking reverts on edits made by new users (again, tagged edits), particularly for reverted edits because more experienced editors think the new user has overlinked.

MMiller (WMF) (talkcontribs)

Hi @John Broughton -- I'm sorry it's taken me a while to get back to you. I wanted to have the results from our actual user tests of the two design concepts so that I could give you some good responses. We posted the summary of the user test findings here, and we've decided to go with Concept A (plus some of the best parts of Concept B). Here are some responses:

(1) I think it's a good idea to make it clear at the outset that the suggestions come from AI, and then save space by not including the phrase "AI" for the rest of the workflow. That does sound simpler. In our user tests, every user clearly understood what we meant by "AI suggestions", and so I'm less worried that the concept will be confusing.

(2) This makes sense. Perhaps the edit summary should say something like, "Added 3 wikilinks". How about that?

(3) These various "rejection reasons" actually were somewhat confusing in the user tests, and we have some re-wording to do. We will be prompting the user to choose a better link destination if they think it's wrong. Perhaps a plainer way to phrase that reason would be, "Link goes to wrong article".

(4) Yes, I think we will definitely make sure the user knows they should make sure to only link the phrase the first time they see it. In Concept A, the user will be able to scan the whole article, but it's a good reminder that we should tell them it is appropriate to read the whole article through as they make edits to it.

Regarding reverts, we'll definitely be tracking that. It's already something we track for the "classic" suggested edits feature that already exists (which encourages users to copyedit and add links on articles that have the corresponding maintenance templates). Right now, the revert rate on suggested edits is about equal to the revert rate on the edits newcomers make on their own without suggested edits. We think that is a good sign that suggested edits is not encouraging shoddy edits, and we'll need to make sure that "add a link" does not increase the revert rate.

In terms of next steps, we'll be posting and testing designs for the desktop version of this feature (the designs you saw before were only for mobile), and I'll tag you for your thoughts on those, if you have time.

John Broughton (talkcontribs)

(2) Anything less than 30 or 40 characters is fine, including what you suggested.

(3) I think the point I was trying to make was that the list, I think, is of reasons why the user didn't create a wikilink when one was suggested. If that's in fact what the list is for, then the final reason for not making a link is that the user couldn't find a good choice (and yes, the suggested link wasn't useful then, but the point is that the user, searching for a good link, couldn't find one).

Overall, I'm a firm believer that actual user experience is "ground zero" for making good decisions about a UI and online-process, so I'm glad to hear that you're learning so much from user testing.

Czar (talkcontribs)

Answers to the listed "essential questions" and then general thoughts

  • Guided editor experience (teaching-centered structure) definitely appears to be the way to go if the goal is to integrate new editors to make bigger edits (is it? I expanded on this below), i.e., yes, first edit, whether editing a suggestion or of one's own initiative, is the opportunity to learn VE
    • If the point is not to make bigger edits later, but just to recruit mobile editors into mobile-friendly tasks, is adding a link the way to go? Per the Android structured tasks, that wouldn't be about recruiting desktop editors but a different category of low-effort maintenance tasks.
    • I would need to see the relative benefit of the latter to know what opportunity is there. When we talk about editor decline, we're mainly referring to content editors (whether that implication is correct that the glut of editors from a decade ago were mainly productive content editors) and not simply those making corrections. (And is there evidence that those who partake in the Android tasks are any more likely to adopt full-featured editing?) It's two very drastic perspectives on how to grow the editor pool. If that decision remains to be made, I have some ideas on how to resolve that with community input.
  • For the question of when to introduce the VE, would this be solved with user testing? Recruit from a pool of unregistered readers who are interested in making first edits and offer them this "AI suggestion" flow vs. VE with guardrails.
  • I wouldn't count on new users recognizing the bot icon or knowing what bot or AI are. For me, they're just "Suggested edits" or "Recommended edits"—the user doesn't need to know how they were generated except that they're coming from the software. To the larger point of the recs being fallible, I think this would need to be pretty high confidence of being a worthwhile edit before editing communities would want to implement it. At that point, the caveat wouldn't be needed.
  • For future mock-ups, it would be more realistic to pull from en:w:Category:Articles with too few wikilinks, as most random articles are messier, and I imagine such a tool would be most effective when paired with a cleanup category (if one is active on that language's Wikipedia) vs. adding links to articles where they're not needed. I.e., the suggestions are going to be more like the ones you've listed for "croissant" than for "dutch baby pancake".
    • "Our first foray into newcomer task recommendations has shown new users will attempt suggested edits from maintenance templates." (from May design brief) This is a much more interesting entry point in my opinion. If I'm reading an article on mobile, do we even show the maintenance templates right now? If instead it gave entry into a guided method of resolving the maintenance template, there's much more mutual benefit than receiving a random article based on my viewing history or otherwise.
  • The dot progression in B encourages skipping between options, which I think is good here.
  • I'd wager I'm more likely to say "I don't know" to an edit than to give a firm "yes" or "no" as a new editor. Might be useful to have that as a skip button.
  • The contextual highlighting in the text doesn't feel strong/striking in A or B.
  • In B (excerpt view), the reader would not be able to answer whether the link is already in use elsewhere in the article (we typically only link the first usage of a word in an article too, which they wouldn't know). For that reason alone, I think the whole article is needed for context, though I do like how B lets me focus on just the sentence at hand, when approaching a task on mobile. Feels unrealistic to have the user click out to another tab to view the full article on mobile.
  • I totally would have missed that I had to click the blue button arrow to actually commit my edits. I would have thought that clicking "Yes" on an edit was sufficient for submitting the edit without any other visual indication.


General thoughts

  • I'm skeptical that, at scale, new users want to go through a series of repetitive tasks, like first do X, then Y. That gets into tutorial territory (like en:w:WP:TWA) vs. aid with first edits.
    • "Only about 25% of the newcomers who click on a suggestion actually edit it." How do you know whether this is because users only clicked in because they were curious and are not actually interested in the task vs. because users were turned off by the interface and need a better intro? I imagine it's more the former than the latter but would be interested in what the data says there
  • This feature necessitates close integration with existing editors beyond the mentor/welcome committee.
  • In general, if this tool is to link simple words on articles that have already been mostly linked, it is likely bound to clash with editorial practices on overlinking. For anything that generates load on existing editors, I recommend getting broad community input before implementing. I know this is designed for smaller Wikipedias, but I can picture, for instance, English WP maintainers going berserk at the flurry of cleaning up wikilinks like "oven" or "stove top", which would be seen as overlinking. They'd sooner throw out the whole feature. In general, that community would have to have interest in semi-automated edits. Some communities have rejected this sort of aid outright as creating more clutter/work than benefit.
  • To the open question "should workflows be more aimed toward teaching newcomers to use the traditional tools, or be more aimed toward newcomers being able to do easy edits at higher volume?", I'd be interested in an analysis of established editors to this effect. If an editor does more "gnome" edits, did they get started by making easy edits at high volume? If an editor is interested more in writing, ostensibly this won't be as helpful. Do you have survey data on what new editors were trying to do on their first edit? I'd wager that most new editors are coming to make a correction, in which case this interface should be aiding them in accomplishing that rather than entering them into a high-volume workflow in the absence of any indication that they're looking to do those types of edits. My hunch is that, if most editors are coming for a single correction, our best chance is to show them how easy it was to edit, which would increase their likelihood of a second edit. That would be a vastly different intent than the high-volume workflow.
  • The main difference between this project/feature and Android's structured tasks, it seems, is that the former is about introducing the act of editing whereas the latter is about adding structured metadata. Android benefits from not having to teach/learn the editor at all, making it simple to do the one targeted, mobile-friendly task.
MMiller (WMF) (talkcontribs)

@Czar -- thank you for these detailed and helpful thoughts. I'm going to re-read through and respond tomorrow.

Czar (talkcontribs)

Sounds good and no reply needed! Only if you feel the need to follow-up or want to discuss. Otherwise just passing along my feedback.

MMiller (WMF) (talkcontribs)

Hi @Czar -- I read through all your notes, and I have some reactions and follow-up questions. Thank you for thinking about our work in detail!

  • Regarding whether the objective of this work is to recruit editors into higher-value content edits or whether to help them do many small tasks:
    • I think this work has the potential to do both. For instance, with Growth's existing suggested edits (in which we point users to articles with maintenance templates), we see users on both paths. There are some users who have done hundreds of copyedits or link additions, and who keep going day after day. There are also many users who do a few suggested edits, and then move on to Content Translation or creating new articles. In general, we want users to be able to find their way to the best Wikipedian they can be, giving them opportunities to either ascend or to stay comfortable where they are. We think this is the route to finding and nurturing the subset of newcomers who can be prolific content creators.
    • I also think structured tasks could open up another possible route to content generation. If we are able to create many different types of structured tasks -- like adding links, images, references, infoboxes -- it's possible that we may have enough of them to string together into the construction of full articles, making article creation a lot easier (this is more needed in small and growing wikis than in English Wikipedia).
    • But you mentioned that if a decision needs to be made between pursuing many small scale editors or pursuing "content editors" that you have some ideas. I'm curious what they are.
  • Regarding the question of whether to use the term "AI" or "bot": this is something a few people have brought up. @John Broughton said something similar above: why not just call them "suggestions"? I agree that it would be simpler, but we are also trying to increase transparency and make sure users know where information is coming from. We've been thinking that it's important for humans to know how artificial intelligence is affecting their experience. What do you think?
  • About using a cleanup category: yes, in our mockups, we are just showing a toy example that has lots of links to begin with. We've thought about pairing this with a maintenance template in production, but part of the motivation for building this task is that some wikis don't have a maintenance template for adding links (e.g. Korean and Czech Wikipedias). I'm thinking that we'll want to do something like use the link recommendation algorithm to identify articles that lack links, and then recommend those to the user. I will check with the research scientist to make sure the algorithm could do that.
  • About entry points: we definitely want to try making suggested edits available from reading mode. Let's say you're a newcomer and you've already done a couple suggested edits from the homepage. Then the next day, you're browsing and reading Wikipedia, and it's an article that could be found in the suggested edits feed (either because it has a maintenance template or has link recommendations). We could then say to the newcomer, "This article has suggested edits!" This has the added benefit that (as you say) the newcomer is already interested in that topic, which we know because they went to the article to read it. Does that sound like what you're thinking of? Do you think that would work well?
  • Regarding "Only about 25% of the newcomers who click on a suggestion actually edit it": this number has actually changed a lot in the past months, and that change is instructive. That number has now doubled to about 50%! We attribute this increase to the topic matching and guidance capabilities that we added. Topic matching made it so that newcomers would land on articles more interesting to them, and guidance gives newcomers instructions on how to complete the edit. The fact that these increased the proportion of newcomers saving an edit makes me believe that many newcomers would have wanted to save an edit, but were turned off either by the content or the task once they arrived on the article. And this makes me believe that there is room for further increases in the future.
  • You said "this feature necessitates close integration with existing editors beyond the mentor/welcome committee." What kind of integration are you thinking of?
  • About newcomer intentions, you asked if we have data on what newcomers intend to do with their first edit. We do have this data, from the welcome survey. This report shows responses from Czech and Korean newcomers on why they created their account. It turns out that a lot of newcomers intend to create an article or add information to an article. One of our challenges has been to steer them toward simpler edits where they can pick up some wiki skills, before they try the more challenging edits and potentially fail.
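
One point above mentions using the link recommendation algorithm to identify articles that lack links. In its crudest form, that selection could be a link-density cutoff; a hedged sketch (the function names and threshold are invented for illustration, and a production version would parse wikitext properly and likely rank by model scores rather than a raw ratio):

```python
import re

def link_density(wikitext):
    """Rough [[wikilink]] count divided by word count (illustrative only)."""
    links = len(re.findall(r"\[\[[^\]]+\]\]", wikitext))
    words = len(wikitext.split())
    return links / words if words else 0.0

def needs_links(wikitext, threshold=0.02):
    # Flag under-linked articles; the cutoff here is arbitrary and would
    # need tuning (or replacement by model scores) in practice.
    return link_density(wikitext) < threshold

print(needs_links("A pancake is a flat cake fried in a pan."))        # no links at all
print(needs_links("A [[pancake]] is a flat [[cake]] in a [[pan]]."))  # already well linked
```

A heuristic like this would also let the feature work on wikis that have no "needs more wikilinks" maintenance template, which is the motivation given above.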
Czar (talkcontribs)
  • If there's a place you're tracking the top-level outcomes you mentioned, I'd be interested in following along, and I imagine many others would too. Stuff like return rates for those who do not make a suggested edit vs. those who do (and over what timeframe); new-user rates of other desired actions (adding a citation, CXT, etc.) after engaging in the suggested-edit flow vs. those who skip it and start editing directly; and whether you're tracking self-reported willingness to edit after any of these interactions. In general, this work (and editor growth in general) is more relevant to WP editor communities than a lot of other WMF work, with no disrespect to that work. Sharing the wins from this work is good for both the WMF and the editor community who would be administering the tools.
  • re: encouraging small/corrective edits vs. cultivating content writers as a prioritization decision, how closely do you currently work with communities? And is it mainly focus grouping with mentors and new editors, or do you have community discussions on your target wikis? I.e., I know you had seen wikifying text as one of the actions new users commonly take, but is it what the community needs? The Growth team is essentially building out a wikifying AI that has to reach a level of accuracy at which it could be run semi-automatically by users with no experience. At that point, it wouldn't be far off to open the same AI to experienced users who would write tools that run it as part of doing general fixes on an article. On one hand, great, but I'm curious whether that's what the community would say was among its top problems in need of fixes. I can't think of a place where, for example, the enwiki community takes stock of its biggest problems. Those discussions usually happen about specific problems as patchwork rather than as a community ranking (apart from maybe the annual WMF community tools survey?). In my experience, enwiki usually depends on creating backlogs until someone announces that the backlog is causing a problem or that they could do more if given some script/bot tools. I think there's an overlap between that and this work. If you were to ask specific communities what their biggest editing problems are (taking enwiki as an example), even if we could magically juice our volunteer count, I don't know if we'd say we need "more content" or even necessarily more watchlist-watching activity to catch things being missed. I'd hazard you'd hear we need help with, for example, reducing promotional tone and removing dead external links. Lack of wikilinks is a far smaller problem. Anyway, there's an opportunity for overlap in the choice of the new-user "suggested action".
Not to mention the benefits that come from this sort of community–WMF common understanding of where a lack of active users is actually a problem. There's a lot more to say on effectively setting that up, but yeah, my wall of text over here.
  • "this feature necessitates close integration with existing editors beyond the mentor/welcome committee" Going out of order but a similar note: Any tool that semi-automates edits will require community consensus to implement. I don't have full knowledge of how each language WP works, but I imagine the tiny ones are happy and grateful to receive any and all aid, while established communities have to justify whether adding the feature is worth the cost. If it creates more work for users, they will quickly vote to kill it (CXT is a great example), and then the benefit is null. (Side note: This also compounds the idea that the WMF is not listening to the community and is working on features that do not benefit it.) If an established community is only informed of a tool near its completion, they have no time to guide the development to make the tool most useful to their community. So I know these growth features are built for smaller wikis, but if eventually you'd like to see them applied to larger communities, I'd recommend collecting their requirements/feedback early in the process, both so that they can see progress and so that they have some investment in the success of the shared project.
    • The AI-based wikilinking in specific would need to have an extremely low error rate to be turned on by one of the established Wikipedias, let alone to be applied by new users. Otherwise established editors will hate that they have more work to revert, and new users won't make progress if their edits are reverted and they don't know why the edit was suggested in the first place.
  • "Suggestions" is sufficient copy to my eyes. Sure, it's important to distinguish whose suggestion it is (the AI's vs. the community's), but the difference is not material to most users. I think that could be solved with design treatment, e.g., an (i) info icon and overlay, so it doesn't clutter the field.
  • re: the Korean and Czech Wikipedias, my question would be whether they need to add a "too few wikilinks" maintenance tag in order for this intervention to be successful, or whether their wikilink issues are spread fairly evenly across their articles. This goes back to whether wikilinking is among their top activity concerns, i.e., would your help be to identify articles for wikilinking internally, within the tool, or would it be more beneficial to the community to run that algorithm externally, populating the public maintenance tag for both anyone in the community and the new-user suggested edit tool.
  • re: entry points, it makes sense to have an entry point on an article you're reading. I'd even say it's nicer to have it within the article itself as a CTA instead of as a distraction at the top of the article, e.g., "that paragraph you just read has two suggested edits, would you like to review?" I was thinking of the dashboard/start page, though, and why it would list kouign-amann vs. something else. For my first edit, instead of using a random article within the broad topics I just selected, I imagine I'd be more interested in topics related to the last article I read (by related I mean linked from that article, not in the same topic area). Being interested in "food" doesn't mean I'm interested in editing kouign-amann, but having a curiosity about Beyoncé would likely mean I'm interested in the wikilinks other readers visit when reading her article. A thought.
  • re: 25% to 50%—that sounds great! I imagine that's this green line going from 2.5% to 5%, so then it's the green line divided by the red line? I was asking more about page views vs. the blue line: How many new users care about making a suggested edit or, specifically, a wikitext edit as their first edit.
  • Fascinating that "Fix a typo or error in an article" was among the lowest motivations of the surveyed Czech/Korean new users. I wonder whether those users, when given this suggested edit tool, are retained better than those who answered otherwise because they're being given what they want. I'd also be curious if any other part of the editor flow actually tells new users that writing an article without making prior edits is not recommended and why. Sounds like it would be smart to set expectations appropriately.