Topic on Talk:Flow Portal/Archive2

Community consent?

WolfgangRieger (talkcontribs)

Sorry, but the search offered only: Create the page "Community consent on discussion page Talk:Flow Portal" on this wiki!

  • Is there community consent that a change like this is wanted?
  • The claims on the portal page (e.g. "Users expect a modern and intuitive discussion interface.") are backed by what? Or is this just someone's opinion?
  • "We believe that a modern user-to-user discussion system will improve the projects." Who is "we"? And what if, believe it or not, just being "modern" (whatever that may mean) is not a strong enough reason to justify imposing such a change on a user community?
  • "Talk pages—as a discussion technology—are antiquated and user-hostile." Antiquated in relation to what? Existing tools, which are inadequate for the task (otherwise, the implementation of a new tool would lack justification)? Or a figment of imagination called "Flow"?

These are just some minor points, but I think you will get my drift: if this is again a solution looking for a problem, implemented and delivered without seeking the broadest community consent, then the next shitstorm after VE is already scheduled.

And, as an aside: if the Flow tool should lack performance on deployment day, what will the community call it? Hint: it rhymes.

Quiddity (talkcontribs)

Hi, here are a few replies:

Yes, we (the editors and the WMF) have long wanted newcomers to be less confused or put off by:

  • Colon-indenting in discussion threads
  • Manually added signatures
  • The problem of where to reply when someone leaves a message on their user talk page
  • Having to watchlist all user pages/articles/etc. when they leave a comment at the associated talk page
  • The difficulty in determining whether a comment has ever been edited by someone other than the original author
  • etc.

The En.wiki w:Wikipedia:Flow portal has a few more details than the one here, which you might find useful. Also, all the subpages in the {{Flow Navigation}} navbox contain a lot more detail (much of it old, or at a pure "brainstorming" level, but still useful reading).

Regarding the comparison to the VE rollout - Jorm has recently clarified here that the Flow rollout will be a lot slower.

Hope that helps.

KaiMartin (talkcontribs)

Even though the roll-out as envisioned by Jorm will be a lot slower, it still follows essentially the same route as the VE one. Specifically, Jorm does not mention any intention to seek broad community consent beforehand. Failure to do so was arguably the reason for the VE crash. You may read the comments editors gave in the RFCs on the Dutch, German, and English Wikipedias.

WhatamIdoing (talkcontribs)

Do you believe that it is possible for normal, non-programmer people to give informed consent to try out unfinished software they have no experience with, no knowledge of, and no true ability to judge whether it meets their needs? How do you see that working? Have you ever seen it work at any Wikipedia?

Diego Moya (talkcontribs)

Of course it's possible for normal, non-programmer people to test unfinished software; it's even the recommended approach to achieving usability. It's never seen at Wikipedia because WP software roll-outs rarely follow best practices, but it works really well for big companies like Google (which does usability fairly well). The way it's done at other places is with an iterative process for gathering feedback:

  1. You develop surveys and interviews with users from your target audience, or perform a similar field study, to ask for ways that a tool like yours would be used. In a place like Wikipedia, where users form a huge world-wide community, you publish a well-publicized survey and analyze what respondents have to say about the tool's purpose.
  2. You select a really small set of users (three to five is enough) and present them with the first, non-functional interface (preferably offline, in the same room). Without telling them anything about how to use the tool (this part is important), you ask them whether they can make sense of the proposed interface, and how they would use it to achieve whatever goal is in their minds.
  3. You annotate all the proposed ways the users intended to use the application, and where they didn't understand something. You fix anything that didn't make sense, and try to change the design to accommodate as many new proposals as possible, before showing the tool again to the next five users. Iterate steps 2 and 3 until no more major problems are found.
  4. You implement a prototype with the design that emerged from the initial stage. You randomly select an average-sized and diverse group of people and repeat steps 2 and 3 with the functional prototype. You record any proposals of new ways to use the tool, and note the places where the initial software architecture wouldn't accommodate those ways of use.
  5. You throw away the prototype (that step is also important), and design a new architecture that is adequate for all the new usages that were originally unexpected and for which the prototype architecture was a bad fit. At this point you have a software design that suits the needs of a moderately large set of users and that can be expanded with small functional increments.
  6. Now comes the step that Google does particularly well. You announce worldwide the roll-out of a new beta service or interface, explaining that it will be made available in the near future in incremental stages. At each stage you increase the number of users exposed to the new tool, either through an invite-only system or by randomly offering opt-ins to larger and larger groups each time (see the sketch after this list). Any user who doesn't want to use the beta software should be allowed to easily revert to the old system until the end of the beta.
  7. During the beta phase you add polish, polish and more polish, fix absolutely all workflow-stopping bugs, and include those new functions that are frequently requested by users during the beta stages.
  8. Finally, you should be confident that your tool is polished, almost bug-free, and that it accommodates the needs of the majority of your user base (as compiled from user feedback during the whole process); only the most obscure use cases should remain unsupported at this point. This, and no earlier, is when you can do a final roll-out for all users. If you did the work properly, there should be no backlash, because the tool really provides an improved workflow, and nobody complains when they're given a better tool (provided that it is really better for them, not just better for the developer). If there's widespread backlash anyway, it means that you failed somewhere in the previous steps; you then keep track of that failure, so that next time you don't repeat the same mistakes.
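
To make the staged opt-in of step 6 concrete, here is a minimal sketch in Python of one way such a gate could work. It is purely my own hypothetical illustration (the names in_beta and ROLLOUT_STAGES, and the percentages, are invented), not a description of any actual MediaWiki or Google mechanism:

```python
import hashlib

# Hypothetical percentages of users enrolled at each rollout stage.
ROLLOUT_STAGES = [1, 5, 25, 100]

def in_beta(user_id, stage, opted_out):
    """Return True if this user sees the beta interface at the given stage."""
    if user_id in opted_out:  # any user can easily revert to the old system
        return False
    # Hash the user id into a stable bucket from 0 to 99: the assignment is
    # effectively random across users but fixed per user, so people enrolled
    # at an early stage stay enrolled as the exposed percentage grows.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_STAGES[stage]

# Example: a user who was in the 5% group at stage 1 is still in at stage 2.
if in_beta("ExampleUser", 1, opted_out=set()):
    assert in_beta("ExampleUser", 2, opted_out=set())
```

The deterministic hash is what makes the "larger and larger groups" property hold without keeping a per-user enrollment database; the opt-out set is just an explicit override on top of it, so reverting to the old system stays easy.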

It's a lot of work, and it requires that developers be willing to re-design the whole product according to users' specifications; if you instead start from an almost-finished design and architecture, and only try to accommodate the few features that make sense to developers, the end result will not satisfy user needs. As far as I can tell the VE design completely missed steps 1 and 5, and was coerced by the community into implementing step 6 (the initial roll-out was nowhere near this best practice, but it's somewhat there now). This means that there's hope for a final release that won't be rejected outright, but also that the current editor is not based on a complete understanding of all user needs, and that it stopped at the prototype stage - so the current architecture will not be able to accommodate all the expectations of the community without a major redesign.

"...and no true ability to judge whether it meets their needs." WhatamIdoing, I find that comment incredibly dismissive of non-programmer people, coming from someone who claims to represent the interests of newcomers. Nobody is better placed than the end user to judge whether software meets their needs; if you assume what the needs of users should be, and later claim that users don't understand those assumed needs, you're doing everything backwards.

WhatamIdoing (talkcontribs)

I appreciate the well-constructed and thoughtful answer, but I don't think that your response answers my question. My question was this: "Do you believe that it is possible for normal, non-programmer people to give informed consent to try out unfinished software that they have no experience with, no knowledge of, and no true ability to judge whether it meets their needs?"

The question you seem to have answered was, "Do you believe people are capable of trying out unfinished software?" The question I actually asked was more like, "Do you believe that it's possible to give en:informed consent to try out unfinished software, given these serious limitations, i.e., that your allegedly informed consent is not what most people would call 'informed' about the subject matter?"

There are well-respected ethicists on both sides of this question, so I can't claim that there is a "right" answer and a "wrong" one. However, I'd like to know your answer.

It may be easier to understand with an example. Consider this case:

The user is an average 20-year-old. He has no experience with software stuff except everyday normal-user things, like sending e-mail and listening to music. Being an approximately median active Wikipedia editor, he makes about five or ten edits a month, most of them trivial changes like typos or adding the name of the latest album released by a band he likes. He receives a message on his talk page from someone he doesn't know, who says, "Hey, go to Special:Preferences and try the new experimental Thing! I really like Thing, and I hope you choose to test it, too!"

At this point in time, given the information he has, is this user capable of giving informed consent to use Thing? Also, is this user capable of giving informed consent to refuse to use Thing?

Diego Moya (talkcontribs)

The reason why I started my post with "of course" and "it's even the recommended approach to achieve usability" is because it can be done, and it really is the recommended and standard approach to achieving usability. There are several professional fields (user-centered design, user experience, information architecture, interaction design, and anything with "UX" in its description) which are all based around processes that depend on this being true.

The tl;dr version of the long post below is that users can give informed consent if you don't expect them to achieve anything useful with the broken/unfinished software; the consent should be given to provide feedback about what is actually built at any given time, not what is in the heads of the production team as future possibilities. (And again, the "no true ability to judge whether it meets their needs" is a false assumption - the user can always judge whether the unfinished product meets their needs, and that is exactly the question that must be asked.) The good thing about asking about the current version, and not your planned fixes, is that you don't just keep talking about the great design that you think will solve all the user's problems - you have to actually test whether it solves them or not. This is the scientific process at its best.

Of course, if all the information you provide is "Hey, go to this page to test this really cool new thing", you're not gaining anything. But you originally asked me whether the user could give informed consent to "try out" the software, and now you've shifted to asking whether they can "use" it. Those are very different things, and by conflating them you're asking the wrong question and thus making a serious mistake in the way you approach end-user involvement. The message to the user, as well as the way to test the software, should be: "Do you want to participate in an experiment to help us build a better version of Thing? The experiment consists of you trying out this Thing, us watching how you try to use it, and you telling us what you think of it."

The goal of these early interactions must not be to assess whether the user can use the software, in the sense of successfully completing tasks that meet their goals with it. Of course they can't do that with an early version; it's unfinished, and it has been designed with only a cursory understanding of their needs, based only on early field research (and that only if you're lucky; most software is created just from developers' gut feelings).

No, the purpose of early testing is to let users try out the software with the sole goal of getting feedback from people using it: watching exactly where they have problems with the design, and asking them whether they can make sense of the interface.

If your only goal is to get this feedback so that the product can later be improved, of course users can give informed consent for this kind of session. See this example of the process required to ask for this consent. When users are guided through a session like this to use the software, they are perfectly capable of assessing whether they're willing to proceed and of answering whether it's useful to them; an answer that, for these early sessions, should always be "no, I can't use it (which is expected at this time), because of reasons X, Y, Z (which are the valuable thing to learn)". A technician performing the sessions who is experienced in the field can make the most of this feedback, so that few users need to be tested. The most important part of the test is not just what users say, but what they do - i.e. whether their use of the software corresponds to how developers thought it would be used.

Of course, for a site the size of Wikipedia you need more than a few in-person user tests; the kind of unsupervised feedback you talked about is what I described as step 6, for which users can also give informed consent of the exact kind you asked about, provided you've built a system that reasonably fulfills their major needs through steps 1-5; and, much earlier, a lot of research must be done even before design begins (that was step 1, creating surveys to compile user needs from the wide user base). But you'll need actual users testing the unfinished product through steps 2 to 4 to be sure that the software makes sense and that there aren't any obvious gaps in your understanding of the problem - gaps which are much more common than software developers are willing to admit.

The link to the usability test script above comes from Don't Make Me Think, a short and very useful introduction to the methods of usability, and its sequel Rocket Surgery Made Easy (I have no relation to the author; I just think they're very good introductory texts). Although they deal primarily with web pages, the test methods they describe are valid for any kind of software development, and they answer your questions much better than I ever could.

Arthur Rubin (talkcontribs)

Interesting question. That the WMF seems incapable of correctly deciding whether software is usable makes the question moot, doesn't it?