It says “An Audience is a set of features developed according to specific user flows and audiences.”, but doesn’t make sense. Any better definition for this clause? —Omotecho (talk) 12:43, 15 November 2017 (UTC)
Definition for “audience” is confusing
Well, it makes sense to me. If it does not for you, can you explain why or how?
I thought Audience is pointing to all the individuals/institutions from editors to donors as the ecosystem of Wikimedia. See Audience research.
"An Audience is a set of features developed according to specific user flows and audiences for (each/some?) WMF departments.".
I find the sentence very strange as well. Was it really intended to define an audience as a set of features? I would expect an audience to be a set of people.
Maybe it should be something like one of these:
- An Audience is the expected users of a set of features.
- An Audience is the expected users of a set of features developed according to specific user activities.
OK, I've re-written this to be:
An Audience team covers a set of features developed according to specific user flows and needs.
… and also re-done the markup so this page is properly translate-able. Hope this helps!
new wikitext editor
The page discusses a 'new wikitext editor' which is somehow integrated in the visual editor workflow, and apparently it's mostly finished?
I've searched around and I can't find any pages anywhere with any documentation on this. Can someone give a link, assuming a page exists somewhere for it?
If all goes well, it might make Beta Feature status by the end of September. (For example, getting plain old text to copy out of it is a challenge, because your web browser copies HTML and doesn't want to give you just the plain text.)
It will certainly not be "feature complete" by that point, but it might be far enough along for interested editors to poke at and figure out what they dislike about it.
This is quite an important development, and so obviously the community of volunteer editors will have been involved in the initial planning, scoping, decision to go ahead, and early design stages -- after all, we are all anxious to avoid another Media Viewer / Superprotect type debacle. Please would somebody point to where that engagement took place, and to the planning documents that resulted?
The video I found was very helpful. It cleared up where this was going, and alleviated a lot of potential concerns.
Breaking copy-paste would be a complete blocker though. Copying to a text file, editing, and pasting back, is a critical workflow. I'm also wary that a fake-plaintext editor is going to have other cracks. It sounds like someone making up a bad parody of Flow's awful fake-wikitext problems.
Alsee, I believe that everyone fully agrees with you that breaking copy-paste is a complete blocker. ;-)
One of the questions is how to handle non-plaintext. If I deliberately copy formatted HTML text from another website (e.g., the title of a book in italics), then should it convert the HTML into wikitext, or drop it and make me re-format everything? And if it converts anything, then should it convert everything (e.g., tables and text color), or only some things (e.g., character formatting such as bold, italics and <code> and links, but not tables and text color)?
This question is probably best answered after interested people can poke around with it and report how it differed, in practice, from their expectations.
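To make the options above concrete, here is a minimal sketch of the "convert only character formatting" policy for pasted HTML: keep bold/italic, drop everything else. The tag list and the policy itself are illustrative only, not the editor's actual rules.

```python
# Hypothetical sketch: convert a few character-formatting HTML tags to
# wikitext on paste, and strip any remaining markup (tables, colours, etc.).
# This is NOT the editor's real behaviour, just an illustration of one policy.
import re

KEEP = {
    "b": ("'''", "'''"), "strong": ("'''", "'''"),
    "i": ("''", "''"),   "em": ("''", "''"),
}

def html_to_wikitext(html: str) -> str:
    def repl(m):
        tag, inner = m.group(1).lower(), m.group(2)
        open_, close = KEEP.get(tag, ("", ""))
        return open_ + inner + close
    # Convert kept tags, then drop any markup that remains.
    text = re.sub(r"<(\w+)[^>]*>(.*?)</\1>", repl, html)
    return re.sub(r"<[^>]+>", "", text)

print(html_to_wikitext("The <i>Title</i> of a <b>book</b>"))
# → The ''Title'' of a '''book'''
```

Nested or malformed tags are not handled here; the point is only that "convert some things, drop the rest" is a choosable policy, which is exactly the open question.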
A good question. What steps are being taken to answer it, other than your mentioning it here in passing? Your solution appears to be to write the code one way, see whether people like it and if not then do something (or nothing). How about asking people in some way before the code is written how they would like it to work? Is there not some forum in which such questions can be asked and answered before investing time and effort in what may turn out to be nugatory work?
I'll support what Whatamidoing said below. The Agile development model is that it's cheap to fix minor details like paste during beta testing. I trust the WMF to fix that however we want :)
The project really should have a page somewhere, I cringed when I saw it put up a VE-loading bar, and I'm wondering if editing an HTML screen instead of a raw text box may have wonky side effects, but based on the video I'm not seeing any important concerns. It looks to be a "genuine" wikitext editor with harmless extra buttons to insert chunks of wikitext. We'll just have to give it a try in beta.
Apparently it was "chatted about informally".
The system in which you develop detailed documentation about the requirements, then create the software, and later let the users get their first look at it – and then, usually, get complaints that "it's just what I asked for, but not what I want" – is called the waterfall model. Waterfall approaches are not normally used by the WMF devs, as they're more suitable for projects with specific, limited, easily described objectives (e.g., software to control a machine on a factory floor).
Instead, the plan is to create something reasonable, to let interested people try it, and then to adjust based upon their feedback. This approach is generally called agile software development. This approach usually works well for our diverse environment of largely non-technical stakeholders, because different users can try it at different points and give their feedback, based upon actual attempts to use the software for real work, rather than having to find a bunch of people who can and will sit down and think carefully through every step of what they do and what they want and write down their requirements.
If editors dislike the initial state, then it's not really "wasted work". First, the goal is to get a few people testing it early enough, so that very little time and effort could be "wasted", even if every bit of it needed to be scrapped. Second, much of the coding work may be applicable to the next iteration. (For example: Imagine that the first copy-paste approach keeps bold and italics but discards links. There's very little "wasted" if editors then say that they want links kept, too.) And third, conclusively demonstrating that "X" is not the best approach is a desirable outcome.
This particular project has been announced since at least 2013, mentioned at three Wikimanias and multiple Hackathons, described in e-mail messages, listed in Phabricator, named as a quarterly goal, chatted about informally for years, etc. From what I gather about discussions (e.g., at in-person events), the only questions that truly interest editors are:
- "Will I have to use this?" (Answer: No – or, at least, not for years and years), and
- "Will it just be plain old wikitext, and the wikitext on the screen is the wikitext that gets saved?" (Answer: Yes, that's how a wikitext editor works).
Beyond that, nobody really cares. From their POV, this is basically backend work, and of no more interest than the previous several times that the color of the editing toolbar was changed.
Thank you for explaining software development to me: it might help you in future to know that I have heard of such things. The suggestion that the community be involved in the design and planning stage is not synonymous with proposing "waterfall" to the exclusion of "agile", since in either model somebody has to describe the initial state. In the WMF version of the Agile model, that initial vision appears to be generated by staff behind closed doors. I believe that the community could and should be engaged at that stage too.
I note that you assert that the WMF approach "usually works well", whereas the community might not take such an optimistic view about such flagship projects as MediaViewer, VisualEditor, Flow, Gather and Workflow. I am not asking you to explain what method the WMF uses, so much as asking to explain why you think it achieves an optimal level of community engagement.
I also note that this particular project has been "mentioned" in various places and "chatted about informally". That does not sound like a serious and sustained attempt to secure participation by the community. Perhaps that's because you have assessed this as a project about which "nobody really cares". If so, why not just say so? Your response could have been summarised as: we do things the way we do them and we don't think you'll care.
I agree with your point, but let's not get combative here, over minor details which the WMF will happily tweak to our liking. It's more effective to make the case where it's a real issue.
I almost want to cry. In the above discussion I really really really wanted to give the WMF the benefit of the doubt, assuming good faith that the WMF would build a legitimate wikitext editor with no nasty surprises.
Whatamidoing, can you please please please go to the head of the New Wikitext Editor project and tell them that a new "wikitext editor" that doesn't have genuine wikitext support is a deal breaker? It has the same Parsoid-based fake wikitext that Flow has.
I suspect there would be a consensus against even having it in Beta-features until that "bug" is fixed. I've already saved a screenshot showing the New Editor botching the preview in nine different ways at once, and it's not hard to show lots more. A new wikitext editor that can't give accurate previews is a non-starter.
We were assured that "Will it just be plain old wikitext, and the wikitext on the screen is the wikitext that gets saved?" (Answer: Yes, that's how a wikitext editor works). Is that not the case? Perhaps the software development model that involves planning in consultation with users before rather than after doing the work has its merits after all.
I should clarify.
I didn't do in-depth testing because serious and assorted bugs made testing slow and frustrating. I didn't catch any cases of the New Editor mangling the wikitext itself the way Flow does, although I didn't focus on that. It's very possible that it does save the wikitext properly, at least for now. (*) It is however very clearly using Parsoid for the preview screen. This means there are a multitude of problems with the preview, such as dumping raw wikicode onto the screen, splitting one line into two, REF issues, wrong-color links, failing to show external links as external, broken links, all the way up to showing entire paragraphs in the wrong order, and more.
(*) The secret-which-must-not-be-spoken (but which occasionally gets mentioned anyway) is that the WMF is interested in the idea of eliminating wikitext as the underlying definition of pages. At that point every page would work like Flow, constantly mangling the wikitext itself. The ability to edit via the fake wikitext would get progressively more broken over time. This New Wikitext editor is definitely a step in that direction, trying to shift wikitext editing over to Parsoid. Basically, various flaws in the New Editor would be "fixed" once articles are no longer saved in wikitext.
The wikitext on the screen is the wikitext that gets saved, with the usual exception that the old parser (not Parsoid) performs all the usual pre-save transformations that you're used to from all of the old wikitext editors (e.g., to turn ~~~~ into your signature).
Parsoid is used for the preview, if you paste HTML into the wikitext editor, and (briefly) to render citations if you use citoid to generate a citation template. But if you don't like the wikitext that Parsoid provided you with, then you can change it manually. No matter how it got there – whether you typed it manually, copied it from another page, or used a Parsoid-oriented tool to create it – whatever wikitext is on the screen when you save the page is the wikitext that gets saved in the database.
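The "wikitext on screen is what gets saved, apart from pre-save transformations" behaviour described above can be sketched roughly as follows. The signature format here is an assumption for illustration; MediaWiki's real pre-save transform covers more cases (e.g., three tildes for name only, subst:, pipe tricks).

```python
# Rough sketch of a pre-save transform: the saved wikitext is the on-screen
# wikitext, with a handful of substitutions applied at save time.
# The signature format below is illustrative, not MediaWiki's exact output.
import datetime

def pre_save_transform(wikitext: str, username: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    sig = f"[[User:{username}|{username}]] {now:%H:%M, %d %B %Y} (UTC)"
    # Five tildes is timestamp only; substitute it before four tildes.
    wikitext = wikitext.replace("~~~~~", f"{now:%H:%M, %d %B %Y} (UTC)")
    return wikitext.replace("~~~~", sig)
```

Everything else in the buffer passes through unchanged, which is the property being promised here.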
@Whatamidoing (WMF), thanx for confirming the problem.
Can you answer whether they are willing to fix the preview? I see little chance the community is going to be willing to replace a working editor with a broken editor. I see no point in even enabling it as a beta feature unless this is fixed.
Edit: Added image showing a sample of how badly broken the 2017 New Editor's preview is. The left side of the image shows the New Editor preview, the right side shows the current editor preview, for the exact same wikitext. Note that problems range all the way up to dumping raw wikitext onto the screen, or even displaying lines in the wrong order!
Can you post the wikitext that you used to create these screenshots?
I tried pasting it into Flow's wikitext mode. Flow mangled/destroyed it in four places when previewed. (And the same would happen on save.)
I tried pasting it into Flow's VE mode, to see if it would accept it as WYSIWYG raw text, with whatever nowikis were needed to show it as-is. Nope, Flow decided the text was wikicode, converted it and destroyed it.
I figured I'd manually nowiki the whole thing. I did that in one tab, but I wanted to double check that Flow wouldn't still somehow mangle it. So I opened a second tab to paste it there, to do a preview-and-return-to-wikimode, so I could double check that it was identical in both tabs. In the new tab, Flow was broken. It was missing the button to switch between VE and wikitext modes. (I've hit this bug before.) Repeated reloads of the page didn't fix it.
Eventually I managed to get a second working Flow tab. I pasted it in with nowikis. The instant I clicked VE-mode to preview it, Flow mashed it into a mess filled with arrows.
@Whatamidoing (WMF): Can you post the wikitext that you used to create these screenshots?
After all of that, I hope you'll forgive me for taking a sliver of enjoyment in saying "No, I can't".
Building Flow with fake wikitext support was a severe design error. Aside from being god-awful for experienced editors, it effectively sabotages new users who attempt to learn/use wikitext. Even if it's unintentional sabotage.
In any case, the exact wikitext isn't really important. I just picked a few of the examples I happened to know. I'm sure the Parsoid team can give you a far longer list of examples than I could.
Why didn't you save the page on the Beta Cluster, so that other editors could see what you were doing?
Because I was testing preview, because there was no need to save to get test results, and because each additional load I had to do of new/old editors required constantly flipping preferences back and forth. It never crossed my mind to waste time doing so.
I don't think that showing other editors what you've tested is a waste of time.
May I suggest that arguing with a volunteer about the precise way they react to a series of demonstrated bugs, features and infelicities in a product under development is not the best way of encouraging the unpaid debugging effort that the WMF's interpretation of "Agile" requires. Would it not be more polite and more effective at some point to say "Thanks for pointing out all those problems, please can we have some more information to help us fix them more efficiently"?
As I understand it, the design principle behind the WMF's suite of current and proposed wikitext editors is that the database is and will continue to be wikitext, and that there will continue to be an editor which will allow contributors to directly edit the stored wikitext. It would be helpful to have a confirmation that this is correct, or if not, a public statement about what the prevailing design principle is. (I gather that for Flow this is not the case, and that the database stores some other data format.) I asked the ED for a statement on this subject in June. It turned out that we were largely in agreement in principle, and Katherine said that WMF "will need to consistently assess our efforts against our intentions, and calibrate accordingly". However, she has not yet had time to give the statement that I asked for.
Please would someone point to a current and authoritative statement on the editor product design philosophy currently agreed between the developers and the community?
The database is currently, and for the foreseeable future will continue to be, wikitext. For the foreseeable future, there will continue to be an editor which will allow contributors to directly edit the stored wikitext. Whether that will change next century, or even next decade, is beyond the abilities of my crystal ball.
I don't know anything about Flow's internals. Originally, there was talk about storing Flow's contents as HTML5, but I don't think that ever happened. I've never had the problems that Alsee reports with Flow's wikitext mode. (On the other hand, I don't use the visual mode as a preview; when I write a post in wikitext, I just save it. I suspect that Alsee would have been successful with his efforts if he'd just pasted his wikitext into wikitext mode and clicked the Reply button, without repeatedly trying to preview it first.)
From Flow/Architecture#Interactions with other systems – "A Flow board is different from a wiki page, it stores its content, revisions, and metadata in an external cross-wiki Flow database. If you query the MediaWiki API for a Flow board's content, or use Special:Export of a Flow board, you will see only a pointer to a UUID in this external database" and "On WMF wikis, Flow stores posts as html, using the output from Parsoid. When you edit a post, you see the original wikitext stored by Parsoid in HTML attributes." I hope that helps.
Thanks for the assurance. I was not asking for a prediction, or even a promise, but for a plan. Please will the WMF publish to the community the current agreed roadmap for development of editing software, and engage in a discussion with the editor community about whether the WMF plans and proposals are in line with the current needs, expectations and aspirations of the community?
If there is a plan but no such agreement, then the need for community engagement becomes that much more pressing. If there is no such roadmap, then I suggest that the WMF needs to stop development and start planning as a matter of considerable urgency.
Finally, let me remark that the only acceptable protasis to the phrase Alsee would have been successful with his efforts if ... in this context is if the various software components interacted correctly.
Flow mangled Link3, destroyed Red example, and added a random newline to Jack&Jill.
P.S. Jack&Jill renders one way on article pages, a different way in Flow preview, and a third different way when Flow-saved.
[<!--comment-->[Copy source|Link 3]] working as a link is a bug in the old parser. The nowiki tags that Flow added aren't necessary, because it ought not ever interpret [<!--comment-->[something]] the same way as it interprets [[something]].
It is not a bug that the article parser is smart enough to recognize comments, no matter where they are located. It is a bug/limitation of Parsoid that it's unable to cleanly skip comments.
And regardless of debate on how it should work, it is a clear and unacceptable bug that the preview does not match the saved page.
Rather than "the article parser is smart enough to recognize comments", it is actually a bug that "the old parser is too stupid to remember that the comment is present".
This really is quite an astonishing response. A volunteer tester points out that the behaviour of the new editor differs from that of the current one when rendering a particular kind of comment, and that this is a bug. A community liaison staff member argues that this difference is not a bug in the new editor but that it is, in their personal opinion, a bug in the current editor, then hastily files a bug report asserting that this behaviour in the current editor, which has apparently been in place for many years, needs to be changed right now. The reason adduced is that this behaviour, if used consciously, ought not to be relied on for its current result but ought instead to have been used for some other purpose, for which it need not and cannot be used.

There seems to be no analysis of whether the precise result of this behaviour is used anywhere, and if so whether changing it would break any existing use cases. Is it really so important that the difference in behaviour between the two editors be resolved in favour of the new software, not yet in production use, that the existing behaviour, which may or may not be relied on (we just don't know), needs to be changed right now? Is it not rather an opportunity to discuss with the people who actually use the current editor to generate the content that is on display to our readers, whether or not this behaviour is useful, whether it is used, what the desirable end state is, and what the consequences might be of making these changes?
It would have been far easier and far far more productive to say "Thanks for pointing out that discrepancy, it really shows how careful your testing is and we in the WMF are glad that our volunteer users are giving this new software such a thorough work-out. We'll discuss with the community what the most useful behaviour in this case and how best to define the intended behaviour of the current and new editors."
I've been unable to find any instances of anyone exploiting the comment-between-square-brackets bug for any particular purpose. The search string is insource:/\[\<\!\-\-\[/, if you're interested in searching yourself. I've checked several wikis, in excess of 15 million total wiki pages so far, and I've found it used in exactly one page (the sandbox of a blocked user, where it is almost certainly an accident, as the result is "]").
The reason to resolve this bug in favor of the software that's been in production for three years now (Parsoid) rather than the old parser, is that we already know that this behavior will not be supported in the long run. w:Bug-for-bug compatibility is not desirable when we know that the bug will be fixed in the old parser later and we have no reason to believe that anyone is actually using this behavior intentionally.
The same rationale applies to the odd #if: statement. There are many millions of wiki pages, so it might be present somewhere, but an #if statement with a hard-coded answer, to create an unbalanced fragment of HTML, is not really useful. (This would be useful in the context of a template that has parameters passed to it; it is not useful in a regular wikitext page.)
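For anyone who wants to run a similar check against their own wiki's dump, the comment-between-square-brackets case can be approximated with an ordinary regular expression. This sketch generalises the insource: search above by matching a full comment between the two opening brackets; the pattern and the sample page texts are illustrative only.

```python
# Approximate local equivalent of the insource: search above: flag any page
# whose text contains "[" + an HTML comment + "[", i.e. the edge case where
# the old parser still treats the pair as a wikilink opener.
import re

COMMENT_IN_BRACKETS = re.compile(r"\[<!--.*?-->\[", re.DOTALL)

pages = {
    "Normal": "[[Copy source|Link 3]]",
    "Edge case": "[<!--comment-->[Copy source|Link 3]]",
}
hits = [title for title, text in pages.items() if COMMENT_IN_BRACKETS.search(text)]
print(hits)  # → ['Edge case']
```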
I'm glad to hear that you have checked for possible reliance on this obscure edge case, although the Phab task would have been the place to report the results of your research. I don't know why you are so insistent that the behaviour of the old parser is a bug, or so keen to report it as such, when it seems clear that it is a plausible if unexpected behaviour which happens to be different to the new parser. It really does not encourage volunteers to do this sort of work if reports of this kind are turned into an argument which give the appearance of criticising volunteers for daring to suggest the existence of bugs in new software.
On the Jack and Jill example:
Is this actually the HTML (not wikitext) you typed?
<div><table><td><li>went up the hill<div> to fetch a pail of water </div>Jack and Jill </table></div>
When an editor types "went up the hill" as the first line, I think that editor would reasonably expect that to be the first line, not the second line, so I don't know why anyone would think it's wrong to present the text in the order you typed it. (The only difference I see between Flow here and the preview on the Beta Cluster is a faint line around the table, which is probably a matter of local CSS, not parsing.)
The only difference I see between Flow here and the preview on the Beta Cluster is a faint line around the table
Ah, I forgot to count the New Editor. That makes four different ways it's rendered.
- Article&real preview: "went" in the middle. "went" indented by 3 characters. Blue bullet.
- NWE preview: "went" on top. Nothing indented. Black bullet.
- Flow preview: "went" on top. Everything indented by 1 character. Black bullet. In a box.
- Flow save: "went" on top. Nothing indented. No bullets.
The difference between NWE and Flow preview might be CSS, as you suggested. The different line order between articles and the others is Parsoid. I have no clue why Flow drops the bullet on save. I'd guess it has something to do with Parsoid's strange attempt to put that bullet in some sort of negative-column position.
The color and indentation level of bullets, and whether a table without a class assigned defaults to having visible lines or not, is generally a matter of local CSS.
I believe this reordering is actually a misfeature of ancient tidy, which doesn't implement HTML5 semantics for <table> in the same way all modern browsers do. Whether you like it or not, hoisting things out of <table> is the modern accepted semantics. The PHP preview will match everyone else once we replace tidy, which Tim Starling is working on.
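The "hoisting" just mentioned is HTML5's foster-parenting rule: character data sitting directly inside <table>, outside any cell, is moved to just before the table. A toy illustration (this is not a real parser, and only handles this one simple shape of input):

```python
# Toy demonstration of HTML5 foster parenting: stray text directly inside
# <table> is hoisted to just before the table, which is why "Jack and Jill"
# ends up in a different place depending on which parser renders the page.
import re

def hoist_stray_table_text(html: str) -> str:
    m = re.match(r"(.*?<table>)([^<]+)(.*)", html, re.DOTALL)
    if not m:
        return html
    before, stray, rest = m.groups()
    # Move the stray text in front of the table, as modern browsers do.
    return stray + before + rest

print(hoist_stray_table_text("<table>Jack and Jill<td>went up</td></table>"))
# → Jack and Jill<table><td>went up</td></table>
```

A parser that does not apply this rule (like old tidy) leaves the text inside the table, hence the different line orders reported above.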
Yes, that's what I typed, except that Flow added an extra newline after Jill. The HTML was being used and discussed on EnWiki. I happened to try it in Flow and found the problem. I re-wrote it with "Jack and Jill" text just to make a nice example out of it.
I don't know why anyone would think it's wrong to present the text in the order you typed it.
There's an obvious reason. The preview is wrong by definition. Saying the preview is "correct" and articles are "wrong" is nonsensical. Readers see the articles, and the purpose of the preview is to show the editor what the article is going to look like for readers.
If you want to get into a discussion of how this particular case should render, then I suggest you grab a couple of aspirin and prepare for a headache talking to HTML-specialists. I half-understand it, and at this time I'm not looking to figuring out the other half.
The general point is that there are a large and unknown number of cases where the New Editor preview doesn't match the saved article. Using a different parser for articles and for article preview is just plain broken. Even if you want to claim something is a "bug" in how articles are rendered, the fact is that's how articles work. The fact is that editors learn that's how articles work. The fact is that editors either exploit or work around actual behavior when creating articles.
A broken preview is not just bad for experienced editors, it's particularly harmful for new editors. They're experimenting and learning. A critical part of that is having an accurate preview. We don't want new editors being driven off because they're frustrated and confused by a New Editor that lies to them.
It's really simple. There is an existing render that is inherently 100% accurate, it's a bad design not to use it. You want to help new editors? That's how you do it.
Let's try and step up a level here. Is it planned that the new editor should be capable of rendering what the current editor renders, and in the same way? That's a design decision that could and should have been taken early on. However the WMF interprets "Agile" (and I think the prevailing interpretation is not an entirely productive one), the answer to this cannot reasonably be: let's write some code and see whether it does or not.

If the new editor is supposed to render the HTML in the same way as the current one, then the behaviour reported is simply wrong. If the new editor is supposed to render some input the same and some differently, then the behaviour here may or may not be wrong, depending on the proposals for the scope of the differences to be tolerated between the two rendering engines.

That decision, and those differences, need to be made clear early on, so that the community can decide whether it is a blocker (see discussions around Technical Collaboration Guideline/Community decisions). If the community agrees in principle that it is willing to accept differences, and the scope of the differences proposed, then there needs to be a discussion leading to a clear description of what will and will not be rendered the same way by the two software systems. Currently there is a non-meeting of minds because these issues have not yet been surfaced.
So: is the new editor supposed to render the same code in the same way as the old one? A simple question. I hope someone can point to a clear, definitive and agreed answer.
If you want to be precise, I understand that the 2017 wikitext editor is not meant to render code at all.
Parsoid is meant to render wikitext into standards-compliant HTML (and standards-compliant HTML into passable wikitext). Its primary mission is being standards-compliant, rather than providing bug-for-bug compatibility with the old parser. NB that Parsoid contains some bugs (=unintentional, unwanted behaviors).
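For anyone who wants to inspect Parsoid's output directly: on Wikimedia wikis its transforms are exposed through the public REST API. A rough sketch follows; the endpoint path reflects the public rest_v1 API, but the payload shape is an assumption here, so check the REST API documentation before relying on it.

```python
# Hedged sketch of calling Parsoid's wikitext-to-HTML transform via the
# Wikimedia REST API. The JSON payload shape is an assumption for
# illustration; consult your wiki's REST documentation for the real contract.
import json
import urllib.request

def build_transform_url(wiki: str, src: str = "wikitext", dst: str = "html") -> str:
    return f"https://{wiki}/api/rest_v1/transform/{src}/to/{dst}"

def parsoid_render(wikitext: str, wiki: str = "en.wikipedia.org") -> str:
    """POST wikitext to the transform endpoint and return the rendered HTML."""
    req = urllib.request.Request(
        build_transform_url(wiki),
        data=json.dumps({"wikitext": wikitext}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; not invoked here
        return resp.read().decode()

print(build_transform_url("en.wikipedia.org"))
```

Comparing this output against the old parser's preview is exactly the kind of discrepancy-hunting discussed above.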
I do not know whether the primary mission for the old parser is also to render wikitext into standards-compliant HTML. (Its primary mission might have been, for example, to render something passable without killing the servers – a mission that would have been perfectly understandable at the time.) NB that the old parser also contains some bugs (=unintentional, unwanted behaviors) and some warts.
There is a project underway to eliminate these undesirable differences. However, "behave the same way" doesn't mean "make Parsoid just as broken and non-compliant as the old parser". Many of the examples Alsee gave above (e.g., the treatment of misplaced fragments of HTML) are likely to be resolved by making the old parser match the new one.
I aim to be precise, but have no desire to quibble about the difference between previewing and rendering, which is not the point here. The preview function of an editor must surely be designed to show the result of rendering it for the reader as accurately as possible.
No one has suggested that the new editor must repeat exactly the bugs of the old editor – that's an unhelpful strawman. It has been suggested, and seems to be agreed all round, that discrepancies be examined in case the old editor's behaviour is both plausible and relied on to a significant extent (it seems the first but not the second was the case under discussion here). It is not helpful to rush into assumptions that one or the other is at fault where they differ. Of course the old editor contains bugs, as presumably will the new one.
@Whatamidoing (WMF): Sorry if I'm being technically ignorant here, but why did the WMF feel the need to make four completely different parsers in the first place? Couldn't they have just debugged Parsoid until it worked properly and then replaced the article and preview renderers with it (or something)?
If there are specific problems detected, it is useful to have them logged in Phabricator as separate tasks, so each problem can be discussed and eventually resolved in a clearer context. #VisualEditor-MediaWiki-2017WikitextEditor was created a few days ago. There are already several tasks filed, some of them resolved.
Quite so, and it is unfortunate that this discussion got bogged down in a rather contrived argument about one particular controversial aspect of the difference between the two editors. The issue which was originally raised, which has not been adequately addressed yet, and which is of vital importance to the contributors and readers, is whether the new editor will support editors by giving an accurate preview of the text as they have written it for ultimate delivery to the readers, and whether it will store the text they have written in the way they need.

Taking these in reverse order: we have been told that the New Editor will store the wikitext in the same way as the old editor. There is a slight caveat that we have been told elsewhere that the WMF has a project to modify the definition and behaviour of wikitext, but that's not connected to the way the editors store it.

The principal issue of concern right now is how the New Editor will preview the text. It is imperative that this be as close as possible to the way it will subsequently be rendered for the reader. This is potentially a complete blocker: any major discrepancy would do serious damage to the way contributors work. We need a clear acceptance of that principle, and it would be highly desirable to see how it is proposed to ensure that previewing and rendering remain consistent if they are delivered by different software subsystems. (This principle, vital and obviously so as it seems to me, has already been eroded by the notion that Flow could or should be used to discuss article text.)

This discussion was started by a report that there were cases, possibly extreme or unlikely, possibly not (we don't yet have the data), in which there appeared to be serious discrepancies. The individual issues can and should be reported as individual bugs for investigation and resolution, possibly by fixing the old system, possibly the new, as required.
What is needed at this level is transparency about how the convergence is to be achieved and assured.
(Sorry, Flow timed out and then decided to post that again.)
Apply the same structure to team pages too? No Comm Tech section?
I like a lot what I see on this page. Thanks to everybody who worked on this!
I see sections for Discovery, Editing, Fundraising Tech and Reading. The header of https://www.mediawiki.org/wiki/Wikimedia_Engineering also lists Community Tech. For consistency, is there a reason CT does not have their own section here (yet?) with the same structure like for all other teams?
We have had a few discussions with the Community Tech team, and the current structure is what they prefer reporting-wise, so they remain under the Reading audience. We did give the wishlist a special call-out, and Reading can post status updates specific to Community Tech in their section. I'd love to see more as it comes, and I am sure we will.
I strongly disagree with adding yet more reporting burden to each of the departments' teams here. The whole point of this is to give a consistent overview; rolling it out to other teams is just make-work.
For Comm Tech, they are part of Reading and not reported distinctly.
@Jdforrester (WMF): I'd also disagree with more reporting burden, hence I proposed to reuse the data and structure already available on the Wikimedia Product page to be also displayed and used on each actual team page. And I'd hope that my additional request of having a "Contact" section is long-lived enough to not create too much burden either. :)
The structure of the page is not the burden; expectations of filling it in every day/week/month/quarter is.
If I get you right then that's a topic to discuss about the structure and content of this very page itself, and not about my proposal to re-use the content of this page also on each dedicated team page. :)
Each team has a landing page, lead, and Phab board listed. These are all core interaction points. On the landing pages, if it is not already there, a mailing list or lead should also be listed for contact. Are you suggesting something else?
@WMoran (WMF): I'm suggesting that each team page like Wikimedia Discovery, Fundraising tech, or Reading have a similar consistent structure like Wikimedia Product (Team head, contact info via IRC and mailing lists, relevant Phab boards, potential subteams, updates, goals), as interested people might not always have visited Wikimedia Product before ending up on each team page.
Plus I'm suggesting not to duplicate work, by not having such info manually inserted in two places (each team page and Wikimedia Product) but using transclusion so teams would only have to keep one place up-to-date.
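To illustrate the transclusion idea (the page and section names here are hypothetical, not existing templates), a team page could pull in just its own section from Wikimedia Product using labeled section transclusion, so the content is maintained in one place:

```
<!-- On [[Wikimedia Product]], mark the team's section with a label (label name is illustrative): -->
<section begin=Discovery />
... Discovery updates, goals, and contact info ...
<section end=Discovery />

<!-- On the team page, e.g. [[Wikimedia Discovery]], transclude only that section: -->
{{#lst:Wikimedia Product|Discovery}}
```

Plain whole-page transclusion (`{{:Wikimedia Product}}`) would also avoid duplication, but section labels let each team page show only its own material.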
Hope I made it a bit clearer now, sorry for any potential confusion. :-/
Please let us know if you have ideas for improvements to this page.
It takes over 10 mouse scrolls to get to the bottom of it for me now. It's a huge, unappealing wall of info, I think. Maybe a different structure with "tabs" like the one m:Tech/News has?
@Elitre (WMF) Moving info to subpages (to implement tabs) would make it harder to keep track of updates (for people who want to track everything), and also slightly harder to quickly check 1 or 2 sections. Instead of 10 mouse scrolls (or a click in the ToC), it would require 6-10 click-and-wait-to-loads. I'd have the same objections to using "collapsible" sections.
I think the detailed-ToC and long-page-of-text is optimal for some people, and non-optimal (unappealing) for other people. [ditto for encyclopedia articles...]. There isn't a simple or clear way to make everyone happy. :/
I think wanting to be able to track everything is a good use case, and one that could be easily achieved by transcluding stuff somewhere.
We'll see how well it works once the Wikimedia Resource Center, which links to this page, is launched :)
iOS downloads link is to a site requiring login
Is this intentional? Is there no other place on the Internet that defines what "iOS downloads" means? The "Android installs" link goes to a Google support page which doesn't require login, so I presume these links are not meant to go to WMF-specific data, but instead definitional pages.
It's not intentional - as far as I recall, the page didn't require login when I added the link here (and indeed according to the IA at least some previous versions were public).
I'm copying the most relevant part below:
Downloads - A download is counted when an app is downloaded for the first time from the app store.
- iOS - If a user downloads an app under one user account and installs it in multiple devices, this is considered as a single download. Please note that update figures are not included in this figure.
This definition differs slightly from some related ones. If you can find another place on the Internet that contains the same information, please let me know. If it turns out that there is none and that the login requirement isn't just a temporary glitch on App Annie's site, I may add a summary in a footnote instead (disadvantage: would need to be updated in case App Annie makes changes to their definition).
Hey Greg, this is not intentional. Are you seeing this page when you click on the link? Is it the "iTunes Preview" web page asking for a login, or does it automatically open iTunes on your computer and then prompt for credentials? This is very curious and I'd like to help figure out what's going on. My mistake, I misunderstood the question.
Commons new active editors
By any chance, did the steep decline in Commons' new active editors happen to begin on approximately 14 July 2016? And is it now getting steeper as of 05 August 2016?
@Whatamidoing (WMF), thank you! That's a very interesting hypothesis, and there's nothing an analyst loves more than that :) As a side note, do ping me in the future when you ask data questions—I wouldn't have seen this if someone else hadn't pointed me to it.
However, it doesn't really look like cross-wiki uploads are responsible. If you look at the data below, you see these big spikes (bolded) that last for a month or two and then disappear. They usually happen in September and May-June, so I'm pretty sure that Wiki Loves Earth and Wiki Loves Monuments are responsible.
In addition, consider that cross-wiki uploads are likely to be made by users who are already making edits on another project. To get the global number of active editors, I aggregate contributions across all projects so as long as you make 5 edits on some combination of wikis, you'll be counted. So, while I think the block on cross-wiki uploads is bad for new users and likely to harm us long-term, I don't think it's causing these big spikes.
[Table: month vs. new actives on Commons; data rows not preserved in this archive]
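The aggregation described above (an editor counts as active once they reach five edits across any combination of wikis) can be sketched as follows. This is a minimal illustration with invented sample data; the function name and the tuple format are mine, not from any actual analytics pipeline:

```python
from collections import Counter

def active_editors(edits, threshold=5):
    """Return the set of users with at least `threshold` edits
    summed across all wikis.

    `edits` is an iterable of (user, wiki, edit_count) tuples.
    """
    totals = Counter()
    for user, _wiki, count in edits:
        totals[user] += count
    return {user for user, total in totals.items() if total >= threshold}

# Hypothetical sample: 3 edits on Commons plus 2 on enwiki crosses the bar,
# while 4 edits on a single wiki alone does not.
sample = [("Alice", "commonswiki", 3), ("Alice", "enwiki", 2),
          ("Bob", "commonswiki", 4)]
print(active_editors(sample))  # {'Alice'}
```

This is why a user whose cross-wiki upload is blocked can still be counted as active, provided their edits elsewhere reach the threshold.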
The cross-wiki upload tool is primarily used by new editors, and the abuse filter is focused on editors whose accounts are less than six months old and who have made fewer than 50 edits at Commons. Looking at CentralAuth for the last handful of users caught in the abuse filter, 60% of them (in my sample) currently have ≤4 edits (ever) across all projects. If they stop today, then those 60% won't be "new active editors". If their uploads had worked, then some fraction of them probably would be.
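The eligibility window described above can be restated as a simple predicate. This is a sketch only: the function name is mine, the 183-day figure approximates "six months", and the real check lives in an AbuseFilter rule, not Python:

```python
def matched_by_filter(account_age_days: int, commons_edits: int) -> bool:
    """Hypothetical restatement of the filter criteria described above:
    accounts less than six months old (~183 days) with fewer than
    50 edits at Commons."""
    return account_age_days < 183 and commons_edits < 50

# A two-week-old account with 10 Commons edits would be in scope;
# a year-old account would not.
print(matched_by_filter(14, 10))   # True
print(matched_by_filter(365, 10))  # False
```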
@Whatamidoing (WMF) yes, I definitely agree that the cross-wiki upload block will reduce the number of new active editors at the margins just by preventing those specific edits to Commons, not to mention that another limitation on the capability of new editors won't help our problem with retaining promising new users. But it isn't the major factor in the big month-to-month drop we just saw.
Tying in to another page
I'm repeating this here, as the original post is archived.
There is a portal-type page that I'd like to ensure we're using effectively, either in tandem with this page or in order to integrate the work being posted to this page. Can we discuss that in the not-too-distant future? (Not a rush, but I want to post here to get the ball rolling.) I'd love to see all of the product information more easily accessible to communities who are not often on MediaWiki, so I'm hoping to find time for a long-term pet project to make the portal more visible and useful. I'd love to see how that page and this one can complement one another. -Rdicerb (WMF) (talk) 23:07, 25 November 2015 (UTC)
Really slow on the reply here, I apologize; I was having some off-page discussions around this and realize I should summarize here as well. The expectation is still for each PM/PO or feature area to continue to submit relevant items to Tech News and Communications as appropriate. This page is designed to reduce the multiple information points into one for the core product audiences. It also provides the chance to have additional discussions in one place relating to Product. We will continue to explore efforts like the portal to make sure we are communicating in the right places.
Openness policy on showcase?
Recent: November 16, 2015 (Watch the showcase): This video is not public.
Recent: October 28, 2015 (Watch the showcase): This video is public.
My guess is that one of the two videos has incorrect access settings, and I hope that these showcases mentioned on public pages should also be public. Can this be resolved, and can we make sure that future showcases have the correct access settings?
They are public. Not sure why the URL changed. Updated.