Topic on Talk:Reading/Web/Projects/Related pages/Flow

Redundant, confusing and overall useless

TMg (talkcontribs)

I know these "more" suggestions from the Wikipedia Android app. I use the app almost daily. But I never, ever - not even once - clicked one of these links at the bottom of an article. I really wish I could disable them in the Android app. For me, they are noise and waste bandwidth because my phone downloads three thumbnails on each article page. The links never add anything I could make use off.

  • In some cases they are plain redundant. They just repeat what's written in the paragraphs above. And most articles are short, so the repetition is on the same screen (at least on desktop). The only differences are:
    • The links are ripped out of context. Usually they do not make any sense when you have not read the article before.
    • They are distracting, because they are preceded by a random (?) image from each article. No other link in the article does this. Why these three? Why are they so much more visual than all other links in the article? In some cases these three links are more prominent than the whole article. And who picked the images? How can I fix or remove non-helpful images?
  • In some cases they link to more specific articles, and in other cases they link to a wider topic. There seems to be no system.
    • Is this section meant as some kind of "if you have problems understanding this narrow topic, you may want to read these wider topics before"? But why are these links at the bottom of the page then? And why are they generated by an algorithm? How can an editor change them to be more useful?
    • Or is the selection based on backlinks to form some kind of "here are some more specific topics that refer to the article you just read"? I can't tell.
    • Or is this meant as a replacement for the "see also" sections? Why doesn't it do what "see also" is supposed to do then?
  • In many cases it's completely unclear where the links come from, how and why they are picked and what their relationship to the article is. Who picked the links, how and based on what information? They should help readers to learn something, shouldn't they? How can an algorithm know what's helpful for a reader? In many cases not even an experienced editor knows that.

TL;DR: Unclear how the links are picked and what they represent. Counterproductive. Instead please make this an editing tool to suggest missing links.

The success criteria are an insult to every editor who cares about high-quality links in articles, because quality is not even mentioned there. I can set up a spambot that will make you reach these criteria. Will you still call it a "success" then and enforce it on the communities?

Jkatz (WMF) (talkcontribs)

Thanks for your comments, TMg and those who have chimed in with agreement. I encourage you all to take a look at my response and let me know if you continue to have concerns.

TLDR: I am sorry you do not like it, and the specifics of your dislike are helpful. An icon link to an explanation of the feature might be very helpful. It seems, however, that our readers really like this feature. I am not sure if your concerns about the logic behind the selection are blockers for you or simply questions, but I answered them as best I could.

Overall: A particular user (like yourself) not liking the feature or not finding it useful is valuable feedback, and it would be foolish and immoral to try to convince you that you do like it. I want to be clear that your particular taste is treated as feedback we can use to improve the feature, but not necessarily as a blocker, since others clearly appreciate it (more on that below). I do, however, want to make sure that I have addressed any concerns you have about the experience from an ethical, philosophical or contributing perspective.

There's a lot in here, but I will try to respond to your points in order.

I know these "more" suggestions from the Wikipedia Android app. I use the app almost daily. But I never, ever - not even once - clicked one of these links at the bottom of an article. I really wish I could disable them in the Android app. For me, they are noise and waste bandwidth because my phone downloads three thumbnails on each article page. The links never add anything I could make use off.

I'm sorry you don't find them useful on Android. No feature is going to make everyone happy, but this feature gets quite a lot of engagement, and that engagement has not diminished over time. Readers click through on ~10% of all the times the 'read more' feature is shown. This may seem low, but it is an order of magnitude higher than even the top blue links in an article, and these appear at the bottom. Here is a link to the click-through rates (CTR) of the last 6 months on Android--the web data is still too young to show this trend, but it mirrors these results closely so far. The only difference worth noting is that desktop does not see as much engagement as mobile...but we are also tweaking the desktop design to be more desktop-friendly.

Now--I agree that the CTR is at best a proxy for user value, but given that it is not decreasing over time and that the feature sits at the bottom of the article (and is therefore less likely to cannibalize other links--more on shorter articles in a bit), one can infer that users continue to find the links helpful. If anything, the CTR is increasing, which suggests that users are more likely to keep clicking these links over time, which further suggests that they are valuable. If someone does something and it was a mistake, they are less likely to repeat it.

  • In some cases they are plain redundant. They just repeat what's written in the paragraphs above. And most articles are short, so the repetition is on the same screen (at least on desktop).

Most articles are short, but the vast majority of pageviews (>90%) come from ~0.25% of articles. I don't have the query handy, but I can get it for you if that would be helpful. These most-popular articles tend to be at least several screens long (anecdotal, but a cursory look at http://top.hatnote.com/ shows this to be the case).

The only differences are:

  • The links are ripped out of context. Usually they do not make any sense when you have not read the article before.

Lack of context does not seem to be an issue for our users, as reflected by the CTR.

  • They are distracting, because they are preceded by a random (?) image from each article. No other link in the article does this. Why these three? Why are they so much more visual than all other links in the article? In some cases these three links are more prominent than the whole article. And who picked the images? How can I fix or remove non-helpful images?
  • The images are not random--each one is the first image in its article. We are working on a wikitext or Wikidata way for an article's editors to specify the image to use.
  • The links are more visual than the other links in the article because (a) we think it looks good, and we believe that hovercards on blue links are the best way to improve the 'preview' experience within the article itself, and (b) applying this level of visual attention to every link in the article would be very disruptive, whereas at the bottom, for only three links, it is far less so.
  • Decreasing the prominence on very short articles is interesting...I hesitate to use a less conspicuous design for smaller articles, however, because the smaller the article, the more helpful this feature is, and I am reluctant to create inconsistent designs for differently sized articles.
  • In some cases they link to more specific articles, and in other cases they link to a wider topic. There seems to be no system.

You are right, the morelike API we use does not distinguish between topic levels (broad vs. specific). Do you feel that it should be set to one specific mode? As a side note, we have talked recently about the possibility of highlighting core principles/topics that are suggested for comprehension of the article, but those would not go at the bottom of the article.
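For the curious, the suggestions come from CirrusSearch's "morelike:" full-text search feature, which anyone can query through the public Action API. Below is a minimal sketch of that kind of query; the function name, the sample title and the limit of three results are placeholders for illustration, and the exact parameters the production feature uses may differ.

// Rough TypeScript sketch of a "morelike" query against the public API.
// Names, title and limit are illustrative only; not the feature's own code.
async function fetchReadMore(title: string): Promise<void> {
  const params = new URLSearchParams({
    action: "query",
    format: "json",
    origin: "*",                      // CORS header for browser use
    generator: "search",
    gsrsearch: `morelike:${title}`,   // CirrusSearch "more like this" query
    gsrlimit: "3",
    prop: "pageimages",               // thumbnail shown with each suggestion
    piprop: "thumbnail",
    pithumbsize: "160",
  });
  const res = await fetch(`https://en.wikipedia.org/w/api.php?${params}`);
  const data = await res.json();
  for (const page of Object.values(data.query?.pages ?? {}) as any[]) {
    console.log(page.title, page.thumbnail?.source ?? "(no thumbnail)");
  }
}

fetchReadMore("Coffee");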

  • Is this section meant as some kind of "if you have problems understanding this narrow topic, you may want to read these wider topics before"? But why are these links at the bottom of the page then? And why are they generated by an algorithm? How can an editor change them to be more useful?
  • No, the section is meant as a "you're done with the article; if you got to the bottom you might be hungry for more related articles...here's what we suggest". There is a purpose to having this at the end of the article, which is to catch those users who have reached the end and may be looking for something else to do...putting it higher up in the article is certainly something we have considered and might test on the iOS app, but we wanted to see the results on web before promoting the feature any further.
  • They are generated by an algorithm because this auto-populates every page, but they can be overridden by editors--we need to make this clearer in the 'about' section I mentioned above. I actually think that making them editable is a problem because:
    • the results are not automatically updated
    • it means that improvements we make to the algorithm/selection will be lost to pages where an editor has overridden the automated selection.
    • right now the manual selection option only applies to the web (not the apps), which is misleading
    • some editors have requested that we do automated refreshes every time you load the page--this would not be possible if the selection were manual.
  • Or is the selection based on backlinks to form some kind of "here are some more specific topics that refer to the article you just read"? I can't tell.

I answered this above.

  • Or is this meant as a replacement for the "see also" sections? Why doesn't it do what "see also" is supposed to do then?

I think you're right that they are similar. I explain how I think this adds value here: https://www.mediawiki.org/w/index.php?title=Topic:Suqj6do13qpmlerd&topic_showPostId=suqohefbah3egyu0#flow-post-suqohefbah3egyu0

  • In many cases it's completely unclear where the links come from, how and why they are picked and what their relationship to the article is. Who picked the links, how and based on what information? They should help readers to learn something, shouldn't they? How can an algorithm know what's helpful for a reader? In many cases not even an experienced editor knows that.

For the decision-making, see Reading/Web/Projects/Related pages and Help:CirrusSearch. As to how an algorithm can read minds, I agree that this is a challenge ;), but some advantages are that the bias of the algorithm is consistent, amoral, neutral and improvable, whereas an editor will bring their inherent personal biases to article selection. Again, it is editable, but I actually think this aspect might be something to consider rolling back.

TL;DR: Unclear how the links are picked and what they represent. Counterproductive. Instead please make this an editing tool to suggest missing links.

The success criteria are an insult to every editor who cares about high-quality links in articles, because quality is not even mentioned there. I can set up a spambot that will make you reach these criteria. Will you still call it a "success" then and enforce it on the communities?

I agree that CTR is totally gameable, but if the feature is not providing value, the CTR drops over time (as users realize that the quality is not there). Similarly, I think placing these at the bottom of the article helps ensure that we are not cannibalizing links, since the majority of pageviews occur on very long articles (many screens long). That being said, I am open to suggestions. How do you think we should alter our criteria?

Lastly, I am about to head out on paternity leave and will not likely be able to respond further for a month(!). If you have material responses, please ping @ABaso (WMF).

TMg (talkcontribs)

I find the way you must have read my post very frustrating. It was not meant to tell you my personal "taste". It appears you have not understood the core arguments I gave. I can't find substance in your response other than you using "a measure of the success of an online advertising campaign" as an argument. This scares me so much that I did not feel like a response would do anything other than raise the frustration I already felt. If shooting down alienated community members with marketing babble was your goal, congratulations. However, I tried to sum up my findings again.

Jkatz (WMF) (talkcontribs)

Hi @TMg, I am sorry you found my response frustrating and lacking substance. I did not mean to focus, nor do I think I did focus, on your personal distaste, but since you opened with a story of personal experience, I wanted to honor it by acknowledging and responding to it. I also apologize for using marketing babble. Being a professional product manager, it can be hard to remember which terms are normal and which are industry-specific. As to the use of click-through rate as a metric, it is admittedly a bad proxy for 'learning', which is our primary goal. Unfortunately, we do not have a lot of other metrics at our disposal. I spent a lot of my weekend reading, thinking and responding to your questions, and my primary intent was to give additional context in the hope that it would help you understand why we believe in this feature. I am disappointed it felt to you like I was 'shooting you down'.

Have you seen Reading/Web/Projects/Related pages#Initial Community Feedback? It is my attempt to summarize the issues raised so far. Do you feel it is missing anything important?

TMg (talkcontribs)

Responding to the last question, if the "Initial Community Feedback" section misses anything:

You are at least honest there. By writing that you thought it was a no-brainer, you are telling me you have no idea what the product you are messing with is, how it was built, by whom, and why. Let me say it loud and clear: Wikipedia was not built to make people not leave the site.

But this is your only metric.

Many sentences you wrote raise my blood pressure. Don't you get that you are messing with content? We, the community, are the content creators. We do not care if, to take an extreme example, a nude image at the bottom of each article makes users happy. This is not a metric we are aiming for. This is exclusively your metric, the WMF's metric. I really do not know how I can be more constructive when everything you do and say is based on a wrong assumption.

Jkatz (WMF) (talkcontribs)

Hi @TMg. I think I understand your position a lot better now and, in recognition of your growing frustration, will refrain from responding further. But I do want to ask you: what do you think our metric(s) should be? If we aren't going to make product decisions based on user satisfaction as derived from either interaction with a feature or time spent on the site, what do you think we should use as a guiding metric? Increasing learning is the mission of my team, but I don't know how we measure it.

TMg (talkcontribs)

It appears your approach is to start with a metric instead of asking what problem you want to solve. It appears you think that helping readers understand a topic is something you can sell. Tell me: how do three random links help me understand the article I'm currently reading? I have no idea. Tracking mouse clicks won't tell you something not even the user doing the clicking understands. One possible way forward involves talking to actual people, like our most active Wikipedians do when they visit children and teach them what Wikipedia is and how they can use it in school to their advantage. There is so much you can and should learn before you start changing content in a way that was never intended by the authors. This will not make bad articles better. But it will make good articles worse.

Wow. Now our conversation feels like I'm helping somebody make their first edit.

Jkatz (WMF) (talkcontribs)

"Wow. Now our conversation feels like I'm helping somebody doing it's first edit."

Yes, the level of condescension was palpable - it is a conversation stopper.

TMg (talkcontribs)

I'm sorry? It was you asking these questions.

You wrote, "what do you think we should use as a guiding metric?" Guiding metric for what? Really, what's your goal? I really don't know. I explained why I think "tricking people into not leaving the page" is a bad, even harmful goal. So what's your goal instead?

You wrote, "I don't know how we measure it". I'm sorry? What response do you expect to that question, other than "just talk to people"?

Ruud Koot (talkcontribs)

I think measuring whether people are truly reading the articles they land on is a good proxy for whether people are learning (sidestepping the question of whether all articles are equally educational; that's largely a value judgement). But reading should be differentiated from skimming, from having an article sit idle in the background, and from people just quickly clicking through to the next article.

I think that if people are scrolling through an article at a rate consistent with a plausible reading speed, that would be a good indicator that they are actually reading it at that moment. But this is complicated by the fact that people will probably read only part of the article and skim through other parts, or the article may be too short to require scrolling at all.
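A very rough sketch of the kind of instrumentation I have in mind follows; the reading speed, thresholds and function names are numbers and labels I made up purely for illustration, and it ignores the idle-background-tab case (which would need something like a visibilitychange handler).

// Hypothetical sketch: classify a page visit as "read", "skimmed" or
// "bounced" by comparing how far the reader scrolled with how long a
// plausible reading speed says that much text should have taken.
const WORDS_PER_MINUTE = 230;   // assumed average reading speed

function trackVisit(wordCount: number): () => string {
  const start = Date.now();
  let maxScrollRatio = 0;       // deepest point reached, 0..1

  window.addEventListener("scroll", () => {
    const doc = document.documentElement;
    const ratio = (window.scrollY + window.innerHeight) / doc.scrollHeight;
    maxScrollRatio = Math.max(maxScrollRatio, Math.min(ratio, 1));
  });

  // Call the returned function when the reader leaves the page.
  return () => {
    const minutesSpent = (Date.now() - start) / 60000;
    const minutesNeeded = (wordCount * maxScrollRatio) / WORDS_PER_MINUTE;
    if (maxScrollRatio < 0.2) return "bounced";
    if (minutesSpent < 0.3 * minutesNeeded) return "skimmed";
    return "read";
  };
}

// Usage: const finish = trackVisit(articleWordCount);
//        window.addEventListener("pagehide", () => console.log(finish()));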

I'd actually be very surprised if there didn't already exist a sizeable amount of research on this topic from academia and industry. A quick Google search turned up the following:

I think other content-oriented sites like Medium.com have the exact same problem. Perhaps you can contact them?

Jkatz (WMF) (talkcontribs)

Thanks @Ruud Koot. This is helpful food for thought.

The only pushback I would offer is that Medium and most other for-profit orgs can simply use pageviews as a metric, because that is how they sell ads or justify increased investment. We are looking for something less tangible, 'learning', for which pageviews/time spent is a good but incomplete proxy.

I will check out these links, thanks!

Ruud Koot (talkcontribs)

Yes, true. But I'd expect that a reader who is truly engaged with and learning from your content is, in the long term, going to be more valuable to you than one who is merely generating a lot of pageviews. The former is going to stay with you; the latter may well tire of you and move on to your competitor. I'm pretty sure that Gawker or Buzzfeed would trade their own content for Wikipedia's in an instant and slap some ads on it, if it weren't for Google penalizing them for doing exactly that. So I wouldn't be completely shocked if they have some research or software lying around to help measure this, either.

And while I'm throwing ideas out there: I think it wouldn't be too hard to find some academics who would be very interested in helping to answer this exact question. It's pretty hard to get your hands on a dataset as large as Wikipedia's reader base. They get their publications, you get your metrics.

Riba (talkcontribs)

I totally agree with the comments of TMg. The links are useless and badly chosen. I do not see the need for this functionality.

Gerardduenas (talkcontribs)

I completely agree with TMg.

ErikWoeller (talkcontribs)

But I like the suggestions in the Android app more than here.

Jdlrobson (talkcontribs)

Could you elaborate on this? The Android app and web experience use exactly the same source of articles, so I'm curious as to what the Android app is doing better for you.

Reply to "Redundant, confusing and overall useless"