Topic on Talk:ORES/Paper

Progress catalyst: Standpoints that haven't been operationalized now can be

EpochFail (talkcontribs)

Riffing off of Technology is lagging behind social progress, is the idea of a "progress catalyst" (better name ideas?) -- that by reducing some barriers to innovation in the space of quality control tools, ORES opens the doors for new standpoints, new objectivities (operationalizations), and therefore the expression of values that have until now been silenced by the difficulty and complexity of managing a realtime prediction model.

JtsMN (talkcontribs)

One way of thinking about this (and I think there are relationships to the two points above as well) is "what affordances does ORES provide?" As a "progress catalyst", ORES affords the community the ability to leverage prediction models.

EpochFail (talkcontribs)

I'd like to turn this discussion towards the term "conversation" because I have found that it helps explain what I'd hoped would happen when building ORES. I'd like to put forth the idea of a "technological conversation". I see this process as better described by "access" than "affordance". When I say "technological conversation", I imagine the expression of ideas through designed "tools", with new "tools" innovating in response to past "tools". (anyone know of any lit comparing innovation markets to a conversation and tracking design/affordance memes between, say, phone apps or something like that?)

Back before ORES, there were affordances that allowed the community to use prediction models, but one needed to engage in a complex discipline around Computer Science to do so effectively. The obvious result of this is that only computer scientists built tools that used prediction models to do useful stuff. Their professional vision was enacted, and the visions/values/standpoints of others were excluded because they were not able to participate.

OK. Now looking at this like a conversation... Essentially, the only people who were able to participate at first were the computer scientists who valued efficiency and accuracy -- so they built prediction models that optimized these characteristics of Wikipedia quality control work (cite the massive literature on vandalism detection). We've seen that this has been largely successful -- their values were manifested by the technologies they built. E.g. when ClueBot NG goes down it takes twice as long to revert vandalism (cite Geiger & Halfaker 2013, "When the Levee Breaks"). These technologies have somewhat crystallized and stagnated design-wise -- we have a couple of auto-revert bots and a couple of human-computation systems to clean up what the auto-revert bots can't pick up. (We can see the stagnation in the complete rewrite of Huggle that implemented the same exact interaction design.) Snuggle is a good example of another Computer Scientist trying his hand at moving the technological conversation forward. While full of merits, this was more of a paternal approach of "I'll give you the right tool to fix your problems." While I believe that Snuggle helped push the conversation forward, it didn't open the conversation to non-CS participants.

OK onto the progress catalyst. To me, ORES represents a sort of stepping-back from the problem I want to solve (efficient newcomer socialization and support) and embracing the idea that progress is the goal and that I can't be personally responsible for progress itself. We CS folk couldn't possibly be expected to bring all of Wikipedians' standpoints to a conversation about what technologies around quality control/newcomer socialization and other social wiki-work should look like. So how do we open up the conversation so that we can expand participation beyond this small set of CS-folk? How about we take out the primary barrier that only the CS-folk had crossed? If we're right, non-machine-learning-CS-folks will start participating in the technological conversation, and with them, we'll see values/standpoints that we CS folk never considered.

One thing that makes this really exciting is the risk it entails. You lose control when you open a conversation to new participants. Up until now, I've been a relatively dominant voice re. technologies at the boundaries of Wikipedia. I have a set of things I think are important -- that I'd like to see us do ("Speaking to be heard"). But by opening things up, I enable others to gain power and possibly speak much more loudly than I can. Maybe they'll make newcomer socialization worse! Maybe they'll find newcomer socialization to be boring and they'll innovate in ways that don't help newcomers. That's the risk we take when we "Hear to speech". I'm admitting that I don't know what newcomer socialization & quality control ought to look like and I'm betting that we can figure it out together.

205.175.119.191 (talkcontribs)

I think there are obvious parallels here to wikis in general--they reduce barriers to designing individual web pages as well as the organization of entire web sites, allowing more people to participate directly in both the creation of content and the way that content is organized and presented. New genres of website emerged--e.g. a collaborative design pattern repository, then a "crowdsourced" encyclopedia, then the specialized open wikis created by interest-based communities. I also like your contrast between paternalistic approaches (Snuggle, and also TWA to be fair) and more open-ended, more hands-off, less explicitly directive approaches. ORES definitely fits that model. Still (and we've talked about this before), isn't it still "libertarian paternalism" in the sense that you're providing defaults (e.g. a good faithiness score, maybe less stringent default vandalism thresholds) that you hope will nudge people towards behaving differently than they might otherwise? Aren't you still embedding values in ORES, albeit somewhat different ones and more loosely? (Jonathan, logged out, on phone)
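To make the "defaults as nudges" point concrete, here is a minimal sketch of how a tool built on top of ORES-style scores might triage edits. The score structure loosely mirrors ORES's goodfaith/damaging model output, but the threshold values, function name, and triage categories are all illustrative assumptions, not ORES's actual shipped defaults -- the point is just that whoever picks the defaults is embedding a value choice.

```python
# Hypothetical sketch: a patrolling tool consuming ORES-style scores.
# The thresholds below are illustrative assumptions, not real ORES
# defaults -- changing them changes whose edits get flagged, which is
# exactly the "embedded values" concern raised above.

DEFAULT_DAMAGING_THRESHOLD = 0.9   # "less stringent" default: fewer auto-flags
DEFAULT_GOODFAITH_THRESHOLD = 0.5  # assumed default for assuming good faith


def triage(scores):
    """Decide how a tool might queue an edit given ORES-style scores.

    `scores` maps model name -> probability of the "true" class,
    e.g. {"goodfaith": 0.92, "damaging": 0.03}.
    """
    if scores["damaging"] >= DEFAULT_DAMAGING_THRESHOLD:
        return "flag for review"       # high-confidence damage
    if scores["goodfaith"] >= DEFAULT_GOODFAITH_THRESHOLD:
        return "leave alone"           # probably a well-intentioned edit
    return "route to mentoring queue"  # good faith in doubt, not clearly damaging


print(triage({"goodfaith": 0.92, "damaging": 0.03}))  # leave alone
print(triage({"goodfaith": 0.20, "damaging": 0.95}))  # flag for review
```

Lowering DEFAULT_DAMAGING_THRESHOLD would make the tool more aggressive; raising DEFAULT_GOODFAITH_THRESHOLD would route more newcomers into review. Either way, the default-setter's values shape the outcome even when downstream tool builders never touch the model.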
