Talk:ORES/Feature injection

Review notes
Adamw (talkcontribs)

Thanks for writing this great primer! Here are a few notes:

  • I'm not sure we can say that our feature injection is "unique"; it seems closely related to existing counterfactual methods for generating explanations. I imagine that what's actually unique is this: most ML systems either take an identifier and extract the features opaquely and internally, or accept the features as direct inputs. We're doing something interesting by providing ID-based scoring with access to features, and, as you point out, it goes beyond feature injection by providing transparent access to the extracted features. Maybe we should cite writing about counterfactuals to help put our capabilities in context?
  • This might be a good opportunity to add one more stage to the image, showing how the model outputs floating-point predictions which are then compared against a threshold. Also fine if you want to avoid the complexity.
  • "if half of those paragraphs had exactly one templated reference"—but in the example features it looks like we're assuming every paragraph has a reference, rather than half of the paragraphs.
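To make the injection mechanism concrete, here is a rough sketch of what a feature-injection request could look like. The endpoint shape and the feature name used here are assumptions for illustration, not verified against the live API:

```python
from urllib.parse import urlencode

# Hypothetical sketch of an ORES feature-injection request. The base URL,
# model name, and feature name below are illustrative assumptions.
BASE = "https://ores.wikimedia.org/v3/scores/enwiki"

def injection_url(rev_id, model, injected_features):
    """Build a scoring URL that overrides extracted features via query params."""
    params = {"models": model, "revids": rev_id, "features": "true"}
    # Injected features are passed as `feature.<name>=<value>` pairs,
    # replacing the values the service would otherwise extract itself.
    for name, value in injected_features.items():
        params[f"feature.{name}"] = value
    return f"{BASE}?{urlencode(params)}"

url = injection_url(123456, "damaging", {"revision.user.is_anon": "true"})
print(url)
```

The point of the sketch is the shape of the interaction: the caller still scores by revision ID, but any extracted feature can be selectively overridden.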
Halfak (WMF) (talkcontribs)

I'm not sure that the point of this documentation is to make an academic case for novelty. AFAICT, there are no other public services that allow for feature injection like this.

Adamw (talkcontribs)

I see your point, that ORES is unique in how much we expose to the public.

What I'm trying to point out is that "feature injection" is our name for a subset of common methodologies, for example using "counterfactuals" to study bias (e.g. THEMIS) or to provide interpretations (e.g. LIME). People reading our documentation would benefit if we pointed them to the existing literature on similar techniques, rather than emphasizing that we're doing something unique and giving it a name that won't turn up anything useful in a web search. Allowing manipulation of "root" and transformed features is the obvious default for backend code, since it's the simplest way of interacting with an ML model: scikit-learn accepts a transformed feature vector, and commercial implementations such as Amazon Machine Learning accept the root feature vector (see https://docs.aws.amazon.com/machine-learning/latest/dg/creating-a-data-schema-for-amazon-ml.html).
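For concreteness, here's what counterfactual probing amounts to mechanically: hold an input fixed, flip one feature, and compare the predictions. The model below is a hand-weighted toy stand-in (the weights and feature names are invented for illustration; a real audit would probe the deployed model):

```python
import math

# Toy stand-in for a damage classifier: a hand-weighted logistic scorer.
# Weights and feature names are invented for illustration only.
WEIGHTS = {"chars_added": 0.002, "is_anon": 1.5, "refs_added": -0.8}
BIAS = -1.0

def score(features):
    """Return a toy probability of 'damaging' for a feature dict."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# Counterfactual probe: the same edit, with only `is_anon` flipped.
edit = {"chars_added": 40, "is_anon": 1, "refs_added": 1}
counterfactual = dict(edit, is_anon=0)

delta = score(edit) - score(counterfactual)
print(f"effect of is_anon on damaging-probability: {delta:+.3f}")
```

This is essentially what THEMIS-style fairness testing automates: generate pairs of inputs differing only in a protected attribute and measure how often the prediction changes.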

Halfak (WMF) (talkcontribs)

That's a good point. I'd be down with switching to more common terminology; I just don't think there is good common terminology in use here.

I don't see "counterfactuals" as capturing what is happening here, and we're doing something different from LIME. Not sure about THEMIS.

"Dependency injection" is nice and general, but it's maybe a bit too general for our use. "Feature injection" is a natural name because it's dependency injection of features for prediction.
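The analogy can be sketched in a few lines: a scorer that extracts its own features by default, but lets the caller inject replacements, exactly as dependency injection lets a caller substitute a collaborator. Function and feature names here are hypothetical, not ORES internals:

```python
# Sketch of "feature injection" as dependency injection of features.
# All names below are hypothetical stand-ins, not actual ORES internals.

def extract_features(rev_id):
    """Stand-in for the service's internal feature extraction."""
    return {"chars_added": 120, "is_anon": 0}

def score(rev_id, injected=None):
    """Score a revision; injected values replace extracted ones."""
    features = extract_features(rev_id)
    if injected:
        features.update(injected)
    # Trivial stand-in model: longer anonymous edits look more "damaging".
    return 0.001 * features["chars_added"] + 0.3 * features["is_anon"]

baseline = score(42)                  # features extracted internally
what_if = score(42, {"is_anon": 1})   # same revision, one feature injected
print(baseline, what_if)
```

The default path needs only an ID; the injected path answers "what would the score be if this feature were different?" without re-extracting anything.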

Adamw (talkcontribs)

I like your name for it. The only thing I would change is that, rather than emphasizing our uniqueness on account of feature injection, we say "we're unique because of public and transparent everything" and then situate feature injection in the context of other techniques worth reading about for relevant ideas. For example, someone might want to audit ORES using counterfactuals (e.g. THEMIS).
