JADE/Background

Some research and precedents for the JADE project.

Providing explanations to the end-user
When we present machine scores to the end-user and ask for feedback, we need to provide an understandable explanation of our algorithm's behavior. A very low-level description of the model and features will obviously be too confusing, but on the other hand, there is research showing that an overly high-level explanation isn't optimal either. "Trust" and communication work best when the end-user is provided with a frame of reference nearly equivalent to what the machine uses to make its prediction. These pieces of information have been shown to be most effective at eliciting accurate feedback from end-users:
 * Uncertainty: was the machine confident in its prediction?
 * Sufficient low-level context: raw sensor or feature data, rather than machine interpretations of the high-level meaning of this data. "Sufficient" meaning the bare minimum amount of this information such that any less would decrease the end-user's accuracy. Unnecessary extra information also decreases user accuracy.
 * Prediction: what classifications the machine predicted.
 * Ask for free-form feedback: this seems to improve user accuracy; see the next section.

Explaining the low-level context to end-users is still in its infancy. For linear support vector machines, we could show feature weights, but these might not be understandable for end-users without machine learning knowledge. Interactive methods exist in which the user can see how changes would affect the system. Shooting from the hip, I can also imagine a classifier-agnostic method in which we run the score with each feature tweaked to its minimum and maximum, to empirically determine which features contribute most to the prediction.
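As a rough sketch of the classifier-agnostic min/max sweep idea mentioned above: hold all features fixed, sweep one feature to the extremes of its observed range, and rank features by how far the score moves. The scorer and feature names below are invented stand-ins, not real ORES features.

```python
def feature_impacts(score, example, ranges):
    """Rank features by how much sweeping each one across its
    observed range moves the model's score (model-agnostic)."""
    base = score(example)
    impacts = {}
    for name, (lo, hi) in ranges.items():
        tweaked = dict(example)
        tweaked[name] = lo
        low_score = score(tweaked)
        tweaked[name] = hi
        high_score = score(tweaked)
        # Impact: the largest deviation from the baseline score.
        impacts[name] = max(abs(low_score - base), abs(high_score - base))
    return sorted(impacts, key=impacts.get, reverse=True)

# Toy linear scorer standing in for a real classifier (hypothetical).
def score(x):
    return 0.8 * x["badwords"] + 0.1 * x["size_change"] - 0.3 * x["is_registered"]

ranked = feature_impacts(
    score,
    {"badwords": 2, "size_change": 5, "is_registered": 1},
    {"badwords": (0, 10), "size_change": (0, 100), "is_registered": (0, 1)},
)
print(ranked)  # → ['size_change', 'badwords', 'is_registered']
```

Note this conflates a feature's weight with the width of its range, so ranges would need to be normalized (or drawn from real data) for the ranking to be meaningful.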

One study found that the more transparent we are with end-users about how the algorithm works, and the more they understand the system, the more satisfied they are with its outputs.

Soliciting feedback from the end-user
There is a tradeoff between getting rich feedback from users, and having the feedback in a form that can be easily reincorporated into future iterations of the machine learning algorithm.

Aside from the question of feedback usability, there is evidence that asking for certain types of feedback improves the accuracy of the end-user's labeling. For example, the speculation is that writing a free-form text justification forces the user to think harder about how they justify their labeling, sometimes leading them to correct it.

The end-user can justify their classification in the form of "rationales" or feature annotations, in a type of active learning. In this system, the user chooses keywords which are assumed to map to features, indicating that these are the most important. These choices are linked to the specific document and when the document is rescored, the chosen features' weights are increased by a factor, and the remainder of feature weights are decreased by a factor. It's not clear to me why these per-document changes would improve overall classifier health.
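A sketch of the per-document reweighting scheme described above, where the user-chosen features are boosted and all others shrunk. The boost factor of 10 and the feature names are made up for illustration; the actual factors in the literature differ.

```python
def apply_rationale(weights, rationale, boost=10.0):
    """Boost the features the user marked as important for this
    document; shrink the rest by the same factor."""
    return {f: (w * boost if f in rationale else w / boost)
            for f, w in weights.items()}

# Hypothetical learned weights and a user rationale for one document.
weights = {"revert": 1.0, "profanity": 0.5, "anon": 0.2}
adjusted = apply_rationale(weights, {"profanity"})
print(adjusted)  # → {'revert': 0.1, 'profanity': 5.0, 'anon': 0.02}
```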

Another approach, similar to rationales, is to accept user feedback in the form of constraints, for example "there should be a positive weight on feature X". This hasn't been shown to be very effective yet, but the results might vary depending on the context and algorithm.
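One naive way to honor a constraint like "there should be a positive weight on feature X" is to project the learned weights onto the constraint after training, clipping at a small positive floor. This is only a toy illustration of the idea, not how the cited work implements constraints, and the floor value is arbitrary.

```python
def apply_sign_constraints(weights, must_be_positive, floor=0.01):
    """Project learned weights onto user constraints of the form
    'there should be a positive weight on feature X'."""
    return {f: (max(w, floor) if f in must_be_positive else w)
            for f, w in weights.items()}

# Hypothetical learned weights; the user insists "badwords" is positive.
learned = {"badwords": -0.4, "size_change": 0.3}
constrained = apply_sign_constraints(learned, {"badwords"})
print(constrained)  # → {'badwords': 0.01, 'size_change': 0.3}
```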

The norm in end-user interaction is either to have no feedback collection, or to have end-users provide data, answer domain-related questions, or give feedback about the model, with the elicited information mediated by the algorithm designers before being incorporated into each iteration. The radical alternative is to allow end-users to interactively explore the clustering space and directly make changes to the model. This can be frustrating if the end-user doesn't see effective changes being made to the system. A powerful method of end-user feedback is to allow the user to define new concepts by providing examples, but this is probably not applicable to JADE since our classes are predefined. People naturally want to do more than just provide labels.

Adversarial classification
If we allow end-user input to directly influence our models, we have to consider the possibility that some users will intentionally attempt to manipulate the algorithms. Recent examples are Microsoft's "Tay", which turned evil after only 24 hours of abusive input data, and the more straightforward Time Magazine "marblecake" pwnage. There may be strategies to detect even an optimal adversary and defensively adapt in real time. This extra data may reduce the overall false positive rate, so if handled correctly, adversarial inputs could even be beneficial.

Integrating end-user feedback into the algorithm
In an interesting twist on the co-training algorithm, in which two machine learning algorithms are used to improve one another, Stumpf et al. 2009 came up with the concept of "user co-training". In user co-training, the user is treated as if they were one of the classifiers. This requires that the user provide both a label and a set of important words (or more generally, features). This might only be a useful technique on data where the machine classifier's accuracy is low, below average human accuracy. In their experiment, a machine learning accuracy of roughly 60% was increased by more than 10% by the user co-training approach, including user-highlighted keywords.
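A minimal sketch of one user co-training step: a trivial word-count learner stands in for the machine classifier, and a keyword matcher stands in for the user's label-plus-keywords feedback. The user-side "classifier" labels unlabeled documents, and those pseudo-labels augment the machine learner's training set. All example texts and keywords are invented.

```python
from collections import Counter

def word_scores(examples):
    """Train a trivial word-frequency scorer from (text, label) pairs."""
    pos, neg = Counter(), Counter()
    for text, label in examples:
        (pos if label else neg).update(text.split())
    return pos, neg

def machine_label(text, pos, neg):
    """'Machine classifier': vote by positive vs. negative word counts."""
    words = text.split()
    return sum(pos[w] for w in words) >= sum(neg[w] for w in words)

def user_label(text, keywords):
    """'User as classifier': positive if any highlighted keyword appears."""
    return any(k in text.split() for k in keywords)

# Seed labels plus an unlabeled pool (all hypothetical).
labeled = [("revert this vandalism", True), ("fixed a typo", False)]
unlabeled = ["blanked the page", "copyedit and typo fixes"]
keywords = {"blanked", "vandalism"}

# One co-training step: the user-side labels augment the machine's data.
labeled += [(t, user_label(t, keywords)) for t in unlabeled]
pos, neg = word_scores(labeled)
print(machine_label("blanked the article", pos, neg))  # → True
```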

Benefits of end-user feedback to algorithm health
[TODO: Give numbers for different types of feedback incorporated into model iteration.]

The "rationales" authors claim a 22-fold reduction in the number of documents that need to be labeled, when rationales are incorporated into learning.

Actively choosing documents to reduce labeling burden
In active learning, the system chooses which documents will be manually labeled, in the hope that we can learn faster than with a randomly chosen set of documents. This might be out of scope for JADE, since end-users voluntarily choose which scores to give feedback for, but ORES in general and WikiLabels might benefit from this approach. Usually, the documents with the least certain predictions and sometimes those with the most certain predictions are chosen for labeling.
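Uncertainty sampling, the usual strategy mentioned above, can be sketched as picking the documents whose predicted probability sits closest to 0.5. The document names and scores below are hypothetical.

```python
def uncertainty_sample(pool, predict_proba, n=2):
    """Pick the n documents with the least certain predictions,
    i.e. predicted probability closest to 0.5."""
    return sorted(pool, key=lambda d: abs(predict_proba(d) - 0.5))[:n]

# Hypothetical predicted probabilities for a small pool of documents.
scores = {"doc_a": 0.95, "doc_b": 0.55, "doc_c": 0.48, "doc_d": 0.10}
chosen = uncertainty_sample(scores, scores.get)
print(chosen)  # → ['doc_c', 'doc_b']
```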

When adopting an active learning strategy that includes rationales, specifically sampling documents with the most uncertain predictions and with "conflicting" keywords (those appearing in both positive and negative rationale sets), the authors claim an additional improvement on most data sets.
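A rough sketch of that combined criterion, assuming rationales are plain keyword sets: find keywords cited in both positive and negative rationales, then sample the least certain documents among those containing one. The rationale sets, pool, and probabilities are all invented.

```python
def conflicting(pos_rationales, neg_rationales):
    """Keywords cited in both positive and negative rationales."""
    return set(pos_rationales) & set(neg_rationales)

def sample_conflicted(pool, predict_proba, conflicts, n=1):
    """Among documents containing a conflicting keyword, pick the n
    with predictions closest to 0.5 (least certain)."""
    candidates = [d for d in pool if conflicts & set(d.split())]
    return sorted(candidates, key=lambda d: abs(predict_proba(d) - 0.5))[:n]

# Hypothetical rationale keyword sets and document pool.
conflicts = conflicting({"free", "spam"}, {"free", "review"})
pool = {"free gift offer": 0.52, "routine copyedit": 0.50, "free ticket": 0.90}
picked = sample_conflicted(pool, pool.get, conflicts)
print(picked)  # → ['free gift offer']
```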

Treating users as oracles to answer questions posed by a machine can be annoying to them, so we need to be careful when soliciting feedback outside of the dedicated WikiLabels context.