JADE/Background

Some research and precedents for the JADE project.

Communication between end-users and intelligent systems

Providing explanations to the end-user

When we present machine scores to the end-user and ask for feedback, we need to provide an understandable explanation of the algorithm's behavior. A very low-level description of the model and features will obviously be too confusing, but there is also research showing that an overly high-level explanation isn't optimal either. "Trust" and communication work best when the end-user is given a frame of reference nearly equivalent to what the machine uses to make its prediction. These pieces of information have been shown to be most effective at eliciting accurate feedback from end-users:[1][2]

  • Uncertainty: was the machine confident in its prediction?
  • Sufficient low-level context: raw sensor or feature data, rather than machine interpretations of the high-level meaning of this data. Here "sufficient" means the bare minimum amount of this information: any less would decrease the end-user's accuracy, and unnecessary extra information also decreases it.
  • Prediction: what classifications the machine predicted.
  • Ask for free-form feedback: this seems to improve user accuracy, see the next section.

Explaining the low-level context to end-users is still in its infancy. For linear support vector machines, we could show feature weights, but these might not be understandable to end-users without machine learning knowledge. Interactive methods exist in which the user can see how changes would affect the system.[3] Shooting from the hip, I can also imagine a classifier-agnostic method in which we re-run the score with each feature tweaked to its minimum and maximum, to empirically determine which features contribute most to the prediction.
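
A minimal sketch of that perturbation idea, assuming a black-box score() callable and known per-feature ranges. The feature names and the toy scoring function below are invented for illustration and are not part of the ORES API:

    def top_contributing_features(score, features, feature_ranges, n=3):
        """Rank features by how much the prediction moves when each one is
        swept from its minimum to its maximum, holding the others fixed."""
        baseline = score(features)
        impact = {}
        for name, (lo, hi) in feature_ranges.items():
            impact[name] = max(abs(score(dict(features, **{name: lo})) - baseline),
                               abs(score(dict(features, **{name: hi})) - baseline))
        return sorted(impact, key=impact.get, reverse=True)[:n]

    # Toy stand-in for a damage classifier's probability output.
    def toy_score(f):
        return min(1.0, 0.1 + 0.5 * f["chars_removed"] / 1000
                        + 0.4 * (1 - f["user_age_days"] / 365))

    features = {"chars_removed": 800, "user_age_days": 10}
    ranges = {"chars_removed": (0, 1000), "user_age_days": (0, 365)}
    print(top_contributing_features(toy_score, features, ranges, n=2))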

One study found that the more transparent we are with end-users about how the algorithm works, and the better they understand the system, the more satisfied they are with its outputs.[4][5]

Soliciting feedback from the end-user

There is a tradeoff between getting rich feedback from users, and having the feedback in a form that can be easily reincorporated into future iterations of the machine learning algorithm.[6]

Aside from the question of feedback usability, there is evidence that asking for certain types of feedback will improve the accuracy of the end-user's labeling.[1] For example, one speculation is that writing a free-form text justification forces the user to think harder about their labeling, sometimes leading them to correct it.

The end-user can justify their classification in the form of "rationales" or feature annotations, in a type of active learning.[7] In this system, the user chooses keywords, which are assumed to map to features, to indicate which are most important. These choices are linked to the specific document: when the document is rescored, the chosen features' weights are increased by a factor and the remaining feature weights are decreased by a factor. It's not clear to me why these per-document changes would improve overall classifier health.
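
A rough sketch of that per-document weighting, using a plain bag-of-words dictionary; the scaling factors below are arbitrary placeholders rather than the values tuned in the paper:

    # Per-document rationale weighting: emphasize the terms the user
    # highlighted and dampen everything else in that document's vector.
    RATIONALE_BOOST = 2.0   # illustrative, not the paper's tuned value
    OTHER_DAMPEN = 0.5

    def apply_rationales(doc_features, rationale_terms):
        return {
            term: value * (RATIONALE_BOOST if term in rationale_terms else OTHER_DAMPEN)
            for term, value in doc_features.items()
        }

    doc = {"revert": 1.0, "vandal": 2.0, "the": 5.0}
    print(apply_rationales(doc, rationale_terms={"vandal"}))
    # {'revert': 0.5, 'vandal': 4.0, 'the': 2.5}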

Another approach, similar to rationales, is to accept user feedback in the form of constraints, for example "there should be a positive weight on feature X". This hasn't been shown to be very effective yet, but the results might vary depending on the context and algorithm.[3]
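
As a toy illustration of constraint-style feedback, one naive option is to project a linear model's weights onto the user's sign constraints after each retraining pass; this is only a sketch of the general idea, not the mechanism evaluated by Stumpf et al.:

    def apply_sign_constraints(weights, constraints):
        """constraints maps feature -> '+' or '-'; weights that violate the
        requested sign are zeroed out (a crude projection)."""
        adjusted = dict(weights)
        for feature, sign in constraints.items():
            w = adjusted.get(feature, 0.0)
            if (sign == "+" and w < 0) or (sign == "-" and w > 0):
                adjusted[feature] = 0.0
        return adjusted

    print(apply_sign_constraints({"vandal": -0.3, "cite": 0.8}, {"vandal": "+"}))
    # {'vandal': 0.0, 'cite': 0.8}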

The norm in end-user interaction is either to collect no feedback at all, or to have end-users provide data, answer domain-related questions, or give feedback about the model, with the elicited information mediated by the algorithm designers before being incorporated into each iteration.[4] The radical alternative is to allow end-users to interactively explore the clustering space and directly make changes to the model. This can be frustrating if the end-user doesn't see effective changes being made to the system.[8] A powerful method of end-user feedback is to allow the user to define new concepts by providing examples, but this is probably not applicable to JADE since our classes are predefined. People naturally want to do more than just provide labels.[4]

If we allow end-user input to directly influence our models, we have to consider the possibility that some users will be intentionally attempting to manipulate the algorithms. Recent examples are Microsoft's "Tay", turned evil after only 24 hours of abusive input data[9], and the more straightforward Time Magazine "marblecake" pwnage[10]. There may be strategies to detect even an optimal[11] adversary and defensively adapt in real time. This extra data may reduce the overall false positive rate, so if handled correctly adversarial inputs could be beneficial.[12]

Integrating end-user feedback into the algorithm

In an interesting twist on the co-training algorithm, in which two machine learning algorithms are used to improve one another, Stumpf et al. 2009 came up with the concept of "user co-training".[3] In user co-training, the user is treated as if they were one of the classifiers. This requires that the user provide both a label and a set of important words (or, more generally, features). This might only be a useful technique when the machine classifier's accuracy is low, below average human accuracy. In their experiment, a machine learning accuracy of roughly 60% was increased by more than 10% by the user co-training approach, including user-highlighted keywords.
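
A toy sketch of the user co-training loop, assuming a hypothetical ask_user() UI callback that returns the user's label plus their highlighted keywords; the keyword handling here (repeating highlighted words before retraining) is a crude stand-in for Stumpf et al.'s actual feature-weight integration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    labeled_texts = ["replaced page with obscenities", "added a cited paragraph"]
    labels = [1, 0]   # 1 = damaging, 0 = good
    unlabeled_texts = ["blanked the references section"]

    def ask_user(text, machine_label):
        """Hypothetical UI hook: show the text and the machine's guess,
        return the user's label and the keywords they highlighted."""
        return 1, ["blanked"]

    vectorizer = TfidfVectorizer()
    for text in list(unlabeled_texts):
        model = LogisticRegression().fit(vectorizer.fit_transform(labeled_texts), labels)
        machine_label = int(model.predict(vectorizer.transform([text]))[0])
        user_label, keywords = ask_user(text, machine_label)
        # Crudely emphasize the highlighted keywords by repeating them, so the
        # next retraining pass gives them more weight in this document.
        labeled_texts.append(text + " " + " ".join(keywords * 3))
        labels.append(user_label)

    # Retrain with the user-corrected labels and emphasized keywords included.
    model = LogisticRegression().fit(vectorizer.fit_transform(labeled_texts), labels)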

Benefits of end-user feedback to algorithm health

[TODO: Give numbers for different types of feedback incorporated into model iteration.]

The "rationales" authors claim a 22-fold reduction in the number of documents that need to be labeled, when rationales are incorporated into learning.

Actively choosing documents to reduce labeling burden

In active learning, the system chooses which documents will be manually labeled, in the hope that we can learn faster than with a randomly chosen set of documents. This might be out of scope for JADE, since end-users voluntarily choose which scores to give feedback for, but ORES in general and WikiLabels might benefit from this approach. Usually the documents with the least certain predictions are chosen for labeling, and sometimes also those with the most certain predictions.
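
A minimal uncertainty-sampling sketch, picking the revisions whose predicted probability is closest to 0.5; the revision IDs and scores are invented for illustration:

    def most_uncertain(scored_docs, k=2):
        """scored_docs maps doc id -> P(damaging); return the k least certain ids."""
        return sorted(scored_docs, key=lambda d: abs(scored_docs[d] - 0.5))[:k]

    scores = {"rev:101": 0.97, "rev:102": 0.51, "rev:103": 0.08, "rev:104": 0.46}
    print(most_uncertain(scores))   # ['rev:102', 'rev:104']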

When the active learning strategy also uses rationales, specifically by sampling the documents with the most uncertain predictions that contain "conflicting" keywords (keywords appearing in both the positive and negative rationale sets), the authors claim an additional improvement on most data sets.
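
Building on the uncertainty-sampling sketch above, the conflicting-keyword refinement might look roughly like this; the rationale sets and tokenized documents are again invented for illustration:

    positive_rationales = {"vandal", "blanked"}
    negative_rationales = {"cite", "blanked"}
    conflicting = positive_rationales & negative_rationales   # {'blanked'}

    def has_conflicting_terms(doc_tokens):
        return any(token in conflicting for token in doc_tokens)

    # Among the uncertain documents, queue the ones with conflicting terms first.
    uncertain_docs = {"rev:102": ["blanked", "infobox"], "rev:104": ["typo", "fix"]}
    queue = sorted(uncertain_docs, key=lambda d: not has_conflicting_terms(uncertain_docs[d]))
    print(queue)   # ['rev:102', 'rev:104']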

Treating users as oracles to answer questions posed by a machine can be annoying to them, so we need to be careful when soliciting feedback outside of the dedicated WikiLabels context.[4]

References

  1. Rosenthal, Stephanie L. (2010). "Towards Maximizing the Accuracy of Human-labeled Sensor Data". Proceedings of the 15th International Conference on Intelligent User Interfaces: 259–268. New York, NY, USA: ACM. doi:10.1145/1719970.1720006.
  2. Dey, Anind K.; Rosenthal, Stephanie; Veloso, Manuela (2009). Using interaction to improve intelligence: How intelligent systems should ask for input. Workshop on Intelligence and Interaction, International Joint Conference on Artificial Intelligence 2009.
  3. Stumpf, Simone (2009). "Interacting meaningfully with machine learning systems: Three experiments". International Journal of Human-Computer Studies 67 (8): 639–662. doi:10.1016/j.ijhcs.2009.03.004.
  4. Amershi, Saleema (2014-12-22). "Power to the People: The Role of Humans in Interactive Machine Learning" (in en). AI Magazine 35 (4): 105–120. doi:10.1609/aimag.v35i4.2513. ISSN 0738-4602.
  5. Kulesza, Todd (2012). "Tell Me More?: The Effects of Mental Model Soundness on Personalizing an Intelligent Agent". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: 1–10. New York, NY, USA: ACM. doi:10.1145/2207676.2207678. 
  6. Stumpf, Simone (2007). "Toward Harnessing User Feedback for Machine Learning". Proceedings of the 12th International Conference on Intelligent User Interfaces: 82–91. New York, NY, USA: ACM. doi:10.1145/1216295.1216316. 
  7. Sharma, Manali. "Active Learning with Rationales for Text Classification". Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. doi:10.3115/v1/n15-1047. 
  8. Amershi, S., Fogarty, J., Kapoor, A., & Tan, D. (2011). Effective end-user interaction with machine learning. In Proceedings of the National Conference on Artificial Intelligence. (Vol. 2, pp. 1529-1532)
  9. Horton, Helena (2016-03-24). "Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours". The Telegraph. ISSN 0307-1235. Retrieved 2017-11-18.
  10. Schonfield, Erick. "Time Magazine Throws Up Its Hands As It Gets Pwned By 4Chan". TechCrunch. Retrieved 2017-11-18.
  11. https://en.wikipedia.org/wiki/Generative_adversarial_network
  12. Dalvi, Nilesh (2004). "Adversarial Classification". Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: 99–108. New York, NY, USA: ACM. doi:10.1145/1014052.1014066.