JADE/Open questions

From mediawiki.org

Open questions

  • We need to define what types of feedback we accept from people.  Free-form text only, or also a quantitative measure of the level of disagreement?  Prespecified, structured data plus a comment from the human evaluator?
  • Which entities will we eventually support as thread targets?
  • What makes a JADE thread any different from a talk page topic?
  • We need terminology for each element of our system: "target entity"?  "refutation", "review", "agreement"?  "comment"?  "reply"?
  • Since comments will likely reference ORES scores as part of a refutation, we'll want to store that relationship explicitly.
  • Should a judgment affect the automated score automatically, or only after some arbitration process?
  • Human judgments about ORES should have some kind of influence on the error function we use during ML training.  How is that feedback structured?  Is it a strong signal?  Do we weight all judgments equally?  Do we need another level of human meta-meta-curation of the feedback?
  • What do we expect clients will do with this information?
  • Will there be any automated syncing between JADE and Wikilabels?
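The question above about storing the judgment-to-score relationship explicitly could be sketched as a data shape.  This is purely illustrative — every field name here is a hypothetical, not an agreed schema — but it shows one way a judgment could carry an explicit pointer to the exact ORES score it refutes or endorses:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OresScoreRef:
    """Pointer to the specific ORES score a judgment responds to (hypothetical)."""
    model: str           # e.g. "damaging"
    model_version: str   # scores differ across model versions, so pin it
    rev_id: int          # the revision that was scored
    probability: float   # the score value being endorsed or refuted

@dataclass
class Judgment:
    """One human judgment in a JADE thread (hypothetical shape)."""
    target: str                                # e.g. "revision/12345"
    label: bool                                # the evaluator's own verdict
    endorses_score: bool                       # agreement or refutation of ORES
    comment: str = ""                          # free-form rationale
    score_ref: Optional[OresScoreRef] = None   # the explicit relationship

# Example: a refutation of a false-positive "damaging" score.
judgment = Judgment(
    target="revision/12345",
    label=False,
    endorses_score=False,
    comment="False positive: this is a good-faith copyedit.",
    score_ref=OresScoreRef(model="damaging", model_version="0.4.0",
                           rev_id=12345, probability=0.91),
)
```

Storing the `score_ref` rather than just the comment text would let clients and the training pipeline query "all refutations of damaging model v0.4.0" without parsing prose.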
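On the question of how human judgments might influence the training error function: one minimal sketch, assuming a simple per-example weighting scheme (the weighting rule here is invented for illustration; how strong the signal should be is exactly what the question leaves open):

```python
import math

def weighted_log_loss(examples):
    """Log loss where human-reviewed examples count more.

    Each example is (predicted_probability, true_label, n_judgments),
    where n_judgments is the number of human judgments agreeing with
    the true label.  The weight rule below (1 + n_judgments) is a
    hypothetical placeholder, not an agreed design.
    """
    total, weight_sum = 0.0, 0.0
    for p, y, n_judgments in examples:
        w = 1.0 + n_judgments                  # hypothetical weighting
        p = min(max(p, 1e-12), 1 - 1e-12)      # avoid log(0)
        loss = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += w * loss
        weight_sum += w
    return total / weight_sum
```

Under this scheme, an example the model gets wrong and that humans have flagged repeatedly dominates the loss, which is one way "strong signal" could be made concrete.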