Thanks for asking for feedback! I’m sure that this can be very useful, but I found it confusing to use.
- I have played around with it for a while, and I still don’t have a good mental model. Does this filter things in or out? Are the checkboxes ANDed or ORed together? If neither Bot nor Human is checked, I see all edits; but if I check Bot, I stop seeing Human edits. So an unchecked box sometimes means “no filter” and sometimes means “hide”, which is not a consistent mental model (see the first sketch after this list).
- Prefixing items with “Show” or “Hide” or “Show only” or something would help a lot.
- The subtitles are inconsistent. Most of them (like “Edits made by human editors”) tell me what checking the box will show me. That makes sense. But some are different, like “Highly accurate at finding almost all problem-free edits”. This seems to be telling me about the algorithm used to implement this checkbox. That’s not what I would expect, and it’s hard for me to interpret. How about “Show only edits identified as very likely problem-free by a highly accurate algorithm” or something?
- It appears that these prediction filters are not “spanning” or whatever the right term is. I got 4 edits under “very likely good” and zero under the rest, but if I check none of them, I see 13 edits. The names “very likely good”, “may have problems”, “likely have problems”, “very likely have problems” led me down the garden path: I assumed that every edit has a single “sketchiness” rating, and that each checkbox shows me one section of that number space (see the second sketch after this list). If that’s not the case, I suggest that this naming scheme is more misleading than helpful.
- While I’ve been trying to understand how this system works, I’ve been thwarted by the fact that the filter checkboxes cover up the very edits they control. It’s great that clicking them updates my results live, but that’s not very useful when I can barely see the results underneath.
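For what it’s worth, here is a minimal sketch (Python, with invented names: `is_visible`, `edit_type`, `checked_types`) of the behavior I think I’m observing. It’s my guess at the logic, not a claim about the actual implementation:

```python
def edit_type(edit):
    # Hypothetical classifier: treat each edit as either "bot" or "human".
    return "bot" if edit.get("is_bot") else "human"

def is_visible(edit, checked_types):
    """Apparent filter logic: checked_types is the set of ticked boxes,
    e.g. set() (nothing checked) or {"bot"}."""
    if not checked_types:
        return True  # nothing checked: every edit is shown
    return edit_type(edit) in checked_types  # otherwise: only checked types

edits = [{"is_bot": True}, {"is_bot": False}]
print([is_visible(e, set()) for e in edits])    # [True, True]  - all shown
print([is_visible(e, {"bot"}) for e in edits])  # [True, False] - Human now hidden
```

If that guess is right, an unchecked “Human” box means “show human edits” when nothing is checked, but “hide human edits” the moment “Bot” is checked, which is exactly the inconsistency I keep stumbling over.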
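And here is the model the four prediction names led me to expect, again just a sketch with made-up thresholds: one score per edit, cut into four exhaustive, non-overlapping buckets, so the per-bucket counts add up to the unfiltered total:

```python
def bucket(score):
    """Expected model: a single 'sketchiness' score in [0, 1] per edit,
    partitioned into four ranges (the 0.25/0.50/0.75 cutoffs are invented)."""
    if score < 0.25:
        return "very likely good"
    if score < 0.50:
        return "may have problems"
    if score < 0.75:
        return "likely have problems"
    return "very likely have problems"

# Every score lands in exactly one bucket, so together the buckets
# should cover all edits:
print(bucket(0.1), "|", bucket(0.6), "|", bucket(0.99))
```

Under that model, my counts of 4 + 0 + 0 + 0 should have equaled my unfiltered total of 13. Since they don’t, some edits evidently fall into no bucket at all, and the names are doing more harm than good.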
In the end, I just checked Human and left it at that.