Moderator Tools/Automoderator/ja

The team is exploring a project to build an 'automoderator' tool for Wikimedia projects. This would allow moderators to configure the automatic prevention or reversion of bad edits, based on machine learning models. In simpler terms, we're looking to build software which performs a similar function to anti-vandalism bots such as ClueBot NG, SeroBOT, and Dexbot, but making it available to all language communities.

Idea: ''If obvious vandalism could be prevented or reverted automatically, moderators could spend their time on more valuable work.''

We plan to investigate this idea during 2023, with a view to starting development work in 2024.

Latest update (June 2023): There are open questions below — please add your thoughts, questions, and comments to the talk page!

Motivation
A substantial number of edits are made to Wikimedia projects which should unambiguously be undone, reverting a page back to its previous state. Patrollers and administrators have to spend a lot of time manually reviewing and reverting these edits, which contributes to a feeling on many larger wikis that there is an overwhelming amount of work requiring attention compared to the number of active moderators. We would like to reduce these burdens, freeing up moderator time to work on other tasks.

Many online community websites, including Reddit, Twitch, and Discord, provide 'automoderation' functionality, whereby community moderators can set up a mix of specific and algorithmic automated moderation actions. On Wikipedia, AbuseFilter provides specific, rules-based, functionality, but can be frustrating when moderators have to, for example, painstakingly define a regular expression for every spelling variation of a swear word. It is also complicated and easy to break, causing many communities to avoid using it. At least a dozen communities have anti-vandalism bots, but these are community maintained, requiring local technical expertise and usually having opaque configurations. These bots are also largely based on the ORES damaging model, which has not been trained in a long time and has limited language support.

Goals

 * Reduce the volume of content that moderators need to review, by keeping bad edits out of patrollers' queues.
 * Give moderators confidence that automoderation is reliable and does not generate significant false positives.
 * Ensure that editors caught in a false positive have clear routes to have the error resolved and their edit reinstated.


 * Are there any other goals we should consider?

Model
This project will leverage the new revert risk models developed by the Wikimedia Foundation Research team. There are two versions of this model:
 * A multilingual model, with support for 47 languages.
 * A language-agnostic model.

These models can calculate a score for every revision denoting the likelihood that the edit should be reverted. We envision providing communities with a way to set a threshold for this score, above which edits would be automatically prevented or reverted.

The models currently only support Wikipedia and Wikidata, but could be trained on other Wikimedia projects. Additionally they are currently only trained on the main (article) namespace. Once deployed, we could re-train the model on an ongoing basis as false positives are reported by the community.

Before proceeding with this project, we plan to provide an opportunity to test the model against recent edits. This should help anti-vandalism patrollers understand how accurate the model is, and whether they can trust it to be used in the way we propose.


 * Do you have any concerns about these models?
 * What is the maximum false positive rate that you or your community would find acceptable?



Potential solution
We are envisioning a tool which could be configured by a community's moderators to automatically prevent or revert edits. Reverting edits is the more likely scenario - preventing an edit requires high performance so as not to impact edit save times. Additionally, it provides less oversight of what edits are being prevented, which may not be desirable, especially with respect to false positives. Moderators should be able to configure whether the tool is active or not, and have options for how strict the model should be.

A lower threshold would mean more edits get reverted, but with a higher false positive rate, while a higher threshold would revert fewer edits, with higher confidence.
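The threshold mechanism described above can be sketched as a simple decision function. This is an illustrative sketch only: the function name, the 0–1 score range, and the comparison direction are assumptions based on the description here, not the real model's API.

```python
def should_revert(revert_risk_score: float, threshold: float) -> bool:
    """Return True if an edit's revert-risk score meets or exceeds the
    community-configured threshold, i.e. the automoderator would act on it."""
    return revert_risk_score >= threshold

# A community choosing a strict threshold of 0.9 would only act on
# high-confidence cases; a laxer 0.5 would catch more edits, with
# more false positives.
```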

While the exact form of this project is still being explored, the following are some feature ideas we are considering, beyond the basics of preventing or reverting edits which meet a revert risk threshold.

Testing
If communities have options for how strict they want the automoderator to be, we need to provide a way to test those thresholds in advance. This could look like AbuseFilter’s testing functionality, whereby recent edits can be checked against the tool to understand which edits would have been reverted at a particular threshold.
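The kind of back-testing described above could amount to counting, for each candidate threshold, how many recent edits would have been acted on. A minimal sketch, assuming edits have already been scored by the revert risk model (the function and data shapes here are hypothetical, for illustration only):

```python
def backtest(scored_edits: list[float], thresholds: list[float]) -> dict[float, int]:
    """For each candidate threshold, count how many of the given recent
    edits (represented by their revert-risk scores) would have been reverted."""
    return {t: sum(1 for score in scored_edits if score >= t) for t in thresholds}

# Example: five recent edits, comparing a lax and a strict threshold.
scores = [0.15, 0.4, 0.72, 0.88, 0.97]
print(backtest(scores, [0.5, 0.9]))  # {0.5: 3, 0.9: 1}
```

A real testing interface would also need to show *which* edits would have been reverted, so patrollers can judge whether those reverts would have been correct.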


 *  How important is this kind of testing functionality for you? Are there any testing features you would find particularly useful? 

Community configuration
A core aspect of this project will be to give moderators clear configuration options for setting up the automoderator and customising it to their community’s needs. Rather than simply reverting all edits meeting a threshold, we could, for example, provide filters for not operating on editors with certain user groups, or avoiding certain pages.


 *  What configuration options do you think you would need before using this software? 
 *  Who should be able to configure the automoderator? 
 *  Should Stewards be able to configure the tool for small wikis? 

False positive reporting
Machine learning models aren't perfect, and so we should expect that there will be a non-zero number of false positive reverts. There are at least two things we need to consider here: the process for a user flagging that their edit was falsely reverted so it can be reinstated, and providing a mechanism for communities to provide feedback to the model over time so that it can be re-trained.

The model is more sensitive to edits from new and unregistered users, as this is where most vandalism comes from. We don't want this tool to negatively impact the experience of good faith new users, so we need to create clear pathways for new users to understand that their edit has been reverted, and be able to reinstate it. This needs to be balanced with not providing easy routes for vandals to undo the tool's work, however.

Although these models have been trained on a large amount of data, false positive reporting by editors can provide a valuable dataset for ongoing re-training of the model. We need to figure out how to enable experienced editors to send false positive data back to the model so that it can improve over time.


 *  How could we provide clear information and actions for editors on the receiving end of a false positive, in a way which isn’t abused by vandals? 
 *  What concerns do you have about false positives? 



Other open questions

 * If your community uses an anti-vandalism bot, what has your experience with it been? How would you feel if it stopped working?
 * Do you think your community would adopt this? How would it fit alongside your other workflows and tools?
 * What data should we look at to judge whether the tool is successful?
 * Are there any considerations not covered above?