Moderator Tools/Automoderator/Testing/id

The team is currently building Automoderator - a tool which can automatically revert damaging edits. It works in a similar way to community-built tools such as ClueBot NG, SeroBOT, and Dexbot. To make the tool more reliable, we have collected a set of edits in a spreadsheet so that you can judge whether each one should be reverted or left alone.

The tool's accuracy is based on a combination of a machine learning model's score and internal configuration. While the model will improve over time, we are also working to increase accuracy through contributor feedback. For example, when a contributor reverts their own edit, the tool sometimes flags it as vandalism. To prevent this from recurring, we have collected similar cases and are asking contributors for input on those edits.

Note that this does not reflect the final form of the 'automated moderation' tool. The tool will continue to evolve based on community feedback.

How to participate

 * If you have a Google account:
 * Use the Google Sheet link below and make a copy of it
 * You can do this by clicking File > Make a Copy ... after opening the link.
 * Once you have your copy, please click the Share button at the top and grant access to avardhana@wikimedia.org (making sure the 'Notify' option is checked). This makes it easier for us to collect your feedback.
 * Alternatively, you can change 'General access' to 'Anyone with the link' and share a link with us directly or on-wiki.
 * Alternatively, use the .ods file link to download the file to your computer.
 * Once you have assessed the edits, please send the sheet back to avardhana@wikimedia.org.

Once you have access to the spreadsheet ...

 * 1) Follow the instructions in the sheet to select a random dataset, review 30 edits, and then uncover what decisions Automoderator would make for each edit.
 * 2) Feel free to explore the full data in the 'Edit data & scores' tab.
 * 3) If you want to review another dataset, please make a new copy of the sheet to avoid conflicting data.
 * 4) Join the discussion on the talk page.

'' Alternatively, you can simply dive into the 'Edit data & scores' tab and start investigating the data directly. ''

''* We welcome translations of this sheet. If you would like to submit a translation, please translate a copy and send it back to us at swalton@wikimedia.org. ''

If you want a sheet generated with data from another Wikipedia please let us know and we can create one.

About Automoderator
Automoderator’s model is trained exclusively on Wikipedia’s main namespace pages, limiting its dataset to edits made to Wikipedia articles. Further details can be found below:

Internal configuration
In the current version of the spreadsheet, in addition to considering the model score, Automoderator is configured not to take action on:

 * Edits made by administrators
 * Edits made by bots
 * Edits which are self-reverts
 * New page creations

The datasets contain edits which meet these criteria, but Automoderator should never say it will revert them. This behaviour and the list above will be updated as testing progresses if we add new exclusions or configurations.
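These exclusions can be thought of as a simple pre-filter applied before the model score is even consulted. A minimal sketch of that idea (the field names below are hypothetical, not Automoderator's actual data model):

```python
def is_excluded(edit):
    """Return True if Automoderator's internal configuration would skip
    this edit regardless of its model score.

    `edit` is a plain dict; the keys here are illustrative only and do
    not reflect Automoderator's real internal representation.
    """
    return bool(
        edit.get("user_is_admin", False)
        or edit.get("user_is_bot", False)
        or edit.get("is_self_revert", False)
        or edit.get("is_page_creation", False)
    )
```

Any edit for which this pre-filter returns True should appear in the datasets with a "do not revert" decision, no matter how high its model score is.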

Caution levels
In this test Automoderator has five 'caution' levels, defining the revert likelihood threshold above which Automoderator will revert an edit.

 * At high caution, Automoderator will need to be very confident to revert an edit. This means it will revert fewer edits overall, but do so with a higher accuracy.

 * At low caution, Automoderator will be less strict about its confidence level. It will revert more edits, but be less accurate.
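In other words, each caution level maps to a revert-likelihood threshold, and an edit is only reverted when its score clears that threshold. A sketch of the mechanism (the threshold values here are made up for illustration; the real ones are set by the Moderator Tools team and are not published here):

```python
# Illustrative thresholds only -- not the values used in the test.
# Higher caution = higher bar, so fewer but more accurate reverts.
CAUTION_THRESHOLDS = {
    "very high": 0.99,
    "high": 0.98,
    "medium": 0.97,
    "low": 0.96,
    "very low": 0.95,
}

def would_revert(revert_likelihood, caution):
    """Revert only when the model's score clears the caution threshold."""
    return revert_likelihood >= CAUTION_THRESHOLDS[caution]
```

For example, an edit scored at 0.985 would be reverted at 'high' caution but left alone at 'very high' caution, which is exactly the accuracy/coverage trade-off described above.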

The caution levels in this test have been set by the Moderator Tools team based on our observations of the model's accuracy and coverage. To illustrate the number of reverts expected at different caution levels, see below:

'' If you would like us to pull this data for another Wikimedia project just let us know on the talk page. ''

Score an individual edit
We have created a simple user script to retrieve a Revert Risk score for an individual edit. Simply import User:JSherman (WMF)/revertrisk.js into your common.js with  on English Wikipedia, or  on other wikis.

You should then find a 'Get revert risk score' link in the Tools menu in your sidebar. Note that this will only display the model score, and does not take into account Automoderator's internal configuration as detailed above. See the table above for the scores above which we are investigating Automoderator's false positive rate.
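If you prefer to query the model programmatically rather than via the user script, the Revert Risk model is also served publicly through the Wikimedia Lift Wing API. A minimal sketch, using the language-agnostic model endpoint and payload shape as we understand them (please check the Lift Wing documentation before relying on this):

```python
import json
from urllib import request

# Lift Wing endpoint for the language-agnostic Revert Risk model.
LIFTWING_URL = (
    "https://api.wikimedia.org/service/lw/inference/v1/models/"
    "revertrisk-language-agnostic:predict"
)

def build_payload(rev_id, lang="en"):
    """JSON body the model server expects: a revision ID and a wiki language code."""
    return json.dumps({"rev_id": rev_id, "lang": lang}).encode()

def get_revert_risk(rev_id, lang="en"):
    """POST a revision to Lift Wing and return its revert probability."""
    req = request.Request(
        LIFTWING_URL,
        data=build_payload(rev_id, lang),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["output"]["probabilities"]["true"]
```

Like the user script, this returns the raw model score only; it does not apply Automoderator's internal configuration, so an edit with a high score may still be one Automoderator would skip.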