ORES

"ORES"(/ɔɹz/, en inglés "Objective Revision Evaluation Service" o "Servicio de Evaluación de Revisión Objetivo")es un servicio web y API que proporciona  machine learning  como servicio para proyectos de Wikimedia mantenidos por el equipo de [[$ wspt | Plataforma de puntuación] ] El sistema está diseñado para ayudar a automatizar el trabajo wiki crítico, por ejemplo, detección y eliminación de vandalismo. Actualmente, los dos tipos generales de puntaje que genera ORES se encuentran en el contexto de "calidad de edición" y "calidad del artículo".

ORES is a back-end service and does not directly provide a way to make use of the scores. If you want to use ORES scores, see our list of tools that use them. If ORES does not support your wiki yet, see our instructions for requesting support.

Looking for answers to your questions about ORES? Check the questions and answers page.

Edit quality
One of the most critical concerns on Wikimedia's open projects is reviewing potentially damaging contributions ("edits"). There is also a need to identify good-faith contributors (who may be causing harm inadvertently) and offer them support. These models are designed to make it easier to filter through the Special:RecentChanges feed. We offer two levels of support for edit quality prediction models: basic and advanced.

Basic support
Assuming that damaging edits will be reverted and that non-damaging edits will not, we can work from a wiki's history of edits (and reverts of edits). This model is easy to set up, but it suffers from the problem that many edits are reverted for reasons other than vandalism. To help with this, we built a model based on badwords.


 * reverted – predicts whether an edit will eventually be reverted

Advanced support
Rather than assuming, we can ask editors to train ORES on which edits are in fact damaging and which edits look like they were saved in good faith. This requires additional work on the part of volunteers in the community, but it affords a more accurate and nuanced prediction of the quality of an edit. Many tools will only function when advanced support is available for a target wiki.


 * damaging – predicts whether or not an edit causes damage
 * goodfaith – predicts whether an edit was saved in good faith

Article quality
The quality of encyclopedia articles is a core concern for Wikipedians. New pages must be reviewed and curated to ensure that spam, vandalism, and attack articles do not remain on the wiki. For articles that survive the initial curation, some Wikipedians periodically evaluate their quality, but this is highly labor-intensive and the assessments are often out of date.

Curation support
The faster that seriously problematic types of draft articles are removed, the better. Curating new page creations can be a lot of work. Like the problem of counter-vandalism in edits, machine predictions can help curators focus on the most problematic new pages first. Based on comments left by admins when they delete pages (see the logging table), we can train a model to predict which pages will need quick deletion. See en:WP:CSD for a list of quick deletion reasons for English Wikipedia. For the English model, we used G3 "vandalism", G10 "attack", and G11 "spam".


 * draftquality – predicts if the article will need to be speedy deleted (spam, vandalism, attack, or OK)

Assessment scale support
For articles that survive the initial curation, some of the large Wikipedias periodically evaluate the quality of articles using a scale that roughly corresponds to the English Wikipedia 1.0 assessment rating scale ("wp10"). Having these assessments is very useful because it helps us gauge our progress and identify missed opportunities (e.g., popular articles that are low quality). However, keeping these assessments up to date is challenging, so coverage is inconsistent. This is where the wp10 machine learning model comes in handy. By training a model to replicate the article quality assessments that humans perform, we can automatically assess every article and every revision with a computer. This model has been used to help WikiProjects triage re-assessment work and to explore the editing dynamics that lead to article quality improvements.

The wp10 model bases its predictions on structural characteristics of the article. E.g. How many sections are there? Is there an infobox? How many references? And do the references use a cite template? The wp10 model doesn't evaluate the quality of the writing or whether or not there's a tone problem (e.g. a point of view being pushed). However, many of the structural characteristics of articles seem to correlate strongly with good writing and tone, so the models work very well in practice.
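As an illustrative sketch of the structural signals described above (this is not ORES's actual feature extraction, which is built on the revscoring library; the function name and regexes here are our own):

```python
import re

def structural_features(wikitext):
    """Extract a few structural signals from raw wikitext.

    Illustrative only -- the real wp10 model computes its features with
    the revscoring library, not with these simple regexes."""
    return {
        # Count of section headings like "== History ==".
        "sections": len(re.findall(r"^=+[^=].*?=+\s*$", wikitext, re.MULTILINE)),
        # Whether the page transcludes an infobox template.
        "has_infobox": bool(re.search(r"\{\{\s*[Ii]nfobox", wikitext)),
        # Count of <ref> references.
        "references": len(re.findall(r"<ref[\s>]", wikitext)),
        # Count of references using a {{cite ...}} template.
        "cite_templates": len(re.findall(r"\{\{\s*[Cc]ite", wikitext)),
    }

sample = (
    "{{Infobox person|name=Ada Lovelace}}\n"
    "== Life ==\n"
    "Ada was a mathematician.<ref>{{cite book|title=Ada}}</ref>\n"
    "== Legacy ==\n"
    "She is celebrated annually.<ref>A newspaper.</ref>\n"
)
print(structural_features(sample))
```

Counts like these correlate with assessed quality even though they say nothing directly about writing or tone, which is why a purely structural model works well in practice.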


 * wp10 – predicts the (Wikipedia 1.0-like) assessment class of an article or draft

Support table
The following table reports the status of ORES support by wiki and model available. If you don't see your wiki listed, or support for the model you'd like to use, you can request support.

API usage
ORES offers a RESTful API service for dynamically retrieving scoring information about revisions. See https://ores.wikimedia.org for more information on how to use the API.

If you're querying the service about a large number of revisions, it's recommended to batch 50 revisions in each request, as described below. It's acceptable to use up to 4 parallel requests. For an even larger number of queries, you can run ORES locally.
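The batching advice above can be sketched as follows (the helper function is our own invention; only the `/v3/scores` URL shape follows the documented examples):

```python
from urllib.parse import urlencode

ORES_HOST = "https://ores.wikimedia.org"  # public ORES endpoint

def batched_score_urls(context, models, rev_ids, batch_size=50):
    """Split rev_ids into batches of at most `batch_size` (the recommended
    maximum per request) and build one /v3/scores URL per batch.
    Models and revision IDs are joined with "|", as the API expects."""
    urls = []
    for i in range(0, len(rev_ids), batch_size):
        batch = rev_ids[i:i + batch_size]
        query = urlencode({
            "models": "|".join(models),
            "revids": "|".join(str(r) for r in batch),
        }, safe="|")  # keep "|" literal, as in the documented examples
        urls.append(f"{ORES_HOST}/v3/scores/{context}/?{query}")
    return urls

# 120 revision IDs -> 3 request URLs (batches of 50 + 50 + 20),
# which could then be fetched with up to 4 requests in parallel.
urls = batched_score_urls("enwiki", ["damaging", "goodfaith"],
                          list(range(1000, 1120)))
print(len(urls))  # 3
```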

Example query: http://ores.wmflabs.org/v3/scores/enwiki/?models=draftquality|wp10&revids=34854345|485104318

Example query: https://ores.wikimedia.org/v3/scores/wikidatawiki/421063984/damaging
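Reading a score out of the JSON that such a query returns can be sketched like this (the probability values below are invented for illustration; only the nesting is meant to mirror the v3 response format):

```python
# A response in the shape returned by the v3 endpoint for the
# wikidatawiki/421063984/damaging example query above.
sample_response = {
    "wikidatawiki": {
        "scores": {
            "421063984": {
                "damaging": {
                    "score": {
                        "prediction": False,
                        "probability": {"false": 0.94, "true": 0.06},
                    }
                }
            }
        }
    }
}

def extract_prediction(response, context, rev_id, model):
    """Walk the nested v3 response down to one model's score."""
    score = response[context]["scores"][str(rev_id)][model]["score"]
    return score["prediction"], score["probability"]

prediction, probability = extract_prediction(
    sample_response, "wikidatawiki", 421063984, "damaging")
print(prediction, probability["true"])
```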

Local usage
To run ORES locally, install the ores Python package and start the bundled web application, following the instructions in the ORES repository's README. On startup, the server prints the local address it is listening on.