ORES/FAQ

Warning: The ORES infrastructure is being deprecated by the Machine Learning team; please check wikitech:ORES for more info.

Overview

The following is a list of questions and answers about the ORES AI service for Wikipedia.

The purpose of this FAQ is to help readers become more familiar with what ORES is, how ORES is used on Wikipedia and other projects, and how to get involved.

If you are interested in learning more about artificial intelligence, machine learning, or data science, getting involved with ORES is a great place to start!

If you have a question that is not covered in this FAQ, please ask on the ORES/FAQ Discussion Page.

Beginner Questions

What does ORES stand for?

ORES is an acronym for Objective Revision Evaluation Service. ORES is pronounced like the English word "ores" (ɔɹz).

We chose a mining metaphor for ORES because machine learning models are a product of data mining analysis.

What is ORES used for?

Predicting edit and article quality

ORES is an artificial intelligence (AI) service that helps human editors improve the quality of edits and articles on Wikipedia.

ORES uses a combination of open data and open source machine learning algorithms to train models and create scores. ORES scores help predict the quality of edits as they are made, as well as the quality of articles.

Learn more about the background and basics of ORES on Wikipedia.

Tools and services that use ORES scores

Many tools use ORES scores. These tools make it possible to automate time-consuming workflows that would otherwise have to be done manually by human editors.

ORES tools use machine learning to predict the quality of new edits and articles, quickly identify and address damaging edits (sometimes called "vandalism"), check for copyright violations, and patrol recent changes to Wikipedia articles.

What's an ORES score?

ORES scores are assigned to individual edits made to articles on Wikipedia. They describe the quality of edits and help humans determine which kinds of edits are damaging to articles and which kinds are made in good faith. ORES scores make it easier to identify editors who might need extra support to make higher-quality edits to articles. They can also help identify and revert obvious vandalism.

ORES scores allow humans and machines to work together to generate better article content and to improve existing content.

What is a patroller?

A patroller is a human user on Wikipedia who helps determine whether edits might be damaging. ORES supports patrollers by providing filters that identify new edits that may be damaging.

Potentially damaging edits are flagged and brought to the attention of human patrollers who make the final decisions regarding edit quality.

What are damaging edits (sometimes called "vandalism")?

"Vandalism" occurs when an editor decides to make a damaging edit on purpose.

Sometimes people make edits that have damaging effects, even when they make these edits with the best intentions. A patroller's job is to look for "damaging" edits, whether the damage was done on purpose or not.

Additional information is available on the ORES review tools page.

What is an article?

An article, or entry, is an encyclopedic page that anyone can edit. New information is always emerging, so articles are revised often and may change over time.

ORES uses information from the article's revisions to score the content of the article.

What is an edit?

Wikipedia articles can be changed by anyone who wants to make changes or contribute new information. Each edit is made by a user who opens the article, makes a change, and saves it. Edits cannot be changed after the fact, but a new edit can be made to correct mistakes.

When ORES scores an edit, it analyzes the change to determine whether it was helpful or damaging to the article and whether the change was made in good faith.

Edits or changes are also known as "revisions." The info page for an edit (example) shows the change made in that edit and the contents of the article, once that edit is applied.

What does quality mean?

Article content is scored using scales such as the Wikipedia 1.0 assessment, which are designed to give an idea of the completeness and readability of an article, as well as the richness of citations to supporting material. Each wiki language and project will use its own scale. See the English Wikipedia 1.0 scale for more information.
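
For illustration, the articlequality model can be queried for a prediction on such a scale. Below is a minimal Python sketch (assuming the requests library is installed; the revision ID is an arbitrary example, and given the deprecation warning at the top of this page, the endpoint may no longer be live):

import requests

# Ask the articlequality model to place one revision (arbitrary example ID)
# on the English Wikipedia 1.0 scale: Stub, Start, C, B, GA, or FA.
response = requests.get(
    "https://ores.wikimedia.org/v3/scores/enwiki/",
    params={"models": "articlequality", "revids": 234234320},
)
score = response.json()["enwiki"]["scores"]["234234320"]["articlequality"]["score"]
print(score["prediction"])   # the predicted quality class, e.g. "Start"
print(score["probability"])  # a probability for each class on the scale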

How do I get support for my wiki in ORES?

See ORES/Get support.

How do I help translate the ORES interfaces to my language?

Visit our project on translatewiki.net, where you can review or translate.

Why don't I see ORES in my Recent Changes feed?

One possibility is that your wiki doesn't have ORES support yet. See this table: https://tools.wmflabs.org/ores-support-checklist/

If your wiki has "advanced edit quality models" but you still don't see ORES, your wiki preferences might be interfering. Go to "My settings" -> "Recent changes" and make sure that "Hide the improved version of Recent Changes" is not checked.

Intermediate Questions

How do I use ORES?

ORES is built into the RecentChanges, Watchlist, and Contributions pages for supported wikis. We recommend using the new filter interface, which gives more flexible searching and highlighting for several thresholds of damaging and good-faith prediction.

If your wiki isn't supported yet, please help us put together the language assets and let us know that you're interested in working with us to develop support. See "How do I get support for my wiki" above.

How can I use ORES to support my editing activities?

The best way to make use of ORES is to find a tool that uses ORES predictions. However, you can always query ORES directly via the API. For example, https://ores.wikimedia.org/v3/scores/enwiki/234234320/damaging returns:

{
  "enwiki": {
    "models": {
      "damaging": {
        "version": "0.3.0"
      }
    },
    "scores": {
      "234234320": {
        "damaging": {
          "score": {
            "prediction": false,
            "probability": {
              "false": 0.9785340256606994,
              "true": 0.021465974339300656
            }
          }
        }
      }
    }
  }
}
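
The same request can be made from a script. A minimal Python sketch (assuming the requests library is installed):

import requests

# Fetch the "damaging" prediction shown above for revision 234234320.
url = "https://ores.wikimedia.org/v3/scores/enwiki/234234320/damaging"
data = requests.get(url).json()

score = data["enwiki"]["scores"]["234234320"]["damaging"]["score"]
print(score["prediction"])           # False: the edit is probably not damaging
print(score["probability"]["true"])  # ~0.02: probability that it is damaging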

What tools are available that make use of ORES?

See our list of tools that use ORES for examples of what's available.

What types of models are available?

Learn more about the models available for ORES.

How do I run an instance of ORES?

For development? The best place to start is to install MediaWiki-Vagrant, and enable the following roles: "betafeatures", "ores", "ores_service", and "wikilabels".

As a public service? Nobody outside of the Machine Learning team has tried this yet. Please contact the team directly for more support.

What does ORES' architecture look like?

[Diagram: The ORES architecture.]

Requests to ORES first go to a set of load balancers that distribute the load to a set of WSGI workers that validate requests and perform basic IO for scoring jobs. If a score has already been generated, the WSGI workers will find it in the score cache and respond immediately. Otherwise, the scoring job is farmed out to a set of celery workers via a Redis task queue.
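
The cache-then-queue pattern is easy to sketch. The following Python fragment is a highly simplified illustration, not ORES' actual code; it assumes the redis and celery packages and a local Redis server:

import redis
from celery import Celery

cache = redis.Redis()  # stands in for the score cache

app = Celery("scoring",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task
def compute_score(wiki, rev_id, model):
    # On a real celery worker this would extract features and run the model;
    # here it is stubbed so the sketch stays self-contained.
    return {"prediction": False}

def handle_request(wiki, rev_id, model):
    # WSGI-worker side: check the score cache, otherwise enqueue the job.
    key = f"{wiki}:{model}:{rev_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached  # cache hit: respond immediately
    job = compute_score.delay(wiki, rev_id, model)  # farm out via the Redis queue
    return job.get(timeout=15)  # block until a celery worker finishes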

What information does ORES use to evaluate an edit?

In addition to the content of the edit itself, ORES uses a variety of metadata about the edit and the editor to evaluate each edit. See https://ores.wikimedia.org/v3/scores/enwiki/123457/damaging?features for a full list. You can experiment with the local gradients for each feature by injecting counterfactuals; see ORES/Feature injection.
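
As a sketch of what injecting a counterfactual looks like, a feature value can be passed as a query parameter. The Python fragment below assumes the requests library; the feature name is illustrative, so check ORES/Feature injection for the exact syntax and names:

import requests

# Re-score revision 123457 as if the editor had registered only ten seconds
# before the edit, and echo back the feature values ORES used.
# The feature name is illustrative; see ORES/Feature injection for details.
response = requests.get(
    "https://ores.wikimedia.org/v3/scores/enwiki/123457/damaging",
    params={
        "features": "true",
        "feature.temporal.revision.user.seconds_since_registration": 10,
    },
)
print(response.json())  # compare against the uninjected score to see the effect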

How can I get help with ORES?

Getting help for your ORES project

Many people who have a basic knowledge of how ORES works are interested in how to get help or how to help with ORES projects. A good place to start is the ORES support page. You can also look at our support table to see if we already support your wiki.

Who do I talk to about problems/ideas/etc.?

Please contact the Machine Learning team, using any of the means under "Work with us" at the bottom of the page.

How do I report problems with ORES' scores?

A dedicated feedback system called Judgment and Dialogue Engine (JADE) is in the early stages of development, and it will be the best place to report problems. In the meantime, please create a wiki page, for example under the relevant Patrolling project, and alert us about the page using Phabricator or other means.

How can I contribute to ORES?

If you want to help with development, please visit our IRC channel, #wikimedia-ai. Most of our code is available on GitHub under https://github.com/wiki-ai/

To translate messages into your language, please visit our project on translatewiki.net.

Expert Questions

What deployments of ORES are there?

We maintain a production-ish space, where uptime and stability are optimized for, as well as an experimental space, where new models and features are deployed in a high-performance and flexible environment.

Production (production-ish)
- URL: ores.wikimedia.org
- Suggested use: Serving real-time requests
- Stability: Stable
- Performance: Fast
- Code version: Stable code
- Parallel requests: Up to 2 parallel requests per second

Beta (production-ish)
- URL: ores-beta.wmflabs.org
- Suggested use: Making sure we don't break production
- Stability: Not stable
- Performance: Extremely limited
- Code version: Experimental code
- Parallel requests: Up to 1 request per second

WMFLabs (experimental)
- URL: ores.wmflabs.org
- Suggested use: Experimenting with new code and running analyses
- Stability: Mostly stable
- Performance: Pretty fast
- Code version: Experimental code
- Parallel requests: Up to 4 parallel requests per second

Staging (experimental)
- URL: ores-staging.wmflabs.org
- Suggested use: Making sure we don't break the experimental install
- Stability: Not stable
- Performance: Extremely limited
- Code version: Experimental code
- Parallel requests: Up to 1 request per second

How is ORES configured?

ORES is configured via two repositories. The production-ish configuration lives in https://phabricator.wikimedia.org/source/ores-deploy/, and the experimental configuration lives in https://github.com/wiki-ai/ores-wmflabs-deploy. The primary configuration asset is config/00-main.yaml. Other configurations at the machine level come from https://github.com/wikimedia/puppet.

How do I deploy ORES?

See our documentation on the Wikitech wiki: https://wikitech.wikimedia.org/wiki/ORES/Deployment

Who should I contact about ORES stuff?

File bugs on the Phabricator board: https://phabricator.wikimedia.org/tag/machine-learning-team

Join us on IRC: #wikimedia-ml

What are my options when querying ORES?

ORES provides basic documentation about how to query the system via Swagger. See https://ores.wikimedia.org for links to the Swagger documentation for each version of our API. For example, https://ores.wikimedia.org/v2/ lists the documentation for the v2 interface.

What models are available and what are their fitness statistics?

The best way to find out what models are available is to query ORES directly. For example, https://ores.wikimedia.org/v2/scores/ lists out all of the wikis and models that are supported with their versions. In order to get fitness statistics, add "?model_info" to the URL (https://ores.wikimedia.org/v2/scores/?model_info). Eventually we'll have a UI to make accessing this information easier.
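
A minimal Python sketch of the same queries (assuming the requests library is installed):

import requests

# List every wiki and model the service supports, with fitness statistics.
response = requests.get(
    "https://ores.wikimedia.org/v2/scores/",
    params={"model_info": ""},
)
data = response.json()
for wiki, models in sorted(data["scores"].items()):
    print(wiki, "->", ", ".join(sorted(models)))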

Where should I set my thresholds for filtering/highlighting?

This depends on what you want to optimize for. For example, when using the "damaging" model it's useful to optimize for high recall to make sure that most vandalism is caught. ORES reports "threshold optimizations" that you can use to identify an appropriate threshold. For example:

https://ores.wikimedia.org/v2/scores/enwiki/damaging?model_info=statistics.thresholds.true.'maximum precision @ recall >= 0.9' returns:

{
  "scores": {
    "enwiki": {
      "damaging": {
        "info": {
          "statistics": {
            "thresholds": {
              "true": [
                {
                  "!f1": 0.883,
                  "!precision": 0.996,
                  "!recall": 0.794,
                  "accuracy": 0.797,
                  "f1": 0.233,
                  "filter_rate": 0.77,
                  "fpr": 0.206,
                  "match_rate": 0.23,
                  "precision": 0.134,
                  "recall": 0.901,
                  "threshold": 0.09295862121864444
                }
              ]
            }
          }
        },
        "version": "0.4.0"
      }
    }
  }
}

This means you should set your threshold at 0.093 and expect to get 13.4% precision at 90% recall.
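
In a tool, that threshold would be compared against each score's "true" probability. A minimal Python sketch (assuming the requests library; the revision ID reuses the earlier example):

import requests

# Flag a revision for review when its "damaging" probability reaches the
# threshold derived above (~13.4% precision at ~90% recall on enwiki).
THRESHOLD = 0.093

url = "https://ores.wikimedia.org/v3/scores/enwiki/234234320/damaging"
score = requests.get(url).json()["enwiki"]["scores"]["234234320"]["damaging"]["score"]

if score["probability"]["true"] >= THRESHOLD:
    print("flag for human review")
else:
    print("probably fine")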

Where do I get more information about what ORES is and how it is used?

ORES on MediaWiki

ORES MediaWiki Extension

PythonHosted: https://pythonhosted.org (uses Python's Sphinx doc framework) (audience: developers / contributors)