Wikidata Query Service/User Manual/fr

Wikidata Query Service (WDQS) is a software package and public service designed to provide a SPARQL endpoint which allows you to query against the Wikidata data set.

Please note that the service is currently in "beta" mode, which means that details of the data set or of the service provided may change without prior notice.

This page and other relevant documentation pages will be updated accordingly; it is recommended to watch them if you are using the service.

You can see example SPARQL queries on the SPARQL examples page.

Data set
The Wikidata Query Service operates on a data set from Wikidata.org, represented in RDF as described in the RDF dump format documentation.

The service's data set does not exactly match the data set produced by the RDF dumps, mainly for performance reasons; the documentation describes the small set of differences.

You can download a weekly dump of the same data from:

https://dumps.wikimedia.org/wikidatawiki/entities/

Basics - Understanding SPO (subject, predicate, object), also known as a semantic triple
spo, or "subject, predicate, object", is known as a triple; in Wikidata it is commonly called a statement about data.

The statement "The capital of the United States is Washington, DC" consists of the subject "United States" (Q30), the predicate "capital is" (P36), and the object "Washington, DC" (Q61). This statement can be represented as three URIs:
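  <http://www.wikidata.org/entity/Q30> <http://www.wikidata.org/prop/direct/P36> <http://www.wikidata.org/entity/Q61> .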

Thanks to prefixes (see below), the same statement can be written in a more concise form. Note the period at the end, which marks the end of the statement.
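  wd:Q30 wdt:P36 wd:Q61 .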

The /entity/ (wd:) represents a Wikidata entity (Q-number values). The /prop/direct/ (wdt:) is a "truthy" property - the value one would most often expect when looking at the statement. Truthy properties are needed because some statements may be "more true" than others. For example, the statement "The capital of the United States is New York City" is also true - but only in the context of American history. WDQS uses ranks to determine which statements should be used as truthy.

In addition to truthy statements, WDQS stores all statements (truthy or not), but they do not use the same wdt: prefix. The U.S. "capital" property has three values: DC, Philadelphia, and New York. Each of these values has "qualifiers" - additional information, such as start and end dates, that narrows the scope of the statement. To store this information in the RDF database (triplestore), WDQS introduces an auto-magic "statement" subject, which is essentially a random number:
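
In outline (the wds: statement id and the qualifier value below are illustrative placeholders; p:, ps: and pq: are the statement, statement-value and qualifier prefixes):

  wd:Q30 p:P36 wds:Q30-SOMEID .
  wds:Q30-SOMEID ps:P36 wd:Q61 .
  wds:Q30-SOMEID pq:P580 "1800-11-17T00:00:00Z"^^xsd:dateTime .   # P580: start time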

See the SPARQL tutorial on qualifiers for more information.

spo is also used as the basic syntactic structure for querying RDF data structures, or any graph database or triplestore, such as WDQS (Wikidata Query Service), which is backed by Blazegraph, a high-performance graph database.

Advanced uses of triples (spo) even include using triples as the objects or subjects of other triples!

Basics - Understanding prefixes
Subjects and predicates (the first and second values of a triple) must always be stored as URIs. For example, if the subject is Universe (Q1), it is stored as <http://www.wikidata.org/entity/Q1>. Prefixes allow us to write this long URI in a shorter form: wd:Q1. Unlike subjects and predicates, the object (the third value of a triple) can be either a URI or a literal, e.g. a number or a string.

WDQS understands many shortcut abbreviations, known as prefixes. Some are internal to Wikidata, e.g. wd, wdt, p, ps, bd, and many others are commonly used external prefixes, like rdf, skos, owl, schema.

In the following query, we ask for items where there is a statement of "P279 = Q7725634", or in fuller terms, we select subjects that have a predicate of "subclass of" with an object of "literary work".
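  SELECT ?item WHERE {
    ?item wdt:P279 wd:Q7725634 .
  }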

The output variables:

Extensions
The service supports the following extensions to standard SPARQL capabilities:

Label service
You can fetch the label, alias, or description of entities you query, with language fallback, using the specialized service with the URI <http://wikiba.se/ontology#label>. This service is very helpful when you want to retrieve labels, as it reduces the complexity of the SPARQL queries you would otherwise need to achieve the same effect.

The service can be used in one of two modes: manual and automatic.

In automatic mode, you only need to specify the service template within the query, for example:
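  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }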

and WDQS will automatically generate labels as follows:


 * If an unbound variable in SELECT is named ?xxxLabel, then WDQS produces the label (rdfs:label) for the entity in variable ?xxx.
 * If an unbound variable in SELECT is named ?xxxAltLabel, then WDQS produces the alias (skos:altLabel) for the entity in variable ?xxx.
 * If an unbound variable in SELECT is named ?xxxDescription, then WDQS produces the description (schema:description) for the entity in variable ?xxx.

In each case, the variable ?xxx should be bound, otherwise the service fails.

You specify your preferred language(s) for the label with one or more bd:serviceParam wikibase:language "language-code" triples. Each string can contain one or more language codes, separated by commas. WDQS considers languages in the order in which you specify them. If no label is available in any of the specified languages, the Q-id of the entity (without any prefix) serves as its label.

The Wikidata Query Service website auto-magically replaces [AUTO_LANGUAGE] with the language code of the current user's interface. For example, if the user's UI is in French, the SPARQL code bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" will be converted to bd:serviceParam wikibase:language "fr,en" before being sent to the query service.

Example, showing the list of US presidents and their spouses:
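Reconstructed from the upstream example (P6 is "head of government", P26 is "spouse"):

  SELECT ?p ?pLabel ?w ?wLabel WHERE {
    wd:Q30 p:P6 ?statement .    # all "head of government" statements for the US
    ?statement ps:P6 ?p .       # the head of government (president)
    ?p wdt:P26 ?w .             # the president's spouse
    SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
  }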

In this example, WDQS automatically creates the labels ?pLabel and ?wLabel for the entities in ?p and ?w.

In manual mode, you explicitly bind the label variables within the service call, but WDQS will still provide language resolution and fallback. Example:
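A sketch of manual binding (rdfs:label and schema:description are the properties the service resolves):

  SELECT ?q ?qLabel ?qDescription WHERE {
    ?q wdt:P279 wd:Q7725634 .
    SERVICE wikibase:label {
      bd:serviceParam wikibase:language "fr,de,en" .
      ?q rdfs:label ?qLabel .
      ?q schema:description ?qDescription .
    }
  } LIMIT 10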

This will consider labels and descriptions in French, German and English, and if none are available, will use the Q-id as the label.

Geospatial search
The service allows you to search for items with coordinates located within a given radius of a center point or within a given bounding box.

Search around a point
Example:
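Reconstructed from the upstream example (Q64 is Berlin, Q1248784 is "airport", P625 is "coordinate location"):

  # Airports within 100 km of Berlin
  SELECT ?place ?location WHERE {
    wd:Q64 wdt:P625 ?berlinLoc .
    SERVICE wikibase:around {
      ?place wdt:P625 ?location .
      bd:serviceParam wikibase:center ?berlinLoc .
      bd:serviceParam wikibase:radius "100" .
    }
    ?place wdt:P31/wdt:P279* wd:Q1248784 .
  }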

The first line of the wikibase:around service call must have the format ?item predicate ?location, where the search result will bind ?item to items with the specified locations and ?location to their coordinates. The supported parameters are:
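
Reconstructed from the upstream manual (verify against the current documentation), they include:

 * wikibase:center - the point around which the search is performed (required)
 * wikibase:radius - the search radius, in kilometers (required)
 * wikibase:globe - the globe on which to search (optional; defaults to Earth)
 * wikibase:distance - a variable that receives the distance from the center (optional)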

Search within box
Example of box search:
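A sketch, reconstructed from the upstream example (Q16553 is San Jose, Q18013 is Sacramento, Q3914 is "school"):

  # Schools between San Jose, CA and Sacramento, CA
  SELECT ?place ?location WHERE {
    wd:Q16553 wdt:P625 ?SJloc .
    wd:Q18013 wdt:P625 ?SCloc .
    SERVICE wikibase:box {
      ?place wdt:P625 ?location .
      bd:serviceParam wikibase:cornerSouthWest ?SJloc .
      bd:serviceParam wikibase:cornerNorthEast ?SCloc .
    }
    ?place wdt:P31 wd:Q3914 .
  }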

or:
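The same query, with the corners given as west and east (a fragment replacing the SERVICE block above):

  SERVICE wikibase:box {
    ?place wdt:P625 ?location .
    bd:serviceParam wikibase:cornerWest ?SJloc .
    bd:serviceParam wikibase:cornerEast ?SCloc .
  }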

Coordinates may be specified directly:
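For example (illustrative coordinates; note that WKT points are written longitude first, then latitude):

  SERVICE wikibase:box {
    ?place wdt:P625 ?location .
    bd:serviceParam wikibase:cornerWest "Point(-122.3 37.2)"^^geo:wktLiteral .
    bd:serviceParam wikibase:cornerEast "Point(-121.4 38.6)"^^geo:wktLiteral .
  }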

The first line of the wikibase:box service call must have the format ?item predicate ?location, where the result of the search will bind ?item to items within the specified box and ?location to their coordinates. The parameters supported are:

wikibase:cornerSouthWest and wikibase:cornerNorthEast should be used together, as should wikibase:cornerWest and wikibase:cornerEast; the two pairs cannot be mixed. If the wikibase:cornerWest and wikibase:cornerEast predicates are used, then the points are assumed to be the coordinates of the diagonal of the box, and the corners are derived accordingly.

Distance function
The function geof:distance returns the distance between two points on Earth, in kilometers. Example usage:
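
  # Distance between Berlin (Q64) and Paris (Q90)
  SELECT ?distance WHERE {
    wd:Q64 wdt:P625 ?berlinLoc .
    wd:Q90 wdt:P625 ?parisLoc .
    BIND(geof:distance(?berlinLoc, ?parisLoc) AS ?distance)
  }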

Coordinate parts functions
The functions wikibase:geoGlobe, wikibase:geoLatitude and wikibase:geoLongitude return the parts of a coordinate - the globe URI, the latitude, and the longitude, respectively.

Decode URL functions
The function wikibase:decodeUri decodes (i.e. reverses the percent-encoding of) a given URI string. This may be necessary when converting Wikipedia titles (which are encoded) into actual strings. This function is the opposite of SPARQL's encode_for_uri function.
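A minimal illustration:

  SELECT ?title WHERE {
    BIND(wikibase:decodeUri("Bob%20Dylan") AS ?title)   # yields "Bob Dylan"
  }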

Automatic prefixes
Most prefixes that are used in common queries are supported by the engine without the need to explicitly specify them.

Extended dates
The service supports date values of type xsd:dateTime in a range of about 290 billion years in the past and in the future, with one-second resolution. WDQS stores dates as a 64-bit number of seconds since the Unix epoch.
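
For example, dates far in the past can be filtered directly (a minimal sketch; P569 is "date of birth"):

  SELECT ?person ?dob WHERE {
    ?person wdt:P569 ?dob .
    FILTER(?dob < "-0500-01-01T00:00:00Z"^^xsd:dateTime)
  } LIMIT 10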

Blazegraph extensions
The Blazegraph platform, on top of which WDQS is implemented, has its own set of SPARQL extensions, among them several graph traversal algorithms documented on the Blazegraph Wiki, including BFS, shortest path, CC (connected components), and PageRank implementations.

Please also refer to the Blazegraph documentation on query hints for information about how to control query execution and various aspects of the engine.

Federation
We allow SPARQL Federated Queries to call out to a selected number of external databases. Please see the full list of federated endpoints on the dedicated page.

Example federated query:
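A minimal sketch, assuming the UniProt endpoint https://sparql.uniprot.org/sparql is on the allowlist (the class URI is from the UniProt core ontology):

  SELECT ?protein WHERE {
    SERVICE <https://sparql.uniprot.org/sparql> {
      ?protein a <http://purl.uniprot.org/core/Protein> .
    }
  } LIMIT 10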

Please note that the databases served by the federated endpoints use ontologies that may be very different from Wikidata's. Please refer to the owners' documentation links to learn about the ontologies and data access for these databases.

MediaWiki API
Please see full description on MediaWiki API Service documentation page.

The MediaWiki API service allows you to call out to the MediaWiki API from SPARQL and receive the results inside the SPARQL query. Example (finding category members):
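A sketch based on the MWAPI service documentation; the gcm* parameters mirror the MediaWiki categorymembers generator, and the category name is illustrative:

  SELECT ?title WHERE {
    SERVICE wikibase:mwapi {
      bd:serviceParam wikibase:api "Generator" .
      bd:serviceParam wikibase:endpoint "en.wikipedia.org" .
      bd:serviceParam mwapi:generator "categorymembers" .
      bd:serviceParam mwapi:gcmtitle "Category:Landlocked countries" .
      bd:serviceParam mwapi:gcmlimit "max" .
      ?title wikibase:apiOutput mwapi:title .
    }
  }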

Wikimedia service
Wikimedia runs the public service instance of WDQS, which is available for use at http://query.wikidata.org/.

The runtime of the query on the public endpoint is limited to 60 seconds. That is true both for the GUI and the public SPARQL endpoint. If you need to run longer queries, please contact the Discovery team.

GUI
The GUI at the home page of http://query.wikidata.org/ allows you to edit and submit SPARQL queries to the query engine. The results are displayed as an HTML table. Note that every query has a unique URL which can be bookmarked for later use. Going to this URL will put the query in the edit window, but will not run it - you still have to click "Execute" for that.

One can also generate a short URL for the query via a URL shortening service by clicking the "Generate short URL" link on the right - this will produce the shortened URL for the current query.

The "Add prefixes" button generates the header containing standard prefixes for SPARQL queries. The full list of prefixes that can be useful is listed in the RDF format documentation. Note that most common prefixes work automatically, since WDQS supports them out of the box.

The GUI also features a simple entity explorer which can be activated by clicking on the "🔍" symbol next to the entity result. Clicking on the entity Q-id itself will take you to the entity page on wikidata.org.

Default views

 * Main article: wikidata:Special:MyLanguage/Wikidata:SPARQL query service/Wikidata Query Help/Result Views

If you run the query in the WDQS GUI, you can choose which view to present by specifying a comment of the form #defaultView:ViewName at the beginning of the query.
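
For example, to show the results on a map (a minimal sketch; Q23413 is "castle", P625 is "coordinate location"):

  #defaultView:Map
  SELECT ?castle ?coord WHERE {
    ?castle wdt:P31 wd:Q23413 ;
            wdt:P625 ?coord .
  } LIMIT 100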

SPARQL endpoint
SPARQL queries can be submitted directly to the SPARQL endpoint with a GET or POST request to https://query.wikidata.org/sparql.

GET requests have the query specified in the URL, in the format https://query.wikidata.org/sparql?query=SPARQL, where SPARQL is the URL-escaped query.

POST requests can alternatively accept the query in the body of the request, instead of the URL, which allows running larger queries without hitting URL length limits. (Note that the POST body must still include the query= prefix (that is, it should be query=SPARQL rather than just SPARQL), and the SPARQL query must still be URL-escaped.)

The result is returned as XML by default, or as JSON if either the query parameter format=json is included in the URL or the header Accept: application/sparql-results+json is provided with the request.

The JSON format is standard SPARQL 1.1 Query Results JSON Format.

It is recommended to use GET for smaller queries and POST for larger queries, as POST queries are not cached.
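
For example, with curl (a minimal sketch):

  curl -G 'https://query.wikidata.org/sparql' \
       -H 'Accept: application/sparql-results+json' \
       --data-urlencode 'query=SELECT ?item WHERE { ?item wdt:P279 wd:Q7725634 . } LIMIT 5'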

Supported formats
The following output formats are currently supported by the SPARQL endpoint:

Query limits
There is a hard query deadline configured, which is set to 60 seconds. There are also the following limits:


 * One client (user agent + IP) is allowed 60 seconds of processing time each 60 seconds
 * One client is allowed 30 error queries per minute

Clients exceeding the limits above are throttled with HTTP code 429. Use the Retry-After header to see when the request can be repeated. If a client ignores 429 responses and continues to send requests over the limits, it can be temporarily banned from the service.

Every query will time out when it takes longer to execute than this configured deadline. You may want to optimize the query or report a problematic query here.

Also note that currently access to the service is limited to 5 parallel queries per IP. The above limits are subject to change depending on resources and usage patterns.

Namespaces
The data on Wikidata Query Service contains the main namespace, wdq, to which queries to the main SPARQL endpoint are directed, and other auxiliary namespaces, listed below. To query data from a different namespace, use the endpoint URL https://query.wikidata.org/bigdata/namespace/NAMESPACENAME/sparql.

Categories
Please see full description on Categories documentation page.

Wikidata Query Service also provides access to the category graph of selected wikis. The list of covered wikis can be seen here: https://noc.wikimedia.org/conf/categories-rdf.dblist

The category namespace name is categories. The SPARQL endpoint for accessing it is https://query.wikidata.org/bigdata/namespace/categories/sparql.

Please see Categories page for detailed documentation.

DCAT-AP
The DCAT-AP data for Wikidata is available via SPARQL in the dcatap namespace.

The SPARQL endpoint for accessing it is: https://query.wikidata.org/bigdata/namespace/dcatap/sparql

The source for the data is: https://dumps.wikimedia.org/wikidatawiki/entities/dcatap.rdf

Example query to retrieve data:
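A minimal sketch using standard DCAT vocabulary terms (run against the dcatap endpoint above):

  PREFIX dcat: <http://www.w3.org/ns/dcat#>
  SELECT ?url ?size WHERE {
    ?distribution a dcat:Distribution ;
                  dcat:downloadURL ?url ;
                  dcat:byteSize ?size .
  } LIMIT 10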

Linked Data Fragments endpoint
We also support querying the database using the Triple Pattern Fragments interface. This allows you to cheaply and efficiently browse triple data where one or two components of the triple are known and you need to retrieve all triples matching this template. See more information at the Linked Data Fragments site.

The interface can be accessed at the URL https://query.wikidata.org/bigdata/ldf. Example requests:


 * https://query.wikidata.org/bigdata/ldf?subject=http%3A%2F%2Fwww.wikidata.org%2Fentity%2FQ146 - all triples with subject wd:Q146 (house cat)


 * https://query.wikidata.org/bigdata/ldf?subject=&predicate=http%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23label&object=%22London%22%40en - all triples that have English label "London"

Note that only full URLs are currently supported for the subject, predicate and object parameters.

By default, the HTML interface is displayed, but several data formats are available, selected via the Accept HTTP header.

The data is returned in pages, with a page size of 100 triples. Pages are numbered starting from 1, and the page number is set by the page parameter.

Standalone service
As the service is open source software, it is also possible to run it on any user's server, using the instructions provided below.

The hardware recommendations can be found in Blazegraph documentation.

If you plan to run the service against a non-Wikidata Wikibase instance, please see the further instructions.

Installing
In order to install the service, it is recommended that you download the full service package as a ZIP file, e.g. from Maven Central, with group ID org.wikidata.query.rdf and artifact ID "service", or clone the source distribution at https://github.com/wikimedia/wikidata-query-rdf/ and build it with "mvn package". The package ZIP can then be found in the dist/target directory of the source tree.

The package contains the Blazegraph server as a .war application, the libraries needed to run the Updater service that fetches fresh data from the Wikidata site, scripts to make various tasks easier, and the GUI in the gui subdirectory. If you want to use the GUI, you will have to configure your HTTP server to serve it.

By default, only the SPARQL endpoint at http://localhost:9999/bigdata/namespace/wdq/sparql is configured, and the default Blazegraph GUI is available at http://localhost:9999/bigdata/. Note that in the default configuration, both are accessible only from localhost. You will need to provide external endpoints and an appropriate access control if you intend to access them from outside.

Using snapshot versions
If you want to install an unreleased snapshot version (usually necessary when a released version has a bug that has been fixed but a new release is not yet available) and do not want to compile your own binaries, you can use either:
 * https://github.com/wikimedia/wikidata-query-deploy - deployment repo containing production binaries. Needs git-fat working. Check it out and run git fat init && git fat pull.
 * Archiva snapshot deployments at https://archiva.wikimedia.org/#artifact/org.wikidata.query.rdf/service - choose the latest version, then Artifacts, and select the latest package for download.

Loading data
The further install procedure is described in detail in the Getting Started document, which is part of the distribution, and involves the following steps:


 * 1) Download a recent RDF dump from https://dumps.wikimedia.org/wikidatawiki/entities/ (the RDF one is the one ending in .ttl.gz).
 * 2) Pre-process the data with the munge.sh script. This creates a set of TTL files with preprocessed data, with names like wikidump-000000001.ttl.gz, etc. See options for the script below.
 * 3) Start the Blazegraph service by running the runBlazegraph.sh script.
 * 4) Load the data into the service using loadData.sh. Note that loading data is usually significantly slower than pre-processing, so you can start loading as soon as several preprocessed files are ready. Loading can be restarted from any file by using the options described below.
 * 5) After all the data is loaded, start the Updater service using runUpdate.sh.

Loading categories
If you also want to load category data, please do the following:


 * 1) Create the namespace, e.g. categories (see the sketch after this list).
 * 2) Load the category data into it.
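
A sketch, assuming the createNamespace.sh, forAllCategoryWikis.sh and loadCategoryDump.sh scripts shipped with the distribution (verify the names against your copy):

  ./createNamespace.sh categories
  ./forAllCategoryWikis.sh loadCategoryDump.sh categories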

Note that these scripts only load data from Wikimedia wikis according to Wikimedia settings. If you need to work with another wiki, you may need to change some variables in the scripts.

Scripts
The following useful scripts are part of the distribution:

munge.sh
Pre-process data from RDF dump for loading.

Example:
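A plausible invocation (assuming -f names the input dump and -d the output directory; the -l and -s options, if used, should match runUpdate.sh, as noted below):

  ./munge.sh -f data/latest-all.ttl.gz -d data/split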

loadData.sh
Load processed data into Blazegraph. Requires curl to be installed.

Example:
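A plausible invocation (assuming -n names the target namespace and -d the directory of munged files):

  ./loadData.sh -n wdq -d `pwd`/data/split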

runBlazegraph.sh
Run the Blazegraph service.

Example:
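In the simplest case:

  ./runBlazegraph.sh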

Inside the script, there are two variables that one may want to edit. Also, the following environment variables are checked by the script (all of them are optional):

runUpdate.sh
Run the Updater service.

It is recommended that the settings for the -l and -s options (or the absence thereof) be the same for munge.sh and runUpdate.sh, otherwise data may not be updated properly.

Example:
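A plausible invocation (assuming -n names the namespace, as for loadData.sh):

  ./runUpdate.sh -n wdq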

Also, the following environment variables are checked by the script (all of them are optional):

Updater options
The following options work with the Updater app.

They should be given to the runUpdate.sh script as additional options after --, e.g. ./runUpdate.sh -- <options>.

Configurable properties
The following properties are configurable by adding them to the script run command in the scripts above:

Missing features
Below are features which are currently not supported:


 * Redirects are only represented as an owl:sameAs triple, but do not express any equivalence in the data and have no special support.

Contacts
If you notice anything wrong with the service, you can contact the Discovery team by email on the discovery@lists.wikimedia.org mailing list or on the #wikimedia-discovery IRC channel.

Bugs can also be submitted to and tracked on the Discovery Phabricator board.