Wikidata Query Service/User Manual/uk

The Wikidata Query Service (WDQS) is a software package and public service designed to provide a SPARQL endpoint that lets you query the Wikidata data set.

This page and the other relevant documentation pages will be updated accordingly; we recommend watching them if you use the service.

You can see examples of SPARQL queries on the SPARQL examples page.



Data set
The Wikidata Query Service operates on a data set from Wikidata.org, represented in RDF as described in the RDF dump format documentation.

The service's data set does not exactly match the data set produced by the RDF dumps, mainly for performance reasons; the documentation describes a small set of differences.

You can download a weekly dump of the same data from:

https://dumps.wikimedia.org/wikidatawiki/entities/



Basics - Understanding SPO (Subject, Predicate, Object), also known as a semantic triple
SPO, or "subject, predicate, object", is known as a triple; in Wikidata it is usually referred to as a statement about data.

The statement "The United States capital is Washington DC" consists of the subject "United States" (Q30), the predicate "capital is" (P36), and an object "Washington DC" (Q61). This statement can be represented as three URIs:
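 <http://www.wikidata.org/entity/Q30>  <http://www.wikidata.org/prop/direct/P36>  <http://www.wikidata.org/entity/Q61> .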

Thanks to prefixes (see below), the same statement can be written in a more concise form. Note: the period at the end marks the end of the statement.
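 wd:Q30 wdt:P36 wd:Q61 .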

The /entity/ (wd:) prefix represents a Wikidata entity (Q-number values). The /prop/direct/ (wdt:) prefix marks a "truthy" property: the value we would expect most often when looking at the statement. Truthy properties are needed because some statements can be "truer" than others. For example, the statement "The capital of the U.S. is New York City" is true, but only in the historical context of the year 1790. WDQS uses rank to determine which statements should be used as "truthy".

In addition to the truthy statements, WDQS stores all statements (both truthy and not), but they don't use the same wdt: prefix. The U.S. capital has three values: DC, Philadelphia, and New York. And each of these values has "qualifiers": additional information, such as start and end dates, that narrows down the scope of each statement. To store this information in the triplestore, WDQS introduces an automatic "statement" subject, which is essentially a random identifier:
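 # An illustrative sketch; the real statement subject is an opaque wds: identifier
 wd:Q30 p:P36 wds:Q30-SOMESTATEMENTID .
 wds:Q30-SOMESTATEMENTID ps:P36 wd:Q61 .
 wds:Q30-SOMESTATEMENTID pq:P580 "1800-11-17"^^xsd:dateTime .   # start time qualifier (P580); value shown for illustration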

See the SPARQL tutorial on qualifiers for more information.

SPO is also used as the basic syntax layout for querying RDF data structures, or any graph database or triplestore, such as the Wikidata Query Service (WDQS), which is powered by Blazegraph, a high-performance graph database.

Advanced uses of triples even include using triples as the objects or subjects of other triples!



Basics - Understanding prefixes
The subjects and predicates (the first and second values of the triple) must always be stored as URIs. For example, if the subject is Universe (Q1), it is stored as <http://www.wikidata.org/entity/Q1>. Prefixes allow us to write that long URI in a shorter form: wd:Q1. Unlike subjects and predicates, the object (the triple's third value) can be either a URI or a literal, e.g. a number or a string.

WDQS understands many shortcuts, known as prefixes. Some are internal to Wikidata (wd, wdt, p, ps, bd, and so on), and many others are commonly used external prefixes, such as rdf, skos, owl and schema.

In the following query, we are asking for items where there is a statement of "P279 = Q7725634", or in fuller terms, selecting subjects that have a predicate of "subclass of" (P279) with an object of "literary work" (Q7725634).
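A minimal form of that query might look like this (a sketch, with the label service added so the results are readable):

 SELECT ?item ?itemLabel
 WHERE
 {
   ?item wdt:P279 wd:Q7725634 .
   SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
 }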

The output variables are ?item, the matching subjects, and ?itemLabel, their English labels supplied by the label service.

Extensions
The service supports the following extensions to standard SPARQL capabilities, described in the sections below.



Label service
You can fetch the label, alias, or description of entities you query, with language fallback, using the specialized service with the URI <http://wikiba.se/ontology#label>. The service is very helpful when you want to retrieve labels, as it reduces the complexity of the SPARQL queries that you would otherwise need to achieve the same effect.

The service can be used in one of two modes: manual and automatic.

In automatic mode, you only need to specify the service template, for example:
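 SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }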

and WDQS will automatically generate labels as follows:


 * If an unbound variable in the query is named ?NAMELabel, WDQS produces the label (rdfs:label) for the entity in variable ?NAME.
 * If an unbound variable in the query is named ?NAMEAltLabel, WDQS produces the alias (skos:altLabel) for the entity in variable ?NAME.
 * If an unbound variable in the query is named ?NAMEDescription, WDQS produces the description (schema:description) for the entity in variable ?NAME.

In each case, the variable ?NAME must be bound, otherwise the service will not work.

The automatic mode inspects only the query projection; labels that appear inside more complex expressions (for example in aggregates, or renamed with AS) are recognized only partially or not at all. In such cases, you will have to use the manual mode (see below).

You specify your preferred language(s) for the label with one or more bd:serviceParam wikibase:language "language-code" triples. Each string can contain one or more language codes, separated by commas. WDQS considers languages in the order in which you specify them. If no label is available in any of the specified languages, the Q-id of the entity (without any prefix) is its label.

The Wikidata Query Service website automatically replaces [AUTO_LANGUAGE] with the language code of the current user's interface. For example, if the user's UI is in French, the SPARQL code bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" will be converted to bd:serviceParam wikibase:language "fr,en" before being sent to the query service.

An example showing a list of U.S. presidents and their spouses:
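 # P6 is "head of government", P26 is "spouse"; p:/ps: walk all statements, not only truthy ones
 SELECT ?p ?pLabel ?w ?wLabel WHERE {
   wd:Q30 p:P6/ps:P6 ?p .
   ?p wdt:P26 ?w .
   SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
 }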

In this example, WDQS automatically creates the labels ?pLabel and ?wLabel for the entities.

In the manual mode, you explicitly bind the label variables within the service call, but WDQS will still provide language resolution and fallback. Example:
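 # A sketch of manual mode: the label variables are bound explicitly inside the service call
 SELECT ?countryLabel ?countryDescription WHERE {
   ?country wdt:P31 wd:Q3624078 .   # sovereign states
   SERVICE wikibase:label {
     bd:serviceParam wikibase:language "fr,de,en" .
     ?country rdfs:label ?countryLabel .
     ?country schema:description ?countryDescription .
   }
 }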

This will consider labels and descriptions in French, German and English, and if none are available, will use the Q-id as the label.

Geospatial search
The service allows searching for items with coordinates located within a given radius of a central point or within a bounding box.

Search around point
Example:
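 # A sketch: airports (Q1248784) within 100 km of Berlin (Q64)
 SELECT ?place ?placeLabel ?location WHERE {
   wd:Q64 wdt:P625 ?berlinLoc .
   SERVICE wikibase:around {
     ?place wdt:P625 ?location .
     bd:serviceParam wikibase:center ?berlinLoc .
     bd:serviceParam wikibase:radius "100" .
   }
   ?place wdt:P31/wdt:P279* wd:Q1248784 .
   SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
 }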

The first line of the wikibase:around service call must have the format ?item predicate ?location, where the result of the search will bind ?item to items within the specified location and ?location to their coordinates. The parameters supported are:

 * wikibase:center - the point around which the search is centered (required)
 * wikibase:radius - the search radius, in kilometers (required)
 * wikibase:globe - the globe to search on (optional)
 * wikibase:distance - a variable that receives the distance from the center (optional)

Search within box
Example of box search:
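 # A sketch: schools (Q3914) in the box between San Jose (Q16553) and Sacramento (Q18013)
 SELECT ?place ?location WHERE {
   wd:Q16553 wdt:P625 ?SJloc .
   wd:Q18013 wdt:P625 ?SacLoc .
   SERVICE wikibase:box {
     ?place wdt:P625 ?location .
     bd:serviceParam wikibase:cornerSouthWest ?SJloc .
     bd:serviceParam wikibase:cornerNorthEast ?SacLoc .
   }
   ?place wdt:P31/wdt:P279* wd:Q3914 .
 }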

or:
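 # The same search, giving the two points as the diagonal of the box instead
 SELECT ?place ?location WHERE {
   wd:Q16553 wdt:P625 ?SJloc .
   wd:Q18013 wdt:P625 ?SacLoc .
   SERVICE wikibase:box {
     ?place wdt:P625 ?location .
     bd:serviceParam wikibase:cornerWest ?SJloc .
     bd:serviceParam wikibase:cornerEast ?SacLoc .
   }
   ?place wdt:P31/wdt:P279* wd:Q3914 .
 }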

Coordinates may be specified directly:
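 # Corners can also be written directly as WKT literals: "Point(longitude latitude)"
 SELECT ?place ?location WHERE {
   SERVICE wikibase:box {
     ?place wdt:P625 ?location .
     bd:serviceParam wikibase:cornerWest "Point(-122.3 37.2)"^^geo:wktLiteral .
     bd:serviceParam wikibase:cornerEast "Point(-121.4 38.6)"^^geo:wktLiteral .
   }
 }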

The first line of the wikibase:box service call must have the format ?item predicate ?location, where the result of the search will bind ?item to items within the specified location and ?location to their coordinates. The parameters supported are:

 * wikibase:cornerSouthWest - the south-west corner of the box
 * wikibase:cornerNorthEast - the north-east corner of the box
 * wikibase:cornerWest - the western corner of the box
 * wikibase:cornerEast - the eastern corner of the box

wikibase:cornerSouthWest and wikibase:cornerNorthEast should be used together, as should wikibase:cornerWest and wikibase:cornerEast, and they cannot be mixed. If the wikibase:cornerWest and wikibase:cornerEast predicates are used, the points are assumed to be the coordinates of the diagonal of the box, and the corners are derived accordingly.



Distance function
The function geof:distance returns the distance between two points on Earth, in kilometers. Example usage:
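 # Distance between Berlin (Q64) and Paris (Q90), in kilometers
 SELECT ?dist WHERE {
   wd:Q64 wdt:P625 ?berlinLoc .
   wd:Q90 wdt:P625 ?parisLoc .
   BIND(geof:distance(?berlinLoc, ?parisLoc) AS ?dist)
 }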

Coordinate parts functions
The functions wikibase:geoGlobe, wikibase:geoLatitude and wikibase:geoLongitude return the parts of a coordinate - the globe URI, the latitude, and the longitude, respectively.
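A minimal sketch using these functions:

 SELECT ?globe ?lat ?lon WHERE {
   wd:Q64 wdt:P625 ?loc .
   BIND(wikibase:geoGlobe(?loc) AS ?globe)
   BIND(wikibase:geoLatitude(?loc) AS ?lat)
   BIND(wikibase:geoLongitude(?loc) AS ?lon)
 }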



URL decoding functions
The function wikibase:decodeUri decodes (i.e. reverses the percent-encoding of) a given URI string. This may be necessary when converting Wikipedia titles (which are encoded) into actual strings. This function is the inverse of the SPARQL ENCODE_FOR_URI function.
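For example (a minimal sketch):

 SELECT ?title WHERE {
   BIND(wikibase:decodeUri("D%C3%BCsseldorf") AS ?title)   # yields "Düsseldorf"
 }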



Automatic prefixes
Most prefixes that are used in common queries are supported by the engine without the need to explicitly specify them.



Expanded dates
The service supports date values of type xsd:dateTime in a range of about 290 billion years in the past and in the future, with one-second resolution. WDQS stores dates as a 64-bit number of seconds since the Unix epoch.

Blazegraph extensions
The Blazegraph platform on top of which WDQS is implemented has its own set of SPARQL extensions, among them several graph traversal algorithms documented on the Blazegraph Wiki, including BFS, shortest-path, CC (connected components) and PageRank implementations.

Please also refer to the Blazegraph documentation on query hints for information about how to control query execution and various aspects of the engine.
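For instance, a query hint can pin the join order the optimizer would otherwise rewrite; a minimal sketch (the hint: prefix is one of the automatically supported prefixes):

 SELECT ?item WHERE {
   hint:Query hint:optimizer "None" .   # Blazegraph query hint: execute joins in the order written
   ?item wdt:P31 wd:Q146 .
 }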

There is no documentation in the Blazegraph wiki about the bd:sample extension; it is documented only in a comment in the code.

Federation
We allow SPARQL Federated Queries to call out to a selected number of external databases. Please see the full list of federated endpoints on the dedicated page.

Example federated query:
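 # A sketch, assuming the DBpedia endpoint is on the federation allowlist
 SELECT ?abstract WHERE {
   SERVICE <https://dbpedia.org/sparql> {
     <http://dbpedia.org/resource/Berlin> <http://dbpedia.org/ontology/abstract> ?abstract .
     FILTER(LANG(?abstract) = "en")
   }
 }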

Please note that the databases served by the federated endpoints use ontologies that may be very different from Wikidata's. Please refer to the owners' documentation to learn about the ontologies of, and data access to, these databases.

MediaWiki API
Please see the full description on the MediaWiki API Service documentation page.

The MediaWiki API Service allows calling out to the MediaWiki API from SPARQL and receiving the results inside the SPARQL query. Example (finding category members):
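 # A sketch of an MWAPI call: list members of a category on English Wikipedia
 SELECT ?item WHERE {
   SERVICE wikibase:mwapi {
     bd:serviceParam wikibase:api "Generator" .
     bd:serviceParam wikibase:endpoint "en.wikipedia.org" .
     bd:serviceParam mwapi:generator "categorymembers" .
     bd:serviceParam mwapi:gcmtitle "Category:1347 deaths" .
     bd:serviceParam mwapi:gcmlimit "max" .
     ?item wikibase:apiOutputItem mwapi:item .
   }
 }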

Wikimedia service
Wikimedia runs the public service instance of WDQS, which is available for use at http://query.wikidata.org/.

The runtime of the query on the public endpoint is limited to 60 seconds. That is true both for the GUI and the public SPARQL endpoint.

GUI
The GUI at the home page of http://query.wikidata.org/ allows you to edit and submit SPARQL queries to the query engine. The results are displayed as an HTML table. Note that every query has a unique URL which can be bookmarked for later use. Going to this URL will put the query in the edit window, but will not run it - you still have to click "Execute" for that.

One can also generate a short URL for the query via a URL shortening service by clicking the "Generate short URL" link on the right - this will produce the shortened URL for the current query.

The "Add prefixes" button generates the header containing standard prefixes for SPARQL queries. The full list of prefixes that can be useful is listed in the RDF format documentation. Note that most common prefixes work automatically, since WDQS supports them out of the box.

The GUI also features a simple entity explorer which can be activated by clicking on the "🔍" symbol next to the entity result. Clicking on the entity Q-id itself will take you to the entity page on wikidata.org.

Default views

 * Main article: wikidata:Special:MyLanguage/Wikidata:SPARQL query service/Wikidata Query Help/Result Views

If you run the query in the WDQS GUI, you can choose which view to present by specifying a comment of the form #defaultView:ViewName (for example, #defaultView:Map) at the beginning of the query.

Display a title
If you run the query in the WDQS GUI, you can display a title on top of the results by specifying a comment of the form #title:Your title text at the beginning of the query.



SPARQL endpoint
SPARQL queries can be submitted directly to the SPARQL endpoint with a GET or POST request.

GET requests have the query specified in the URL, in the format https://query.wikidata.org/sparql?query=SPARQL.

POST requests can instead accept the query in the body of the request, rather than the URL, allowing larger queries to be run without hitting the URL length limit. (Note that the POST body must still include the query= prefix, i.e. it must be query=SPARQL, not just SPARQL, and the SPARQL query must still be URL-escaped.)

The result is returned as XML by default, or as JSON if either the query parameter format=json is included in the URL, or the header Accept: application/sparql-results+json is provided with the request.

The JSON format is the standard SPARQL 1.1 Query Results JSON Format.

It is recommended to use GET for smaller queries and POST for larger queries, since POST queries are not cached.
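For example, a small query can be sent with curl, requesting JSON results (a sketch; any HTTP client works the same way):

 curl -G 'https://query.wikidata.org/sparql' \
      --header 'Accept: application/sparql-results+json' \
      --data-urlencode 'query=SELECT ?cat WHERE { ?cat wdt:P31 wd:Q146 . } LIMIT 5'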



Supported formats
The SPARQL endpoint supports several output formats, including SPARQL Results XML (application/sparql-results+xml, the default), SPARQL Results JSON (application/sparql-results+json), CSV (text/csv) and TSV (text/tab-separated-values).



Query limits
There is a hard query deadline configured, which is set to 60 seconds. There are also the following limits:


 * One client (user agent + IP) is allowed 60 seconds of processing time each 60 seconds
 * One client is allowed 30 error queries per minute

Clients exceeding the limits above are throttled with HTTP status code 429. Use the Retry-After header to see when the request can be repeated. If the client ignores 429 responses and continues to send requests over the limits, it can be temporarily banned from the service. Clients who don't comply with the User-Agent policy may be blocked completely; make sure to send a good User-Agent header.

Every query will time out when it takes more time to execute than this configured deadline. You may want to optimize the query or report a problematic query here.

Also note that currently access to the service is limited to 5 parallel queries per IP. The above limits are subject to change depending on resources and usage patterns.

Explain Query
Blazegraph allows showing a query analysis that explains how the query has been parsed and which optimizations were applied. To see this information, add the explain parameter to the query string, for example: https://query.wikidata.org/sparql?explain&query=SPARQL.

Namespaces
The data on Wikidata Query Service contains the main namespace, wdq, to which queries to the main SPARQL endpoint are directed, and other auxiliary namespaces, listed below. To query data from a different namespace, use the endpoint URL https://query.wikidata.org/bigdata/namespace/NAMESPACENAME/sparql.

Categories
Please see the full description on the Categories documentation page.

Wikidata Query Service also provides access to the category graph of select wikis. The list of covered wikis can be seen here: https://noc.wikimedia.org/conf/dblists/categories-rdf.dblist

The category namespace name is categories. The SPARQL endpoint for accessing it is https://query.wikidata.org/bigdata/namespace/categories/sparql.

Please see Categories page for detailed documentation.

DCAT-AP
The DCAT-AP data for Wikidata is available as SPARQL at https://query.wikidata.org/bigdata/namespace/dcatap/sparql endpoint.

The source for the data is: https://dumps.wikimedia.org/wikidatawiki/entities/dcatap.rdf

Example query to retrieve data:
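 # A sketch: list DCAT distributions and their download URLs
 PREFIX dcat: <http://www.w3.org/ns/dcat#>
 SELECT ?distribution ?url WHERE {
   ?distribution a dcat:Distribution ;
                 dcat:downloadURL ?url .
 }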

Linked Data Fragments endpoint
We also support querying the database using the Triple Pattern Fragments interface. This allows cheap and efficient browsing of triple data in cases where one or two components of the triple are known and you need to retrieve all triples matching this template. See more information at the Linked Data Fragments site.

The interface can be accessed at the URL https://query.wikidata.org/bigdata/ldf. This service is implemented on top of the Blazegraph database, so it will have the same lag as the Query Service. Example requests:


 * https://query.wikidata.org/bigdata/ldf?subject=http%3A%2F%2Fwww.wikidata.org%2Fentity%2FQ146 - all triples with subject wd:Q146 (house cat)
 * https://query.wikidata.org/bigdata/ldf?subject=&predicate=http%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23label&object=%22London%22%40en - all triples that have English label "London"
 * https://query.wikidata.org/bigdata/ldf?predicate=http%3A%2F%2Fwww.wikidata.org%2Fprop%2Fdirect%2FP212&object=%22978-0-262-03293-3%22 - all triples that have "978-0-262-03293-3" as the value for P212 (ISBN-13). The following shell command uses curl to build the same URL and obtain the same data.
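 # A sketch: curl encodes the parameters and fetches the same fragment
 curl -G 'https://query.wikidata.org/bigdata/ldf' \
      --header 'Accept: text/turtle' \
      --data-urlencode 'predicate=http://www.wikidata.org/prop/direct/P212' \
      --data-urlencode 'object="978-0-262-03293-3"'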

Note that only full URLs are currently supported for the subject, predicate and object parameters.

By default, the HTML interface is displayed; however, several data formats are available, selected by the Accept HTTP header.

The data is returned in pages, the page size being 100 triples. The pages are numbered starting from 1, and the page number is set with the page parameter, e.g. &page=2.

Standalone service
As the service is open-source software, it is also possible to run it on your own server, using the instructions provided below.

The hardware recommendations can be found in Blazegraph documentation.

If you plan to run the service against non-Wikidata Wikibase instance, please see further instructions.

Installation
To install the service, it is recommended that you download the full service package as a ZIP file, e.g. from Maven Central, with group ID org.wikidata.query.rdf and artifact ID "service", or clone the source distribution at https://github.com/wikimedia/wikidata-query-rdf/ and build it with "mvn package". The package ZIP will be in the dist/target directory, as service-VERSION-dist.zip.

The package contains the Blazegraph server as a .war application, the libraries needed to run the updater service that fetches fresh data from the wikidata site, scripts that make various tasks easier, and the GUI in the gui subdirectory. If you want to use the GUI, you will have to configure your HTTP server to serve it.

By default, only the SPARQL endpoint at http://localhost:9999/bigdata/namespace/wdq/sparql is configured, and the default Blazegraph GUI is available at http://localhost:9999/bigdata/. Note that in the default configuration, both are accessible only from localhost. You will need to provide external endpoints and an appropriate access control if you intend to access them from outside.

Using snapshot versions
If you want to install an unreleased snapshot version (usually necessary when a released version has a bug that has been fixed but a new release is not yet available) and do not want to compile your own binaries, you can use either:


 * https://github.com/wikimedia/wikidata-query-deploy - deployment repo containing production binaries. Needs git-fat working. Check it out and run "git fat pull".
 * Archiva snapshot deployments at https://archiva.wikimedia.org/#artifact/org.wikidata.query.rdf/service - choose the latest version, then Artifacts, and select the latest package for download.



Loading data
The rest of the install procedure is described in detail in the Getting Started document, which is part of the distribution, and involves the following steps:


 * 1) Download a recent RDF dump from https://dumps.wikimedia.org/wikidatawiki/entities/ (the RDF one is the one ending in .ttl.gz).
 * 2) Pre-process the data with the munge.sh script. This creates a set of TTL files with pre-processed data, with names like wikidump-000000001.ttl.gz, etc. See the options for the script below.
 * 3) Start the Blazegraph service by running the runBlazegraph.sh script.
 * 4) Load the data into the service by using loadData.sh. Note that loading data is usually significantly slower than pre-processing, so you can start loading as soon as the first few pre-processed files are ready. Loading can be restarted from any file by using the options as described below.
 * 5) After all the data is loaded, start the Updater service by using runUpdate.sh.



Loading categories
If you also want to load category data, please do the following:


 * 1) Create the namespace, e.g. categories (see the sketch after this list).
 * 2) Load the category data into it.
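A hedged sketch of both steps, assuming the createNamespace.sh and forAllCategoryWikis.sh helper scripts shipped with the distribution (verify the exact names and paths in your package):

 # 1) create a "categories" namespace
 ./createNamespace.sh categories
 # 2) load the category dumps of the configured wikis into it
 ./forAllCategoryWikis.sh loadCategoryDump.sh categories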

Note that these scripts only load data from Wikimedia wikis according to Wikimedia settings. If you need to work with another wiki, you may need to change some variables in the scripts.

Scripts
The following useful scripts are part of the distribution:

munge.sh
Pre-process data from RDF dump for loading.

Example:
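 # A sketch: split the dump into pre-processed chunks, keeping only English and Spanish labels
 ./munge.sh -f data/latest-all.ttl.gz -d data/split -l en,es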

loadData.sh
Load the processed data into Blazegraph. Requires curl to be installed.

Example:
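 # A sketch: load all pre-processed chunks from data/split into the wdq namespace
 ./loadData.sh -n wdq -d `pwd`/data/split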

runBlazegraph.sh
Run the Blazegraph service.

Example:
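 # Start Blazegraph with the default configuration (SPARQL endpoint on localhost:9999)
 ./runBlazegraph.sh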

Inside the script, there are two variables that one may want to edit:

Also, the following environment variables are checked by the script (all of them are optional):

runUpdate.sh
Run the Updater service.

It is recommended that the settings for the -l and -s options (or the absence thereof) be the same for munge.sh and runUpdate.sh, otherwise data may not be updated properly.

Example:
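 # Run the Updater with its default settings (assumes the default wdq namespace)
 ./runUpdate.sh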

Also, the following environment variables are checked by the script (all of them are optional):

Updater options
The following options work with the Updater app.

They should be given to the runUpdate.sh script as additional options after the -- separator.

Configurable properties
The following properties are configurable by adding them to the run command in the scripts above:

Missing features
Below are features which are currently not supported:


 * Redirects are represented only as an owl:sameAs triple; they do not express any equivalence in the data and have no special support.

Contacts
If you notice anything wrong with the service, you can contact the Discovery team by email on the team's mailing list or on its IRC channel.

Bugs can also be submitted to and tracked on the Discovery Phabricator board.



See also

 * WDQ to SPARQL syntax translator
 * SPARQL Query examples
 * Discovery team
 * WDQS Implementation notes
 * An introduction to SPARQL query syntax