Wikidata Query Service/User Manual/ja

Wikidata Query Service (WDQS) is a software package and public service that provides a SPARQL (pronounced "sparkle") endpoint, which allows you to run queries against the Wikidata data set.

Please note that the service is currently in beta, so the data set or the service itself may change without prior notice.

This page and the other related documentation are updated as needed. If you use this service, we recommend adding them to your watchlist.

For sample SPARQL queries, see the SPARQL examples page.

Data set
The Wikidata Query Service operates on the data set from Wikidata.org, represented in RDF as described in the RDF dump format documentation.

Mainly for performance reasons, the service's data set does not exactly match the RDF dumps; see the documentation for a description of the differences.

The same data is published as weekly dumps at the following location:

https://dumps.wikimedia.org/wikidatawiki/entities/

Basics - Understanding SPO, or the three elements of the syntax
SPO, the three elements "subject, predicate, object", expresses a piece of information about the data; this is called a triple, or, in Wikidata, a statement about data.

In the statement "The capital of the United States is Washington, D.C.", the subject is the United States (Q30), the predicate is capital (P36), and the object is Washington, D.C. (Q61); the statement is thus made up of three URIs.

Thanks to prefixes (described in detail below), the same statement can be rewritten much more compactly, as shown below. Note the period at the end of the statement.
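
A minimal sketch of that statement as a triple, first with full URIs and then in the prefixed form (these are the actual Wikidata URIs for Q30, P36, and Q61):

  <http://www.wikidata.org/entity/Q30> <http://www.wikidata.org/prop/direct/P36> <http://www.wikidata.org/entity/Q61> .

  # The same triple, abbreviated with the wd: and wdt: prefixes:
  wd:Q30 wdt:P36 wd:Q61 .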

The /entity/ (wd:) prefix represents a Wikidata entity (Q-number values). The /prop/direct/ (wdt:) prefix is a "truthy" property — the value we would most often expect when looking at the statement. Truthy properties are needed because some statements can be "true-er" than others. For example, the statement "The capital of the U.S. is New York City" is also true — but only in the context of U.S. history. WDQS uses rank to determine which statements should be used as "truthy".

In addition to the truthy statements, WDQS stores all statements (both truthy and not), but they do not use the same wdt: prefix. The U.S. capital has three values: D.C., Philadelphia, and New York, and each of these values has "qualifiers" - additional information, such as start and end dates, that narrows down the scope of each statement. To store this information in the triplestore, WDQS introduces an auto-magical "statement" subject, which is essentially a random number:
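
A hedged sketch of how such a statement node looks in Turtle; the GUID-like suffix here is made up, and the p:, ps:, and pq: prefixes are Wikidata's statement, statement-value, and qualifier namespaces:

  # The item points to an auto-generated statement node...
  wd:Q30 p:P36 wds:Q30-0fa0ba23-hypothetical-guid .
  # ...which carries the actual value...
  wds:Q30-0fa0ba23-hypothetical-guid ps:P36 wd:Q61 .
  # ...and any qualifiers, such as P580 (start time):
  wds:Q30-0fa0ba23-hypothetical-guid pq:P580 "1800-11-17T00:00:00Z"^^xsd:dateTime .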

For details, see SPARQL tutorial - qualifiers.

SPO is also used as a form of basic syntax layout for querying RDF data structures, or any graph database or triplestore, such as the Wikidata Query Service (WDQS), which is powered by Blazegraph, a high-performance graph database.

Advanced uses of a triple (SPO) even include using triples as objects or subjects of other triples!

Basics - Understanding prefixes
The subject and the predicate (the first and second elements of a triple) must always be stored as URIs. For example, if the subject is universe (Q1), it is stored as <http://www.wikidata.org/entity/Q1>; thanks to prefixes, this long URI can be written compactly as wd:Q1. Unlike the subject and the predicate, the object (the third element of a triple) can be stored either as a URI or as a plain value such as a number or a string.

WDQS understands many shortcut abbreviations, known as prefixes. Some are internal to Wikidata, e.g. wd, wdt, p, ps, bd, and many others are commonly used external prefixes, like rdf, skos, owl, schema.

In the following query, we are asking for items where there is a statement of "P279 = Q7725634", or, in fuller terms, selecting subjects that have a predicate of "subclass of" with an object of "literary work". The output variables are described in the comments of the sketch below.
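
A sketch of such a query (the variable names are our own choice):

  SELECT ?s ?desc WHERE {
    # ?s - anything that is a subclass of (P279) literary work (Q7725634)
    ?s wdt:P279 wd:Q7725634 .
    # ?desc - the English label of ?s, if it has one
    OPTIONAL {
      ?s rdfs:label ?desc FILTER (lang(?desc) = "en") .
    }
  }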

Extensions
In addition to the standard SPARQL capabilities, the service provides the extensions below.

Label service
Through a special service identified by the URI <http://wikiba.se/ontology#label>, you can fetch the labels, aliases, or descriptions of the entities your query deals with. Retrieving these with plain SPARQL takes complex steps, which this service saves you, making it very convenient whenever you want labels.

The service can be used in manual or automatic mode.

In automatic mode, you only need to specify the service template, for example:
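
A sketch of automatic mode; with the service clause in place, a variable named ?itemLabel is automatically bound to the label of ?item in the requested language:

  SELECT ?item ?itemLabel WHERE {
    ?item wdt:P31 wd:Q146 .   # instances of (P31) house cat (Q146)
    SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
  }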

Geospatial search
The service can search for items with coordinates that are located within a given radius of a center point, or within a given bounding box.

Search around a point
Example:
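
A sketch of the around service, finding items with coordinates within 100 km of Berlin (Q64); P625 is the coordinate location property:

  SELECT ?place ?location WHERE {
    wd:Q64 wdt:P625 ?berlinLoc .               # coordinates of Berlin
    SERVICE wikibase:around {
      ?place wdt:P625 ?location .
      bd:serviceParam wikibase:center ?berlinLoc .
      bd:serviceParam wikibase:radius "100" .  # kilometers
    }
  }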

The first line of the around service call must have the format ?item predicate ?location, where the result of the search will bind ?item to items within the specified location and ?location to their coordinates. The parameters supported are:
 * wikibase:center - the point around which the search is performed
 * wikibase:radius - the distance from the center, in kilometers
 * wikibase:globe - the globe to search on (optional)
 * wikibase:distance - the variable that receives the distance from the center (optional)

Search within a box
Example of a box search:
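
A sketch using San Jose (Q16553) and Sacramento (Q18013) as the box corners; substitute any two points:

  SELECT ?place ?location WHERE {
    wd:Q16553 wdt:P625 ?swCorner .   # San Jose as the south-west corner
    wd:Q18013 wdt:P625 ?neCorner .   # Sacramento as the north-east corner
    SERVICE wikibase:box {
      ?place wdt:P625 ?location .
      bd:serviceParam wikibase:cornerSouthWest ?swCorner .
      bd:serviceParam wikibase:cornerNorthEast ?neCorner .
    }
  }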

or:
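
The same search with the west/east corner pair (a sketch; the box is then derived from the diagonal, as noted below):

  SELECT ?place ?location WHERE {
    wd:Q16553 wdt:P625 ?westCorner .
    wd:Q18013 wdt:P625 ?eastCorner .
    SERVICE wikibase:box {
      ?place wdt:P625 ?location .
      bd:serviceParam wikibase:cornerWest ?westCorner .
      bd:serviceParam wikibase:cornerEast ?eastCorner .
    }
  }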

Coordinates can also be specified directly:
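
Coordinate literals use the WKT form "Point(longitude latitude)" typed as geo:wktLiteral; the numbers below are illustrative:

  SERVICE wikibase:box {
    ?place wdt:P625 ?location .
    bd:serviceParam wikibase:cornerWest "Point(-122.3 37.2)"^^geo:wktLiteral .
    bd:serviceParam wikibase:cornerEast "Point(-121.0 38.6)"^^geo:wktLiteral .
  }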

The first line of the box service call must have the format ?item predicate ?location, where the result of the search will bind ?item to items within the specified location and ?location to their coordinates. The parameters supported are:
 * wikibase:cornerSouthWest - the south-west corner of the box
 * wikibase:cornerNorthEast - the north-east corner of the box
 * wikibase:cornerWest - the west corner of the box
 * wikibase:cornerEast - the east corner of the box
 * wikibase:globe - the globe to search on (optional)

wikibase:cornerSouthWest and wikibase:cornerNorthEast should be used together, as should wikibase:cornerWest and wikibase:cornerEast, and the two pairs cannot be mixed. If the wikibase:cornerWest and wikibase:cornerEast predicates are used, the points are assumed to be the coordinates of the diagonal of the box, and the corners are derived accordingly.

Distance function
The function geof:distance returns the distance between two points on Earth, in kilometers. Example usage:
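
A sketch computing the distance between Berlin (Q64) and Paris (Q90):

  SELECT ?distance WHERE {
    wd:Q64 wdt:P625 ?berlinLoc .   # coordinates of Berlin
    wd:Q90 wdt:P625 ?parisLoc .    # coordinates of Paris
    BIND(geof:distance(?berlinLoc, ?parisLoc) AS ?distance)
  }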

Coordinate parts functions
The functions geof:globe, geof:latitude, and geof:longitude return the parts of a coordinate - the globe URI, the latitude, and the longitude, respectively.
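
For example, a sketch extracting the latitude and longitude of Berlin's coordinates:

  SELECT ?lat ?lon WHERE {
    wd:Q64 wdt:P625 ?coord .
    BIND(geof:latitude(?coord) AS ?lat)
    BIND(geof:longitude(?coord) AS ?lon)
  }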

Decode URL functions
The function wikibase:decodeUri decodes (i.e., reverses the percent-encoding of) a given URI string. This may be necessary when converting Wikipedia titles (which are encoded) into actual strings. This function is the opposite of SPARQL's encode_for_uri function.
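
For example, a sketch decoding an encoded Wikipedia title:

  SELECT ?title WHERE {
    BIND(wikibase:decodeUri("Bia%C5%82ystok") AS ?title)   # yields "Białystok"
  }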

Automatic prefixes
Most prefixes that are used in common queries are supported by the engine without the need to explicitly specify them.

Extended dates
The service supports date values of type xsd:dateTime in the range of about 290 billion years in the past and in the future, with one-second resolution. WDQS stores dates as the 64-bit number of seconds since the Unix epoch.
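
Extended date literals look like ordinary xsd:dateTime values, including negative (BCE) years; a sketch filtering on dates before 500 BCE (P576 is the dissolved/abolished date property):

  SELECT ?item ?date WHERE {
    ?item wdt:P576 ?date .
    FILTER(?date < "-0500-01-01T00:00:00Z"^^xsd:dateTime)
  }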

Blazegraph extensions
The Blazegraph platform on top of which WDQS is implemented has its own set of SPARQL extensions, among them several graph traversal algorithms which are documented on the Blazegraph Wiki, including BFS, shortest path, CC (connected components), and PageRank implementations.

Please also refer to the Blazegraph documentation on query hints for information about how to control query execution and various aspects of the engine.

Federation
We allow SPARQL Federated Queries to call out to a selected number of external databases. The supported endpoints are:

Example federated query:
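
A minimal sketch of the SERVICE syntax used for federation; the endpoint URL below is a placeholder, so substitute one from the supported list:

  SELECT ?s ?p ?o WHERE {
    # Hypothetical remote endpoint; it must be on the supported list
    SERVICE <https://example.org/sparql> {
      ?s ?p ?o .
    }
  }
  LIMIT 10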

Please note that the databases listed above use ontologies that may be very different from Wikidata's. Please refer to the owners' documentation links above to learn about the ontologies and data access for these databases.

Mediawiki API
Please see the full description on the Mediawiki API Service documentation page.

The Mediawiki API Service allows you to call out to the Mediawiki API from SPARQL and receive the results inside the SPARQL query. Example (finding category members):
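
A hedged sketch modeled on the categorymembers generator; the mwapi parameter names mirror the MediaWiki API's own (gcmtitle, gcmlimit):

  SELECT ?item WHERE {
    SERVICE wikibase:mwapi {
      bd:serviceParam wikibase:api "Generator" .
      bd:serviceParam wikibase:endpoint "en.wikipedia.org" .
      bd:serviceParam mwapi:generator "categorymembers" .
      bd:serviceParam mwapi:gcmtitle "Category:Lakes of Iceland" .
      bd:serviceParam mwapi:gcmlimit "max" .
      ?item wikibase:apiOutputItem mwapi:item .   # Wikidata item of each page found
    }
  }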

Wikimedia service
Wikimedia runs the public service instance of WDQS, which is available for use at http://query.wikidata.org/.

The runtime of the query on the public endpoint is limited to 60 seconds. That is true both for the GUI and the public SPARQL endpoint. If you need to run longer queries, please contact the Discovery team.

GUI
The GUI at the home page of http://query.wikidata.org/ allows you to edit and submit SPARQL queries to the query engine. The results are displayed as an HTML table. Note that every query has a unique URL which can be bookmarked for later use. Going to this URL will put the query in the edit window, but will not run it - you still have to click "Execute" for that.

One can also generate a short URL for the query via a URL shortening service by clicking the "Generate short URL" link on the right - this will produce the shortened URL for the current query.

The "Add prefixes" button generates the header containing standard prefixes for SPARQL queries. The full list of prefixes that can be useful is listed in the RDF format documentation. Note that most common prefixes work automatically, since WDQS supports them out of the box.

The GUI also features a simple entity explorer which can be activated by clicking on the "🔍" symbol next to the entity result. Clicking on the entity Q-id itself will take you to the entity page on wikidata.org.

Default views

 * Main article: wikidata:Special:MyLanguage/Wikidata:SPARQL query service/Wikidata Query Help/Result Views

If you run the query in the WDQS GUI, you can choose which view to present by specifying a comment of the form #defaultView:viewName at the beginning of the query.
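
For example, a sketch that opens in the Map view (Q23413 is castle, P625 the coordinate location):

  #defaultView:Map
  SELECT ?castle ?coord WHERE {
    ?castle wdt:P31 wd:Q23413 ;
            wdt:P625 ?coord .
  }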

SPARQL endpoint
SPARQL queries can be submitted directly to the SPARQL endpoint with a GET or POST request to https://query.wikidata.org/sparql. The result is returned as XML by default, or as JSON if either the query parameter format=json or the header Accept: application/sparql-results+json is provided. POST requests also accept the query in the body of the request, instead of the URL, allowing larger queries to run without hitting the URL length limit. (Note that the POST body must still be query=SPARQL, not just SPARQL, and the SPARQL query must still be URL-escaped.)
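
A sketch of both request styles using curl (the query text is illustrative):

  # GET, with results as JSON
  curl -G 'https://query.wikidata.org/sparql?format=json' \
       --data-urlencode 'query=SELECT ?cat WHERE { ?cat wdt:P31 wd:Q146 } LIMIT 5'

  # POST, with the URL-escaped query in the body as query=...
  curl 'https://query.wikidata.org/sparql' \
       --data-urlencode 'query=SELECT ?cat WHERE { ?cat wdt:P31 wd:Q146 } LIMIT 5'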

JSON format is standard SPARQL 1.1 Query Results JSON Format.

It is recommended to use GET for smaller queries and POST for larger queries, as POST queries are not cached.

Supported formats
The following output formats are currently supported by the SPARQL endpoint:

Query timeout
There is a hard query deadline configured which is set to 60 seconds.

Every query will timeout when it takes more time to execute than this configured deadline. You may want to optimize the query or report a problematic query here.

Also note that currently access to the service is limited to 5 parallel queries per IP. These limits are subject to change depending on resources and usage patterns.

Namespaces
The data on the Wikidata Query Service contains the main namespace, wdq, to which queries to the main SPARQL endpoint are directed, and other auxiliary namespaces, listed below. To query data from a different namespace, use the endpoint URL https://query.wikidata.org/bigdata/namespace/NAMESPACENAME/sparql.

DCAT-AP
The DCAT-AP data for Wikidata is available via SPARQL in the dcatap namespace.

The SPARQL endpoint for accessing it is: https://query.wikidata.org/bigdata/namespace/dcatap/sparql

The source for the data is: https://dumps.wikimedia.org/wikidatawiki/entities/dcatap.rdf

Example query to retrieve data:
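
A hedged sketch against the dcatap namespace, assuming the standard DCAT and Dublin Core vocabularies used by DCAT-AP:

  PREFIX dcat: <http://www.w3.org/ns/dcat#>
  PREFIX dct: <http://purl.org/dc/terms/>
  SELECT ?url ?date WHERE {
    ?distribution a dcat:Distribution ;
                  dct:issued ?date ;
                  dcat:downloadURL ?url .
  }
  LIMIT 10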

Linked Data Fragments endpoint
We also support querying the database using the Triple Pattern Fragments interface. This allows you to cheaply and efficiently browse triple data when one or two components of the triple are known and you need to retrieve all triples that match this template. See more information at the Linked Data Fragments site.

The interface can be accessed at the URL https://query.wikidata.org/bigdata/ldf. Example requests:


 * https://query.wikidata.org/bigdata/ldf?subject=http%3A%2F%2Fwww.wikidata.org%2Fentity%2FQ146 - all triples with subject wd:Q146 (house cat)


 * https://query.wikidata.org/bigdata/ldf?subject=&predicate=http%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23label&object=%22London%22%40en - all triples that have English label "London"

Note that only full URLs are currently supported for the subject, predicate, and object parameters.

By default, the HTML interface is displayed; however, several data formats are available, defined by the Accept HTTP header.

The data is returned in pages, with a page size of 100 triples. The pages are numbered starting from 1, and the page number is set by the page parameter.
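
For example, a sketch fetching the second page of results as Turtle (the availability of this media type is an assumption based on the Triple Pattern Fragments specification):

  curl -H 'Accept: text/turtle' \
       'https://query.wikidata.org/bigdata/ldf?subject=http%3A%2F%2Fwww.wikidata.org%2Fentity%2FQ146&page=2'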

Standalone service
As the service is open-source software, it is also possible to run the service on any user's server, using the instructions provided below.

The hardware recommendations can be found in Blazegraph documentation.

If you plan to run the service against a non-Wikidata Wikibase instance, please see the further instructions.

Installation
In order to install the service, it is recommended that you download the full service package as a ZIP file, e.g. from Maven Central, with group ID org.wikidata.query.rdf and artifact ID "service", or clone the source distribution at https://github.com/wikimedia/wikidata-query-rdf/ and build it with "mvn package". The package ZIP will be in the dist/target directory, named service-VERSION-dist.zip.

The package contains the Blazegraph server as a .war application, the libraries needed to run the Updater service that fetches fresh data from the Wikidata site, scripts to make various tasks easier, and the GUI in the gui subdirectory. If you want to use the GUI, you will have to configure your HTTP server to serve it.

By default, only the SPARQL endpoint at http://localhost:9999/bigdata/namespace/wdq/sparql is configured, and the default Blazegraph GUI is available at http://localhost:9999/bigdata/. Note that in the default configuration, both are accessible only from localhost. You will need to provide external endpoints and an appropriate access control if you intend to access them from outside.

Using snapshot versions
If you want to install an unreleased snapshot version (usually necessary when the released version has a bug that has been fixed but a new release is not yet available) and do not want to compile your own binaries, you can use either:
 * https://github.com/wikimedia/wikidata-query-deploy - the deployment repo containing production binaries. Needs git-fat working. Check it out and run "git fat init && git fat pull".
 * Archiva snapshot deployments at https://archiva.wikimedia.org/#artifact/org.wikidata.query.rdf/service - choose the latest version, then Artifacts, and select the latest package for download.

Loading the data
The further installation procedure is described in detail in the Getting Started document, which is part of the distribution, and involves the following steps:


 * 1) Download a recent RDF dump from https://dumps.wikimedia.org/wikidatawiki/entities/ (the RDF one is the one ending in .ttl.gz).
 * 2) Pre-process the data with the munge.sh script. This creates a set of TTL files with pre-processed data, with names like wikidump-000000001.ttl.gz, etc. See the options for the script below.
 * 3) Start the Blazegraph service by running the runBlazegraph.sh script.
 * 4) Load the data into the service by using loadData.sh. Note that loading data is usually significantly slower than pre-processing, so you can start loading as soon as several pre-processed files are ready. Loading can be restarted from any file by using the options described below.
 * 5) After all the data is loaded, start the Updater service by using runUpdate.sh.

Loading categories
If you also want to load category data, please do the following:


 * 1) Create a namespace, e.g. categories (see the sketch after this list):
 * 2) Load the data into it:
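
A hedged sketch of the two steps; the script names follow the distribution's scripts directory, and the exact invocations may differ between versions:

  # 1) Create the "categories" namespace
  ./createNamespace.sh categories
  # 2) Load the category dumps of all Wikimedia wikis into it
  ./forAllCategoryWikis.sh loadCategoryDump.sh categories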

Note that these scripts only load data from Wikimedia wikis, according to the Wikimedia settings. If you need to work with another wiki, you may need to change some variables in the scripts.

Scripts
The following useful scripts are part of the distribution:

munge.sh
Pre-process data from an RDF dump for loading.

Example:
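
A hedged sketch of a typical invocation; the dump filename is illustrative, and we assume -l limits labels to the listed languages while -s skips sitelinks:

  ./munge.sh -f data/wikidata-20150427-all-BETA.ttl.gz -d data/split -l en,es -s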

loadData.sh
Load the processed data into Blazegraph. Requires curl to be installed.

Example:
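
A sketch, assuming -n names the target namespace and -d points at the directory of munged files:

  ./loadData.sh -n wdq -d `pwd`/data/split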

runBlazegraph.sh
Run the Blazegraph service.

Example:
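
The script is typically run without arguments:

  ./runBlazegraph.sh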

Inside the script, there are two variables that one may want to edit:

  # Q-id of the default globe
  DEFAULT_GLOBE=2
  # Blazegraph HTTP User Agent for federation
  USER_AGENT="Wikidata Query Service; https://query.wikidata.org/"

Also, the following environment variables are checked by the script (all of them are optional):

runUpdate.sh
Run the Updater service.

It is recommended that the settings for the -l and -s options (or the absence thereof) be the same for munge.sh and runUpdate.sh; otherwise data may not be updated properly.

Example:
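
A sketch that mirrors the munge.sh options used above (assuming -n names the namespace and -l/-s match the munge settings):

  ./runUpdate.sh -n wdq -l en,es -s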

Also, the following environment variables are checked by the script (all of them are optional):

Updater options
The following options work with the Updater app.

They should be given to the runUpdate.sh script as additional options after --, e.g.: ./runUpdate.sh -n wdq -- -v.

Configurable properties
The following properties can be configured by adding them to the script run command in the scripts above:

Missing features
Below are features which are currently not supported:


 * Redirects are only represented as an owl:sameAs triple; they do not express any equivalence in the data and have no special support.

Contacts
If you notice anything wrong with the service, you can contact the Discovery team by email on their mailing list or on their IRC channel.

Bugs can also be submitted to and tracked on the Discovery Phabricator board.

See also

 * WDQ to SPARQL syntax translator
 * SPARQL Query examples
 * Discovery team
 * WDQS Implementation notes
 * An introduction to SPARQL query syntax