Wikidata Query Service/User Manual



Wikidata Query Service (WDQS) is a software package and public service designed to provide a SPARQL endpoint which allows you to query against the Wikidata data set.

Please note that the service is currently in beta mode, which means that details of the data set or the service provided can change without prior warning.

This page or other relevant documentation pages will be updated accordingly; it is recommended that you watch them if you are using the service.

You can see examples of SPARQL queries on the SPARQL examples page (Wikidata:SPARQL query service/queries/examples).

Data set
The Wikidata Query Service operates on a data set from Wikidata.org, represented in RDF as described in the RDF dump format documentation (Wikibase/Indexing/RDF Dump Format).

The service's data set does not exactly match the data set produced by RDF dumps, mainly for performance reasons; the documentation describes a small set of differences.

You can download a weekly dump of the same data from:

https://dumps.wikimedia.org/wikidatawiki/entities/

Basics - Understanding SPO (Subject, Predicate, Object) also known as a Semantic Triple
spo, or "subject, predicate, object", is known as a triple, or is commonly referred to in Wikidata as a statement about data.

The statement "The United States capital is Washington DC" consists of the subject "United States" (Q30), the predicate "capital is" (P36), and an object "Washington DC" (Q61). This statement can be represented as three URIs:
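 <http://www.wikidata.org/entity/Q30>       # subject:   United States (Q30)
 <http://www.wikidata.org/prop/direct/P36>  # predicate: capital (P36)
 <http://www.wikidata.org/entity/Q61>       # object:    Washington DC (Q61)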

Thanks to the prefixes (see below), the same statement can be written in a more concise form. Note the dot at the end to represent the end of the statement.
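 wd:Q30 wdt:P36 wd:Q61 .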

The /entity/ (wd:) prefix represents a Wikidata entity (Q-number values). The /prop/direct/ (wdt:) prefix marks a "truthy" property - the value we would most often expect when looking at the statement. The truthy properties are needed because some statements could be "true-er" than others. For example, the statement "The capital of the U.S. is New York City" is also true - but only if you look at the context of U.S. history. WDQS uses rank to determine which statements should be used as "truthy".

In addition to the truthy statements, WDQS stores all statements (both truthy and not), but they don't use the same wdt: prefix. The U.S. capital (Q30#P36) has three values: DC, Philadelphia, and New York. Each of these values has "qualifiers" - additional information, such as start and end dates, that narrows down the scope of each statement. To store this information in the triplestore, WDQS introduces an auto-magical "statement" subject, which is essentially a random number:
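A sketch of the resulting triples (the statement id after wds: is illustrative; real ids are derived from a UUID, and the qualifier date is only an example):

 wd:Q30 p:P36 wds:Q30-SOME-UUID .                                   # item -> statement node
 wds:Q30-SOME-UUID ps:P36 wd:Q61 .                                  # statement node -> value
 wds:Q30-SOME-UUID pq:P580 "1800-01-01T00:00:00Z"^^xsd:dateTime .   # qualifier: start time (P580)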

See the SPARQL tutorial section on qualifiers (Wikidata:SPARQL tutorial#Qualifiers) for more information.

spo is also used as the basic syntax pattern for querying RDF data structures, or any graph database or triplestore, such as the Wikidata Query Service (WDQS), which is powered by Blazegraph, a high-performance graph database.

Advanced uses of triples even include using triples as the objects or subjects of other triples!

Basics - Understanding Prefixes
The subjects and predicates (first and second values of the triple) must always be stored as URIs. For example, if the subject is Universe (Q1), it will be stored as <http://www.wikidata.org/entity/Q1>. Prefixes allow us to write that long URI in a shorter form: wd:Q1. Unlike subjects and predicates, the object (the triple's third value) can be either a URI or a literal, e.g. a number or a string.

WDQS understands many shortcut abbreviations, known as prefixes. Some are internal to Wikidata, e.g. wd, wdt, p, ps, bd, and many others are commonly used external prefixes, like rdf, skos, owl, schema.
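
For reference, the Wikidata-internal prefixes expand as follows:

 PREFIX wd: <http://www.wikidata.org/entity/>
 PREFIX wdt: <http://www.wikidata.org/prop/direct/>
 PREFIX p: <http://www.wikidata.org/prop/>
 PREFIX ps: <http://www.wikidata.org/prop/statement/>
 PREFIX bd: <http://www.bigdata.com/rdf#>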

In the following query, we are asking for items where there is a statement of "P279 = Q7725634" or, in fuller terms, selecting subjects that have a predicate of "subclass of" with an object of "literary work". The output variable ?s receives each matching subject:
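
 SELECT ?s WHERE {
   ?s wdt:P279 wd:Q7725634 .   # ?s is a subclass of (P279) literary work (Q7725634)
 }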

Extensions
The service supports the following extensions to standard SPARQL capabilities:

Label service
You can fetch the label, alias, or description of entities you query, with language fallback, using the specialized service with the URI <http://wikiba.se/ontology#label>. The service is very helpful when you want to retrieve labels, as it reduces the complexity of SPARQL queries that you would otherwise need to achieve the same effect.

The service can be used in one of two modes: manual and automatic.

In automatic mode, you only need to specify the service template, e.g.:
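
 SELECT ?item ?itemLabel WHERE {
   ?item wdt:P31 wd:Q146 .   # instance of (P31) house cat (Q146)
   SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
 }

Here ?itemLabel is filled in automatically, because it is the label-service variable corresponding to ?item.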

Geospatial search
The service allows searching for items with coordinates located within a certain radius of a center point or within a certain bounding box.

Search around point
Example:
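A sketch (Q64 is Berlin; P625 is the coordinate location property; the radius is in kilometers):

 SELECT ?place ?location WHERE {
   wd:Q64 wdt:P625 ?berlinLoc .
   SERVICE wikibase:around {
     ?place wdt:P625 ?location .
     bd:serviceParam wikibase:center ?berlinLoc .
     bd:serviceParam wikibase:radius "100" .
   }
 }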

The first line of the wikibase:around service call must have the format ?item predicate ?location, where the result of the search will bind ?item to items within the specified location and ?location to their coordinates. The parameters supported are:

 * wikibase:center - the point around which the search is performed; must be bound for the search to work
 * wikibase:radius - the distance from the center, in kilometers
 * wikibase:globe - the globe to search on (optional; defaults to Earth)
 * wikibase:distance - the variable that receives the distance from the center (optional)

Search within box
Example of box search:
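A sketch with illustrative corners (Q90 is Paris, used as the south-west corner; Q64 is Berlin, used as the north-east corner):

 SELECT ?place ?location WHERE {
   wd:Q90 wdt:P625 ?swCorner .
   wd:Q64 wdt:P625 ?neCorner .
   SERVICE wikibase:box {
     ?place wdt:P625 ?location .
     bd:serviceParam wikibase:cornerSouthWest ?swCorner .
     bd:serviceParam wikibase:cornerNorthEast ?neCorner .
   }
 }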

Alternatively, the corners may be given via the wikibase:cornerWest and wikibase:cornerEast parameters as the two ends of the box's diagonal, and coordinates may be specified directly:
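
 # Corners given directly as WKT literals - note the Point(longitude latitude) order;
 # the coordinates here are illustrative (roughly Paris and Berlin).
 SELECT ?place ?location WHERE {
   SERVICE wikibase:box {
     ?place wdt:P625 ?location .
     bd:serviceParam wikibase:cornerWest "Point(2.35 48.85)"^^geo:wktLiteral .
     bd:serviceParam wikibase:cornerEast "Point(13.41 52.52)"^^geo:wktLiteral .
   }
 }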

The first line of the wikibase:box service call must have the format ?item predicate ?location, where the result of the search will bind ?item to items within the specified box and ?location to their coordinates. The parameters supported are:

 * wikibase:cornerSouthWest - the south-west corner of the box
 * wikibase:cornerNorthEast - the north-east corner of the box
 * wikibase:cornerWest - the western end of the box's diagonal
 * wikibase:cornerEast - the eastern end of the box's diagonal

wikibase:cornerSouthWest and wikibase:cornerNorthEast should be used together, as should wikibase:cornerWest and wikibase:cornerEast, and the two pairs cannot be mixed. If the wikibase:cornerWest and wikibase:cornerEast predicates are used, then the points are assumed to be the coordinates of the diagonal of the box, and the corners are derived accordingly.

Distance function
The function geof:distance returns the distance between two points on Earth, in kilometers. Example usage:
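
 # Distance in kilometers between Berlin (Q64) and Paris (Q90)
 SELECT ?dist WHERE {
   wd:Q64 wdt:P625 ?berlinLoc .
   wd:Q90 wdt:P625 ?parisLoc .
   BIND(geof:distance(?berlinLoc, ?parisLoc) AS ?dist)
 }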

Coordinate parts functions
The functions geof:globe, geof:latitude and geof:longitude return the parts of a coordinate - the globe URI, the latitude and the longitude, respectively.
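
For instance (a minimal sketch; Q243 is the Eiffel Tower):

 SELECT ?lat ?long WHERE {
   wd:Q243 wdt:P625 ?coord .
   BIND(geof:latitude(?coord) AS ?lat)
   BIND(geof:longitude(?coord) AS ?long)
 }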

Decode URL functions
The function wikibase:decodeUri decodes (i.e. reverses percent-encoding in) a given URI string. This may be necessary when converting Wikipedia titles (which are encoded) into actual strings. This function is the opposite of the SPARQL [https://www.w3.org/TR/sparql11-query/#func-encode encode_for_uri] function.
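
For instance:

 # "M%C3%BCnchen" decodes to "München"
 SELECT ?title WHERE {
   BIND(wikibase:decodeUri("M%C3%BCnchen") AS ?title)
 }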

Automatic prefixes
Most prefixes that are used in common queries are supported by the engine without the need to explicitly specify them.

Extended dates
The service supports date values of type xsd:dateTime in a range of about 290 billion years in the past and in the future, with one-second resolution. WDQS stores dates as the 64-bit number of seconds since the Unix epoch.

Blazegraph extensions
The Blazegraph platform, on top of which WDQS is implemented, has its own set of SPARQL extensions, among them several graph traversal algorithms which are documented on the Blazegraph Wiki, including BFS, shortest path, connected components (CC) and PageRank implementations.
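
As an illustration, a breadth-first traversal through the Blazegraph GAS service might be sketched as follows (a sketch only; the exact parameters are documented on the Blazegraph Wiki):

 PREFIX gas: <http://www.bigdata.com/rdf/gas#>
 SELECT ?item ?depth WHERE {
   SERVICE gas:service {
     gas:program gas:gasClass "com.bigdata.rdf.graph.analytics.BFS" ;
                 gas:in wd:Q144 ;          # start node: dog (Q144)
                 gas:linkType wdt:P279 ;   # follow subclass of (P279) edges
                 gas:out ?item ;           # each visited node
                 gas:out1 ?depth .         # its BFS depth
   }
 }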

Please also refer to the Blazegraph documentation on query hints for information about how to control query execution and various aspects of the engine.

Federation
We allow [https://www.w3.org/TR/sparql11-federated-query/ SPARQL Federated Queries] to call out to a selected number of external databases. Supported endpoints are:

Example federated query:
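A sketch, assuming the DBpedia endpoint is on the allowed list:

 PREFIX dbo: <http://dbpedia.org/ontology/>
 SELECT ?abstract WHERE {
   SERVICE <https://dbpedia.org/sparql> {
     <http://dbpedia.org/resource/Berlin> dbo:abstract ?abstract .
     FILTER(LANG(?abstract) = "en")
   }
 }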

Please note that the databases listed above use ontologies that may be very different from Wikidata's. Please refer to the owners' documentation linked above to learn about the ontologies and data access for these databases.

MediaWiki API
Please see the full description on the MediaWiki API Service documentation page (Wikidata query service/User Manual/MWAPI).

The MediaWiki API Service allows calling out to the MediaWiki API from SPARQL and receiving the results inside the SPARQL query. Example (finding category members):
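
A sketch (the category name is illustrative; see the MWAPI documentation page above for the full set of parameters):

 PREFIX mwapi: <https://www.mediawiki.org/ontology#API/>
 SELECT ?item ?itemLabel WHERE {
   SERVICE wikibase:mwapi {
     bd:serviceParam wikibase:endpoint "en.wikipedia.org" .
     bd:serviceParam wikibase:api "Generator" .
     bd:serviceParam mwapi:generator "categorymembers" .
     bd:serviceParam mwapi:gcmtitle "Category:Lakes of Berlin" .
     ?item wikibase:apiOutputItem mwapi:item .
   }
   SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
 }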

Wikimedia service
Wikimedia runs the public service instance of WDQS, which is available for use at http://query.wikidata.org/.

The runtime of a query on the public endpoint is limited to 60 seconds. That is true both for the GUI and the public SPARQL endpoint. If you need to run longer queries, please contact the Wikimedia Discovery team.

GUI
The GUI at the home page of http://query.wikidata.org/ allows you to edit and submit SPARQL queries to the query engine. The results are displayed as an HTML table. Note that every query has a unique URL which can be bookmarked for later use. Going to this URL will put the query in the edit window, but will not run it - you still have to click "Execute" for that.

One can also generate a short URL for the query via a URL shortening service by clicking the "Generate short URL" link on the right - this will produce the shortened URL for the current query.

The "Add prefixes" button generates the header containing standard prefixes for SPARQL queries. The full list of prefixes that can be useful is listed in the prefixes>Wikibase/Indexing/RDF Dump Format#Full list of prefixes</>|RDF format documentation. Note that most common prefixes work automatically, since WDQS supports them out of the box.

The GUI also features a simple entity explorer which can be activated by clicking on the "🔍" symbol next to the entity result. Clicking on the entity Q-id itself will take you to the entity page on wikidata.org.

Default views

 * Main article: Wikidata:SPARQL query service/Wikidata Query Help/Result Views

If you run the query in the WDQS GUI, you can choose which view to present by specifying a comment of the form #defaultView:viewName (e.g. #defaultView:Map) at the beginning of the query.

SPARQL endpoint
SPARQL queries can be submitted directly to the SPARQL endpoint with a GET or POST request to https://query.wikidata.org/sparql. The result is returned as XML by default, or as JSON if either the query parameter format=json or the header Accept: application/sparql-results+json is provided. POST requests also accept the query in the body of the request, instead of the URL, allowing larger queries to run without hitting the URL length limit. (Note that the POST body must still be of the form query=<SPARQL text>, not just the bare SPARQL, and the SPARQL query must still be URL-escaped.)
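
For example, with curl (the query text is illustrative):

 curl -G 'https://query.wikidata.org/sparql' \
      --data-urlencode 'query=SELECT ?item WHERE { ?item wdt:P31 wd:Q146 . } LIMIT 5' \
      -H 'Accept: application/sparql-results+json'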

The JSON format is the standard [https://www.w3.org/TR/sparql11-results-json/ SPARQL 1.1 Query Results JSON Format].

It is recommended to use GET for smaller queries and POST for larger queries, as POST queries are not cached.

Supported formats
The following output formats are currently supported by the SPARQL endpoint:

Query timeout
There is a hard query deadline configured which is set to 60 seconds.

Every query will time out when it takes longer to execute than this configured deadline. You may want to optimize the query (see Wikidata:SPARQL query service/query optimization) or report a problematic query at Wikidata query service/Problematic queries.

Also note that currently access to the service is limited to 5 parallel queries per IP. These limits are subject to change depending on resources and usage patterns.

Namespaces
The data on Wikidata Query Service is contained in the main namespace, wdq, to which queries to the main SPARQL endpoint are directed, and other auxiliary namespaces, listed below. To query data from a different namespace, use the endpoint URL https://query.wikidata.org/bigdata/namespace/NAMESPACENAME/sparql.

DCAT-AP
The DCAT-AP data for Wikidata is available via SPARQL in the dcatap namespace.

The SPARQL endpoint for accessing it is: https://query.wikidata.org/bigdata/namespace/dcatap/sparql

The source for the data is: https://dumps.wikimedia.org/wikidatawiki/entities/dcatap.rdf

Example query to retrieve data:
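A sketch using standard DCAT terms; run it against the dcatap endpoint above:

 PREFIX dcat: <http://www.w3.org/ns/dcat#>
 SELECT ?distribution ?url WHERE {
   ?distribution a dcat:Distribution ;
                 dcat:downloadURL ?url .
 }
 LIMIT 10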

Linked Data Fragments endpoint
We also support querying the database using the Triple Pattern Fragments interface. This allows cheap and efficient browsing of triple data where one or two components of the triple are known and you need to retrieve all triples matching this template. See more information at the Linked Data Fragments site.

The interface can be accessed at the URL https://query.wikidata.org/bigdata/ldf. Example requests:


 * https://query.wikidata.org/bigdata/ldf?subject=http%3A%2F%2Fwww.wikidata.org%2Fentity%2FQ146 - all triples with subject wd:Q146


 * https://query.wikidata.org/bigdata/ldf?subject=&predicate=http%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23label&object=%22London%22%40en - all triples that have the English label "London"

Note that only full URLs are currently supported for the subject, predicate and object parameters.

By default, the HTML interface is displayed; however, several data formats are available, as defined by the Accept HTTP header.

The data is returned in pages of 100 triples each. The pages are numbered starting from 1, and the page number is set with the page parameter.

Standalone service
As the service is open-source software, it is also possible to run the service on any user's server, using the instructions provided below.

The hardware recommendations can be found in Blazegraph documentation.

If you plan to run the service against a non-Wikidata Wikibase instance, please see the further instructions.

Installing
In order to install the service, it is recommended that you download the full service package as a ZIP file, e.g. [http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.wikidata.query.rdf%22%20AND%20a%3A%22service%22 from Maven Central], with group ID org.wikidata.query.rdf and artifact ID "service", or clone the source distribution at https://github.com/wikimedia/wikidata-query-rdf/ and build it with "mvn package". The package ZIP will be in the dist/target directory under the distribution root.

The package contains the Blazegraph server as a .war application, the libraries needed to run the updater service to fetch fresh data from the wikidata site, scripts to make various tasks easier, and the GUI in the gui subdirectory. If you want to use the GUI, you will have to configure your HTTP server to serve it.

By default, only the SPARQL endpoint at <tvar|url1>http://localhost:9999/bigdata/namespace/wdq/sparql</> is configured, and the default Blazegraph GUI is available at <tvar|url2>http://localhost:9999/bigdata/</>. Note that in the default configuration, both are accessible only from localhost. You will need to provide external endpoints and an appropriate access control if you intend to access them from outside.

Using snapshot versions
If you want to install an unreleased snapshot version (usually necessary if the released version has a bug which has been fixed but a new release is not yet available) and do not want to compile your own binaries, you can use either:
 * https://github.com/wikimedia/wikidata-query-deploy - deployment repo containing production binaries. Needs git-fat working. Check it out and run "git fat pull".
 * Archiva snapshot deployments at <tvar|archiva>https://archiva.wikimedia.org/#artifact/org.wikidata.query.rdf/service</> - choose the latest version, then Artifacts, and select the latest package for download.

Loading data
The further install procedure is described in detail in the [https://github.com/wikimedia/wikidata-query-rdf/blob/master/docs/getting-started.md Getting Started document] which is part of the distribution, and involves the following steps:


 * 1) Download a recent RDF dump from https://dumps.wikimedia.org/wikidatawiki/entities/ (the RDF one is the one ending in .ttl.gz).
 * 2) Pre-process the data with the munge.sh script. This creates a set of TTL files with preprocessed data, with names like wikidump-000000001.ttl.gz, etc. See options for the script below.
 * 3) Start the Blazegraph service by running the runBlazegraph.sh script.
 * 4) Load the data into the service by using loadData.sh. Note that loading data is usually significantly slower than pre-processing, so you can start loading as soon as several preprocessed files are ready. Loading can be restarted from any file by using the options described below.
 * 5) After all the data is loaded, start the Updater service by using runUpdate.sh. The whole sequence is sketched after this list.
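
A minimal sketch of the whole sequence (file names and paths are illustrative):

 ./munge.sh -f data/wikidata-dump.ttl.gz -d data/split   # step 2: pre-process the dump
 ./runBlazegraph.sh &                                    # step 3: start Blazegraph
 ./loadData.sh -n wdq -d `pwd`/data/split                # step 4: load the data
 ./runUpdate.sh -n wdq                                   # step 5: keep the data updated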

Loading categories
If you also want to load category data, please do the following:


 * 1) Create a namespace for the category data, e.g. categories.
 * 2) Load the data into it.

Both steps are sketched below.
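
A sketch using the helper scripts from the distribution (the namespace name categories is illustrative):

 ./createNamespace.sh categories                           # step 1: create the namespace
 ./forAllCategoryWikis.sh loadCategoryDump.sh categories   # step 2: load category dumps into it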

Note that these scripts only load data from Wikimedia wikis according to Wikimedia settings. If you need to work with another wiki, you may need to change some variables in the scripts.

Scripts
The following useful scripts are part of the distribution:

munge.sh
Pre-process data from RDF dump for loading.

Example:
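A sketch (file and directory names are illustrative):

 # -f names the dump file and -d the output directory; -l (limit labels to the
 # given languages) and -s (skip sitelinks) are optional.
 ./munge.sh -f data/wikidata-dump.ttl.gz -d data/split -l en,sv -s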

loadData.sh
Load processed data into Blazegraph. Requires curl to be installed.

Example:
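A sketch (the namespace wdq and the data directory are illustrative):

 ./loadData.sh -n wdq -d `pwd`/data/split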

runBlazegraph.sh
Run the Blazegraph service.

Example:
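From the distribution directory:

 ./runBlazegraph.sh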

Inside the script, there are two variables that one may want to edit:
 * DEFAULT_GLOBE=2 - Q-id of the default globe
 * USER_AGENT="Wikidata Query Service; https://query.wikidata.org/" - Blazegraph HTTP User-Agent for federation

Also, the following environment variables are checked by the script (all of them are optional):

runUpdate.sh
Run the Updater service.

It is recommended that the settings for the -l and -s options (or the absence thereof) be the same for munge.sh and runUpdate.sh, otherwise data may not be updated properly.

Example:
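A sketch; the -l and -s options here mirror the ones given to munge.sh:

 ./runUpdate.sh -n wdq -l en,sv -s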

Also, the following environment variables are checked by the script (all of them are optional):

Updater options
The following options work with the Updater app.

They should be given to the runUpdate.sh script as additional options after --, e.g.: ./runUpdate.sh -n wdq -- <updater options>.

Configurable properties
The following properties are configurable by adding them to the script run command in the scripts above:

Missing features
Below are features which are currently not supported:


 * Redirects are only represented as an owl:sameAs triple; they do not express any equivalence in the data and have no special support.

Contacts
If you notice anything wrong with the service, you can contact the Wikimedia Discovery team by email on the list discovery@lists.wikimedia.org or on the IRC channel #wikimedia-discovery.

Bugs can also be submitted to Phabricator and tracked on the Discovery Phabricator board (phab:tag/discovery).