Extension:LinkedWiki

The LinkedWiki extension lets you reuse Linked Data in your wiki. You can fetch data from Wikidata or another source directly with a SPARQL query. The extension also provides Lua functions for building your modules, so that you can write your data to your RDF database.

Quick start
After installing this extension:


 * 1) Open the special page: SPARQL editor
 * 2) Select a SPARQL service (from your settings) or enter the endpoint URL of your SPARQL service
 * 3) Insert a SPARQL query (examples of SPARQL queries)
 * 4) Select a visualization: HTML table or a Sgvizler2 visualization
 * 5) For a Sgvizler2 visualization, click the button "See the doc" to find its available options.
 * 6) Check the result
 * 7) To finish, open the tab "How to use this query in this wiki?" and copy the generated wiki code into a page of your wiki
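The steps above end with generated wiki code. A minimal sketch of what such code can look like with the #sparql parser function (the exact call syntax and extra parameters are assumptions; copy the real code generated by the editor):

```
{{#sparql:
SELECT ?cat ?catLabel WHERE {
  ?cat wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 10
}}
```

With no endpoint given, a query like this is resolved against the default SPARQL service of the wiki (Wikidata, unless configured otherwise).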

See details: #sparql reuses your data in your wiki

Maps




Only three parameters are needed to display a map in your wiki:
 * 1) a SPARQL query
 * 2) a SPARQL service (Wikidata by default)
 * 3) a visualization (chart, table, pivot, etc.)
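A hedged sketch combining the three parameters for a map (the parameter name chart is an assumption; adapt it to the code generated by the SPARQL editor; the query returns latitude, longitude and a label, the column order Sgvizler2 maps typically expect):

```
{{#sparql:
SELECT ?lat ?long ?cityLabel WHERE {
  ?city wdt:P31 wd:Q515 ;
        p:P625/psv:P625 [ wikibase:geoLatitude ?lat ;
                          wikibase:geoLongitude ?long ] .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 50
|chart=leaflet.visualization.Map
}}
```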

For the leaflet.visualization.Map visualization with OpenStreetMap, you can add several options.

You can also use the google.visualization.Map visualization and see the log to debug your query or the visualization.

You can replace the parameter  by the parameter  with a SPARQL endpoint, but if that does not work, you will have to create a specific configuration for this SPARQL service.

HTML table


By default,  builds an HTML table that can be customized with wiki templates.

This visualization supports SPARQL services with credentials, which have to be described in the configuration.

Example:

DataTable
Another available table is "DataTable", a JavaScript visualization that can be customized with HTML tags.

Build SPARQL queries
The LinkedWiki extension provides two SPARQL editors. Flint Editor works with SPARQL 1.1 or 1.0 endpoints, but it sometimes fails, for example with Wikidata.

We are developing a new SPARQL editor where you can select, in one click, an endpoint already defined in your configuration, and read (and write, if you want) via SPARQL directly in the editor.

See details: Special pages to test your queries and to build your visualizations for your wiki

Visualize SPARQL results
The extension provides the parser function #sparql to reuse your data and Linked Open Data in your wiki.

You can use a new SPARQL endpoint or reuse a SPARQL service already defined in the configuration of your wiki.

See details:
 * #sparql reuses your data in your wiki
 * (deprecated) #wsparql

Write data in the pages
The tag  lets you write RDF/Turtle (1.0 or 1.1) directly on any page of the wiki. All pages with this tag are added to the category "RDF page".

If the option "check RDF Page" is enabled, the wiki checks the RDF before saving the page. If there is an error, the wiki shows the line where there is a problem in the RDF code.

Example of a page using the tag  to describe RDF documentation:
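For illustration, RDF/Turtle content of the kind such a page can carry (standard FOAF and Dublin Core vocabulary; the surrounding tag is omitted here because its exact name depends on your installation, and the page IRI is a placeholder):

```turtle
@prefix foaf:    <http://xmlns.com/foaf/0.1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .

<https://example.org/wiki/MyPage>
    dcterms:title   "My documentation page" ;
    dcterms:creator [ a foaf:Person ; foaf:name "Alice" ] .
```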

You can see the raw RDF of the page with these parameters:

Share data
IRIs (or URIs) of pages with the tag  are Cool IRIs: via an HTTP request, a machine sees only the RDF content, while a human sees both the RDF content and its description in natural language on the same page.

If your wiki is private, it is possible to open your private wiki only for your RDF database (see the installation).

Write data of main pages in a data namespace
The LinkedWiki extension creates two namespaces: Data and UserData. Users reach these namespaces via the tab "Data" on all main/user pages.

Only users in the group "Data" can change these namespaces. A user or a bot can use these namespaces to write RDF/Turtle content related to the main pages.

Push private data in an open knowledge base
You can add the tab "Push" to all pages in order to easily push a wiki page, with its sub-pages, data, files, modules, etc., from one wiki to a target wiki.

Configuration of SPARQL services
Often, the configuration of SPARQL services is not trivial and differs widely between RDF databases. In the configuration of this extension, you can configure in detail the HTTP requests supported by your public SPARQL services as well as your private SPARQL services.

This extension supports SPARQL services with credentials, so the users of your wiki can reuse your private data without seeing your credentials.

See details: Configuration of the LinkedWiki extension



Module : Lua class to read/write your data
Generally, for users, a wiki page is like an object to which they want to be able to add a new property. Unfortunately, RDF schemas can be complex, and contributors are rarely experts in RDF or SPARQL.

The extension simplifies the work of contributors without imposing definitive RDF schemas on your data. With the Lua class of this extension, you can build your own module (for example, an infobox) where you can add, read and check a property of your RDF database via a SPARQL service.

If you want to change your RDF schemas, you simply change your modules, then refresh your database and all pages of your wiki via the special page "Refresh database".
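A hypothetical sketch of such a module (the module name, the require path and the helper functions are assumptions for illustration only; see "Use LinkedWiki in your modules" for the real Lua API):

```lua
-- Module:InfoboxCity -- illustrative sketch, not the extension's actual API
local linkedwiki = require("linkedwiki")   -- assumed entry point of the Lua class
local p = {}

function p.population(frame)
    local item = frame.args.item           -- e.g. a Wikidata Q-id passed by the infobox
    -- Hypothetical call: read one property via the configured SPARQL service
    local query = "SELECT ?pop WHERE { wd:" .. item .. " wdt:P1082 ?pop } LIMIT 1"
    return linkedwiki.query(query)         -- assumed helper returning the first binding
end

return p
```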

See details: Use LinkedWiki in your modules

Write constraints and generate a SHACL report
The tag  supports the attribute  to specify how to check your data. All pages with this attribute are added to the category "RDF schema".

For the moment, LinkedWiki supports only SHACL. If RDFUnit is installed, the special page "RDF test cases" generates the SHACL report of your database from the rules written in the wiki. This special page shows the last calculated report and can recalculate it (which can take many minutes).

To enable constraints, you need to insert this attribute  in the tag. Example:
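For illustration, a minimal SHACL shape in Turtle of the kind such an "RDF schema" page can contain (standard SHACL vocabulary; the enclosing tag and attribute are those of your installation):

```turtle
@prefix sh:   <http://www.w3.org/ns/shacl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/shapes#> .

ex:PersonShape
    a sh:NodeShape ;
    sh:targetClass foaf:Person ;
    sh:property [
        sh:path     foaf:name ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
    ] .
```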

Download instructions
You can download the latest version with this link.

Installation of LinkedWiki
To install this extension:
 * 1) copy the extension into the extensions folder of your wiki
 * 2) in the extension's folder, execute the Composer and Yarn install commands. If you have not installed Composer or Yarn, see the section "How to install composer and yarn?" below.
 * 3) add the following lines to your LocalSettings.php:
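For a modern MediaWiki, the minimal load line in LocalSettings.php is the standard one (further LinkedWiki settings, shown later, are optional):

```php
wfLoadExtension( 'LinkedWiki' );
```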

You can now use the special page "SPARQL Editor" of your wiki to build a query with its visualization, and copy an example of code using the parser function into any page of your wiki. On LinkedWiki.com, you can find examples of queries with their wikitext.

Local configuration of SPARQL services
By default, a query without endpoint or configuration is resolved by Wikidata (read only).

If you add a new SPARQL service or change the default SPARQL service of your wiki, you need to add parameters to your LocalSettings.php.

For example, for a Virtuoso SPARQL service, you can add the configuration "http://database-test/data":

If you want to replace Wikidata with this SPARQL service, you also need to add this line:

If you want to use this SPARQL service to save all RDF data of the wiki, you need to add this line:
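A hedged sketch of what these settings can look like in LocalSettings.php (every variable and key name below is an assumption for illustration; see the configuration pages linked below for the real names):

```php
// Hypothetical names for illustration only
$wgLinkedWikiConfigSPARQLServices['http://database-test/data'] = [
    'endpointRead'  => 'http://database-test:8890/sparql',
    'endpointWrite' => 'http://database-test:8890/sparql-auth',
    'login'         => 'dba',
    'password'      => 'secret',
];

// Make it the default service (replacing Wikidata)
$wgLinkedWikiSPARQLServiceByDefault = 'http://database-test/data';

// Use it to save all RDF data of the wiki
$wgLinkedWikiSPARQLServiceSaveDataOfWiki = 'http://database-test/data';
```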


 * See details: Configuration
 * Examples of other endpoints: List of configurations

Make an infobox with lua functions
If you want to make an infobox with Lua functions of LinkedWiki, you need to install the Extension:Scribunto and the Extension:Capiunto.

Next, you can start to read the quick start with Lua.

Add a tab Data for main pages and user pages
The tag  lets you write RDF/Turtle directly in a page (an ontology or SHACL rules, for example), but people often prefer to separate natural language from RDF/Turtle in their wiki; the NamespaceData extension provides this separation.

Installation:
 * 1) Download the NamespaceData extension
 * 2) Add the required lines to your LocalSettings.php
 * 3) Give users the rights to see (or not) this tab and to change (or not) the pages in the Data namespace

Add a tab Push
When you have finished working in private (i.e., in a private wiki), you may want to push your pages (with their modules, templates, files and data pages) to another (public) wiki. This installation inserts a discreet "Push" tab on your pages.

Installation:
 * 1) Download the PushAll extension
 * 2) Add the required lines to your LocalSettings.php
 * 3) Add push targets (wikis you can push content to) by adding elements to the push-targets array. The array keys should be the names of your wikis and the values should point to the wiki root, without trailing slash. You can find your wiki root by clicking the history tab on a page, finding '/index.php' in the URL, and taking everything to the left of it.
 * 4) Create the logins and passwords via Special:BotPasswords on the target wikis.

Example:
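For illustration, a target entry might look like this in LocalSettings.php (the array name is an assumption; use the one from the PushAll documentation, and replace the URL with your own wiki root):

```php
// Hypothetical array name for illustration
$egPushAllTargets['My public wiki'] = 'https://public.example.org/w';
```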

Check the RDF/Turtle syntax before saving
In your LocalSettings.php, you can enable the feature "check RDF Page" with this line:
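The setting is a single boolean (the variable name below is an assumption; check the extension's configuration page for the real one):

```php
$wgLinkedWikiCheckRDFPage = true; // hypothetical name for illustration
```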

This feature uses rapper to parse the Turtle syntax (1.0 and 1.1) in the wiki.

This tool is installed along with Redland's Raptor2. To install it on CentOS, the commands are:
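A plausible sequence on CentOS (the raptor2 package name and its availability in EPEL are assumptions to verify against your distribution):

```shell
sudo yum install -y epel-release
sudo yum install -y raptor2
rapper --version
```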

Generate a SHACL report
You need to install RDFUnit. This tool is experimental but the code is stable.

The extension expects RDFUnit in the folder /RDFUnit of your server (or a link to this folder). The special page "RDF Unit" shows the command line to test the installation, and displays the SHACL report or the errors of RDFUnit.

Here is an example of installing RDFUnit v0.8.21 (the latest release) on a CentOS server:
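RDFUnit is a Maven project, so an installation along these lines is plausible (the repository URL is RDFUnit's public GitHub repository; the package names and build options are assumptions to adapt to your server):

```shell
sudo yum install -y git java-1.8.0-openjdk-devel maven
cd /
sudo git clone --branch v0.8.21 https://github.com/AKSW/RDFUnit.git
cd /RDFUnit
sudo mvn -pl rdfunit-validate -am clean install -DskipTests
```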

Refresh your RDF database with RDF pages of private wiki
If your wiki is private, the special page "Refresh RDF database" does not work without Extension:NetworkAuth: the RDF database behind the SPARQL service needs to read the RDF pages of the private wiki without credentials.

If your database is installed on the same server as the wiki, the configuration for Extension:NetworkAuth will look as follows. You can find the correct IP address used by your SPARQL service in the HTTP logs after using the special page "Refresh RDF database".
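A sketch of such a NetworkAuth configuration (check Extension:NetworkAuth for the exact keys; the IP address is a placeholder for the one found in your HTTP logs):

```php
$wgNetworkAuthUsers[] = [
    'iprange' => [ '127.0.0.1' ],   // IP used by your SPARQL service
    'user'    => 'NetworkAuthUser', // wiki user logged in automatically
];
```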

Here, you need to create the user "NetworkAuthUser" in your wiki with the credentials necessary to read the data pages.

Force the job queue to run after a refresh of your RDF database
If your wiki's traffic is too low to clear the queue after a refresh of your RDF database, you can clear the job queue of your wiki without waiting.

On Linux, you can add a scheduled task running every 5 minutes with crontab -e:

*/5 * * * * /usr/bin/php /WWWDATA/htdocs/w/maintenance/runJobs.php > /var/log/runJobs.log 2>&1

Don't forget to configure logrotate to automatically delete the logs of these jobs (vi /etc/logrotate.d/runJobs):

/var/log/runJobs.log {
  missingok
  notifempty
  compress
  size 20k
  daily
  maxage 7
}

By default, each time a request runs in the wiki, one job is taken from the job queue and executed. With this new cron task in place, you can disable that behaviour with the parameter Manual:$wgJobRunRate in LocalSettings.php: $wgJobRunRate = 0;

Highlight the RDF code on the wiki pages
You need to install Extension:SyntaxHighlight_GeSHi to highlight the RDF code on the wiki pages.

Errors about CURL
If, after the installation, you see errors about cURL, you probably need to install the cURL library for PHP on your server. Example for Ubuntu, Debian, CentOS or Fedora:
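Typical commands (the package name varies by distribution and PHP version; restart your web server afterwards):

```shell
# Debian / Ubuntu
sudo apt-get install php-curl

# CentOS / Fedora
sudo yum install php-curl
```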

Questions?

 * FAQ and problems

How to install composer and yarn?
For Debian or Fedora:
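Typical commands (package availability varies by distribution; installing Yarn through npm is one common route, not the only one):

```shell
# Debian / Ubuntu
sudo apt-get install composer

# Fedora
sudo dnf install composer

# Yarn via npm
sudo npm install --global yarn
```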

How to propose a new feature?

 * You can propose a new feature here.

How to report a software bug?

 * You can report a software bug here.

Change the API keys
Often, API keys are restricted by domain name, so you need to check or modify your API keys for your new domain name. In your LocalSettings.php, insert the correct API keys for your new domain name:

Replace in all pages of wiki the old domain name by the new
The Replace Text extension can replace the old domain name with the new one in most of the wiki:
 * 1) Download the Replace Text extension
 * 2) Add the required lines to your LocalSettings.php
 * 3) Use these command lines:
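Replace Text ships a maintenance script for wiki-wide replacements; an invocation along these lines is plausible (the script path and the --yes option should be checked against the Replace Text documentation; the domain names are placeholders):

```shell
php extensions/ReplaceText/maintenance/replaceAll.php "old.example.org" "new.example.org" --yes
```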

Afterwards, you can uninstall the extension.

Replace in all modules of wiki the old domain name by the new
Manually:
 * 1) List the pages in the Module namespace
 * 2) Open each module and replace the old domain name with the new one (the Lua editor provides a "replace all text" tool).

If you find a better method, you can propose it on the discussion page.

Refresh the configuration
If you use the old domain name in the name of the RDF graph where you save your data:
 * 1) Replace the old domain name with the new one in your LocalSettings.php.
 * 2) Check in the special pages of your wiki whether the old domain name still appears. If it does, replace the old domain name in your storage class in the folder "LinkedWiki/storageMethod".
 * 3) Check again in the special pages whether the old domain name still appears.

Refresh the RDF database
If you use the old domain name in the name of the named graph where you save your data in your RDF database, you need to change the configuration of your database to allow the wiki to save to this new named graph.

To refresh your RDF database, open the special page "Refresh database" and execute the three steps in order: clean all, import all data pages, and refresh all pages with modules and queries.

When all jobs have been executed, your database is refreshed.

Refresh the pages with SPARQL queries
If you see several pages with SPARQL queries but no results, you can open the special page "Refresh database" and execute the last step again: refresh all pages with modules and queries.

See also


 * SPARQL examples: LinkedWiki.com, University of Paris-Saclay and Wikidata
 * SPARQL in Wikipedia
 * SPARQL tutorial in French (Wikiversity)