Extension:External Data

The External Data extension allows MediaWiki pages to retrieve, filter, and format structured data from one or more sources. These sources can include external URLs, regular wiki pages, uploaded files, files on the local server, databases and LDAP directories.

Parser functions
The extension defines the following parser functions, along with one tag and six equivalent Lua functions:
 * #get_web_data - retrieves CSV, GFF, JSON, XML, HTML or free-form data from a URL and assigns it to variables that can be accessed on the page.
 * #get_soap_data - retrieves data from a URL via the SOAP protocol.
 * #get_file_data - retrieves data from a file on the local server, in the same formats as #get_web_data.
 * #get_db_data - retrieves data from a database.
 * #get_ldap_data - retrieves data from an LDAP server.
 * #get_program_data - retrieves data returned by a program run server-side.
 * #external_value - displays the value of any such variable.
 * #for_external_table - cycles through all the values retrieved for a set of variables, displaying the same "container" text for each one.
 * #store_external_table - cycles through a table of values, storing them as semantic data via the Semantic MediaWiki extension, by mimicking a call to SMW's #subobject function for each row.
 * #display_external_table - cycles through all the values retrieved for a set of variables, displaying each "row" using a template.
 * #clear_external_data - erases the current set of retrieved data.
 * a tag pair that shows raw external data without any wiki post-processing.

Download
You can download the External Data code, in .zip format, here.

You can also download the code directly via Git from the MediaWiki source code repository. From a command line, you can call the following:

You can also view the code online here.

Installation
To install this extension, create an 'ExternalData' directory (either by extracting a compressed file or downloading via Git), and place this directory within the main MediaWiki 'extensions' directory. Run Composer in that directory to install the extension's dependencies. Then, in the file 'LocalSettings.php', add the following line:

Authors
External Data was created, and is maintained, by Yaron Koren (reachable at yaron57@gmail.com). The overall code base, though, is the work of many people. Alexander Mashin has contributed significantly to the code. Important code contributions have also been made by Michael Dale, David Macdonald, Siebrand Mazeland, Ryan Lane, Chris Wolcott, Jelle Scholtalbers, Kostis Anagnostopoulos, Nick Lindridge, Dan Bolser, Joel Natividad, Scott Linder, Cindy Cicalese, Umherirrender, Anysite, Sahaj Khandelwal and others.

Development of some features was funded by KeyGene and KDZ – Zentrum für Verwaltungsforschung.

Retrieving data
Data can be retrieved from several different sources: a web page containing structured data (including a page on the wiki itself), a file on the local server, a database, an LDAP server, or a program run server-side.

#get_web_data - CSV, GFF, JSON, XML, HTML
To get data from a web page that holds structured data, call the parser function #get_web_data. It can take the following syntax:

An explanation of the parameters:


 * url - sets the full URL of the file being retrieved.
 * format - specifies the format of the data being retrieved: one of 'CSV', 'CSV with header', 'GFF', 'JSON', 'XML', 'HTML' or 'text'. CSV, JSON and XML are standard data formats; GFF, or the Generic Feature Format, is a format for genomic data. The difference between 'CSV' and 'CSV with header' is that 'CSV' is simply a set of lines with values, while in 'CSV with header' the first line is a header row, holding a comma-separated list of the name of each column. 'text' indicates that the contents of the file should be retrieved as-is.
 * delimiter - specifies the delimiter between values in the data set; it is used only for the CSV formats. The default value is ",". To specify a tab delimiter, use "\t".
 * regex - specifies a PHP regular expression that should be used to extract specific strings; used with the "text" format. The matched string is assigned to the external variable.
 * data - holds the "mappings" that connect local variable names to external variable names. Each mapping (of the form localVariable=externalVariable) is separated by a comma. External variable names are the names of the values in the file (in the case of a header-less CSV file, the names are simply the indexes of the values: 1, 2, 3, etc.), and local variable names are the names that are later passed to #external_value and the other display functions.
 * filters - sets filtering on the set of rows being returned. You can set any number of filters, separated by commas; each filter sets a specific value for a specific external variable. It is not necessary to use any filters; most APIs, it is expected, will provide their own filtering ability through the URL's query string.
 * start line, end line, header lines, footer lines - use these to cut out a fragment of the data. Line numbers are one-based; negative values (-1 meaning the last line) are possible, as are percentages (0% to 100%). Use header lines and footer lines to carve out a valid CSV, JSON or XML fragment. Note that if any of these is set, additional newlines will be injected into XML or JSON to guarantee that required tag/variable blocks begin and end at new lines, which will influence the required start line and end line settings. Dedicated external variables store the beginning and end of the main fragment (without header or footer), the number of lines returned, and the total number of lines in the file.
 * use xpath - an optional parameter that can be used with the "XML" or "HTML" formats, to indicate that "data" mappings should be done using XPath notation; see Using XPath, below.
 * default xmlns prefix - an optional parameter that can be used with "use xpath", which sets the default namespace prefix to be used.
 * use jsonpath - an optional parameter that can be used with the "JSON" format, to indicate that "data" mappings should be done using JSONPath notation; see Using JSONPath, below.
 * json offset - an optional parameter that represents the number of characters to ignore at the beginning of the data set being parsed. It is used with JSON values, in case the JSON being accessed has some kind of security string at the beginning.
 * allow trailing commas - if this is set, JSON files with commas before "]" or "}" will be parsed, although the JSON specification does not allow trailing commas. This setting is useful when start line, end line, header lines and footer lines are set.
 * post data - an optional parameter that lets you send some set of data to the URL via POST, instead of via the query string.
 * cache seconds - an optional parameter that sets the number of seconds that the values from this call should be cached; if it is less than the global cache expiration setting (if one is set), the latter applies; and if the effective cache expiration time is zero, caching is disabled.
 * use stale cache - an optional parameter that allows this function to use an expired cache entry if it cannot retrieve the real data.
 * suppress error - an optional parameter that prevents any error message from getting displayed if there is a problem retrieving the data.
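Putting a few of these parameters together, a minimal sketch of a call (the URL and column indexes are hypothetical) might look like:

```wikitext
{{#get_web_data:url=https://example.com/countries.csv
|format=CSV
|data=country=1,population=2}}
```

Here the headerless CSV's first and second columns are mapped to the local variables "country" and "population", which can then be displayed with #external_value or the table-display functions.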

More than one #get_web_data call can be used in a page. If this happens, though, make sure that every local variable name is unique.

For data from XML sources, the variable names are determined by both tag and attribute names. For example, given the following XML text:

the variable type would have the value Apple, and the variable color would have the value red.
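For instance, an illustrative snippet (not the original example) consistent with that description, where the tag name supplies one variable and the attribute name another:

```xml
<type color="red">Apple</type>
```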

Similarly, the following XML text would be interpreted as a table of values defining two variables named type and color:

A CSV file must be literally a CSV file, i.e. delimited by commas. A call for a headerless CSV file might look like:



while a call to CSV with a header row might look like:



where the header row supplies the external variable names that are then retrieved in the wiki.
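As a sketch (with hypothetical URLs and column names), the two kinds of calls differ only in the format value and in how columns are referenced: by index for plain CSV, by header name for 'CSV with header':

```wikitext
{{#get_web_data:url=https://example.com/data.csv
|format=CSV
|data=name=1,population=2}}

{{#get_web_data:url=https://example.com/data.csv
|format=CSV with header
|data=name=Name,population=Population}}
```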

You can also set caching to be done on the data retrieved, and string replacement to hide API keys; see the "Usage" section, below, for how to do both of those.

Getting data from a non-API text file
If the data you wish to access is on a MediaWiki page or in an uploaded file, you can use the above methods to retrieve the data assuming the page or file only contains data in one of the supported formats:


 * for data on a wiki page, use "action=raw" as part of the URL;
 * for data in an uploaded file, use the full path.

If the MediaWiki page with the data is on the same wiki, it is best to use the fullurl: parser function, e.g.



Similarly, for uploaded files, you can use the filepath: function, e.g.
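A sketch of both approaches, assuming a wiki page named 'Country data' and an uploaded file named 'Country data.csv' (both hypothetical):

```wikitext
{{#get_web_data:url={{fullurl:Country data|action=raw}}
|format=CSV with header
|data=country=Country,population=Population}}

{{#get_web_data:url={{filepath:Country data.csv}}
|format=CSV with header
|data=country=Country,population=Population}}
```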

For wiki pages that have additional information, the External Data extension provides a way to create an API of your own, at least for CSV data. To get this working, first place the data you want accessed in its own wiki page, in CSV format, with the headers as the top row of data (see here for an example). Then, the special page 'GetData' will provide an "instant API" for accessing either certain rows of that data, or the entire table. By adding "field-name=value" to the URL, you can limit the set of rows returned.

A URL for the 'GetData' page can then be used in a call to #get_web_data, just as any other data URL would be; the data will be returned as a CSV file with a header row, so the 'format' parameter of #get_web_data should be set to 'CSV with header'. See here for an example of such data being retrieved and displayed using #get_web_data and #for_external_table. In this way, you can use any table-based data within your wiki without the need for custom programming.

Data caching
You can configure External Data to cache the data contained in the URLs that it accesses, both to speed up retrieval of values and to reduce the load on the system whose data is being accessed. To do this, you can run the SQL contained in the extension file 'ExternalData.sql' in your database, which will create the table 'ed_url_cache', then add the following to your LocalSettings.php file, after the inclusion of External Data:

You should also add a line like the following, to set the expiration time of the cache, in seconds; this example line will cache the data for a week:

By default, if data cannot be retrieved, and a cache table exists, #get_web_data will use the cached value for this data even if the cache has already expired. To disallow this, add the following to LocalSettings.php:
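Taken together, the caching settings described above might look like the following sketch (setting names as used by recent versions of the extension; verify against your copy):

```php
// Enable the cache table created by ExternalData.sql.
$edgCacheTable = 'ed_url_cache';
// Cache retrieved data for one week (in seconds).
$edgCacheExpireTime = 7 * 24 * 60 * 60;
// Do not fall back to an expired cache entry when retrieval fails.
$edgAlwaysAllowStaleCache = false;
```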

String replacement in URLs
One or more of the URLs you use may contain a string that you would prefer to keep secret, like an API key. If that's the case, you can use the array $edgStringReplacements to specify a dummy string you can use in its place. For instance, let's say you want to access the URL "http://worlddata.com/api?country=Guatemala&key=123abcd", but you don't want anyone to know your API key. You can add the following to your LocalSettings.php file, after the inclusion of External Data:

Then, in your call to #get_web_data, you can replace the real URL with: "http://worlddata.com/api?country=Guatemala&key=WORLDDATA_KEY".
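Continuing the example, the replacement array maps the dummy string to the real one; a sketch:

```php
// Replace the placeholder WORLDDATA_KEY with the real API key
// before the URL is fetched.
$edgStringReplacements['WORLDDATA_KEY'] = '123abcd';
```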

Whitelist for URLs
You can create a "whitelist" for URLs accessed by #get_web_data: in other words, a list of domains such that only URLs from those domains can be accessed. If you are using string replacements to hide secret keys, it is highly recommended that you create such a whitelist, to prevent users from discovering those keys by including them in a URL within a domain that they control.

To create a whitelist with one domain, add the following to LocalSettings.php:

To create a whitelist with multiple domains, add something like the following instead:
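A sketch of both forms, using the $edgAllowExternalDataFrom setting named elsewhere in this document (hypothetical domains):

```php
// Single domain:
$edgAllowExternalDataFrom = 'https://example.org/';

// Multiple domains:
$edgAllowExternalDataFrom = array(
	'https://example.org/',
	'https://data.example.com/'
);
```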

HTTP options
By default, #get_web_data allows for HTTPS-based wikis to access plain HTTP URLs, and vice versa, without the need for certificates (see Transport Layer Security on Wikipedia for a full explanation). If you want to require the presence of a certificate, add the following to LocalSettings.php:

Additionally, the global variable $edgHTTPOptions lets you set a number of other HTTP-related settings. It is an array that can take in any of the following keys:


 * timeout - how many seconds to wait for a response from the server (default is 'default', which corresponds to the value of $wgHTTPTimeout, which by default is 25)
 * sslVerifyCert - whether to verify the SSL certificate, if retrieving an HTTPS URL (default is false)
 * followRedirects - whether to retrieve another URL if the specified URL redirects to it (default is false)

So, for instance, if you want to verify the SSL certificate of any URL being accessed by #get_web_data, you would add the following to LocalSettings.php:
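Assuming the key names listed above, certificate verification would be enabled like this (a sketch):

```php
// Verify the SSL certificate of every URL accessed by #get_web_data.
$edgHTTPOptions['sslVerifyCert'] = true;
```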

Using XPath
In some cases, the same tag or attribute name can be used more than once in an XML or HTML file, and you only want to get a specific instance of it. You can do that using XPath notation: add the parameter "use xpath", and then have each "external variable name" in the "data=" parameter be in XPath notation, instead of just a simple name.

We won't get into the details of XPath notation here, but you can see a demonstration of "use xpath" here.

Using JSONPath
Just as with XML (see the section above), in JSON, specifying which data you want can require more than simply specifying an attribute or tag name. Thankfully, just as XML has XPath, JSON has JSONPath: JSONPath is less well-known but just as useful. See here for one guide to JSONPath syntax, and here for an online evaluator of JSONPath syntax.

To use JSONPath, just add the parameter "use jsonpath" to the parser function call, and then have each "external variable name" in the "data=" parameter be in JSONPath notation.

Using CSS-style selectors
With the "HTML" format, you can either use XPath (see above) or CSS-style selectors. For CSS-style selection, you do not need to specify a special parameter: it is the default approach used when "use xpath" is not specified. CSS selectors are a notation that uses tag names, classes and IDs to locate one or more elements in an HTML page; it is also the syntax used in jQuery. See here for one reference for CSS-style selectors.

#get_soap_data - web data via SOAP
The parser function #get_soap_data, similarly to #get_web_data, lets you get data from a URL, but here using the SOAP protocol. It is called in the following way:

All of the LocalSettings.php settings that can be applied for #get_web_data can also be applied for #get_soap_data: $edgCacheTable, $edgCacheExpireTime, $edgStringReplacements, $edgAllowExternalDataFrom and $edgAllowSSL.

#get_file_data - retrieve files on the local server
You can get data from a file on the server on which the wiki resides, using #get_file_data. This parser function is called in a similar manner to #get_web_data - the set of allowed formats is the same, as are most of the other parameters. Unlike with #get_web_data, however, you cannot retrieve the data from any file; rather, the set of allowed files, and/or directories, must be set beforehand in LocalSettings.php, with an alias for each one, so that the actual file paths remain private. It is called in the following way:

Either "file=", or the combination of "directory=" and "file name=", should be set, but not both. If you want to give the wiki access to one or a small number of files, you could add one or more lines like the following to LocalSettings.php:

You would then set "file=" to the ID for that file.

And if there are any directories whose files you want the wiki to be able to access, you could add one or more lines like the following to LocalSettings.php:

You would then set "directory=" to the ID of that directory, and "file name=" to the name of the file you want to access in this #get_file_data call. Note that the External Data code ensures that users cannot do tricks like adding "../.." and so on to the file name to access directories outside of the specified one.

To give an example, let's say that a lab wants to publish test results on their wiki. The results are all in CSV files in one directory on a server. So, they might add the following to LocalSettings.php:
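A hedged sketch of that configuration, assuming the configuration arrays are named $edgFilePath and $edgDirectoryPath (verify against your version; the paths are hypothetical):

```php
// LocalSettings.php: a single file, and a whole directory of results.
$edgFilePath['summary'] = '/var/data/lab/summary.csv';
$edgDirectoryPath['results'] = '/var/data/lab/results';
```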

Then, a #get_file_data call on the wiki might look like this:
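A sketch of such a call (the directory ID, file name and column names are hypothetical):

```wikitext
{{#get_file_data:directory=results
|file name=2021-03-tests.csv
|format=CSV with header
|data=sample=Sample,value=Value}}
```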

Below that, there would presumably be a call to #for_external_table or #display_external_table to display the resulting data.

It is also possible to process all files in a directory, optionally restricted to names matching a mask. Example:

will produce a table of PHP classes with their parents in this extension. Each file's name, relative to the configured directory, is saved to a dedicated external variable.

#get_db_data - retrieve data from a database
The parser function #get_db_data allows retrieval of data from external databases. This function executes a simple SELECT statement and assigns the results to local variables that can then be used with the #external_value or #for_external_table functions.

A note about security: if you are going to use #get_db_data, you should think about the security implications. Configuring a database in LocalSettings.php will allow anyone with edit access to your wiki to run arbitrary SQL statements against that database. You should use a database user that has the minimum permissions needed for what you are trying to achieve. Complex SQL constructions could be passed to this function to make it do things vastly different from what it was designed for.

Configuration
Each database being accessed needs to be configured separately in LocalSettings.php. For normal databases (i.e., everything except for SQLite), add the following stanza for each database:

Where:


 * ID is a label for this database which is used when calling #get_db_data
 * server URL is the hostname on which the database lives
 * DB type is the type of database, i.e. mysql, postgres, mssql, oracle, sqlite, db2 or mongodb
 * DB name, username and password are details for accessing the database.

An example of a set of values would be:
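As a sketch, following the $edgDB* naming pattern ($edgDBServer is referenced later in this document; the other names follow the same pattern but should be verified), an "employees" database might be configured like this:

```php
$edgDBServer['employees']     = 'localhost';
$edgDBServerType['employees'] = 'mysql';
$edgDBName['employees']       = 'company';
$edgDBUser['employees']       = 'wiki_readonly';
$edgDBPass['employees']       = 'secret';
```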

The following optional settings can also be added:

Example values for these variables are:

Support for database systems
MySQL, Postgres (i.e. PostgreSQL), DB2 and MongoDB should work fully by default (though there are syntax limitations, and differences, for MongoDB - see below). For MS SQL/SQLServer, SQLite and Oracle, you may need to perform some special handling.

Postgres
If you cannot connect to a PostgreSQL database, it may be because your PHP installation is lacking the PostgreSQL database module, php-pgsql. On many Linux systems, you can install it by calling the following, then restarting the web server: yum install php55-php-pgsql

Amend the above configuration in LocalSettings.php to change the server type to "postgres":

SQLite
To connect to SQLite, you need something like the following in LocalSettings.php:

Oracle
Connecting to Oracle may work by default. If it doesn't work, the following may help:
 * Make sure that the Oracle client, and the PHP version being used, are using the same architecture: they have to either both be 32-bit, or both be 64-bit.
 * Make sure that the value of $edgDBServer for the installation matches something in the corresponding Oracle client .ora files. The value may need to look like "serverName/dbName", as opposed to "serverName".
 * If none of the above are the issue, you could try using the OdbcDatabase extension, which should work as well.

MongoDB
For MongoDB, there are no special connection parameters, although the username and password may be optional. There are two optional query parameters. Under PHP 7.*, an additional PHP extension and Composer library are required. Unfortunately, due to the way that MediaWiki continuous integration is built, this library cannot simply be added to this extension's Composer configuration (see T259743).

MongoDB is a non-SQL (or "NoSQL", if you prefer) database system, with its own querying language. When accessing MongoDB, you can either pass in a standard MongoDB query, or use the standard SQL-like syntax of #get_db_data. To use standard MongoDB querying, pass the query in the dedicated query parameter.

You can also use the standard querying functionality. There are some restrictions and differences, however, for the "where" clause:
 * only "AND"s can be used, not "OR"s
 * for the "LIKE" comparison, no text should be placed around the comparator - it should look like "Username LIKE Jo", not "Username LIKE '%Jo%'".

Because MongoDB returns values in JSON that may be complex, and contain compound values, you can get data that is stored in such a way by separating field names with dots. For instance, if the return data contains a value for a field called "Measurements" that is an array, holding values for fields called "Height" and "Width", then the "data=" parameter to #get_db_data could have a value like "height=Measurements.Height,width=Measurements.Width".

You can do Memcached-based caching of values retrieved from MongoDB; to do that, you need the following two lines in LocalSettings.php:

To enable MongoDB under PHP 7.4, the relevant PHP extension must be enabled, and the adapter library must be installed with Composer (this will be necessary until bug T259743 is resolved).

Usage
To get data from an external database, call the following:

An explanation of the fields:


 * db - the identifying label configured in LocalSettings.php
 * from - an SQL "FROM" clause, i.e. one or more tables; it can be a single table name or a more complex expression
 * join on - corresponds to an SQL "JOIN ... ON" clause; used if there is more than one table being queried
 * where - an SQL "WHERE" clause (optional)
 * limit - an SQL "LIMIT" clause, i.e. a number, limiting the number of results (optional)
 * order by - an SQL "ORDER BY" clause (optional)
 * data - mapping of database column names to local variables (syntax: localVariable=databaseColumn - i.e. "employeeName" is the name of the database column in the example below)
 * suppress error - prevents any error message from getting displayed if there is a problem retrieving the data (optional)

An example call, using the "employee database" example from above:
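A sketch of such a call (the table and column names are hypothetical):

```wikitext
{{#get_db_data:db=employees
|from=employees
|where=department='Sales'
|order by=employee_name
|data=name=employee_name,dept=department}}
```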

Prepared statements
A safer approach is to define one or more prepared statements for the database connections defined in LocalSettings.php. The relevant configuration variable can be a string containing an SQL query with parameters, if there is only one statement, or an associative array mapping statement IDs to queries, if there are several.

Parameters to the prepared statement are passed as a comma-separated list in a parser function argument. If several prepared statements are defined for the same connection, the ID of the needed statement is passed as an additional parameter. If prepared statements are defined for a connection, arbitrary queries cannot be run against it.

Examples:
 * Only one statement allowed for the connection:


 * Several statements per connection:

#get_ldap_data - retrieve data from LDAP directory
The parser function #get_ldap_data allows retrieval of data from external LDAP directories. This function executes LDAP queries and assigns the results to local variables that can then be used with the #external_value function.

A note about security: if you are going to use #get_ldap_data, you should think hard about the security implications. Configuring an LDAP server in LocalSettings.php will allow anyone with edit access to your wiki to run queries against that server. You should use a domain user that has the minimum permissions needed for what you are trying to achieve. Wiki users could run queries to extract all sorts of information about your domain. You should know what you are doing before enabling this function.

Configuration
The PHP extension ldap must be enabled. You need to configure each LDAP server in LocalSettings.php. Add the following stanza for each server:

Where:


 * domain is a label to be used when calling #get_ldap_data
 * myDomainuser and myDomainPassword are credentials used to bind to the LDAP server
 * [basedn] is the base DN used for the search.

Example:

Usage
To query the LDAP server, add this call to a wiki page:
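A sketch of a call (hypothetical domain label and filter):

```wikitext
{{#get_ldap_data:domain=myDomain
|filter=(sAMAccountName=jsmith)
|data=name=cn,mail=mail}}
```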

Where:


 * domain is the label used in LocalSettings.php
 * filter is the LDAP filter used for the search
 * data is the mappings of LDAP attributes to local variables
 * if all is not added, the query will retrieve only one result.

An example that retrieves a user from a Win2003/AD server, using a userid passed to a template:

#get_program_data - retrieve data returned by a program run server-side
The parser function #get_program_data allows retrieval of data returned by a program run server-side. Every such program has to be configured in LocalSettings.php, as in the example below:

After the program is configured, it can be invoked with #get_program_data; the retrieved data (SVG in this case) can then be shown with the raw-display tag pair, which prevents any wiki post-processing.

A simplified syntax is available in tag emulation mode.

A simpler example, involving only text processing, is below:

Although programs are run in a restricted environment, wiki administrators should exercise great caution when configuring programs to be callable with #get_program_data.

A program's output is cached in a dedicated table, as configured by the relevant parser function parameters and configuration settings:

A set of tested examples can be found here and (with working output) here.

Displaying data
Once you have retrieved the data onto the page, from any source, there are two ways to display it on the page: individually, with #external_value, or as a table, with #for_external_table or #display_external_table.

Displaying individual values
If this call retrieved a single value for each variable specified, you can call the following:
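Assuming a local variable "population" was set by an earlier #get_web_data call, the display call is simply:

```wikitext
Germany has a population of {{#external_value:population}}.
```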

As an example, this page contains the following text:

 * Germany borders the following countries:
 * Germany has a population of:
 * Germany has an area of:
 * Its capital is:

The page gets data from this URL, which contains the following text:

"357,050 km²","Austria,Belgium,Czech Republic,Denmark,France,Luxembourg,Netherlands,Poland,Switzerland",Berlin,"82,411,001"

The page then uses #external_value to display the 'bordered countries' and 'population' values; it also uses the #arraymap function, defined by the Page Forms extension, to apply some transformations to the 'bordered countries' value (you can ignore this detail if you want).

By default, #external_value displays an error message if it is called for a variable that has not been set - whether because the specified data source was inaccessible or because it contained no data - and no fallback/default value is provided. You can disable the error message by adding the following to LocalSettings.php:

To prevent any further wiki processing of external data (for example, when it is SVG produced by #get_program_data), you can use the raw-display tag pair.

Displaying a table of values
The data returned by #get_web_data or #get_db_data (#get_ldap_data without the "all" parameter doesn't support this feature) can also be a "table" of data (many values per field), instead of just a single "row" (one value per field). In this case, you can display it using either of the functions #for_external_table or #display_external_table.

#for_external_table
This URL contains information similar to that above, but for a few countries instead of just one. Calling #get_web_data with this URL, with the same format as above, will set the local variables to contain arrays of data, rather than single values. You can then call #for_external_table, which has the following format:

...where "expression" is a string that contains one or more variable names, surrounded by triple brackets. This string is then displayed for each retrieved "row" of data.

For an example, this page contains a call to #get_web_data for the URL mentioned above, followed by this call:
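A sketch of such a call, building a wiki table (the variable names are hypothetical; {{!}} stands for the pipe character inside parser functions):

```wikitext
{| class="wikitable"
! Country !! Population
{{#for_external_table:
{{!}}-
{{!}} {{{country}}}
{{!}} {{{population}}} }}
|}
```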

The call to #for_external_table holds a single row of a table, in wikitext; it's surrounded by wikitext to create the top and bottom of the table. The presence of "{{!}}" is a standard MediaWiki trick to display pipes from within parser functions. Much simpler calls to #for_external_table can be made if you just want to display a line of text per data "row", but an HTML table is the standard approach.

There's one other interesting feature of #for_external_table: it lets you modify specific values. You can URL-encode a value by appending a modifier to the variable name inside the triple brackets, and similarly you can HTML-encode values.

As an example of the former, if you wanted to show links to Google searches on a set of terms retrieved, you could call:

This is required because standard parser functions can't be used within #for_external_table - so the following, for example, will not work:

#display_external_table
This function is similar in concept to #for_external_table, but it passes the values in each row to a template, which handles the display. It is called as:

An explanation of the parameters:
 * template - the name of the template into which each "row" of data will be passed
 * data - the data mappings between external variables and local template parameters; much like the data parameters for the other functions
 * delimiter - the separator used between one template call and the next; the default is a newline. (To include newlines in the delimiter value, use "\n".)
 * intro template - a template displayed before the results set, only if there are any results
 * outro template - a template displayed after the results set, only if there are any results

For example, to display the data from the previous example in a table as before, you could create a template called "Country info row", that had the parameters "Country name", "Countries bordered", "Population" and "Area", and then call the following:
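A sketch of that call (parameter names follow the pattern used by the other functions; verify against your version):

```wikitext
{{#display_external_table:template=Country info row
|data=Country name=country,Countries bordered=borders,Population=population,Area=area}}
```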

The template "Country info row" should then contain wikitext like the following:

Clearing data
You can also clear all external data that has already been retrieved, so that it doesn't conflict with calls to retrieve external data further down the page. The most likely case in which this is useful is when data is retrieved and displayed in a template that is called more than once on a page. To clear the data, just call "{{#clear_external_data:}}". Note that the ":" has to be there at the end of the call, or else MediaWiki will ignore the parser function.

There is no way to clear the values for only one field; #clear_external_data erases the entire set of data.

Storing data
You can also use External Data to store a table of data that has been retrieved; you can do this using the storage capabilities of either the Semantic MediaWiki or Cargo extensions. Once the data has been stored, it can then be queried, aggregated, displayed etc. on the wiki by that extension.

Semantic MediaWiki
If you store data with Semantic MediaWiki, you should note a common problem, which is that the data stored by SMW does not get automatically updated when the data coming from the external source changes. The best solution for this, assuming you expect the data to change over time, is to create a cron job to call the SMW maintenance script "rebuildData.php" at regular intervals, such as once a day; that way, the data is never more than a day old.

To store a table of data using SMW, you can use the #store_external_table function. This function works as a hybrid of the #for_external_table function and the #subobject function, defined in the Semantic MediaWiki extension. Unlike with #subobject, the first parameter is the name of a property that will link from the subobject to the page it's on. You can see a demonstration of this function on the page Fruits semantic data; the call to #store_external_table on that page looks like:
#store_external_table loops over each row, and uses variables, in the same way as #for_external_table.

Cargo
There is no special parser function for storing data via Cargo; instead you should simply use #display_external_table, and include Cargo storage code within the template called by that function. You can see an example of Cargo-based storage using #display_external_table here; it uses this template, and you can see the resulting data here.

Scribunto/Lua
Since version 2.2, External Data defines Lua functions that match the functionality of its six "accessor" parser functions, so that wikis that have the Scribunto extension installed can call these functions directly in order to access and display outside data.

The following functions are defined:
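The function names below are an assumption, inferred from the six "accessor" parser functions that they presumably mirror:

```lua
-- The six accessor functions, presumably mirroring the parser
-- functions of the same names (the exact Lua names are an
-- assumption based on that correspondence):
local ed = mw.ext.externalData
local accessors = {
    ed.getWebData,      -- #get_web_data
    ed.getSoapData,     -- #get_soap_data
    ed.getFileData,     -- #get_file_data
    ed.getDbData,       -- #get_db_data
    ed.getLdapData,     -- #get_ldap_data
    ed.getProgramData,  -- #get_program_data
}
```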

The Lua functions accept the same parameters as the corresponding parser functions, but note the following:


 * Technically, there is only one parameter: a Lua table whose keys correspond to the parser function parameters.
 * Comma-separated lists in parameter values can be replaced with Lua tables; both forms will work.
 * If the XML format is used, an external variable is set that contains XML data preserving, with some limitations, the whole structure of the original XML document. It can be referred to in the data argument, and the corresponding internal variable will be a nested Lua table.
 * If the JSON format is used, an external variable is set that contains JSON data preserving the whole structure of the original JSON document. It can be referred to in the data argument, and the corresponding internal variable will be a nested Lua table.
 * "Valueless" parameters can be supplied either as numbered (positional) values or as named keys set to true; both forms are valid.
 * Parameters whose names contain a space need to be surrounded with quotes and square brackets, unless they are valueless, in which case quotes are enough.
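The points above can be sketched in a single call; the function name, URL, and parameter values here are illustrative assumptions:

```lua
-- Illustrative only; the URL and parameter values are assumptions.
local data, errors = mw.ext.externalData.getWebData{
    url = 'https://example.com/data.xml',  -- hypothetical URL
    format = 'xml',
    -- A comma-separated list can be given as a Lua table instead:
    data = { title = 'title', author = 'author' },
    -- A valueless parameter, in numbered form:
    'use xpath',
    -- ...or, equivalently, in named form; square brackets and quotes
    -- are needed because the name contains a space:
    -- ['use xpath'] = true,
}
```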

Each Lua function returns two values:


 * 1) A table of external data. Unlike with the parser functions, it will be "row-based", i.e. a numbered array of records with named fields corresponding to external variables. If no external data is fetched, nil will be returned. If there is only one value for some external variable (it will be in the first record), it will also be duplicated as a named field of the returned table itself, as it is highly likely that it belongs to the rowset as a whole rather than to its first row; it can then be accessed either way.
 * 2) A numbered table of error messages. If there were no errors, nil will be returned.

Unlike with the parser functions, external data is only returned to the calling Lua module and is not stored on the page to be retrieved later by #external_value, etc.

Example:
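The following is a sketch of a Scribunto module using the presumed mw.ext.externalData.getWebData function; the URL, column names and external variable names are illustrative assumptions:

```lua
-- A sketch of a Scribunto module using the presumed
-- mw.ext.externalData.getWebData function; the URL, column
-- names and variable names are illustrative assumptions.
local p = {}

function p.fruits(frame)
    local data, errors = mw.ext.externalData.getWebData{
        url = 'https://example.com/fruits.csv',
        format = 'csv with header',
        data = { name = 'name', color = 'color' },
    }
    if not data then
        -- The second return value is a numbered table of error messages.
        return table.concat(errors or {}, '; ')
    end
    local out = {}
    -- The returned table is row-based: a numbered array of records
    -- with named fields corresponding to the external variables.
    for _, row in ipairs(data) do
        out[#out + 1] = row.name .. ' (' .. row.color .. ')'
    end
    return table.concat(out, ', ')
end

return p
```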

Common problems

 * If the call to #get_web_data or #for_external_table isn't returning any data, and the page being accessed is large, it could be because the retrieval call is timing out. You should set the relevant timeout setting in your LocalSettings.php file (which represents a number of seconds) to some number greater than 25, its default value.
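Assuming the setting in question is MediaWiki's $wgHTTPTimeout (whose default of 25 seconds matches the value mentioned above; this identification is an assumption), the LocalSettings.php line might look like:

```php
// Allow slow external sources up to 60 seconds before timing out.
// $wgHTTPTimeout is assumed here to be the relevant setting.
$wgHTTPTimeout = 60;
```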
 * If the data being accessed has changed, but the wiki page accessing it still shows the old data, it is because that page is being cached by MediaWiki. There are several solutions to this: if you are an administrator, you can hit the "refresh" tab above the page, which will purge the cache. You can also easily disable caching for the entire wiki; see here for how. Finally, if you wait long enough (typically no more than 24 hours), the page will get refreshed on its own and display the new data.


 * If you host a private wiki locally but use a dynamic IP service to access it, your wiki will connect to itself through your public IP and not through localhost or 127.0.0.1 (or an IPv6 equivalent). In such a case, your wiki is not allowed to query itself, so the examples given here will work when the data are hosted on a different server, but not if they are hosted on your own wiki. A workaround is to use the NetworkAuth extension, which allows you to automatically authenticate your router/box/modem to access your wiki. Note: the security of this approach is not guaranteed.


 * If the extension is not correctly handling non-ASCII characters, the problem may be that your PHP installation lacks the mbstring extension; make sure that it is installed.


 * To query data from another wiki that uses Semantic MediaWiki, it is recommended to use the Special:Ask page, rather than one of SMW's API actions, to construct the URL that will be passed in to #get_web_data, since the API will not output data in a syntax that External Data can use. To construct the URL, go to Special:Ask, create the desired query, then copy the URL from the "Download queried results in CSV format" link.

Version history
External Data is currently at version 2.4.1. See the entire version history.

Bugs and feature requests
The best place to report bugs is on Phabricator - see How to report a bug. The project that should be specified is MediaWiki-extensions-ExternalData.

You can also put any questions, suggestions or bug reports about External Data at the talk page for this extension. Or you can write to the MediaWiki mailing list, mediawiki-l. (If you write to the mailing list, please include "External Data" somewhere in the subject line.)

You can also send specific code patches to Yaron Koren, at yaron57@gmail.com.

Translating
Translation of External Data is done through translatewiki.net. The translation for this extension can be found here. To add language values or change existing ones, you should create an account on translatewiki.net, then request permission from the administrators to translate a certain language or languages on this page (this is a very simple process). Once you have permission for a given language, you can log in and add or edit whatever messages you want to in that language.