See also the quick start guide. It answers some questions that have no answer here and also points to various useful pages.
- 1 How do I ... ?
- 1.1 get help?
- 1.2 report a bug or request a feature?
- 1.3 figure out what action or submodule to call?
- 1.4 call the API?
- 1.5 control the output format?
- 1.6 check if an API module is available?
- 1.7 detect errors?
- 1.8 get the content of a page (wikitext)?
- 1.9 get the content of a page (HTML)?
- 1.10 deal with 2015's API changes?
- 2 Why...
- 2.1 do I get HTTP 403 errors?
- 2.2 do I get the readapidenied error?
- 2.3 do I get badtoken errors?
- 2.4 do I get warnings instead of tokens (Action 'edit' is not allowed for the current user)?
- 2.5 do I get the mustposttoken error?
- 2.6 is X not available through the API?
- 2.7 does my API call on Wikimedia wikis just return an HTML error?
- 2.8 do really long API URLs not work?
How do I ... ?
get help?
- Read this FAQ
- Try to find the answer to your question in the API documentation here or on the self-documenting API home page
- If you can't find the answer to your question on the web, ask on the mediawiki-api mailing list.
Report a bug or request a feature
If you have found a bug in the API or have a feature request, report it in Phabricator. Search for existing bugs first (please don't file duplicate bugs) and enter MediaWiki-API as the project when reporting a new bug against the API. If the functionality you're requesting or reporting a bug against is offered by an extension (e.g. AbuseFilter, FlaggedRevs), add that extension's project, e.g. "MediaWiki-extensions-AbuseFilter".
figure out what action or submodule to call?
The MediaWiki API is big, and extensions enlarge it further. Some suggestions:
- If you're trying to get information about a page, you will probably use a prop= submodule of action=query. Other query submodules return lists of pages and meta-information about the wiki. View the generated API help of all query submodules.
- If you see a wiki page doing something interesting after the initial page load, it must be making API requests. Open your browser's developer console and look for its network requests to api.php.
- All the code running on Wikimedia wikis is open source, so you can read the source code that makes the API requests. One strategy to locate source code is to append ?uselang=qqx to the wiki page URL to see the message keys near where API results are presented; you can then search for these message keys in the localized message files i18n/en.json of core and extensions.
- You can view the entire expanded generated API help on one page by appending recursivesubmodules=1 to the help URL.
The links to generated API help above go to the English Wikipedia. You should browse the generated API help on the wiki where you'll be making API requests, since different wikis have different configurations and different sets of extensions.
call the API?
Send HTTP requests to api.php. For example, on the English Wikipedia, the URL is https://en.wikipedia.org/w/api.php . Most wikis have api.php at a similar URL: just use api.php in place of index.php in page actions. From MediaWiki 1.17 onwards, MediaWiki supports Really Simple Discovery; the HTML source of every page has an RSD link pointing to an RSD descriptor which indicates where to find the API. If you can't figure out the URL of api.php on a third-party (non-Wikimedia-operated) wiki, contact its owner; the wiki may not have the MediaWiki API enabled.
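Building the request URL from a parameter dict avoids manual quoting mistakes. A minimal Python sketch; the helper name is hypothetical and the endpoint is just the English Wikipedia example from above:

```python
from urllib.parse import urlencode

# Hypothetical helper: build an api.php request URL from a parameter dict.
# Substitute the target wiki's actual api.php endpoint.
def build_api_url(endpoint, params):
    return endpoint + "?" + urlencode(params)

url = build_api_url(
    "https://en.wikipedia.org/w/api.php",
    {"action": "query", "meta": "siteinfo", "format": "json"},
)
print(url)
# https://en.wikipedia.org/w/api.php?action=query&meta=siteinfo&format=json
```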
To play with the API
- use Special:ApiSandbox
- enable your browser's developer console and watch network requests to api.php as you interact with the wiki
control the output format?
Append &format=someformat to the query string. See the list of output formats for more information. Note that effort is underway to remove all output formats except JSON, so try to use JSON whenever possible.
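With format=json the response body parses directly with any JSON library. A small sketch, using a hand-written sample body rather than a live response:

```python
import json

# Sample response body of the shape format=json returns; the values here are
# invented for illustration, not fetched from a live wiki.
body = '{"batchcomplete":"","query":{"pages":{"1":{"pageid":1,"title":"Main Page"}}}}'
data = json.loads(body)
print(data["query"]["pages"]["1"]["title"])  # Main Page
```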
check if an API module is available?
You can use action=paraminfo to request information about the API modules and submodules (such as query+geosearch) that you want to invoke. The paraminfo.modules array in the response must contain a path key for each module and submodule; anything missing is not available.
If an API module isn't available and you know which extension implements it, you can check whether that extension is loaded by querying the siteinfo meta information with siprop=extensions and looking for its name in the returned list.
Even if a module appears to be available, you must always handle API errors.
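The availability check described above can be sketched as follows; the sample paraminfo response is hand-written, not fetched from a wiki:

```python
# A module is available only if its path appears in the paraminfo.modules
# array of the action=paraminfo response.
def available(paraminfo_response, path):
    modules = paraminfo_response.get("paraminfo", {}).get("modules", [])
    return any(m.get("path") == path for m in modules)

# Hand-written sample response, shaped like a real paraminfo result.
sample = {"paraminfo": {"modules": [{"path": "query+geosearch", "name": "geosearch"}]}}
print(available(sample, "query+geosearch"))     # True
print(available(sample, "query+nosuchmodule"))  # False
```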
detect errors?
See Errors and warnings.
An error response from the API will set the MediaWiki-API-Error HTTP header and return an error structure. For an example error response, visit https://en.wikipedia.org/w/api.php?action=blah.
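A client-side check for both signals (the response header and the error member of the body) might look like this sketch; the sample values mirror the action=blah example above:

```python
# Sketch of error detection: inspect the MediaWiki-API-Error header and the
# "error" member of the parsed JSON body, returning the error code if any.
def api_error(headers, body):
    if "MediaWiki-API-Error" in headers or "error" in body:
        err = body.get("error", {})
        return err.get("code", headers.get("MediaWiki-API-Error"))
    return None

# Hand-written sample error response.
headers = {"MediaWiki-API-Error": "unknown_action"}
body = {"error": {"code": "unknown_action",
                  "info": "Unrecognized value for parameter 'action': blah."}}
print(api_error(headers, body))  # unknown_action
```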
get the content of a page (wikitext)?
If you just want the raw wikitext without any other information whatsoever, it's best to use index.php's action=raw mode instead of the API: https://en.wikipedia.org/w/index.php?action=raw&title=Main_Page. Note that this outputs plain wikitext without any formatting. See also the action=raw documentation.
To get more information about the page and its latest version, use the API: https://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Main_Page. See also the documentation for the prop=revisions module.
You can retrieve up to 50 pages per API request: https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&titles=Main_Page%7CArticles. This also works with generators.
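Extracting the wikitext from a prop=revisions&rvprop=content response (default formatversion=1 shape, where the content sits under the "*" key) can be sketched as:

```python
# Pull the wikitext of each page out of a prop=revisions&rvprop=content
# response. The sample dict below mimics the formatversion=1 shape; the
# page content itself is invented.
def page_wikitext(response):
    out = {}
    for page in response["query"]["pages"].values():
        revs = page.get("revisions")
        if revs:  # missing pages carry no revisions entry
            out[page["title"]] = revs[0]["*"]
    return out

sample = {"query": {"pages": {"1": {"pageid": 1, "title": "Main Page",
          "revisions": [{"*": "'''Welcome''' to the wiki."}]}}}}
print(page_wikitext(sample))
```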
get the content of a page (HTML)?
If you just want the HTML, it's best to use index.php's action=render mode instead of the API: https://en.wikipedia.org/wiki/Main_Page?action=render. See the action=render documentation.
With the advent of RESTBase, on Wikimedia wikis you can instead request the cached HTML of a page, for example https://rest.wikimedia.org/en.wikipedia.org/v1/page/html/Main_Page (for performance this is also available at https://en.wikipedia.org/api/rest_v1/page/html/Main_Page to reuse an existing network connection to the wiki). Unlike ?action=render, this returns a complete HTML document (i.e. <html><head>various metadata</head><body>...</body></html>); you could use an HTML parsing library to get the inner HTML of the <body> tag (see the documentation).
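As a rough illustration of taking the inner HTML of the <body> tag, here is a naive string-slicing sketch; a real client should use an HTML parsing library as suggested above:

```python
# Naive sketch: slice out the inner HTML of the <body> tag from a complete
# HTML document such as the REST endpoint returns. String slicing is fragile;
# prefer a proper HTML parser in real code.
def body_inner_html(doc):
    start = doc.index(">", doc.index("<body")) + 1
    end = doc.index("</body>")
    return doc[start:end]

doc = "<html><head><title>t</title></head><body><p>Hello</p></body></html>"
print(body_inner_html(doc))  # <p>Hello</p>
```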
To get more information distilled from the wikitext at parse time (links, categories, sections, etc.), you can:
- Query the property submodules that provide the information you need (links, categories, etc.).
- Use the action=parse module.
deal with 2015's API changes?
The default continuation behavior changed in MediaWiki 1.26. If you request additional data based on continue information from an API response, you must update your code. Either
- add rawcontinue= to your API requests to keep getting the confusing old query-continue behavior,
- or add continue= to your API requests and update to the cleaner continue processing that has been available since MediaWiki 1.21.
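The continue= processing amounts to merging each response's continue block into the next request's parameters until none is returned. A sketch with a fake fetcher standing in for the actual HTTP call:

```python
# Continuation sketch: start with continue=, then merge each response's
# "continue" block into the next request's parameters until it is absent.
def all_results(fetch, params):
    params = dict(params, **{"continue": ""})
    while True:
        response = fetch(params)
        yield response
        if "continue" not in response:
            break
        params.update(response["continue"])

# Fake fetcher returning two pre-baked batches, to show the loop shape.
batches = [{"continue": {"apcontinue": "B"}, "query": {"allpages": [{"title": "A"}]}},
           {"query": {"allpages": [{"title": "B"}]}}]

def fetch(params):
    return batches.pop(0)

titles = [p["title"]
          for r in all_results(fetch, {"action": "query", "list": "allpages"})
          for p in r["query"]["allpages"]]
print(titles)  # ['A', 'B']
```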
Also, since MediaWiki 1.25 an improved output structure for the JSON and PHP formats has been available if you add formatversion=2 to your requests. As of July 2015, this is still considered experimental because a few API modules may get further improvements in this mode. If you're willing to risk needing to make future changes to adapt, it's much nicer to process API results with formatversion=2.
Why...
do I get HTTP 403 errors?
This could mean you're not passing a User-Agent HTTP header, or that your User-Agent is empty or blacklisted (see the m:User-Agent policy). See the quick start guide for more information. Also, it could mean that you're passing & in the query string of a GET request: Wikimedia blocks such requests, so use POST for them instead.
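Sending a descriptive User-Agent is a one-liner in most HTTP libraries; for example with Python's urllib (the tool name and contact address below are placeholders for your own):

```python
from urllib.request import Request

# Always send a descriptive User-Agent identifying your tool and a way to
# contact you; "MyTool/1.0 (contact@example.com)" is only a placeholder.
req = Request(
    "https://en.wikipedia.org/w/api.php?action=query&meta=siteinfo&format=json",
    headers={"User-Agent": "MyTool/1.0 (contact@example.com)"},
)
print(req.get_header("User-agent"))  # MyTool/1.0 (contact@example.com)
```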
do I get the readapidenied error?
The wiki you're querying contains private content and requires users to log in to read pages. This means a client needs to be logged in to query any information at all through the API. See the quick start guide for more information. It's not currently possible to query the contents of whitelisted pages without logging in, even though they're available in the regular user interface.
do I get badtoken errors?
This is usually because you're either not passing a token at all (read about tokens in the documentation of the module you're using) or because you're having trouble staying logged in. It's also possible you're reusing a type of token that can't be reused (see module documentation for details) or that you're using a token that's associated with an expired session. In general, when using cached tokens, refetch the token (see API:Tokens) and try again before giving up.
do I get warnings instead of tokens (Action 'edit' is not allowed for the current user)?
You either don't have the right to execute the action you requested, or you're having trouble staying logged in.
do I get the mustposttoken error?
The action you're attempting must be requested using HTTP POST.
You probably clicked on an api.php URL in a browser or modified an existing URL in the browser's location field, but that results in an HTTP GET request.
You have to use a library (such as the mediawiki.api ResourceLoader module) or utility that can make POST requests; usually you also have to provide it your session cookies and an API:token so MediaWiki can verify that you are the logged-in user with rights to perform the action.
As a hack, you might be able to use the cURL command-line utility, providing it each API parameter with -F 'action=delete' -F 'token=hexadecimal stuff+\' and the necessary browser cookies with -H 'Cookie: your session cookies'.
The Network panel of the browser developer tools window (Ctrl+Shift+I) in Firefox and Chromium has a "Copy as cURL" menu item that can help, but it's still fiddly.
Depending on what you want to do, it may be easier to learn how to use a bot or library that handles the details of login, cookies, and tokens for you.
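For completeness, the same POST shape sketched with Python's urllib instead of cURL; the action, title, token, and cookie values are all placeholders:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Sketch of a POST action request: the token goes in the urlencoded POST body
# and the session cookies in a Cookie header. All values are placeholders.
body = urlencode({"action": "delete", "title": "Sandbox",
                  "token": "0123abcd+\\", "format": "json"}).encode()
req = Request("https://en.wikipedia.org/w/api.php", data=body,
              headers={"Cookie": "your session cookies",
                       "User-Agent": "MyTool/1.0 (contact@example.com)"})
print(req.get_method())  # POST
```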
is X not available through the API?
Not all features available in the user interface are available through the API. Such features weren't implemented either because no one has gotten around to it yet or because no one has requested them. For information about filing feature requests, see above.
does my API call on Wikimedia wikis just return an HTML error?
If you use API calls with POST requests, make sure that these requests don't use Content-Type: multipart/form-data. This happens, for instance, if you use cURL to access the API and pass your POST parameters as an array. The Squid proxy servers used as frontend servers at the Wikimedia wiki farm don't handle that correctly, so an error is returned.
Instead, use the "key1=value1&key2=value2..." notation to pass the parameters as a string, similar to GET requests.
On other wikis that you access directly it doesn't make a difference.
In addition, some software (such as cURL) sends an Expect: 100-continue header for longer POST requests (>1024 bytes). The Wikimedia wikis that go through Squid servers can't cope with this. If you are still getting HTML errors with POST requests and are not logged in, try setting a blank Expect header (e.g. using cURL on the command line, pass the option -H 'Expect:').
do really long API URLs not work?
There is a maximum URL length that can be used with the API when making GET requests. This limit varies depending on the website; Wikimedia's limit is roughly 8100 characters. To get around this limit, use POST requests instead (you may also need to set the Expect header, as described above).
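A client can switch to POST automatically when the assembled GET URL would be too long; the 8100-character figure below is the approximate Wikimedia limit mentioned above and should be treated as a rough number:

```python
from urllib.parse import urlencode

# Sketch: fall back to POST when the GET URL would exceed a length limit.
# 8100 is the approximate Wikimedia limit; other sites differ.
def choose_method(endpoint, params, limit=8100):
    url = endpoint + "?" + urlencode(params)
    return "GET" if len(url) <= limit else "POST"

# A query for 1000 titles easily exceeds the limit once the pipe separators
# are percent-encoded (each "|" becomes "%7C").
many_titles = "|".join("Page_%d" % i for i in range(1000))
print(choose_method("https://en.wikipedia.org/w/api.php",
                    {"action": "query", "titles": many_titles}))  # POST
```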