
Please also read API:Main page. That guide answers questions not covered here and collects a number of other useful pages.



  1. Read this FAQ.
  2. Try to find the answer in the API documentation here and in your wiki's api.php.
  3. If you can't find an answer to your question on the web, ask on the mediawiki-api mailing list.


Where do I report a bug or request a feature?

If you find a bug in the API or want to request a new feature, report it on Phabricator. Please search the existing bugs first (don't file duplicates), and tag new API bugs with the MediaWiki-API project. If the feature you are requesting or the bug you are reporting concerns an extension (e.g. AbuseFilter, FlaggedRevs), file it under that extension's project instead, such as MediaWiki-extensions-AbuseFilter.

How do I figure out what action or submodule to use?

The MediaWiki API is big, and extensions extend it further. Some tips:

  • If you're trying to get information about pages, you'll probably use a prop= submodule of action=query. Other query submodules return lists of pages and meta-information about the wiki. View the generated API help of all query submodules.
  • If you see a wiki page doing something interesting after initial page load, it must be making an API request.
    • Open your browser's developer console and look for its network requests to api.php.
    • All the code running on Wikimedia wikis is open source, so you can read the source code making API requests. One strategy to locate source code is to append ?uselang=qqx to the wiki page URL to see the message keys near where API results are presented, then you can search for this message key in the localized message files i18n/en.json of core and extensions.
  • You can view the entire expanded generated API help on one page by appending recursivesubmodules=1 to the API help URL.

The links to generated API help above go to English Wikipedia. You should browse the generated API help on the wiki where you'll be making API requests, since different wikis have different configurations and different sets of extensions.
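As a concrete illustration, a query-submodule request can be built with only the Python standard library. This is a minimal sketch: the endpoint URL and the helper name are assumptions, so substitute the api.php of the wiki you are targeting.

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"  # substitute your wiki's api.php

def query_url(titles, prop="info"):
    """Build an action=query URL using a prop= submodule."""
    params = {
        "action": "query",
        "prop": prop,                  # e.g. info, revisions, links, ...
        "titles": "|".join(titles),    # multiple titles joined with "|"
        "format": "json",
    }
    return API + "?" + urlencode(params)

print(query_url(["Main Page"]))
# https://en.wikipedia.org/w/api.php?action=query&prop=info&titles=Main+Page&format=json
```

Opening the printed URL in a browser shows the same data that the generated API help documents for the chosen submodule.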


Where is the API?

Send HTTP requests to api.php.

For example, on the English Wikipedia, the URL is https://en.wikipedia.org/w/api.php. Most wikis serve api.php at a similar URL: just use api.php in place of index.php in page-action URLs. From 1.17 onwards, MediaWiki supports Really Simple Discovery: the HTML source of every page has an RSD link pointing to a descriptor that indicates where to find the API. If you can't figure out the URL of api.php on a third-party (non-Wikimedia-operated) wiki, contact its owner. The wiki may also have the API disabled; see $wgEnableAPI.


How can I test API requests?

  • Use Special:ApiSandbox.
  • Open your browser's developer console and inspect the network requests to api.php as you interact with the wiki.


How do I change the output format?

Pass &format=someformat in the query string. See the list of output formats for more information.
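For example, the same request with different serializations differs only in the format parameter. A sketch (endpoint URL assumed):

```python
from urllib.parse import urlencode

base = {"action": "query", "meta": "siteinfo"}
# Build one URL per output format; any supported format name works here.
urls = {fmt: "https://en.wikipedia.org/w/api.php?" + urlencode({**base, "format": fmt})
        for fmt in ("json", "xml")}
print(urls["json"])
# https://en.wikipedia.org/w/api.php?action=query&meta=siteinfo&format=json
```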

How do I check whether an API module is available?

You can use action=paraminfo to request information about the API modules and submodules (such as query+geosearch) that you want to invoke. The paraminfo.modules array in the response must contain a path key for each module and submodule; anything missing is not available.

If an API module isn't available and you know which extension implements it, you can check whether that extension is loaded by querying the siteinfo meta information with siprop=extensions and looking for its name in the returned list.
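The availability check on a decoded paraminfo response can be sketched like this (the helper name and the canned response below are illustrative, not part of the API):

```python
def available_modules(paraminfo_response, wanted):
    """Given a decoded action=paraminfo response, report which of the
    wanted module paths are present (and therefore available)."""
    found = {m.get("path")
             for m in paraminfo_response.get("paraminfo", {}).get("modules", [])}
    return {path: path in found for path in wanted}

# Canned, hypothetical response for illustration:
sample = {"paraminfo": {"modules": [{"path": "query+geosearch"}]}}
print(available_modules(sample, ["query+geosearch", "query+nosuchmodule"]))
# {'query+geosearch': True, 'query+nosuchmodule': False}
```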




How do I detect errors?

An error response from the API will set the MediaWiki-API-Error HTTP header and return an error structure. For an example error response, visit https://en.wikipedia.org/w/api.php?action=blah.
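A client-side error check following that description might look like this (the helper name and sample data are hypothetical; the header name and the error/code body structure are as described above):

```python
def api_error(headers, body):
    """Return the API error code, if any. `headers` is a mapping of HTTP
    response headers; `body` is the decoded JSON response."""
    code = headers.get("MediaWiki-API-Error")
    if code:
        return code
    return body.get("error", {}).get("code")  # fallback: inspect the body

# Canned example mirroring an action=blah error response:
headers = {"MediaWiki-API-Error": "unknown_action"}
body = {"error": {"code": "unknown_action",
                  "info": "Unrecognized value for parameter 'action': blah"}}
print(api_error(headers, body))
# unknown_action
```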


How do I get the wikitext of a page?

If you just want the raw wikitext without any other information whatsoever, it's best to use index.php's action=raw mode instead of the API: https://en.wikipedia.org/w/index.php?action=raw&title=Main_Page. Note that this will output plain wikitext without any formatting. See also the action=raw documentation.

To get more information about the page and its latest version, use the API: https://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Main_Page. See also the documentation for the prop=revisions module.

You can retrieve 50 pages per API request: https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&titles=Main_Page%7CArticles. This also works with generators.
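Extracting the wikitext from such a prop=revisions response can be sketched as follows. This assumes the default formatversion=1 layout, where the content sits under the '*' key; the helper name and canned response are illustrative.

```python
def page_wikitext(response):
    """Map page title -> latest revision wikitext for a
    prop=revisions&rvprop=content response (formatversion=1)."""
    pages = response["query"]["pages"]
    return {p["title"]: p["revisions"][0]["*"]
            for p in pages.values() if "revisions" in p}  # skip missing pages

# Canned, hypothetical two-page response:
sample = {"query": {"pages": {
    "1":  {"title": "Main Page", "revisions": [{"*": "wikitext here"}]},
    "-1": {"title": "No such page", "missing": ""},
}}}
print(page_wikitext(sample))
# {'Main Page': 'wikitext here'}
```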


How do I get the HTML of a page?

If you just want the HTML, it's best to use index.php's action=render mode instead of the API: https://en.wikipedia.org/wiki/Main_Page?action=render. See also the action=render documentation.

With the advent of RESTBase, on Wikimedia wikis you can instead request the cached HTML of a page, for example https://en.wikipedia.org/api/rest_v1/page/html/Main_Page. Unlike ?action=render this returns a complete HTML document (i.e. <html><head>various metadata</head><body>...</body></html>); you can use an HTML parsing library to get the inner HTML of the <body> tag (see the documentation).

To get more information distilled from the wikitext at parse time (links, categories, sections, etc.), you can use the action=parse module, selecting the data you need with its prop= parameter.
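For instance, the two kinds of request URL discussed above can be built like this (a sketch; the endpoint URLs assume English Wikipedia):

```python
from urllib.parse import urlencode, quote

title = "Main Page"
# action=parse returns the rendered HTML fragment plus parse-time data
# (links, categories, sections, ...) selected via prop=:
parse_url = "https://en.wikipedia.org/w/api.php?" + urlencode({
    "action": "parse", "page": title,
    "prop": "text|links|categories|sections", "format": "json"})
# The RESTBase route returns a complete cached HTML document:
rest_url = ("https://en.wikipedia.org/api/rest_v1/page/html/"
            + quote(title.replace(" ", "_")))
print(rest_url)
# https://en.wikipedia.org/api/rest_v1/page/html/Main_Page
```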


The default continuation behavior changed in MediaWiki 1.26. If you request additional data based on continue information from an API response, you must update your code: either handle the new-style continuation (merge the values of the continue object into your next request), or pass rawcontinue=1 to keep receiving continuation data in the old raw format.

Also, since MediaWiki 1.25 an improved output structure for JSON and PHP formats has been available if you add formatversion=2 to your requests. As of July 2015, this is still considered experimental because a few API modules may get further improvements in this mode. If you are willing to risk needing to make future changes to adapt, it's much nicer to process API results with formatversion=2.


Why am I getting an HTTP 403 error?

This could mean you are not passing a User-Agent HTTP header, or that your User-Agent is empty or blacklisted under the m:User-Agent policy. See API:Client code for more information. Also, it could mean that you are passing & in the query string of a GET request: Wikimedia blocks all such requests; use POST for them instead.
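Setting a descriptive User-Agent is straightforward; for example with Python's standard library (the tool name and contact address are placeholders you should replace):

```python
import urllib.request

req = urllib.request.Request(
    "https://en.wikipedia.org/w/api.php?action=query&meta=siteinfo&format=json",
    headers={"User-Agent": "MyTool/0.1 (contact@example.com)"},  # identify your client
)
# urllib.request.urlopen(req) would now send the request with the header set.
print(req.get_header("User-agent"))
# MyTool/0.1 (contact@example.com)
```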


Why do I get a 'readapidenied' error?

The wiki you are querying contains private content and requires users to log in to read all pages. This means that a client needs to be logged in to query any information at all through the API. See API:Login for more information. It's not currently possible to query the contents of whitelisted pages without logging in, even though they are available in the regular user interface.


Why do I get a 'badtoken' error?

This is usually because you are either not passing a token at all (read about tokens in the documentation of the module you are using) or because you are having trouble staying logged in. It's also possible that you are reusing a type of token that can't be reused (see the module documentation for details) or that you are using a token associated with an expired session. In general, when using cached tokens, refetch the token (see API:Tokens) and try again before giving up.
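Refetching a token via meta=tokens can be sketched like this. The fetch callable abstracts the HTTP layer and must send the same session cookies as the failing request; the names here are illustrative.

```python
def fresh_csrf_token(fetch):
    """Refetch a CSRF token before retrying an action. `fetch` takes a
    params dict, sends the request with your session cookies, and
    returns the decoded JSON response."""
    resp = fetch({"action": "query", "meta": "tokens",
                  "type": "csrf", "format": "json"})
    return resp["query"]["tokens"]["csrftoken"]

# With a canned response (anonymous sessions get the literal token "+\"):
sample = {"query": {"tokens": {"csrftoken": "+\\"}}}
print(fresh_csrf_token(lambda params: sample))
# +\
```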

Why do I get warnings instead of tokens (Action 'edit' is not allowed for the current user)?

You either don't have the right to execute the action you requested, or you are having trouble staying logged in.


Why do I get the 'mustbeposted' error?

The action you are attempting must be requested using HTTP POST. You probably clicked on an api.php URL in a browser or modified an existing URL in the browser's location field, but that results in an HTTP GET request. You have to use a library (such as the mediawiki.api ResourceLoader module) or utility that can make POST requests; usually you also have to provide it your session cookies and an API:Tokens token so MediaWiki can verify that you are the logged-in user with rights to perform the action. As a hack, you might be able to use the cURL command-line utility, providing it each API parameter with -F 'action=delete' -F 'token=hexadecimal stuff+\' and the necessary browser cookies with -H 'Cookie: your session cookies'. The Network panel of the browser developer tools window (Ctrl+Shift+I) in Firefox and Chromium has a "Copy as cURL" menu item that can help, but it's still fiddly.

Depending on what you want to do it's easier to learn how to use a bot or library that handles the details of login, cookies, and tokens for you.
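As a sketch of the same POST mechanics using Python's standard library instead of cURL (the action parameters, token value, and cookie handling are placeholders, not a working delete call):

```python
import urllib.request
from urllib.parse import urlencode

def post_request(url, params, cookie=None):
    """Build (not send) a POST request with a URL-encoded body."""
    headers = {"User-Agent": "MyTool/0.1 (contact@example.com)"}
    if cookie:
        headers["Cookie"] = cookie  # session cookies from your login
    data = urlencode(params).encode("utf-8")
    return urllib.request.Request(url, data=data, headers=headers, method="POST")

req = post_request(
    "https://en.wikipedia.org/w/api.php",
    {"action": "delete", "title": "Sandbox", "token": "TOKEN+\\", "format": "json"})
print(req.get_method())
# POST
```

Sending the built request with urllib.request.urlopen(req) would then perform the POST; without valid cookies and a real token the API would of course reject it.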


Why isn't feature X available through the API?

Not all features available in the user interface are available through the API. Such features haven't been implemented either because no one has gotten around to it yet or because no one has requested them. For information about filing feature requests, see above.


Why do my POST requests to Wikimedia wikis fail?

If you make API calls with POST requests, make sure these requests don't use Content-Type: multipart/form-data. This happens, for instance, if you use cURL and pass your POST parameters as an array. The Squid proxy servers used as frontends at the Wikimedia wiki farm don't handle that correctly, so an error is returned.

Instead, use the "key1=value1&key2=value2..." notation to pass the parameters as a string, similar to GET requests.

On other wikis, which you access directly, this makes no difference.

In addition, some software (such as cURL) sends an Expect: 100-continue header for longer POST requests (>1024 bytes). The Wikimedia wikis that sit behind Squid servers can't cope with this. If you are still getting HTML errors with POST requests and are not logged in, try setting a blank Expect header (e.g. using cURL on the command line, pass the option --header 'Expect:').
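With Python's urllib, which sends a plain URL-encoded body rather than multipart/form-data and no Expect: 100-continue header, a safe POST body looks like this (the edit parameters and token are placeholders):

```python
import urllib.request
from urllib.parse import urlencode

# Placeholder edit parameters; a real token must come from API:Tokens.
body = urlencode({"action": "edit", "title": "Sandbox",
                  "text": "Hello", "token": "TOKEN+\\"}).encode("utf-8")
req = urllib.request.Request(
    "https://en.wikipedia.org/w/api.php", data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded",
             "User-Agent": "MyTool/0.1 (contact@example.com)"})
# The body is a plain key1=value1&key2=value2 string, which the
# Wikimedia frontend caches handle correctly.
print(req.get_header("Content-type"))
# application/x-www-form-urlencoded
```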

Do overly long API URLs really not work?

When calling the API with a GET request, there is an upper limit on the URL length. The limit varies from site to site; on Wikimedia sites it is roughly 8100 characters. The way around this limit is to use POST instead (you may also need to set the Expect header as described above).
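A client can decide between GET and POST mechanically from the encoded query length. A sketch (the 8000-character default threshold is an assumption that sits safely under the ~8100-character Wikimedia limit mentioned above; other sites differ):

```python
from urllib.parse import urlencode

def method_for(params, limit=8000):
    """Choose GET or POST based on the encoded query-string length."""
    query = urlencode(params)
    return ("POST" if len(query) > limit else "GET"), query

print(method_for({"action": "query", "titles": "Main Page"})[0])
# GET
many = "|".join(f"Page {i}" for i in range(2000))  # an oversized titles list
print(method_for({"action": "query", "titles": many})[0])
# POST
```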