API:FAQ/zh

Note: this page is still under construction.

Please also read the quick start guide. It answers questions not covered here and collects links to a number of other useful pages.

How do I get help?

 * 1) Read this FAQ
 * 2) Try to find the answer to your question in the API documentation here or on your wiki's api.php
 * 3) If you can't find the answer to your question on the web, ask on the mediawiki-api mailing list.
 * 4) Ask in the IRC channel

How do I report a bug or request a new feature?
If you find a bug in the API, or want to request a new feature, you can report it on Phabricator. Before filing a bug, please search for existing bugs: select the "MediaWiki" product and the "API" component and check whether the bug has already been reported. If the bug or feature request concerns an extension (such as AbuseFilter or FlaggedRevs), file it against the corresponding component instead (under the "MediaWiki extensions" list).

How do I call the API?
Send HTTP requests to api.php. For example, on the English Wikipedia the URL is http://en.wikipedia.org/w/api.php. Most wikis have a similar API URL; in most cases you can simply replace index.php in the URL with api.php. Since 1.17, MediaWiki supports Really Simple Discovery: the source of every HTML page contains a link to an RSD descriptor that tells you where to find the API. If you can't find the API URL of a third-party wiki (one not hosted by the Wikimedia Foundation), contact the wiki's owner. Note that some wikis have the API disabled.
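As a concrete illustration, here is a minimal sketch (Python, standard library only) of assembling such a request URL; build_api_url is a hypothetical helper, and en.wikipedia.org merely stands in for whatever wiki you are targeting:

```python
from urllib.parse import urlencode

# Example endpoint; substitute your own wiki's api.php URL.
API_URL = "https://en.wikipedia.org/w/api.php"

def build_api_url(base, **params):
    """Return a GET URL for api.php with the given query parameters."""
    return base + "?" + urlencode(params)

url = build_api_url(API_URL, action="query", meta="siteinfo", format="json")
```

The same parameters work for any api.php endpoint discovered via RSD or by rewriting index.php.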

How do I control the output format?
Pass the format parameter in the query string. See the list of output formats for more information. Note that an effort is underway to remove all output formats except JSON, so try to use JSON whenever possible.
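For illustration, a sketch of requesting JSON and decoding the result; the response body shown is a trimmed, illustrative example, not a live API response:

```python
import json
from urllib.parse import urlencode

# Ask for JSON explicitly via the format parameter:
query = urlencode({"action": "query", "meta": "siteinfo", "format": "json"})

# A trimmed, illustrative JSON body of the kind such a query returns:
sample_response = '{"query": {"general": {"sitename": "Wikipedia"}}}'
data = json.loads(sample_response)
sitename = data["query"]["general"]["sitename"]
```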

How do I detect errors?
An error response from the API will set the MediaWiki-API-Error HTTP header and return an error structure in the response body. For an example error response, see http://en.wikipedia.org/w/api.php?action=blah. See also the documentation about errors and warnings.
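A sketch of checking a decoded JSON body for an error; api_error is a hypothetical helper, and the error shape follows the documented {"error": {"code": ..., "info": ...}} structure (the bodies below are illustrative, not live responses):

```python
import json

def api_error(response_text):
    """Return the error code from a JSON API response, or None on success."""
    data = json.loads(response_text)
    if "error" in data:
        return data["error"].get("code")
    return None

# Illustrative bodies (the success body is trimmed down):
err = api_error('{"error": {"code": "unknown_action", "info": "..."}}')
ok = api_error('{"query": {}}')
```

Checking the MediaWiki-API-Error response header gives the same code without parsing the body.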

How do I get the content of a page (wikitext)?
If you just want the raw wikitext without any other information whatsoever, it's best to use index.php's action=raw mode instead of the API: http://en.wikipedia.org/w/index.php?action=raw&title=Main_Page. Note that this will output plain wikitext without any formatting. See also the action=raw documentation.

To get more information about the page and its latest version, use the API: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Main_Page. See also the documentation for the prop=revisions module.

You can retrieve up to 50 pages per API request: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&titles=Main_Page|Articles. This also works with generators.
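The batching above can be sketched as follows; revisions_query is a hypothetical helper that joins titles with a pipe and enforces the 50-title limit:

```python
from urllib.parse import urlencode

def revisions_query(titles):
    """Build query parameters fetching the wikitext of up to 50 pages at once."""
    if len(titles) > 50:
        raise ValueError("the API accepts at most 50 titles per request")
    return urlencode({
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "titles": "|".join(titles),  # titles are separated by a pipe
        "format": "json",
    })

qs = revisions_query(["Main_Page", "Articles"])
```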

How do I get the content of a page (HTML)?
If you just want the HTML, it's best to use index.php's action=render mode instead of the API: http://en.wikipedia.org/w/index.php?action=render&title=Main_Page. See also the action=render documentation.

To get more information distilled from the wikitext at parse time (links, categories, sections, etc.), use the API parse module: http://en.wikipedia.org/w/api.php?action=parse&page=Main_Page. See also the documentation for the action=parse module.

Why do I get an HTTP 403 error?
This could mean that you're not passing a User-Agent HTTP header, or that your User-Agent is empty or blacklisted under the User-Agent policy. See the quick start guide for more information. It could also mean that you're passing credentials such as lgpassword in the query string of a GET request: Wikimedia blocks all such requests; use POST for them instead.
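A sketch of attaching a descriptive User-Agent header with Python's standard library; the bot name and contact URL are placeholders you would replace with your own, per the User-Agent policy:

```python
from urllib.request import Request

# Placeholder User-Agent with contact information; see the
# User-Agent policy for what a real one should contain.
headers = {"User-Agent": "MyWikiBot/1.0 (https://example.org/contact)"}
req = Request("https://en.wikipedia.org/w/api.php?action=query&format=json",
              headers=headers)
```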

Why do I get the readapidenied error?
The wiki you're querying contains private content and requires users to log in in order to be able to read all pages. This means that a client needs to be logged in to query any information at all through the API. See the quick start guide for more information. It's not currently possible to query the contents of whitelisted pages without logging in, even though they're available in the regular user interface.

Why do I get the badtoken error?
This is usually because you're either not passing a token at all (read about tokens in the documentation of the module you're using) or because you're having trouble staying logged in. It's also possible you're reusing a type of token that can't be reused (see module documentation for details) or that you're using a token that's associated with an expired session. In general, when using cached tokens, refetch the token (generally using action=tokens) and try again before giving up.
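The refetch-and-retry advice above can be sketched as follows; fetch_token and do_edit are hypothetical callables wrapping your actual HTTP calls, with do_edit returning the decoded API response:

```python
def edit_with_retry(fetch_token, do_edit):
    """Try an action with a (possibly cached) token; on badtoken, refetch once.

    fetch_token() -> str and do_edit(token) -> dict are supplied by the
    caller and wrap the real HTTP requests.
    """
    token = fetch_token()
    result = do_edit(token)
    if result.get("error", {}).get("code") == "badtoken":
        token = fetch_token()  # stale cached token: fetch a fresh one
        result = do_edit(token)
    return result
```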

Why do I get warnings instead of tokens (Action 'edit' is not allowed for the current user)?
You either don't have the right to execute the action you requested, or you're having trouble staying logged in.

Why isn't such-and-such available through the API?
Not all features available in the user interface are available through the API. Such features weren't implemented either because no one has gotten around to it yet or because no one has requested them. For information about filing feature requests, see above.

Why does my API call on Wikimedia wikis just return an HTML error?
If you use API calls with POST requests, make sure that these requests don't use Content-Type: multipart/form-data. This happens, for instance, if you use cURL to access the API and pass your POST parameters as an array. The Squid proxy servers used as frontends at the Wikimedia wiki farm don't handle that correctly, so an error is returned.

Instead, use the "key1=value1&key2=value2..." notation to pass the parameters as a string, similar to GET requests.

On other wikis, which you access directly, this doesn't make a difference.

In addition, some software (such as cURL) sends an Expect: 100-continue header for longer POST requests (over 1024 bytes). The Wikimedia wikis that go through Squid servers can't cope with this. If you are still getting HTML errors with POST requests and are not logged in, try setting a blank Expect header (e.g. when using cURL on the command line, pass the option --header "Expect:").
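The same fix sketched with Python's standard library, which sends url-encoded POST bodies rather than multipart/form-data by default; the blank Expect header suppresses any "Expect: 100-continue" handshake (all names below are standard HTTP and stdlib, nothing wiki-specific):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Parameters go in an application/x-www-form-urlencoded body, not
# multipart/form-data; the blank Expect header suppresses
# "Expect: 100-continue" on large bodies.
body = urlencode({"action": "query", "titles": "Main_Page", "format": "json"})
req = Request("https://en.wikipedia.org/w/api.php",
              data=body.encode("utf-8"),
              headers={"Expect": ""})
```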

Is it true that very long API URLs won't work?
There is a maximum length for URLs used with the API when making GET requests. The limit varies from site to site; Wikimedia's is roughly 8100 characters. To get around it, use POST requests instead (you may also need to set the Expect header, as described above).
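One way to apply this advice, sketched under the assumption of an 8000-character safety margin below the limit (MAX_GET_URL and make_request are hypothetical names):

```python
from urllib.parse import urlencode
from urllib.request import Request

MAX_GET_URL = 8000  # assumed safety margin below the ~8100-character limit

def make_request(base, params):
    """Use GET while the URL fits, and fall back to POST when it would not."""
    qs = urlencode(params)
    if len(base) + 1 + len(qs) <= MAX_GET_URL:
        return Request(base + "?" + qs)            # GET
    return Request(base, data=qs.encode("utf-8"))  # POST

short_req = make_request("https://en.wikipedia.org/w/api.php",
                         {"action": "query", "titles": "Main_Page"})
long_req = make_request("https://en.wikipedia.org/w/api.php",
                        {"action": "query", "titles": "X" * 9000})
```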