API talk:Main page

Subpages

 * General Archive


 * Completed Requests


 * Login Functionality


 * Page Save Functionality: everything related to saving pages back to the wiki using the API

Please add your suggestions and ideas to Bugzilla with component field set to "API".

possible category problem
Hi, using the API on large categories gives me fewer results than it should; see en:Category:Uncategorized from September 2006 for example. My program using the query API reports 4712 articles, when there are actually over 5000 articles in that category. I have tested my software and I don't see that it is doing anything wrong; indeed it works perfectly on smaller categories, though I still don't rule out that I am at fault. I would be grateful if someone could use their query API software on that category and see how many results they are given. Thanks Martin 09:57, 17 September 2006 (UTC)


 * Using the C# example from the en user manual gives even fewer results, so I think something somewhere is going wrong with large categories. Martin 09:51, 18 September 2006 (UTC)


 * Hi Martin, I will take a look in the next few days. Thanks for reporting. --Yurik 13:35, 18 September 2006 (UTC)
 * Yurik fixed a related bug in queryapi.php a few days ago. Phe 15:33, 18 October 2006 (UTC)
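A minimal PHP sketch of paging through a large category with continuation, for anyone hitting the same symptom. It assumes the later api.php list=categorymembers module and its cmcontinue parameter (not the query.php interface discussed in this thread), and omits the User-Agent header and error handling a production client would need:

<?php
// Count the members of a large category by following continuation tokens
// until the server stops returning one. Simplified: a robust client would
// pass back the complete 'continue' block, not just cmcontinue.
$base = 'https://en.wikipedia.org/w/api.php?action=query&list=categorymembers'
      . '&cmtitle=' . urlencode('Category:Uncategorized from September 2006')
      . '&cmlimit=500&format=json';
$extra = '';
$count = 0;
do {
    $result = json_decode(file_get_contents($base . $extra), true);
    $count += count($result['query']['categorymembers']);
    $extra = isset($result['continue']['cmcontinue'])
        ? '&cmcontinue=' . urlencode($result['continue']['cmcontinue'])
        : null;
} while ($extra !== null);
echo "$count members\n";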

Protected pages
Hi. I have another request... would it be possible to add  or something similar to retrieve the list of protected articles? I could not find any other way (other than downloading ) for doing so. Thanks! Paolo Liberatore 12:46, 3 October 2006 (UTC)
 * I am not sure the list of protected articles is available from the database. If it is, I can certainly expose it. --Yurik 16:24, 3 October 2006 (UTC)
 * That would be great! Thank you. Paolo Liberatore 16:16, 10 October 2006 (UTC)
 * As an alternative that could possibly be simpler to implement, the entire record of an article from the page table could be retrieved via . Paolo Liberatore 17:52, 6 November 2006 (UTC)
 * I just noticed there is a page_restrictions field in the page table. I will have to find out what it might contain (the Page table documentation is not very descriptive), and afterwards add that to prop=info. Filtering by that field is problematic because it's a tiny blob (I don't think MySQL can handle blob indexing, but I might be wrong). --Yurik 22:31, 6 November 2006 (UTC)
 * Here is my reading of the source code. The point where page_restrictions is interpreted is . There,   reads this field, parses it and stores the result in the   array. This array is then used by   to return an array containing the groups that are allowed to perform a given action. As far as I can see, this array is such that   is an array that contains the groups allowed to perform the edit action, etc.; except that, if this array is empty, no restrictions (other than the default ones) apply.
 * In the database dump, the page_restrictions field appears to be either empty or something like  or just  . The explanation of the table says that this is a comma-separated list, but the comments in Title.php mention that this is an old format (I think that the value 'sysop' is actually in this format; no page in the current enwiki dump has a comma in its page_restrictions field). Paolo Liberatore 15:52, 11 November 2006 (UTC)
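A small PHP sketch of the parsing logic described above, reconstructed from the two formats observed in the dump; the function name here is made up, and the real implementation lives in Title.php:

<?php
// Parse a page_restrictions blob into action => allowed-groups arrays.
function parseRestrictions($blob) {
    $restrictions = array();
    if (trim($blob) === '') {
        return $restrictions;  // empty: only the default restrictions apply
    }
    foreach (explode(':', trim($blob)) as $entry) {
        if (strpos($entry, '=') === false) {
            // Old comma-separated format, e.g. "sysop": a plain group list
            // that applies to the edit and move actions.
            $groups = explode(',', trim($entry));
            $restrictions['edit'] = $groups;
            $restrictions['move'] = $groups;
        } else {
            // New format, e.g. "edit=autoconfirmed:move=sysop".
            list($action, $groups) = explode('=', trim($entry), 2);
            $restrictions[trim($action)] = explode(',', trim($groups));
        }
    }
    return $restrictions;
}

var_dump(parseRestrictions('edit=autoconfirmed:move=sysop'));
var_dump(parseRestrictions('sysop'));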

Category belonging to an article
Actually query.php can't get the categories of a redirect, but there are sensible uses of such categorized redirects, e.g. a book's original title is a redirect to the local book title and can be categorized. It would be nice if api.php solved this problem. Phe 15:37, 18 October 2006 (UTC)
 * Afaics fixing it in query.php seems not difficult: in genPageCategoryLinks use $this->existingPageIds instead of $this->nonRedirPageIds, but I'm unsure whether it'll break existing bots... Phe 15:42, 18 October 2006 (UTC)
 * Added in the new api. --Yurik 06:32, 14 May 2007 (UTC)

How to retrieve the html format?
I am interested in retrieving a formatted HTML version of the content of an article, with the links parsed and pointing to Wikimedia pages, but without tabs, menu options, etc.: the article as you can see it inside Wikipedia. --Opinador 11:26, 5 November 2006 (UTC)
 * The easiest way would probably be to add action=render to the regular wiki request: http://en.wikipedia.org/w/index.php?title=Main_Page&action=render. I plan to add it to the API, but it will not happen soon; currently the internal wiki code is not very well suited for this kind of request. --Yurik 16:17, 6 November 2006 (UTC)
 * Thank you, I think that is enough for me at the moment. But it would probably be useful to have this approach in the API, or at least a parameter to force template expansion before returning the page results. --Opinador 10:26, 7 November 2006 (UTC)
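A minimal PHP sketch of the action=render trick from the reply above; error handling and a User-Agent header are omitted:

<?php
// Fetch just the parsed article body: no skin, tabs or menus.
$html = file_get_contents(
    'https://en.wikipedia.org/w/index.php?title=Main_Page&action=render'
);
echo $html;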

Diffs
Are there any plans to provide raw diffs between revisions? This would be particularly helpful for vandalism-fighting bots that check text changes for specific keywords and other patterns. The advantage would be that diffs require much less bandwidth and therefore improve bot response time. What do you think? Sebmol 15:24, 21 November 2006 (UTC)

Count
Would it be possible to add an option which would simply return the number of articles which satisfy the supplied conditions? This would be especially helpful when counting large categories, like en:Category:Living people. HTH HAND —Phil | Talk 15:59, 29 November 2006 (UTC)
 * I second that emotion. Zocky 23:26, 9 December 2006 (UTC)

version/image info
I think it would be useful to get information about the software version (i.e. core version, extensions, hooks/functions installed) and images (e.g. a checksum would be useful to see if an image has changed). -Sanbeg 21:08, 7 December 2006 (UTC)
 * Some of it is available through the siteinfo request. More will be done later. --Yurik 01:59, 22 May 2007 (UTC)

Select category members by timestamp
Hello, is it possible to retrieve category members with a parameter that lists only those starting from a certain timestamp? This should be possible, since a timestamp is stored in the categorylinks database table. Is this the correct place to ask, or should I file a bug in Bugzilla? Bryan 12:18, 24 December 2006 (UTC)


 * Similarly, getting all pages that embed a template by timestamp would be useful for monitoring when a template gets used, because it may contain structured data that can be reflected on other websites. 82.6.99.21 15:14, 9 February 2007 (UTC)
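For reference, the parameters later added for category members look roughly as follows (cmsort and cmstart are the names used in later MediaWiki releases, so this shape is an assumption relative to the interface discussed here):

api.php?action=query&list=categorymembers&cmtitle=Category:Example&cmsort=timestamp&cmstart=2006-12-24T00:00:00Z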

Formatting bug
When I call the API with "action=opensearch", it will only return results in the JSON format. It doesn't matter what the search term is or what I pass to the format argument. Forgive me if this isn't the proper place to report a bug like this. 68.253.13.166 20:14, 3 February 2007 (UTC)


 * Bug 9143 is tracking this, and has been postponed. It has more details. --Yurik 02:01, 22 May 2007 (UTC)

feedwatchlist bug
Even after successfully logging in with "action=login", if we don't use cookies but rather pass the lgusername, lguserid and lgtoken returned as parameters along with "action=feedwatchlist", then we get an HTTP 500 error (tried this with http://en.wikipedia.org/w/api.php just now). The documentation says that both parameters and cookies should work. Jyotirmoyb 05:52, 20 February 2007 (UTC)
 * If you are trying to reproduce this bug with a browser, make sure to clear the cookies set by a successful login. Jyotirmoyb 05:55, 20 February 2007 (UTC)
 * Is it normal behaviour that watchlist topics appear approx. 1 day late? Thanks, 84.160.6.41 21:05, 9 March 2007 (UTC)


 * Both issues have been fixed -- the HTTP 500 error now returns a "not logged in" feed element, and the 1-day delay has been fixed. --Yurik 02:02, 22 May 2007 (UTC)

YAML Formatted Errors
I'm working on writing some methods to access the API, requesting my results in YAML format. It works fine when I send a proper request, which is most of the time, but occasionally I do something wrong and get the helpful error message. However, when my code processes that YAML result, I get an error. Specifically, it reports "ArgumentError: syntax error on line 12, col 2: ` For more details see API'". Is this because the result is not properly formatted YAML, or a Ruby error? I don't know very much about YAML, learning as I go along, but I like the format so far. Is anyone else having this issue, and is it an issue at all? Thanks. Eddie Roger 04:21, 21 February 2007 (UTC)
 * Is it still broken? --Yurik 02:04, 22 May 2007 (UTC)

JSON Callback
It would be really nice if the API had a JSON callback, à la Yahoo Web Services. This way, any website could make an AJAX request to Wikipedia! --Bkkbrad 17:16, 26 February 2007 (UTC)
 * Done. --Yurik 18:45, 15 May 2007 (UTC)
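Usage looks like the following; the callback name is arbitrary, and the response body arrives wrapped in a call to that JavaScript function, e.g. processResult({...});:

api.php?action=query&prop=info&titles=Main%20Page&format=json&callback=processResult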

mw-plusminus data
MediaWiki recently added mw-plusminus tags to Special:Recentchanges and Special:Watchlist, which display the number of characters added/removed by each edit. I was wondering if it may be possible for api.php to both retrieve this info and somehow append the data to query requests that involve page histories, recentchanges, etc. I would most like to see this available in usercontribs queries, as that data is currently not retrievable through any other medium (save for pulling up each edit one at a time and comparing the added/removed characters). Thanks. AmiDaniel 23:55, 26 February 2007 (UTC)
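This was eventually addressed by exposing revision sizes; a request shaped like the one below (rcprop=sizes is the property name from later MediaWiki releases, so an assumption relative to this discussion) returns old/new length pairs from which the character delta can be computed:

api.php?action=query&list=recentchanges&rcprop=title|sizes|user|timestamp&format=jsonfm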

Query Limits
Hi all, another suggestion from me. On the API it is stated that the query limit of 500 items is waived for bot-flagged accounts, up to the MediaWiki max query of 5000. Would it be possible to extend this same waiver to logged-in admin accounts? I have multiple instances in some of my more recent scripts that query, quite literally, thousands of items (backlinks, contribs, etc.) that are primarily only used by myself and other administrators on enwiki. Having to break these queries up into units of 500 causes a rather dramatic performance decrease on the user's end, and I can't imagine it's particularly friendly on the servers either (or rather, submitting two or three requests of 5000 each would likely be less harmful to the servers than twenty or thirty requests of 500 each). Generally, admins are considered similarly well-trusted not to abuse such abilities, and they are also small in number on all projects. Also, the MediaWiki software itself allows similar queries of up to 5000, and I must presume that loading this data through MediaWiki's regular interface is far more burdensome on the servers than submitting the same query through the API. This is clearly more a political matter than a technical one, but I'd like to know if there are any objections to implementing such a change. Thanks. AmiDaniel 05:05, 1 March 2007 (UTC)

ApiQuery
There should be a possibility of registering modules in $mQueryListModules, $mQueryPropModules and $mQueryMetaModules, for example by setting variables in LocalSettings.php:

$wgApiQueryListModules["listname"] = "ApiQueryMethodName";

and then in ApiQuery.php something like:
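(The original snippet is missing here; a minimal sketch of the intended merge, assuming the proposed global above, might be:)

global $wgApiQueryListModules;
$this->mQueryListModules = array_merge(
    $this->mQueryListModules,
    $wgApiQueryListModules
);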

83.12.240.122 15:39, 15 March 2007 (UTC) Krzysztof Krzyżaniak (eloy) 


 * Thanks, done. --Yurik 03:28, 15 May 2007 (UTC)

partial content
It is great to be able to retrieve Wikipedia's content via an API, but nobody is really going to need the whole page every time.

Would it be possible to pass a variable which adds only the lead paragraph (everything before the TOC) to the XML, not the whole content? This would be ideal for embedding into another context, with a "View Full Article" link added underneath.

Any chance of this becoming a possibility? --83.189.27.110 00:49, 19 March 2007 (UTC)


 * I'd also consider it useful to be able to retrieve a single section from the article, so that one could first load the article lead and the TOC, and then see only some of the sections. This would be a great improvement in efficiency, especially for people on slow links (e.g., mobile) and for "community" pages, such as en:Wikipedia:Administrator's noticeboard, some of which tend to be very long. Tizio 15:19, 20 March 2007 (UTC)
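Section retrieval did eventually land; a request shaped like the following (rvsection is the parameter name from later releases, an assumption relative to this thread) returns only section 0, i.e. the lead:

api.php?action=query&prop=revisions&rvprop=content&rvsection=0&titles=Main%20Page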

external Weblinks
Hi all, the API is really practical, but I miss a list of all external weblinks from a page. In de:Spezial:Linksearch all weblinks of a page are presumably already listed; the API could simply display the list..? --Luxo 20:42, 20 March 2007 (UTC)
 * done. --Yurik 02:04, 22 May 2007 (UTC)
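The resulting request looks like this; prop=extlinks lists the external links recorded for the given pages:

api.php?action=query&prop=extlinks&titles=Main%20Page&format=jsonfm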

Broken POST?
I just downloaded 1.9.3 to a new dev box of mine, and my scripts seem to have all broken due to a problem with POST. I've switched to GET to test this, and it works fine, but when POSTing, I get the generic help output and not the YAML results I expect. Did someone change something with the latest release? Eddie Roger 03:04, 21 March 2007 (UTC)


 * Bump? Anyone? I've kept playing with this with no luck. Can anyone at least point me towards the source file that is responsible for returning the results or handling POST requests?


 * Update: Sorry, Ruby was broken, not MediaWiki. Ruby 1.8.4 had a bug in its post method. Upgrading fixed. Hope that helps someone.

Watchlist not working any longer?
Hi Yuri. It seems that the watchlist feature of api.php is not working any longer :-( . It is not even listed in the help! I went to svn, but I don't see any recent change that justifies the change. Tizio 14:33, 2 April 2007 (UTC)
 * Oops, just found this: . Sorry for the duplicate. Tizio 14:39, 2 April 2007 (UTC)


 * Domas removed it because it "failed to conform to db standards" and advised developers to "just use Special:Watchlist", which sucks, but so be it. Unless it is completely rewritten to conform to db standards (not using the master db, but instead the one dedicated to watchlists, etc.), it will be gone. AmiDaniel 18:16, 2 April 2007 (UTC)
 * Nuts, it was soooo useful :-( Especially on a slow connection, the difference is really evident. Tizio 15:04, 3 April 2007 (UTC)


 * I changed the DB group being used. Unless there is another issue with the query, we should get someone to reinstate it. --Yurik 03:29, 15 May 2007 (UTC)


 * Thanks!!!! (as I said, that was really useful; as the rest of the API, btw). Tizio 12:03, 19 May 2007 (UTC)

returning the beginning/end of a document
Is there any possibility of an API function that returns the first or last x bytes of a document? An increasing amount of metadata is stored at the beginning or end of Wikipedia articles, talk pages, etc., and tools like "popups" that preview pages don't need to transmit the whole document to preview part of it. I am thinking of client, server, and bandwidth efficiency. Outriggr 01:43, 20 April 2007 (UTC)

Current status ?
Hi,

I have started developing a tool in Java to deal with some maintenance tasks, especially for disambiguation pages. For the moment, I have planned two parts:
 * Much like CorHomo, a way to find and fix all articles linking to a given disambiguation page.
 * A way to find and fix all links to disambiguation pages in a given article.

To finish my tool, I need a few things done in the API:
 * Retrieving the list of links from an article.
 * Submitting a new version of an article.

Do you have any idea when these features will be available in the API?

How can I help? (I am rather a beginner in PHP; I have just developed a MediaWiki extension, here, but I can learn.)

--NicoV 13:24, 23 April 2007 (UTC)


 * For your "retrieving the list of links from an article" problem, check the query.php what=links query. It's already been implemented, just not brought over into api.php. As for submitting a new version of an article, I'm afraid there is no ETA on this, and I'm honestly not sure how Yurik is going to go about doing it. For now, I'd recommend you do it as it's been done for years: submitting an edit page with "POST". It's not the greatest solution, but it works. If you'd like to help, please write to w:User_talk:Yurik; I'm sure your help is needed! AmiDaniel 18:10, 23 April 2007 (UTC)


 * Thanks for the answer. I was hoping to use a single API, but no problem with using query.php for the links. For submitting a new version, do you know where I can find an example of submitting an edit page? --NicoV 20:33, 23 April 2007 (UTC)


 * Unfortunately, dealing with forms, especially MediaWiki forms (as submitting them typically requires fetching user data from cookies), is not particularly easy in Java. For a basic example of submitting a form in Java, see this IBM example. For most of the stuff I've developed, instead of trying to construct my own HttpClient and handle cookies myself, I've simply hooked into the DOM of a web browser and let it take care of the nitty-gritty of sending the POSTs and GETs. You can see an example of this method using IE's DOM here. I've not found a good way to get the latter to work with Java, as Java does not have very good ActiveX/COM support, though it can sort of be done by launching a separate browser instance. I'm afraid I can't really help much more, unfortunately, as it's a problem that's not been well solved by anyone. It's easy to build MediaWiki crawlers, but not so easy to build bots that actually interact with MediaWiki. AmiDaniel 21:20, 23 April 2007 (UTC)
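For comparison, a rough PHP/cURL sketch of the "submit an edit page with POST" approach described above. The wp* field names follow MediaWiki's edit form; obtaining wpEditToken and wpEdittime by first fetching the edit form, and logging in to populate the cookie jar, are assumed to have happened already:

<?php
// Placeholder values: in a real bot these come from the fetched edit form.
$editToken = '0123456789abcdef+\\';
$editTime  = '20070523000000';
$fields = array(
    'wpTextbox1'  => "New page text goes here.\n",  // full replacement wikitext
    'wpSummary'   => 'automated fix',
    'wpEditToken' => $editToken,
    'wpEdittime'  => $editTime,  // used by MediaWiki for edit-conflict detection
    'wpSave'      => 'Save page',
);
$ch = curl_init('https://en.wikipedia.org/w/index.php?title=Sandbox&action=submit');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookies.txt');  // logged-in session cookies
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);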


 * A en-wiki user has written something for page editing in java: en:User:MER-C/Wiki.java. HTH. Tizio 13:10, 26 April 2007 (UTC)


 * I have used parts of the java class you linked, and it's working :) Thanks --NicoV 07:57, 22 May 2007 (UTC)

Ok, thanks. I will take a closer look at the examples. --NicoV 05:38, 27 April 2007 (UTC)


 * Some have been done (links, etc). See the main page. --Yurik 11:48, 14 May 2007 (UTC)


 * Thanks, I will try to use it when it's available on the French Wikipedia --NicoV 20:28, 14 May 2007 (UTC)

Tokens
I, personally, don't understand why state-changing actions currently require tokens. Shouldn't the lg* parameters be enough to determine whether the client is allowed to perform a certain action? If so, why do you need tokens?

On a side note, I intend to start writing a PHP-based bot using this API, and will try to include every feature the API offers. Since both this talk page and its parent page have been quiet for two weeks, I was wondering if new API features are still posted here. If that is the case, I'll monitor this page and add new features to my (still to be coded) bot when they appear. I'll keep you informed on my progress. --Catrope 17:20, 9 May 2007 (UTC)


 * The motivation for tokens at Manual:Edit token is that they are used to prevent session hijacking. Tizio 19:02, 15 May 2007 (UTC)
 * Ah, I understand now. Didn't think it through as deeply as you did. --Catrope 20:19, 22 May 2007 (UTC)
 * Would it be possible to at least grab editing tokens via the API, and require session data, cookie tokens, etc. to get them? The same can already be done with index.php (albeit mixed up in a lot of other HTML). Gracenotes 17:04, 4 June 2007 (UTC)
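This is essentially what was later implemented; a request shaped like the following (intoken is the parameter name from later releases, an assumption relative to this thread) returns an edittoken alongside the page info, provided the session cookies accompany the request:

api.php?action=query&prop=info&intoken=edit&titles=Main%20Page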

Login not working ?
Hi, is there a problem with the "login" action? Whenever I try (in the last hour), I get the following message:  -- NicoV 20:57, 21 May 2007 (UTC)


 * Yes, unfortunately I had to disable the login action until a more secure solution is implemented. The current implementation allowed countless login attempts, allowing crackers to break weak passwords by brute force. Disabling it was the only solution. Any help with fixing the login module would be greatly appreciated and would bring it back faster. --Yurik 01:58, 22 May 2007 (UTC)


 * Would it be possible, while disabling it, to return an understandable message, like result=Illegal or another value like result=LoginDisabled? Currently, when asking for an XML result, the response is not XML. My tool wasn't prepared for this :). That way, when I get this answer, I would be able to validate the login using another method (by going through the Special:Login page).
 * Concerning help, I am not very good at PHP, so I doubt I can help you much. I suppose simple tricks like delaying the answer of the login action wouldn't be sufficient. --NicoV 07:17, 22 May 2007 (UTC)


 * The format is obviously not handled properly; even if everything else fails, the proper format should be used. Please file it as a bug. Thanks for the LoginDisabled suggestion -- I might implement something along those lines later. Now, if only someone could help with the login PHP code :) It's not hard, just annoying :( --Yurik 21:07, 22 May 2007 (UTC)

I just sent a patch to Yurik that fixes these security vulnerabilities (at least the big one that Yurik mentioned) and re-enables the login, so this will hopefully make it into the repo by tomorrow and be synced up on Wikimedia's servers within the week. Sorry for the inconvenience. AmiDaniel 10:36, 23 May 2007 (UTC)


 * That's great news, but in the meantime, the Special:Login page has just been modified with a captcha (at least on the French Wikipedia). So the method I was using to edit pages is not working any more :( Is there a way to edit pages with a tool? --NicoV 15:30, 23 May 2007 (UTC)

Changes in revision query
Was there a recent change in the way revision queries are processed? It seems like the revision ids are no longer returned automatically, whereas before they were, and api.php still claims they are. Also, it seems oldid (i.e. the previous revision's ID) is no longer returned, and there's no way to enable that option within the query. Any pointers on how I can get that information? Sebmol 05:52, 22 May 2007 (UTC)
 * How can we get the revision ids? --NicoV 07:49, 22 May 2007 (UTC)
 * Because not everyone needed the IDs, they have been added to the list of available properties. To get them, simply include "ids" as one of the rvprop values, e.g. api.php?action=query&prop=revisions&titles=Main%20Page&rvprop=ids|timestamp|user|comment. Both pageid and revid are there. Sorry, but I had to make this change to make it cleaner. --Yurik 21:04, 22 May 2007 (UTC)
 * Ok, thanks. --NicoV 21:36, 22 May 2007 (UTC)

Feature request for links
Would it be possible to have an option on the "links" request to filter out links that are imported by templates? I am currently working on a tool to help fix links to disambiguation pages, and I'd be interested in getting only the links that are directly in the page.

This request could probably also be applied to other functions (templates, extlinks, ...).

PS: Thanks for the API --NicoV 21:36, 22 May 2007 (UTC)


 * It might be possible, but I am afraid this might be a bit complex: the API would have to enumerate recursively every template used by the pages, and subtract all links found in templates from those on the page itself. The problem is that all links from the template pages would be found, not just those included in other pages (<includeonly>, etc.). Plus, some links might appear both on the page and in a template, and doing this would eliminate them.
 * The real solution would be to add a bit-flags (short int) field to the pagelinks table to record, during page save/update, which links are direct and which come from templates. --Yurik 03:11, 23 May 2007 (UTC)

Changes to redirects
Have there been any changes to redirects? For the past few days, querying for content revisions no longer seems to automatically redirect when using &redirects. Thanks for the API! Tom De Smedt
 * Shouldn't be. Please provide an example of the broken URL request. --Yurik 18:47, 25 May 2007 (UTC)
 * Yurik, here is an example request: http://en.wikipedia.org/w/api.php?action=query&redirects&format=xml&prop=revisions&rvprop=content&titles=wolf - this query used to return the content of the "Gray wolf" article, but now returns a redirect page. --Tomdesmedt 18:59, 27 May 2007 (UTC)
 * Thanks, fixed in r22502. --Yurik 08:06, 28 May 2007 (UTC)
 * OK, works fine now! Thanks again for the API --Tomdesmedt 12:00, 29 May 2007 (UTC)

Limits
Limits on some queries (like logevents) are lower than those allowed via the common MW interface. - VasilievVV 15:34, 29 May 2007 (UTC)
 * RC (list=recentchanges) also has this problem. api.php limits it to 500, but I can get up to 5000 using the regular interface. --Catrope 15:37, 29 May 2007 (UTC)


 * This has recently been changed to allow the limits of 5000 and 50 for fast and slow queries, respectively, for sysops as well as bots. We're hesitant to enable higher limits for non-sysops and non-bots, however, until we can do some serious efficiency testing with the interface. AmiDaniel 08:35, 1 June 2007 (UTC)
 * Why is it necessary to have different limits for bots and normal users? They have equal limits in the UI (5000). So I think it would be better to set a 5000 limit for all queries that don't read page content. - VasilievVV 08:38, 2 June 2007 (UTC)

InterWiki table
Would it be possible to implement retrieval of the InterWiki table through something like this? api.php?action=query&meta=siteinfo&siprop=interwiki I'm currently working on a Recent Changes enhancement, and need to parse wikicode inside comments. Luckily, comments only allow very little wikicode, but I still need to parse InterWiki links. is not guaranteed to work for all InterWiki bindings (more exactly, it doesn't work if iw_local=0 for that binding), so that's not a good solution. Also, I need to place IW links in a different CSS class, and I can't do that if I can't tell IW from internal links.

Thanks in advance. --Catrope 20:16, 31 May 2007 (UTC)


 * Sounds quite doable to me -- the only problem is that interwiki tables tend to be quite obscenely large, and so we may have to split up the returns. Please add a bug to http://bugzilla.wikimedia.org where it's much more likely to be acted upon :). Thanks. AmiDaniel 08:33, 1 June 2007 (UTC)
 * Posted, thanks for the link. --Catrope 13:29, 1 June 2007 (UTC)

List members of a category (list=categorymembers)
This doesn't seem to be implemented yet, but it works (mostly) in query.php. Is there an ETA for it to move here? I spoke with Yurik this afternoon about a bug in query.php. Carl CBM 22:05, 4 June 2007 (UTC)
 * Considering that Yurik is the only one implementing it, and he is extremely busy with his primary work... Probably soon :) --Yurik 23:50, 4 June 2007 (UTC)
 * I posted here mostly to leave a record of the conversation. I wasn't sure if there is a deployment plan for the new API, but it looks like there isn't. In the meantime, there are still the various libraries that do it by screen scraping. CBM 18:43, 5 June 2007 (UTC)
 * Not sure what you mean by deployment -- it is deployed on all Wikimedia servers, and is part of the normal distribution. It is disabled by default, but can easily be enabled with a small configuration change. Libraries really ought not to do screenscraping, simply because the HTML changes very often (far more often than the API specs :)). --Yurik 03:57, 6 June 2007 (UTC)
 * I meant the timeline for the full implementation of API.php, sorry. I agree that screenscraping is a method of last resort. CBM 03:43, 7 June 2007 (UTC)

Bad title error
Compare the result of the following two queries: http://www.mediawiki.org/w/api.php?action=query&prop=info&titles=API|Dog&format=jsonfm http://www.mediawiki.org/w/api.php?action=query&prop=info&titles=API||Dog&format=jsonfm In the latter query, the second title is empty (and thus invalid), which causes the API (rightfully) to throw an error. The downside is that it destroys my entire query (the original query that caused my error here contained 500 titles, of which one was empty). This is pretty unfriendly behavior, but fixing it raises the issue of having to return both an error and a query result in one response (which is currently impossible).

Alternatively, I can make sure there are no empty titles in my query (which fixed my problem), but are empty titles the only ones that trigger the "invalid query" error? If not, could someone provide a list of exactly what causes api.php to return "invalid query"?

Thanks in advance --Catrope 15:32, 5 June 2007 (UTC)

Silent normalization of interwiki titles
See Bug 10147.