API talk:Main page

Subpages

 * General Archive

Unless your comment relates to the API as a whole, please post it on one of the API subpages.

Please add your suggestions and ideas to Bugzilla with component field set to "API".

Navigating MediaWiki content from another site?
Hi,

With a group of friends, I am working on a MediaWiki platform to share content about design. Perhaps API users can give us some advice?

We are trying to let other sites build their own navigation through our content.

The advantage for the users of these sites is personalised access to our content from the site they are already on. The advantage for the wiki is better participation, through "edit" links on the other sites that lead back to the wiki in edit mode.

After two days of research on the web I found:
 * A site doing the same for wikipedia : http://encyclopedie.snyke.com/articles/sciences_cognitives.html
 * This API
 * A way to extract html content : http://en.design-platform.org/api.php?action=parse&page=Brand

Yet I can't manage to use the extracted content: the HTML code is displayed as text rather than being interpreted.

If you can help us, that would be great!

Thanks !

--Thibho 16:00, 28 June 2009 (UTC)


 * Some news: a friend told me that it would be better to use the XML format: http://en.design-platform.org/api.php?action=parse&page=Brand&format=xml . We would then have to build a script to extract the content of the "text" element, transform the links, and build a link that redirects to the wiki in edit mode. This is beyond my abilities, so I am continuing my research, looking for examples to start from. --Thibho 17:17, 28 June 2009 (UTC)
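In outline, the script described above could look like the following sketch. The sample response and the edit-link path are illustrative assumptions, not real server output; a real action=parse reply carries the escaped HTML inside the "text" element.

```python
# Sketch: pull the rendered HTML out of an action=parse XML response,
# absolutize the wiki-internal links, and append an edit link back to the wiki.
import xml.etree.ElementTree as ET

# Illustrative stand-in for a real API response (the HTML arrives escaped).
SAMPLE = """<?xml version="1.0"?>
<api>
  <parse>
    <text>&lt;p&gt;Brand is a &lt;a href="/wiki/Design"&gt;design&lt;/a&gt; concept.&lt;/p&gt;</text>
  </parse>
</api>"""

def extract_parsed_html(xml_text, page, wiki_base="http://en.design-platform.org"):
    root = ET.fromstring(xml_text)
    html = root.findtext("./parse/text") or ""
    # Make relative wiki links work when the content is shown on another site.
    html = html.replace('href="/wiki/', 'href="%s/wiki/' % wiki_base)
    edit_url = "%s/index.php?title=%s&action=edit" % (wiki_base, page)
    return html + '<p><a href="%s">Edit this page</a></p>' % edit_url

print(extract_parsed_html(SAMPLE, "Brand"))
```

The link-rewriting step is the fragile part; it assumes the wiki serves article links under /wiki/.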


 * So the solutions we use are:


 * Option 1 : Using iframes
 * This integrates a minimalist skin of the wiki into the site with an iframe.
 * This solution keeps the integrated content easy to maintain, since only a minimal part of the wiki is embedded and the rest of the layout is managed by the site owner.

 * In the wiki's LocalSettings.php:

switch ($_SERVER["SERVER_NAME"]) {
    case "for-iframes.wikidomaine.com":
        $wgDefaultSkin = "minimalist_skin";
        $wgAllowUserSkin = false;
        break;
    default:
        $wgDefaultSkin = "wiki_original_skin";
        break;
}
 * The main problem with this is that users cannot copy/paste the URL, since it is always the same. We are looking for a solution to this.


 * Option 2 : look and feel with a full site-like skin and the same menu
 * This uses the same type of switch as above, but with a full skin and a copy of the site's menu.
 * This solution can be simpler if there is only one partner site or if you create a skin template that can be easily modified for each site.


 * There are also many solutions based on scripts, page rewriting, the API, etc., which have the advantage of being managed entirely by the partner site, but which require more expertise from the site's webmaster.


 * Thanks !


 * --Thibho 21:15, 7 July 2009 (UTC)

Protected pages
Hi. I have another request... would it be possible to add  or something similar to retrieve the list of protected articles? I could not find any other way (other than downloading ) for doing so. Thanks! Paolo Liberatore 12:46, 3 October 2006 (UTC)
 * I am not sure the list of protected articles is available from the database. If it is, I can certainly expose it. --Yurik 16:24, 3 October 2006 (UTC)
 * That would be great! Thank you. Paolo Liberatore 16:16, 10 October 2006 (UTC)
 * As an alternative that could possibly be simpler to implement, the entire record of an article from the page table could be retrieved via . Paolo Liberatore 17:52, 6 November 2006 (UTC)
 * I just noticed there is a page_restrictions field in the page table. I will have to find out what it might contain (the Page table documentation is not very descriptive), and afterwards add that to prop=info. Filtering by that field is problematic because it's a tiny blob (I don't think MySQL can handle blob indexing, but I might be wrong). --Yurik 22:31, 6 November 2006 (UTC)
 * Here is my reading of the source code. The point where page_restrictions is interpreted is . There,   reads this field, parses it, and stores the result in the   array. This array is then used by   to return an array containing the groups that are allowed to perform a given action. As far as I can see, this array is such that   is an array that contains the groups that are allowed to perform the edit action, etc. Except that, if this array is empty, no restrictions (other than the default ones) apply.
 * In the database dump, the page_restrictions field appears to be either empty or something like  or just  . The explanation of the table says that this is a comma-separated list, but the comments in Title.php mention that this is an old format (I think that the value 'sysop' is actually in this format; no page in the current enwiki dump has a comma in the page_restrictions field). Paolo Liberatore 15:52, 11 November 2006 (UTC)
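Based only on the two formats described above (an old bare level list like "sysop", and a newer action=level mapping), a client-side parser could be sketched as follows; the handling of the old format, and the choice to apply it to edit and move, are assumptions from this thread, not documented behavior.

```python
# Sketch of parsing the page_restrictions blob in both formats discussed above.
def parse_restrictions(blob):
    restrictions = {}
    for part in blob.split(":"):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            # New format: "edit=autoconfirmed:move=sysop"
            action, levels = part.split("=", 1)
            restrictions[action] = [l for l in levels.split(",") if l]
        else:
            # Old format: a bare comma-separated level list; assume it
            # applies to both edit and move.
            levels = [l for l in part.split(",") if l]
            restrictions["edit"] = restrictions["move"] = levels
    return restrictions

print(parse_restrictions("edit=autoconfirmed:move=sysop"))
print(parse_restrictions("sysop"))
```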

Diffs
Are there any plans to provide raw diffs between revisions? This would be particularly helpful for vandalism fighting bots who check text changes for specific keywords and other patterns. The advantage would be that diffs require much less bandwidth and therefore increase bot response time. What do you think? Sebmol 15:24, 21 November 2006 (UTC)
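Until the API serves raw diffs, the only option is to fetch both revision texts and diff them client-side, e.g. with Python's difflib (a workaround sketch; note it still transfers both full texts, so it gives up exactly the bandwidth saving the request asks for):

```python
# Client-side workaround sketch: given two revision texts (fetching them is
# not shown), compute a unified diff locally.
import difflib

old = "The quick brown fox.\nJumps over the dog.\n"
new = "The quick brown fox.\nJumps over the lazy dog.\n"
diff = list(difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm=""))
print("\n".join(diff))
```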

Count
Would it be possible to add an option which would simply return the number of articles satisfying the supplied conditions? This would be especially helpful for counting large categories, like en:Category:Living people. HTH HAND —Phil | Talk 15:59, 29 November 2006 (UTC)
 * I second that emotion. Zocky 23:26, 9 December 2006 (UTC)
 * Unfortunately this is harder than it looks - issuing a count(*) against the database if the client asks for list=allpages would grind the DB to a halt :). The same goes for the list of all revisions, user contribution counters (we even had to introduce an extra DB field to keep that number instead of counting), etc. So no plans for now, unless some other alternative is given. --Yurik 09:41, 8 July 2007 (UTC)
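In the meantime, a client can count by paging through a list and summing batch sizes. A sketch, where fetch_batch stands in for a real HTTP call (e.g. list=categorymembers with its cmcontinue-style continuation value):

```python
# Workaround sketch: count list members client-side by following the
# continuation token until the API stops returning one.
def count_members(fetch_batch):
    total, cont = 0, None
    while True:
        batch, cont = fetch_batch(cont)
        total += len(batch)
        if cont is None:
            return total

# Fake data simulating three API batches with continuation tokens.
PAGES = {None: (["A", "B"], "c1"), "c1": (["C"], "c2"), "c2": (["D", "E"], None)}
print(count_members(lambda c: PAGES[c]))  # 5
```

For a category the size of en:Category:Living people this costs many requests, which is exactly the server-load concern raised above.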

version/image info
I think it would be useful to get information about the software version (i.e. core version, extensions, hooks/functions installed) and images (i.e. a checksum would be useful to see if an image has changed). -Sanbeg 21:08, 7 December 2006 (UTC)
 * Some of it is available through the siteinfo request. More will be done later. --Yurik 01:59, 22 May 2007 (UTC)

Select category members by timestamp
Hello, is it possible to retrieve category members (cm) with a parameter that lists only those starting from a certain timestamp? This should be possible, since a timestamp is stored in the categorylinks database table. Is this the correct place to ask, or should I file a bug in Bugzilla? Bryan 12:18, 24 December 2006 (UTC)


 * Similarly, getting all pages embedding a given template, filtered by timestamp, would be useful for monitoring when a template gets used, because it contains structured data that can be reflected on other websites. 82.6.99.21 15:14, 9 February 2007 (UTC)


 * Not sure what you mean. Please file a feature req in bugzilla with the sample request/response. --Yurik 09:43, 8 July 2007 (UTC)

mw-plusminus data
MediaWiki recently added mw-plusminus tags to Special:Recentchanges and Special:Watchlist, which display the number of characters added/removed by an edit. I was wondering if it may be possible for api.php both to retrieve this info and somehow append the data to query requests that involve page histories, recentchanges, etc. I would most like to see this available in usercontribs queries, as that data is currently not retrievable through any other medium (save for pulling up each edit one at a time and comparing the added/removed chars). Thanks. AmiDaniel 23:55, 26 February 2007 (UTC)
 * Done for rc & wl. --Yurik 20:09, 8 July 2007 (UTC)

Query Limits
Hi all, another suggestion from me. On the API it is stated that the query limit of 500 items is waived for bot-flagged accounts, up to the MediaWiki maximum query of 5000. Would it be possible to extend this same waiver to logged-in admin accounts? I have multiple instances in some of my more recent scripts that query, quite literally, thousands of items (backlinks, contribs, etc.) that are primarily only used by myself and other administrators on enwiki. Having to break these queries up into units of 500 causes a rather dramatic performance decrease on the user's end, and I can't imagine it's particularly friendly on the servers either (submitting two or three requests of 5000 each would likely be less harmful to the servers than twenty or thirty requests of 500 each). Generally, admins are considered similarly well-trusted not to abuse such abilities, and they are also small in number on all projects. Also, the MediaWiki software itself allows similar queries of up to 5000, and I must presume that loading this data through MediaWiki is far more burdensome on the servers than submitting the same query through the API. This is clearly more a political matter than a technical one, but I'd like to know if there are any objections to implementing such a change. Thanks. AmiDaniel 05:05, 1 March 2007 (UTC)
 * done. By amidaniel. :) --Yurik 05:58, 17 July 2007 (UTC)

partial content
it is great to be able to retrieve Wikipedia's content through an API, but nobody really needs the whole page.

would it be possible to pass a variable which returns only the lead paragraph (everything before the TOC) in the XML, rather than the whole content? This would be ideal for embedding in another context, with a "View Full Article" link underneath.

any chance of this becoming a possibility ?--83.189.27.110 00:49, 19 March 2007 (UTC)


 * I'd also consider useful to be able to retrieve a single section from the article, so that one could first load the article lead and the TOC, and then see only some of the section(s). This would be a great improvement of efficiency especially for people on slow links (e.g., mobile) and for "community" pages, such as en:Wikipedia:Administrator's noticeboard, some of which tend to be very long. Tizio 15:19, 20 March 2007 (UTC)
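As a stopgap, a client can fetch the wikitext and cut at the first section heading itself. A sketch; treating everything before the first == heading == as the lead is an approximation (templates and images above the TOC complicate it):

```python
# Workaround sketch: extract the lead section as everything before the
# first "== ... ==" style heading in the wikitext.
import re

def lead_section(wikitext):
    match = re.search(r"^=+[^=]+=+\s*$", wikitext, flags=re.MULTILINE)
    return wikitext[:match.start()].rstrip() if match else wikitext.rstrip()

sample = "Intro paragraph.\n\n== History ==\nLater text.\n"
print(lead_section(sample))  # Intro paragraph.
```

This still downloads the whole page, so it helps parsing but not the slow-link bandwidth problem; only a server-side option fixes that.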

returning the beginning/end of a document
Is there any possibility of an API function that returns the first x bytes at the beginning or end of a document? An increasing amount of metadata is stored at the beginning or end of wikipedia articles, talk pages, etc. And tools like "popups" that preview pages don't need to transmit the whole document to preview part of it. I am thinking of client, server, and bandwidth efficiency. Outriggr 01:43, 20 April 2007 (UTC)

Current status ?
Hi,

I have started developing a tool in Java to deal with some maintenance tasks, especially for disambiguation pages. For the moment, I have planned to do two parts:
 * Much like CorHomo, a way to find and fix all articles linking to a given disambiguation page.
 * A way to find and fix all links to disambiguation pages in a given article.

To finish my tool, I need a few things done in the API :
 * Retrieving the list of links from an article.
 * Submitting a new version of an article.

Do you have any idea when these features will be available in the API ?

How can I help? (I am rather a beginner in PHP; I have just developed a MediaWiki extension, here, but I can learn).

--NicoV 13:24, 23 April 2007 (UTC)


 * For your "Retrieving the list of links from an article" problem, check the query.php what=links query. It's already been implemented, just not brought over into api.php. As for submitting a new version of an article, I'm afraid there is no ETA on this, and I'm honestly not sure how Yurik is going to go about doing this. For now, I'd recommend you do it as it's been done for years: submitting an edit page with "POST". It's not the greatest solution, but it works. If you'd like to help, please write to w:User_talk:Yurik; I'm sure your help is needed! AmiDaniel 18:10, 23 April 2007 (UTC)


 * Thanks for the answer. I was hoping to use a unique API, but no problem in using query.php for the links. For submitting a new version, do you know where I can find an example of submitting an edit page ? --NicoV 20:33, 23 April 2007 (UTC)


 * Unfortunately, dealing with forms, and especially MediaWiki forms (as submitting them typically requires fetching user data from cookies), is not particularly easy in Java. For a basic example of submitting a form in Java, see this IBM example. For most of the stuff I've developed, instead of trying to construct my own HttpClient and handle cookies myself, I've simply hooked into the DOM of a web browser and let it take care of the nitty gritty of sending the POSTs and GETs. You can see an example of this method using IE's DOM here. I've not found a good way to get the latter to work with Java, as Java does not have very good ActiveX/COM support, though it can sort of be done by launching a separate browser instance. I'm afraid I can't really help much more, unfortunately, as it's a problem that's not been well solved by anyone. It's easy to build MediaWiki crawlers, but not so easy to build bots that actually interact with MediaWiki. AmiDaniel 21:20, 23 April 2007 (UTC)
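The form-POST approach described above can be sketched (in Python rather than Java, for brevity). Only the body construction is shown; the field names wpTextbox1/wpSummary/wpEditToken are taken from MediaWiki's edit form and should be checked against the form HTML of your wiki's version, and the token must be scraped from a fetched edit page first.

```python
# Sketch: build the POST target URL and form body that index.php's
# action=submit expects. Sending it (with session cookies) is not shown.
from urllib.parse import urlencode

def build_edit_post(title, text, token, summary=""):
    body = urlencode({
        "wpTextbox1": text,     # new page text
        "wpSummary": summary,   # edit summary
        "wpEditToken": token,   # token scraped from the edit form
    })
    url = "http://en.wikipedia.org/w/index.php?title=%s&action=submit" % title
    return url, body

url, body = build_edit_post("Sandbox", "Hello", "abc123+\\")
print(url)
print(body)
```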


 * A en-wiki user has written something for page editing in java: en:User:MER-C/Wiki.java. HTH. Tizio 13:10, 26 April 2007 (UTC)


 * I have used parts of the java class you linked, and it's working :) Thanks --NicoV 07:57, 22 May 2007 (UTC)

Ok, thanks. I will take a closer look at the examples. --NicoV 05:38, 27 April 2007 (UTC)


 * Some have been done (links, etc). See the main page. --Yurik 11:48, 14 May 2007 (UTC)


 * Thanks, I will try to use it when it's available on the French Wikipedia --NicoV 20:28, 14 May 2007 (UTC)

Hi, I've also started a Java API (see my blog entry: Java API for MediaWiki query API). I'd like to hear opinions about the design and usability of the library. Thanks Axelclk 17:57, 22 June 2007 (UTC)

Real current status
Could somebody please tell me the current status of the API regarding fetching the list of links from a page via api.php?query&titles=Albert%20Einstein&prop=links ?

This works just fine with the latest MediaWiki on Wikipedia, but on my MediaWiki v1.10 it doesn't work at all; the error reported is unknown_prop. I investigated for some time and noticed that there is no includes/api/ApiQueryLinks.php in 1.10. The import of the module is also commented out in ApiQuery.php, like this:

private $mQueryListModules = array (
    'info' => 'ApiQueryInfo',
    'revisions' => 'ApiQueryRevisions'
    // 'links' => 'ApiQueryLinks',
    // ...some other modules as well
);

So, what's going on? On API:Query - Page Info it is stated that the link fetch feature is available with MW 1.9. I am confused...
 * So am I :) From what I remember, it has been working for a long time. There is a new release coming out shortly that will include all the proper changes. For now, I would suggest simply copying the entire API directory from the current SVN; the API would not mess anything up, so if worst comes to worst, it will simply not work :) --Yurik 21:37, 13 July 2007 (UTC)


 * Unfortunately, this didn't work for me. I checked out the latest trunk/phase3/api. After moving it into includes, nothing works at all :( The PHP engine stated: Fatal error: Call to undefined function wfScript in /srv/www/vhosts/wikit/htdocs/mediawiki-1.10.0/includes/api/ApiFormatBase.php on line 88. So I tried to get just the ApiQueryLinks class to work with the 1.10.0 API code and included the class in ApiQuery. After that I tried to call the API, but it seems the ApiQueryGeneratorBase class can't be found anywhere. Hopefully the API will work again in one of the stable releases to come... --Bell 09:09, 16 July 2007 (UTC)

Hi, you could try running the 'unstable' version; I actually found it to be reasonably stable. Just sync it from the SVN and run update.php to get the databases up to date. Also, 1.10.1 is out. --Yurik 06:07, 17 July 2007 (UTC)

Tokens
I, personally, don't understand why state-changing actions currently require tokens. Shouldn't the lg* parameters be enough to determine whether the client is allowed to perform a certain action? If so, why do you need tokens?

On a side note, I intend to start writing a PHP-based bot using this API, and will try to include every feature the API offers. Since both this talk page and its parent page have been quiet for two weeks, I was wondering if new API features are still posted here. If that is the case, I'll monitor this page and add new features to my (still to be coded) bot when they appear. I'll keep you informed on my progress. --Catrope 17:20, 9 May 2007 (UTC)


 * The motivation for tokens at Manual:Edit token is that they are used to prevent session hijacking. Tizio 19:02, 15 May 2007 (UTC)
 * Ah, I understand now. Didn't think it through as deeply as you did. --Catrope 20:19, 22 May 2007 (UTC)
 * Would it be possible to at least grab editing tokens by API, and require session data, cookie tokens, etc. to get them? The same can be done with index.php (if not mixed up in a lot of other HTML). Gracenotes 17:04, 4 June 2007 (UTC)

Login not working ?
Hi, is there a problem with the "login" action? Whenever I try (in the last hour), I get the following message:  -- NicoV 20:57, 21 May 2007 (UTC)


 * Yes, unfortunately I had to disable the login action until a more secure solution is implemented. The current implementation allowed countless login attempts, allowing crackers to break weak passwords by brute force. Disabling it was the only solution. Any help with fixing the login module would be greatly appreciated and would bring it back faster. --Yurik 01:58, 22 May 2007 (UTC)


 * Would it be possible, while disabling it, to return an understandable message, like result=Illegal or another value like result=LoginDisabled? Currently, when asking for an XML result, the response is not XML. My tool wasn't prepared for this :). That way, when I get this answer, I would be able to fall back to validating the login using another method (by going through the Special:Login page).
 * Concerning help, I am not very good at PHP, so I doubt I can help you much. I suppose simple tricks like delaying the response of the login action wouldn't be sufficient. --NicoV 07:17, 22 May 2007 (UTC)


 * The format is obviously not handled properly; even if everything else fails, the proper format should be used. Please file it as a bug. Thanks for the LoginDisabled suggestion; I might implement something along those lines later. Now, if only someone could help with the login PHP code :) It's not hard, just annoying :( --Yurik 21:07, 22 May 2007 (UTC)

I just sent a patch to Yurik that fixes these security vulnerabilities (the big one that Yurik mentioned, at least) and re-enables the login, so this will hopefully make it into the repo by tomorrow and be synced up on Wikimedia's servers within the week. Sorry for the inconvenience. AmiDaniel 10:36, 23 May 2007 (UTC)


 * That's great news, but in the meantime the Special:Login page has just been modified with a captcha (at least on the French Wikipedia). So the method I was using to edit pages is not working any more :( Is there a way to edit pages with a tool? --NicoV 15:30, 23 May 2007 (UTC)

Limits
Limits on some queries (like logevents) are lower than those allowed via the common MW interface. - VasilievVV 15:34, 29 May 2007 (UTC)
 * RC (list=recentchanges) also has this problem. api.php limits it to 500, but I can get up to 5000 using the regular interface. --Catrope 15:37, 29 May 2007 (UTC)


 * This has recently been changed to allow limits of 5000 and 50 for fast and slow queries, respectively, for sysops as well as bots. We're hesitant to enable higher limits for non-sysops and non-bots, however, until we can do some serious efficiency testing of the interface. AmiDaniel 08:35, 1 June 2007 (UTC)
 * Why is it necessary to have different limits for bots and normal users? They have equal limits in the UI (5000). So I think it would be better to set a 5000 limit on all queries that don't read page content - VasilievVV 08:38, 2 June 2007 (UTC)

Bad title error
Compare the result of the following two queries: http://www.mediawiki.org/w/api.php?action=query&prop=info&titles=API|Dog&format=jsonfm http://www.mediawiki.org/w/api.php?action=query&prop=info&titles=API||Dog&format=jsonfm In the latter query, the second title is empty (and thus invalid), which causes the API (rightfully) to throw an error. The downside is that it destroys my entire query (the original query that caused my error contained 500 titles, one of which was empty). This is pretty unfriendly behavior, but fixing it raises the issue of having to return both an error and a query result in one response (which is currently impossible).

Alternatively, I can make sure there are no empty titles in my query (which fixed my problem), but are empty titles the only ones that trigger the "invalid query" error? If not, could someone provide a list of exactly what causes api.php to return "invalid query"?

Thanks in advance --Catrope 15:32, 5 June 2007 (UTC)


 * This problem still exists. The same thing happens if there is a bad title as opposed to an empty title; for example, a query for the titles |API|Fe%5Bd%5D|Computer just returns a "bad title" error, but doesn't tell you which of the titles caused the error, and doesn't return any information about the titles that are OK. --Russ Blau 14:26, 18 September 2007 (UTC)


 * Thanks. Is there a bug filed for this? --Yurik 17:28, 18 September 2007 (UTC)
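The client-side guard mentioned above (making sure no empty titles reach the query) can be sketched in a couple of lines:

```python
# Defensive sketch: drop empty or whitespace-only titles before joining,
# so one bad entry cannot invalidate a 500-title batch.
def join_titles(titles):
    clean = [t.strip() for t in titles if t and t.strip()]
    return "|".join(clean)

print(join_titles(["API", "", "Dog", "  "]))  # API|Dog
```

This does not protect against titles that are invalid for other reasons (e.g. illegal characters), which is the open question in this thread.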

Searching content?
There's a comment from 2006 in the General Archive above that says:
 * Search
 * list of articles that contain string query

Is this being implemented in the API? -- SatyrTN 21:55, 7 June 2007 (UTC)
 * A feature like this would be essential for search-and-replace bots. --Catrope 07:40, 8 June 2007 (UTC)
 * So would that be a "yes"? :) -- SatyrTN 14:02, 13 June 2007 (UTC)
 * I have no idea if it's going to be implemented or not, I'm not in charge of developing the API. I was just saying that it's very useful, and should be implemented.--Catrope 15:29, 13 June 2007 (UTC)
 * I agree that it would be an excellent feature to have. My biggest concern at the present though is the ability to unit test the API: we are in dire need to have good testing framework before we move forward. --Yurik 01:11, 14 June 2007 (UTC)
 * What sort of framework, exactly, are you thinking of? There is a lot of bot code around that might be a starting point. CBM 02:51, 14 June 2007 (UTC)

Page watched ?
With the API, is there a way to know if a page is watched by the user ?

That would be useful for me: I have written a tool to help fix links to disambiguation pages, and currently, when the tool submits a new version of a page, the page is always unwatched. I can probably deal with this, but that would mean reading a lot more from Wikipedia, because the "wpWatchthis" checkbox is far from the beginning of the page. --NicoV 21:12, 8 June 2007 (UTC)
 * You don't need the checkbox, the buttons right on top of the page are enough. There will be a "watch" button if the page isn't watched, and an "unwatch" button if it is. Of course when editing pages is implemented in the API, you'll no longer need all of this. --Catrope 08:47, 11 June 2007 (UTC)
 * Yes, thanks. The only drawback seems to be the waste of bandwidth (the watch button is almost at the end of the HTML file, while the checkbox is earlier, and a lot earlier on the French Wikipedia because a lot of automatic stuff is added in between). I have already tested with the checkbox, but it doesn't work how I'd like it to for users who have the preference of automatically watching pages they edit. I will try the Watch button. --NicoV 21:39, 12 June 2007 (UTC)
 * Hmm, maybe request the watchlist (through the API) at the beginning of your program, then check every page title against that list? --Catrope 12:57, 13 June 2007 (UTC)
 * Thanks again, but I have decided to use the "watch"/"unwatch" button, until (I hope) there's a way of getting this info when retrieving page data through the API. --NicoV 17:40, 14 June 2007 (UTC)
 * You really don't need to. With two requests to the API (one to log in and one to get your watchlist through action=query&list=watchlist) you can store your watchlist in an array. When editing a page, simply check if it's in the array. --Catrope 19:42, 14 June 2007 (UTC)
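Catrope's two-request approach comes down to a set-membership test. A sketch; the data dict below stands in for a parsed action=query&list=watchlist response, whose exact item shape should be checked against real output:

```python
# Sketch: fetch the watchlist once, keep the titles in a set, and test each
# page before saving it.
data = {"query": {"watchlist": [{"title": "Paris"}, {"title": "Orange (fruit)"}]}}
watched = {item["title"] for item in data["query"]["watchlist"]}

def is_watched(title):
    return title in watched

print(is_watched("Paris"))   # True
print(is_watched("Berlin"))  # False
```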

DEFAULTSORT key
I know very little about programming and computers, but this API thing sounds like it could be used to get useful data such as a list of biographical articles without DEFAULTSORT keys. Am I right to think that a query could be run that could detect all articles with w:Template:WPBiography on their talk pages, but which didn't have the DEFAULTSORT magic word somewhere in the article, and furthermore that the categories (cl) function with "Parameters: clprop=sortkey (optional)" could detect existing pipe-sorting in the categories and output that data as well? Or am I misunderstanding the purpose and limits of API? Carcharoth 10:16, 18 June 2007 (UTC)
 * It's possible, but:
 * The API can list articles with a certain template in them or all articles in a certain category, but it can't search through them. You'll have to write a script that does that.
 * That script could distinguish DEFAULTSORT, pipe-sorted and non-sorted articles, but it can't automatically correct them to use the DEFAULTSORT magic. You'll have to do that by hand.
 * I'll write that script some time this week (as the DEFAULTSORT issue is present on BattlestarWiki as well), so keep an eye on this page to see when it's there. --Catrope 15:06, 18 June 2007 (UTC)
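The detection step such a script needs can be sketched with two regular expressions; this is a rough classifier over raw wikitext (it ignores templates that add categories, for instance), not a full parser:

```python
# Sketch: classify an article's sorting method from its wikitext, as
# discussed above (DEFAULTSORT magic word vs. piped category sort keys).
import re

def sort_method(wikitext):
    if re.search(r"\{\{\s*DEFAULTSORT\s*[:|]", wikitext):
        return "defaultsort"
    if re.search(r"\[\[\s*Category\s*:[^]|]+\|", wikitext, re.IGNORECASE):
        return "piped"
    return "unsorted"

print(sort_method("{{DEFAULTSORT:Doe, John}}\n[[Category:1950 births]]"))  # defaultsort
print(sort_method("[[Category:1950 births|Doe, John]]"))                   # piped
print(sort_method("[[Category:1950 births]]"))                             # unsorted
```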

Feature request : list=random
Let the API generate a list of random pages. Parameters:
 * Namespace(s)
 * Redirect filter
 * Limit
This would be useful for tools that look for pages/images matching criteria that are not easily obtained otherwise, e.g., pages with no (trivial) categories. --Magnus Manske 23:51, 23 June 2007 (UTC)

Future edit API
I think that when the edit functionality is to be implemented, it should be defined as POST, and not GET. → Aza Toth 23:43, 24 June 2007 (UTC)
 * I think it shouldn't deny either kind of request as both have their uses. It's the names and values contained in the requests that should determine the functionality, not the kind of request. --Nad 04:13, 25 June 2007 (UTC)
 * nah, there's a reason for GET and POST to be different HTTP-verbs. one to get information from the server, one to change information on the server. btw, login should never work with GET, you don't want your password in the server logs. -- D 12:02, 25 June 2007 (UTC)
 * GET and POST are allowed for all API requests, and there is no easy way to change that for just one action. IMO, users who are stupid enough to send a GET request with their password or other sensitive stuff should bear the consequences. BTW, action=edit users will be forced to use POST in most cases, since the query string supplied with GET is limited to 255 characters, while most articles are substantially longer. In fact, the longest Wikipedia article comes close to being 450KB in size. --Catrope 18:11, 26 June 2007 (UTC)

API Interface for .NET
I've been putting together an open-source .NET interface for the MediaWiki API. I've looked around the site for where to list it, but haven't found anything that looks like the right place; can anyone point me in the right direction? 65.27.174.205 02:15, 1 July 2007 (UTC)
 * I have made something similar a while back, but haven't published it. Not sure if mediawiki would want to keep this, but you could always add it to sourceforge. --Yurik 03:42, 2 July 2007 (UTC)
 * Yeah not a big deal, currently have it on google hosting maybe i'll move it. Thanks 65.27.174.205 21:43, 2 July 2007 (UTC)
 * It seems there are no sources there? Where can the sources or library be downloaded? uk:User:Alex_Blokha


 * On http://sourceforge.net/projects/jwbf/ you can find an API interface for Java. What do you think about a list of projects which supply connectors to the MediaWiki API?
 * Will create a page for them, thanks for the link. Please sign your posts with --~ . --Yurik 18:49, 8 July 2007 (UTC)

Why there is no wsdl?
Why do you not release a WSDL API? http://www.google.com/search?hl=uk&q=define:wsdl Why should we write a new wrapper for each platform/language, instead of just using WSDL, which is implemented and tested on each platform? For example, on the .NET platform, a WSDL service is included in a project with 3 clicks of the mouse. uk:User:Alex_Blokha
 * WSDL is just for web services, whereas we have a simple HTTP GET/POST request-response protocol to allow many different clients to seamlessly use our API. A WSDL wrapper might be useful and could be added at a later date. Feel free to contribute. --Yurik 04:02, 17 July 2007 (UTC)
 * I don't program in PHP. The only thing I know about it is that WSDL support is included in the latest versions of PHP. But if you can provide Windows web hosting, the web service based on existing frameworks for Wikipedia can be created. By me, for example. uk:User:Alex_Blokha
 * Although the MediaWiki API is written in PHP, its output is still XML by default. You can change that to JSON, PHP, and some other formats by setting the format= parameter. --Catrope 14:32, 1 August 2007 (UTC)

imageinfo/json
why doesn't imageinfo use a list for the image revisions? using stringified numbers as keys to a map is a bit strange, and especially the extra "repository" entry makes it a bit hard to parse. within a revision, "size", "width" and "height" look very much like numbers to me, so why are these strings too? -- ∂ 23:42, 6 August 2007 (UTC)
 * Fixed the size values. Not sure how to implement the list yet (need to move rep name somewhere else). --Yurik 03:34, 7 August 2007 (UTC)
 * oh, very nice, thank you :) i guess the repository is not fixed easily; pushing the revisions down one level would be a bit incoherent.. -- ∂ 08:13, 7 August 2007 (UTC)
 * I fixed this by adding page-level tag "imagerepository". --Yurik 08:45, 9 August 2007 (UTC)
 * thanks again :) -- ∂ 12:46, 10 August 2007 (UTC)

exturlusage
is there a reason list=exturlusage contains a "p"-tag instead of an "eu" as i would expect? -- ∂ 00:58, 8 August 2007 (UTC)
 * Thx, fixed. --Yurik 08:44, 9 August 2007 (UTC)

Protectedpages
Yurik, I posted this on your enwiki talk page, then realized that this might be a better venue.

I was wondering if you (or any other API users) would be interested in reviewing an API query module I've cooked up, to see if it'd be worthy of a commit. It's basically a clone of ApiQueryAllpages, except it implements Special:Protectedpages instead of Special:Allpages, removing the need for bot developers to either screen-scrape Special:Protectedpages for a list of protected pages, loop through QueryLogevents, or run QueryAllpages through QueryInfo for protection information.

Perhaps it'd be better implemented by extending an existing module? Let me know what you think. :) &mdash; madman bum and angel 07:08, 8 August 2007 (UTC)

ApiQueryProtectedpages.php


 * I added this to the list=allpages - makes more sense there. Hope it does not ruin db performance. --Yurik 08:43, 9 August 2007 (UTC)


 * Thanks! I shouldn't think it would, but I'll watch it carefully.  If it does, then I suppose we can switch limits for that query to the slow query limits.  Madman 16:18, 9 August 2007 (UTC)


 * Unfortunately, it is not the number of items that takes a long time, it's the query itself. Will monitor it later on. --Yurik 05:28, 13 August 2007 (UTC)

rvlimit
hi! the documentation says: rvlimit - limit how many revisions will be returned (enum). No more than 50 (500 for bots) allowed. this seems not to be true: when logged in, i can get 500 revisions without having the bot flag set. -- ∂ 12:46, 10 August 2007 (UTC)
 * I think administrators have the same limits as bots. Tizio 16:14, 14 August 2007 (UTC)

result-less lists
another thing that bugs me: requesting e.g. list=backlinks for a page with zero links to it, i get [] back. i'd expect { "query": { "backlinks": [] } } instead. the same goes for format=xml, where i get a simple , and this happens in many other places than just list=backlinks -- ∂ 18:16, 11 August 2007 (UTC)
 * Please file a bug with a sample query url. --Yurik 05:27, 13 August 2007 (UTC)
 * I filed a similar bug (bug 10887) Bryan Tong Minh 21:35, 14 August 2007 (UTC)

Post/edit data
Posting data via the api isn't possible at this time. So, which other method can be used for it right now? It seems like there are some alternatives:


 * post form data via http
 * post data via mediawikis xml-import interface
 * use one of the bot frameworks that encapsulate that job, so we don't have to think about it further

But there aren't any PHP bot frameworks right now. Has anybody implemented posting data via PHP yet? If so, please link/post your solution. Thanks!


 * I'm new to HTTP programming, so forgive me if I'm misunderstanding your question. You can use index.php for writing a page. I have some Java code which does that (and more that is specific to my bot) at en:User:WatchlistBot/source.java; the writing is in Page.java, in the put method. I got started using en:User:Gracenotes/Java_code, which is much less code and shows how to read/write pages. Mom2jandk 20:23, 28 August 2007 (UTC)


 * Extension:Simple Forms allows editing/creating of articles from a URL, but doesn't use edit tokens. Once the API supports editing, SimpleForms would use that in preference to its own methods, but for now it is a working PHP way of editing articles. --Nad 21:03, 28 August 2007 (UTC)
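The first option above (posting form data via HTTP) can be sketched as follows in Python. The form field names are the ones MediaWiki's edit form uses (wpTextbox1, wpSummary, wpEditToken, wpSave); the wiki URL and token are placeholders, and a real client must first scrape wpEditToken from the edit form:

```python
# Sketch of "post form data via http" against index.php.
# Assumptions: example.org URL and the token value are placeholders.
from urllib.parse import urlencode

def build_edit_post(title, text, summary, edit_token):
    """Assemble the URL and form body index.php expects for a page save."""
    fields = {
        "wpTextbox1": text,        # new page text
        "wpSummary": summary,      # edit summary
        "wpEditToken": edit_token, # token scraped from the edit form
        "wpSave": "Save page",     # the save button's value
    }
    url = "http://example.org/w/index.php?" + urlencode(
        {"title": title, "action": "submit"})
    return url, urlencode(fields)

url, body = build_edit_post("Sandbox", "Hello", "test edit", "+\\")
# `body` is what would be sent as the POST payload.
```

This only builds the request; actually sending it also requires carrying the session cookies from a prior login.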

How to find API URL?
Let's say I'm creating some tools that get information (via the API) off a wiki specified by the user.

So the tools need the API's URL, but how does the user know the URL of the API? For example, on Wikipedia mod_rewrite is used to give pretty URLs, so the API is available at http://en.wikipedia.org/w/api.php. We know it is there because we are interested in that kind of thing and read mediawiki.org, but if my tools asked the user for the URL of the wiki's API, I think it is unlikely they would have known to use this address.

Is there a specified pattern to where the non-modrewrite versions of the URLs go in a wiki, or is the /w thing just a weak convention? Is there an established way for the URL of the api to be advertised?

Ideally, I'd like the user to just be able to give the URL of the wiki's main page, and for the URL of the API to be somehow discoverable from that. I think the normal URL of the main page is the easiest address for the user to give, especially given that tools using the API will be used by people who don't know what an "API", "web services", "mod rewrite" etc. are. Jim Higson 15:00, 24 August 2007 (UTC)


 * If the user can give you the location of index.php (available by editing a page and looking at the URL), you should be able to replace index.php with api.php. But really you have no control over HTTP rewriting, so on a particular server it's possible that api.php is not accessible from the same place index.php is accessible from. CBM 16:39, 24 August 2007 (UTC)


 * Yes, I know about replacing the "index.php" with "api.php", but on most Mediawiki installations this won't work because it will try to rewrite the URL to index.php?title=api.php. Asking the user to edit a page and look at the URL seems not as user friendly as if they just had to enter the URL of the main page.
 * How about this as a suggestion for advertising the API: the main interface accepts action=advertiseapi, and when this is present responds with the URL for api.php? This way the API could be found quite easily by automated tools without preknowledge of that particular wiki's URL structure. Jim Higson 13:28, 25 August 2007 (UTC)
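Until something like the suggested action=advertiseapi exists, a client can only guess. A rough sketch of that guessing, under the assumption that one of the common layouts (/w/, the root, /wiki/) is in use, so a tool would probe each candidate in turn:

```python
# Given the URL of a wiki's main page, generate candidate locations for
# api.php to probe one by one. The path list reflects common setups; as
# the thread notes, there is no guarantee, so every candidate may fail.
from urllib.parse import urlsplit, urlunsplit

def candidate_api_urls(page_url):
    parts = urlsplit(page_url)
    base = (parts.scheme, parts.netloc)
    paths = ["/w/api.php", "/api.php", "/wiki/api.php",
             "/mediawiki/api.php"]
    return [urlunsplit(base + (p, "", "")) for p in paths]

candidates = candidate_api_urls("http://en.wikipedia.org/wiki/Main_Page")
# candidates[0] == "http://en.wikipedia.org/w/api.php"
```

A probing tool would then request each candidate and keep the first one that answers like an API endpoint.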

http response codes
I wasn't sure where to ask this, so please let me know if somewhere else would be better. I'm porting my bot (en:User:WatchlistBot) to java, using the API. I get an http response code of 400 when I try to write this page, and a 403 code when I try to write other pages (the 403 code is recent, the other has been happening for a while). The pages seem to be written correctly despite the error codes. I'm an experienced java programmer, but new to http. I looked up generally what the error codes mean, but it's not very helpful. Can anyone tell me specifically what this means? I have an older version of the source posted and linked from the bot page. I can update that if it would help. Mom2jandk 20:11, 28 August 2007 (UTC)
 * A useful online tool for debugging http requests and responses is http://web-sniffer.net. Extract the exact request your bot is making, then make that same request from the web-sniffer and it will show you exactly what the server is responding with. --Nad 21:12, 28 August 2007 (UTC)
 * I had a look at the java code and noticed that "wpSave" was missing from the post request vars, to replicate a normal post MW may like to have that set to "Save page"? --Nad 21:27, 28 August 2007 (UTC)

GET method preview
See 11173 for details. Looking for some feedback.

Addendum: I have tested the LivePreview feature, and have determined that any vulnerabilities this could possibly introduce already basically exist via LivePreview and even normal Preview. Currently, to submit a preview, all you need is to send POST data for the wpTextbox1 and wpPreview=Show+preview (I believe). Splarka 23:45, 3 September 2007 (UTC)

Login using CURL + API
Hi, I tried to let the user log in using the API method.

the code is like this:

$postfields = array();
$postfields[] = array("action", "login");
$postfields[] = array("lgname", "$id");
$postfields[] = array("lgpassword", "$pass");
$postfields[] = array("format", "xml");

foreach ($postfields as $subarray) {
    list($a, $b) = $subarray;
    $b = urlencode($b);
    $postedfields[] = "$a=$b";
}

$urlstring = join("&", $postedfields);
$ch = curl_init("http://mydomain.com/api.php");
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $urlstring);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$buffer = curl_exec($ch);
curl_close($ch);

However, this code does not log in the user when it is executed.

May I know whether I can use cURL with the MediaWiki API?

Or is there any other way to make the login work, besides sending a Location header?


 * I don't see where you specify that cookies are to be used. I don't use libcurl under PHP, but the C binding has the options CURLOPT_COOKIEFILE, CURLOPT_COOKIEJAR, and CURLOPT_COOKIE, which are probably the same in PHP. Tizio 12:08, 12 September 2007 (UTC)
 * Try Snoopy, a PHP class that makes this much easier, and also supports cookies. --Catrope 14:23, 12 September 2007 (UTC)
 * Oh, I'm trying to put the variables like lgname, lgpassword in the link that will be posted to the API. The link will look like this: http://mydomain.com/api.php?action=login&lgname=$id&lgpassword=$pass&format=xml

When I echo $buffer, the output looks just like what we get by typing the link in the browser address bar! How come cURL is not working, when typing the same link in the address bar works?

I'm too new to the MediaWiki API... can anyone please guide me? If I must use a cookie in this case, how can I apply it?

Thanks a lot!!! --Liyen


 * Try

curl_setopt($ch, CURLOPT_COOKIEFILE, "cookies.txt");
curl_setopt($ch, CURLOPT_COOKIEJAR, "cookies.txt");
 * before curl_exec. Tizio 16:53, 12 September 2007 (UTC)
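For comparison, here is the same cookie-jar idea sketched in Python with the standard library; the opener keeps the session cookies that action=login sets, which is what the plain cURL code above was missing. The wiki URL and credentials are placeholders:

```python
# A cookie-aware login sketch: the CookieJar plays the role of cookies.txt,
# and any later request made through `opener` re-sends the session cookies.
import http.cookiejar
import urllib.parse
import urllib.request

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar))

def login_request(api_url, username, password):
    """Build the POST request for action=login (not yet sent)."""
    body = urllib.parse.urlencode({
        "action": "login",
        "lgname": username,
        "lgpassword": password,
        "format": "xml",
    }).encode("utf-8")
    return urllib.request.Request(api_url, data=body)

req = login_request("http://mydomain.com/api.php", "id", "pass")
# opener.open(req) would send it and store the session cookies in `jar`.
```

The key point matches Tizio's advice: without some cookie store, each request starts a fresh anonymous session, which is why the login appears to do nothing.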

Count Revisions
It would be nice to get the total number of revisions while getting info for a given page. Currently one needs to make several calls if the number of revisions is larger than 50 (the current limit). Is this being considered, or are performance issues at stake? -- Sérgio

Get page contents after a given revision?
Hi, is it possible to get the page contents after a given revision? I know how to get the contents of the revision, but what I currently want is the full page after the revision has been incorporated. --Sérgio
 * Those are the same. The contents of revision 12345 are the contents of the page after revision 12345 was incorporated. --Catrope 13:54, 8 October 2007 (UTC)

Wikipedia API URL bad guess
$ w3m -dump http://en.wikipedia.org/api.php
Did you mean to type api.php ? You will be automatically redirected there in five seconds.

No, I meant to type http://en.wikipedia.org/w/api.php ... you see, one cannot guess where it might be on different installations. Anyway, the above Wikipedia message could give a better guess. Jidanni 00:46, 24 October 2007 (UTC)
 * I think people going to en.wikipedia.org/Dog meaning to go to /wiki/Dog instead are a lot more common than people mixing up /api.php and /w/api.php. Most people who use api.php know where it's located. --Catrope 22:34, 30 October 2007 (UTC)

Sorting
Is there any function for sorting the query-results ? Especially for lists of pages ? Augiasstallputzer 12:16, 11 November 2007 (UTC)
 * No. But it's fairly trivial to write a script (PHP or otherwise) that sorts the list for you. --217.121.125.222 20:01, 11 November 2007 (UTC)
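A sketch of the kind of trivial client-side sort suggested above, using a hand-made sample in the JSON shape the API returns:

```python
# Sort page titles from a query result locally instead of asking the
# server to do it. The sample below is a fabricated response in the
# {"query": {"allpages": [...]}} shape.
import json

sample = json.loads("""
{"query": {"allpages": [
    {"pageid": 3, "title": "Cherry"},
    {"pageid": 1, "title": "Apple"},
    {"pageid": 2, "title": "Banana"}
]}}
""")

titles = sorted(p["title"] for p in sample["query"]["allpages"])
# titles == ["Apple", "Banana", "Cherry"]
```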

Problems with backlinks query
Hi,

I am using the backlinks query for Wikipedia Cleaner and it seems that it's not working any more. I was using it with the titles parameter, but I tried bltitle and it's not working either. The example provided in API:Query - Lists also fails: http://en.wikipedia.org/w/api.php?action=query&list=backlinks&bltitle=Main%20Page&bllimit=5&blfilterredir=redirects returns no backlinks. --NicoV 21:07, 16 November 2007 (UTC)
 * It does for me. --Catrope 10:25, 17 November 2007 (UTC)
 * Yes, it's working now, but yesterday there was no tag, only the  tag. --NicoV 18:38, 17 November 2007 (UTC)

Used templates' parameters
It would be great to be able to get the parameters of the templates used on a page. Sorry, that was me :)

I mean: right now I have to get the page content and parse it for template parameters (like Commons categories or coordinates), and this action generates huge traffic. || WASD 01:22, 20 December 2007 (UTC)
 * Unfortunately, these parameters aren't stored in the database separately (only in the article text), so we can't get them efficiently. --Catrope 13:34, 20 December 2007 (UTC)
 * All right then, at least now I know that it's not possible :) || WASD 19:59, 20 December 2007 (UTC)

Hi,

these parameters aren't stored in the database separately, but wouldn't it be useful to have the possibility to parse them by default via the API? Then not everybody has to parse templates by himself; only a parameter which replaces {{ and | with < (and of course the template/XML closing) ;). No less traffic/computing time, but easier use. merci & greetz VanGore 22:31, 1 September 2008 (UTC)
 * The only thing this does is move computing time away from the client to the server. Now parsing template parameters just once for one page may be cheap, but parsing them over and over again for hundreds or thousands of clients is not. That's why we try to move computing time from the server to the client if that's a responsible thing to do, like in this case. If the parameters were stored in the database, retrieving them on the server side would be cheaper than extracting them on the client side, so in that case this feature would obviously be added. But since they're not, it won't be. --Catrope 11:30, 2 September 2008 (UTC)

Hi Catrope,

thanks for your reply. I understand that big projects like the Wikimedia projects can't offer XML-parsed templates. But for most data queries, people use their own dumps. Would it be possible to implement the template-to-XML feature in the software, but not as a default? I tried to understand the template structure, and I tried the hints in de:Hilfe:Personendaten/Datenextraktion, but I didn't understand it. Do you know of any documentation about MediaWiki templates? merci & greetz VanGore 10:12, 3 September 2008 (UTC)
 * Judging by the database queries in that document, I'd say it's way out of date. This wiki also has information about the general database layout and the table that keeps track of template usage. Template parameters are only stored in the actual wikitext, so you can't find them in any database tables. The wikitext itself is stored in the text table. --Catrope 13:44, 3 September 2008 (UTC)

Hi Catrope, I'd say it's way out of date. - Yes, I searched too long for cur ;). Thanks for the rest, I will try now with this information. Last question: do you know where I can find the PHP code for templates? merci & greetz VanGore 15:08, 3 September 2008 (UTC)
 * The PHP code for parsing templates is probably in includes/parser/Parser.php, but the parser is a complex part of the MW code that's difficult to understand. You'd probably be better off writing your own code to extract template arguments. If you need advice, try talking to Tim Starling, he wrote the parser. --Catrope 15:23, 3 September 2008 (UTC)

Hi Catrope,

cool, I didn't see the wood for the trees ;) Reading includes/Parser.php and Manual:Parser.php is quite good for understanding; writing [my] own code to extract template arguments will be easier, I hope ;) otherwise I'll come back or contact Tim... merci & greetz VanGore 16:02, 3 September 2008 (UTC)

usercontribs not working any longer?
hi!

for example http://de.wikipedia.org/w/api.php?action=query&list=usercontribs&ucuserprefix=84.167 does not list edits made in 2008. i have to use "ucend" explicitly, but iirc yesterday i did not have to. so, is there no possibility to get the latest edits of a range (without setting "ucend")? -- 85.180.68.18 09:27, 27 March 2008 (UTC)
 * ucuserprefix now also sorts by username before it starts sorting by date. I know that's weird, but it's for performance reasons. --Catrope 20:02, 28 March 2008 (UTC)
 * ok, thx. so (as said in ) there's no way for me to change that behavior? the only solution i can imagine now is a self-written script which gets the result of api.php and postprocesses the data offline. is there a better way? -- 85.180.71.89 17:06, 29 March 2008 (UTC)
 * I'll request a DB schema change so we can go back to the previous, more intuitive behavior while not killing performance. --Catrope 12:11, 6 April 2008 (UTC)

Getting the URL of a page
It would be very useful to have the full URL to the page in "prop=info". Is there any way to get it in another way? Thanks! -- OlivierCroquette 18:48, 3 June 2008 (UTC)
 * It's not that difficult. From /wherever/api.php you need to go to /wherever/index.php?title=Foo_bar . If the wiki uses pretty URLs, you'll automatically be redirected to the pretty URL. --Catrope 10:55, 20 June 2008 (UTC)
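That recipe is simple enough to sketch; the title normalization below (spaces to underscores, then percent-encoding) is an assumption about what the wiki expects:

```python
# Map /wherever/api.php to /wherever/index.php?title=Foo_bar, per the
# reply above. Pretty-URL wikis will then redirect to the pretty form.
from urllib.parse import quote

def page_url(api_url, title):
    base = api_url.rsplit("/", 1)[0]
    return base + "/index.php?title=" + quote(title.replace(" ", "_"))

url = page_url("http://en.wikipedia.org/w/api.php", "Foo bar")
# url == "http://en.wikipedia.org/w/index.php?title=Foo_bar"
```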

Generator + categoryinfo bug
I think I have found a bug in the API. (I don't have a bugzilla account and don't know how to use bugzilla so I'll report here instead.)

Bug description:

When using a generator to get a list of categories and then using "prop=categoryinfo", the categoryinfo is not shown for all the categories in the list. I have tested this on several Wikimedia projects and I see the same bug on all of them.

Background:

We were discussing how to use the API to find redirected categories that still contain pages, so we can fix those pages. See en:Template talk:Category redirect. Using the API would be way more efficient than our current approaches.

Examples:

Here are two different queries that show the bug. The first query lists the categoryinfo for the hard redirects. The other query lists the categoryinfo for our "soft" redirects (that is categories with our "this category is redirected" template on them).

http://en.wikipedia.org/w/api.php?action=query&generator=allpages&gapnamespace=14&gapfilterredir=redirects&gaplimit=500&prop=categoryinfo

http://en.wikipedia.org/w/api.php?action=query&generator=categorymembers&gcmtitle=Category:Wikipedia_category_redirects&gcmnamespace=14&gcmlimit=500&prop=categoryinfo

I hope you guys can fix this, since this would be a very good way for us to find the pages that need their categorisation fixed. (Easier for the bots and humans, and costs less server load.) And it would work on all projects.

--Davidgothberg 12:35, 26 August 2008 (UTC)


 * This is really a symptom of a very different bug: categories that have a description page but never had any members aren't considered categories by the software. Filed at BugZilla. --Catrope 12:57, 26 August 2008 (UTC)


 * Oh! Your answer solves the problem in our case! If those categories have never had any members, then we don't need their category info. Well, I think it solves it for all such usage: as long as you know that "no category info" = empty category, it should be okay. That should perhaps be documented in the API documentation.
 * Thanks for your answer. I know some bot owners on some projects that will be very happy to be able to use this query now. I'll report it to them.
 * --Davidgothberg 13:11, 26 August 2008 (UTC)
 * Knowing that no categoryinfo means no members is nice, but of course this is in fact a bug and should be fixed. The bug causing it is outside of the API, though, which is why I filed it at Bugzilla. --Catrope 14:37, 26 August 2008 (UTC)

Difference with query.php
There was a useful feature in query.php: when getting category members, the allowed props included timestamp (revision table) and touched (page table). With api.php we can get only the timestamp (action=query, list=categorymembers, cmprop=ids, title, sortkey, timestamp). touched is useful because you can implement a category-members cache on the client side: when categorymembers changes by adding/removing a category in an article, touched is updated and the client knows it must update its cache. Phe 07:58, 27 August 2008 (UTC)
 * You can get touched by using generator=categorymembers&prop=info --Catrope 11:15, 27 August 2008 (UTC)
 * Nice tip, thanks, and it was given as an example in the documentation ... Phe 15:56, 27 August 2008 (UTC)

is a xml-Schema or DTD for the api-Responses available?
Hi there. I'm working on a Java client to access the MediaWiki API. I would like to use XML as the response format and JAXB to evaluate the API responses. So it would be great if there were already a Schema or DTD to generate the corresponding Java classes. Thanks, --Gnu1742 10:50, 29 August 2008 (UTC)
 * We're working on such a feature, see this bug. --Catrope 21:00, 31 August 2008 (UTC)

Problems with action=edit
Hi, I have some problems with api.php?action=edit. It always tries to edit w/api.php instead of the page with the specified title. At first I thought that it was a mod_rewrite problem, but the query parameters seem to be correct.
 * Seems to be this bug http://www.organicdesign.co.nz/MediaWiki_1.11_title_extraction_bug
 * If disabling your rewrite rules fixes this, it's not a bug in MediaWiki. --Catrope 20:59, 19 October 2008 (UTC)
 * Only a problem if you're using short URLs. I have found a quick n' easy workaround: in your LocalSettings.php, wrap your $wgArticlePath="/$1" line with the condition: if (preg_match("/api\.php$/", $_SERVER['PHP_SELF'])) { ... } --FokeyJoe (2009-11-25)

Location Search?
Hi. This is my first time participating in a wikipedia discussion, so I apologize if I'm not following proper etiquette posting here like this. Over the summer, I had used some kind of location search to request a list of articles that were in a particular location. You could filter the results by the category the point of interest fell under (Natural formation, government institution, business, etc.). I can't seem to find any mention of this old functionality, which was quite handy. All I find are 2 jailbroken iphone apps, that seem to do this sort of thing.

The first is this app, which is using a cached copy of the data set from the wikipedia-world project. It's close, but I'm sure there used to be (or maybe should be) a way of making these kinds of queries through the API.

The other is Geopedia, an iphone app which seems to do the kind of searching I thought was possible. However, there's no documentation, no homepage and no way for me to contact the developer to ask how they did this.

Am I going crazy? I could have sworn that there was a way of searching wikipedia articles for those that are within some distance of a specific coordinate pair. Can someone point me in the right direction?
 * As far as I know these coordinates aren't stored in the database separately, so any implementation that isn't hugely inefficient would have to make such a database table, either automatically using an extension, or using a database dump. --Catrope 18:25, 24 November 2008 (UTC)

Account simulation
With the ability to simulate an account when performing an action, mediawiki could be completely integrated into virtually any site. This would require a new sort of flag in localsettings to allow trusted systems to perform that way. For example, I'm interested in only the history and parsing/editing bits, and have my own account management, access control system, and page generation system. 75.75.182.36 01:57, 28 February 2009 (UTC)
 * You could just use action=login to log in with the account you want to 'simulate'. You can allow authentication from other sources with AuthPlugins. --Catrope 10:10, 28 February 2009 (UTC)

Persistent Connections?
Is there any possibility wikimedia's HTTP can be upgraded to 1.1, in particular, to allow persistent HTTP connections? Or is there some other way I'm missing of connecting to a persistent server so I don't have to re-establish connection for every single usage of the API? Language Lover 17:09, 20 April 2009 (UTC)
 * It appears to me (and good old openssl and wireshark) that the secure.wikimedia.org server does indeed allow persistent connections and does support HTTP/1.1 keep-alive as standard. -- JSharp 01:31, 21 April 2009 (UTC)
 * Oops, I think you might need this: . It's the ssl-enabled link to the en.wp API. :) -- JSharp 01:36, 21 April 2009 (UTC)
 * Is there any news? I am facing the same problem. My application sends many but very short requests (with very short answers), so connection opening and closing takes most of the time and traffic for each request. Persistent connections would speed things up. (Even secure.wikimedia.org seems to disallow persistent connections.)

How to integrate
Hi

Apologies for being totally ignorant of HTML and MediaWiki, but I wonder if there's a way using MediaWiki markup or using HTML to get the output of a query (such as this one) into a Wikipedia page, formatted in some reasonable way. As I don't regularly check this site, I would appreciate a talkback at my en talk page (in sig below).

Thanks Bongo matic 04:24, 1 October 2009 (UTC)

Image license information
Is there a way to get the license of an image through the api?


 * File:Ixodholmal1.jpg: by category is probably easiest, assuming the site categorizes by license. There is no built-in module for license information, though. Splarka 08:45, 22 January 2010 (UTC)

Possible to detect a page's existence?
Hi there. I'm currently working on a bit of PHP software. I'm wondering if it's possible to detect a page's existence using the API? If not, is there any external way of doing so? Thanks. Smashman2004 21:42, 7 July 2010 (UTC)
 * Fail reply is fail. Smashman2004 16:41, 15 July 2010 (UTC)

Throttling
Does the API implement any type of throttling for non-bot users? There is discussion at the Spam attacks article in the Signpost this week about the need for such throttling. Apparently the attacker mentioned in the article was able to post at an average rate of 1 article per second, which is a little bit scary. Best practice among bot operators is generally no faster than 1 post every 5 seconds. Hard-coding such a limit into the API (at least for accounts not approved as bots) might not be a bad idea. Kaldari 17:15, 17 August 2010 (UTC)
 * Manual:$wgRateLimits applies to the API as well as normal edits, as far as I know. Bawolff 23:23, 17 October 2010 (UTC)

API/userid?
Is there a way to get someone's userid from the api?

raw code
The documentation says that for the source code of pages "index.php?action=raw" should be used. Is this mandatory, or is there a way to do it over the API? --:Slomox:: >< 00:13, 22 November 2010 (UTC)
 * yes you can. See API:Query_-_Properties. Bawolff 03:58, 22 November 2010 (UTC)
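Concretely, the API equivalent of index.php?action=raw is prop=revisions with rvprop=content, which returns the wikitext of the latest revision. A sketch of building that request URL:

```python
# Build the action=query request that fetches raw page wikitext.
from urllib.parse import urlencode

def raw_content_url(api_url, title):
    return api_url + "?" + urlencode({
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "titles": title,
        "format": "xml",
    })

url = raw_content_url("http://en.wikipedia.org/w/api.php", "Main Page")
```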

Support for multiple categories when using

 * jlaska 66.187.233.202 - I see there are extensions that allow querying for pages that are members of multiple categories (such as Extension:CategoryIntersection and Extension:Multi-Category_Search). Those are nice as they provide a Special: page; however, they don't appear to add API support. From what I've tested, it appears that   does not support joining multiple   values. Are there plans to add this support, or alternative API methods for finding pages that exist in multiple categories?

Extract headings and subheadings
Hi. Can the API extract headings and subheadings easily? Cheers, 131.111.1.66 11:20, 13 March 2011 (UTC).
 * Sure --Catrope 11:40, 13 March 2011 (UTC)
 * Thank you Catrope; very useful! 86.9.199.117 22:04, 13 March 2011 (UTC)

Extract all links of a given page
Hi. Can I extract all links of a given page easily using the API? Thanks. 86.9.199.117 22:02, 13 March 2011 (UTC)
 * You can get all links on a page or get all links to a page. --Catrope 22:45, 13 March 2011 (UTC)
 * Thanks. Randomblue 11:11, 14 March 2011 (UTC)
 * Is it possible to sort them by the order they are in the page? Helder 14:59, 14 March 2011 (UTC)
 * No. --Catrope 17:49, 14 March 2011 (UTC)

Stripping off templates, refs, interwiki links, etc.
Hi. Can the API keep only the text (with links) and headings from an article, stripping off all the rest? Thank you. Randomblue 11:11, 14 March 2011 (UTC)
 * No, you'd have to do that yourself somehow. --Catrope 11:24, 14 March 2011 (UTC)

Extracting template information
I would like, for example, to extract information from infoboxes. Suppose that I'm interested in country infoboxes. I would like to be able to extract, for each page that transcludes the country infobox, basic information such as "capital", "population", "president", etc. Has this already been done? What would be the best way to do this? Cheers, 173.192.170.114 17:39, 22 March 2011 (UTC).
 * This is not available from the API directly. There are projects like DBpedia that have done work in this direction. --Catrope 14:51, 23 March 2011 (UTC)

A way to retrieve article class?
Even though I've searched the MediaWiki API thoroughly, I haven't found a way to retrieve article class information, such as A-class, good, featured, etc. Exported article text does not contain this information, nor is there a property corresponding to it. Moreover, dumping the page content with Perl's WWW::Mechanize doesn't help, because the relevant text is generated on the fly and is not captured by Mechanize.

I'd appreciate any pointers… Patrinos 08:52, 9 April 2011 (UTC)
 * Concepts like featured articles were invented by Wikipedians, but they don't exist as such in the MediaWiki software. You may be able to detect featured-ness using categories or templates used on the page or something like that. --Catrope 10:15, 12 April 2011 (UTC)

apfrom bug?
Hi.
 1. I load from '%': http://en.wikipedia.org/w/api.php?action=query&list=allpages&apfrom=%&aplimit=11&format=xml
 2. apfrom is '%d', so I get http://en.wikipedia.org/w/api.php?action=query&list=allpages&apfrom=%d&aplimit=11&format=xml
 3. Now apfrom is %25, so I get http://en.wikipedia.org/w/api.php?action=query&list=allpages&apfrom=%25&aplimit=11&format=xml
 4. Voilà, I'm on '%' again. Why? Emijrp 09:40, 9 April 2011 (UTC)
 * Because you didn't percent-encode your parameters when constructing your URL. --R&#39;n&#39;B 11:32, 10 April 2011 (UTC)
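In other words, a literal "%" has to be sent as "%25"; each response's apfrom value was being pasted into the next URL without re-encoding. A quick illustration:

```python
# Percent-encoding round trip: encode parameter values before putting
# them in a URL, because the server decodes them on arrival.
from urllib.parse import quote, unquote

assert quote("%") == "%25"    # what apfrom=% must look like on the wire
assert unquote("%25") == "%"  # what the server sees after decoding
```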

API queries from PHP
Are all API queries callable using PHP, e.g. for MediaWiki extensions? Randomblue 15:49, 10 April 2011 (UTC)
 * Yes, see API:Calling internally. However, for most things you'll probably want to use core functionality or a database query rather than going through the API. --Catrope 10:17, 12 April 2011 (UTC)

retrieve data from a category with more than 500 articles
I want to make a list of the Italian_verbs from Wiktionary. I use the query: to get the first 500 items. How do I get the rest of the data? (I think it has to do with cmcontinue, but I don't understand how to use it...) thanks Jobnikon 17:31, 8 May 2011 (UTC)
 * http://en.wiktionary.org/w/api.php?action=query&list=categorymembers&cmtitle=Category:Italian_verbs&cmlimit=500
 * At the bottom, you'll see . So to get the next 500 results, repeat the same API call with   (the   part is what you get when you XML-decode then URL-encode  ). --Catrope 16:36, 9 May 2011 (UTC)
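The continue loop Catrope describes can be sketched like this; the fetch function below is a stand-in for the real API call, so only the control flow (repeat until no continuation value comes back) is shown:

```python
# Continuation loop for list=categorymembers: repeat the query, passing
# back the cmcontinue value, until the server stops sending one.
def fetch_all_members(fetch):
    """fetch(cmcontinue) -> (members, next_cmcontinue_or_None)."""
    members, cont = [], None
    while True:
        batch, cont = fetch(cont)
        members.extend(batch)
        if cont is None:
            return members

# A fake two-batch server standing in for the API (titles are made up):
pages = [["Abbaiare", "Abitare"], ["Zappare"]]
def fake_fetch(cont):
    i = 0 if cont is None else cont
    nxt = i + 1 if i + 1 < len(pages) else None
    return pages[i], nxt

all_members = fetch_all_members(fake_fetch)
# all_members == ["Abbaiare", "Abitare", "Zappare"]
```

With the real API, fetch would issue the HTTP request and pull the continuation value out of the query-continue element of each response.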

List of images with links
Please, is there a way in the API to get a list of images in a wiki together with the pages that link to these images? --Mohamed Ouda 14:15, 2 July 2011 (UTC)
 * Not in one request, no. You can get the list of all images with list=allimages and you can get the list of pages linking to a specific image with list=imageusage. The latter only takes one image at a time though. --Catrope 10:21, 3 July 2011 (UTC)

Get translations from wiktionary?
I wonder how to use Wiktionary to translate words, for example to get a data dump with all the English words and their translations in Spanish? I have seen it done on this site http://www.dicts.info/doc/dl/download.php so it's possible. But how do they do it?
 * Probably by downloading a dump of all Wiktionary content and parsing the translations out of it. MediaWiki itself doesn't treat the translations specially, they're just words appearing in a box with fancy styling, so the API doesn't provide any way to retrieve these short of grabbing the entire page content and parsing out the translations yourself. --Catrope 09:01, 28 October 2011 (UTC)

Problems with MW1.18?
We're in soft launch for a new wiki and playing around with MW1.18 (yes, we know it's beta). In testing the HotCat gadget, we've noticed a problem with our API. It wasn't an issue with MW1.17, and while we are playing around with things during the soft launch, I can't think of any settings that would have an impact on that. The API was working correctly before the upgrade, and nothing was changed between the upgrade and the test of the API. It doesn't seem to output all of the data. Examples:
 * |timestamp&rvlimit=1 Query result at WikiQueer: does not include a "revision".
 * |timestamp&rvlimit=1 Query result at the Commons: includes the revision.

Anyone have any ideas? --Varnent 01:41, 11 November 2011 (UTC) (updated URLs)
 * This was fixed in trunk in but it was overlooked and didn't make it into the 1.18 beta. I've tagged it for 1.18 so it'll go in the final release. You can try applying the diff of that revision locally (it's an easy one), that should fix it. --Catrope 13:13, 11 November 2011 (UTC)
 * Excellent - thank you!! --Varnent 19:27, 11 November 2011 (UTC)

enable API
If this is the article on how API works, shouldn't there be a short blurb that you add $wgEnableAPI = true to enable it? Or is that not correct? These instructions are always so subpar. Igottheconch 06:57, 12 December 2011 (UTC)
 * The API is enabled by default in non-ancient versions of MediaWiki. Also, if you feel the instructions are "so subpar", by all means go and improve them. It's a wiki, anyone can edit. --Catrope 14:53, 14 December 2011 (UTC)

Doubled Content-Length in HTTP Header
I posted this on the help desk, but it probably is more appropriate here:


 * MediaWiki version: 1.16.0
 * PHP version: 5.2.17 (cgi)
 * MySQL version: 5.0.91-log
 * URL: www.isogg.org/w/api.php

I am trying to track down a bug in the API which is causing a double Content-Length in the header. This is causing a lot of issues with a Python bot. Here is the report from web-sniffer showing the content of the api.php call from this wiki. All other pages when called, i.e. the Main Page, etc., only report one Content-Length. Is the API forcing the headers? Why is only this one doubled?


 * Status: HTTP/1.1 200 OK
 * Date: Mon, 30 Jan 2012 14:31:25 GMT
 * Content-Type: text/html; charset=utf-8
 * Connection: close
 * Server: Nginx / Varnish
 * X-Powered-By: PHP/5.2.17
 * MediaWiki-API-Error: help
 * Cache-Control: private
 * Content-Encoding: gzip
 * Vary: Accept-Encoding
 * Content-Length: 16656
 * Content-Length: 16656

As you can see, this is an Nginx server. On an Apache server with 1.16.0, only one Content-Length is sent. Could that be the issue, and how do I solve it? Thanks.

-Hutchy68 15:10, 30 January 2012 (UTC)

Wanted: showcase of cool uses
I'd like to get a showcase of uses of the MediaWiki API -- can anyone link to apps or tools that use it well or interestingly? Sumana Harihareswara, Wikimedia Foundation Volunteer Development Coordinator 22:55, 1 February 2012 (UTC)