API talk:Main page

Renaming this page, front doors to the APIs
This page is no longer a Main_page for APIs (plural), it's an introduction to the MediaWiki action API, our biggest and oldest API. The next step is to rename the page to more accurately reflect its role. Already the API navigation template links to it as the "MediaWiki APIs" heading and as "Introduction and quick start" (I just now renamed the latter from "Quick start guide"). My choice is:
 * API:Introduction to the action API

which accurately if incompletely describes its content, so unless someone has a better suggestion I'll rename the page to that. API:Main_page will of course redirect to the new name.

Alternatives:
 * trimming to "API:Action API" makes the page mysterious
 * expanding "API:Introduction to the MediaWiki action API" feels unnecessarily long
 * "API:Action API introduction and quick start" is even more informative and would match its item in the API navigation template, but is too wordy. :-)

Note that for certain developers, API:Web APIs hub is a better introduction to "the APIs that let you access free open knowledge on Wikimedia wikis." Eventually I think "Web APIs hub" should be the destination of the heading MediaWiki APIs in the API navigation template.

This is part of T105133, "Organize current and new content in the API: namespace at mediawiki.org".

-- SPage (WMF) (talk) 20:50, 31 August 2015 (UTC)


 * The current aliases, as seen in the redirects. Cpiral (talk) 18:55, 19 January 2016 (UTC)

Server returned HTTP response code: 500 for URL: https://www.wikidata.org/w/api.php
When I try to log in to Wikidata I keep getting a 500 error. I don't have any problems logging in to Wikipedias. --jobu0101 (talk) 20:43, 20 January 2016 (UTC)

wgWikibaseItemId from history page
Hi. I would like to retrieve the Wikidata ID of a Wikipedia article from its revision history page. The command  only works from the main page (it returns null from the history page). Any advice or solution? Thanks in advance. --H4stings (talk) 09:00, 4 June 2016 (UTC)
 * This is not an API question. You might be able to reach the wikidata people by filing a task or posting to their mailing list. -- Krenair (talk &bull; contribs) 18:08, 4 June 2016 (UTC)
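 * If an action API route is acceptable after all, the item id is also exposed as the wikibase_item page prop, independent of which page view you are on. A minimal Python sketch, with helper names of my own invention:

```python
from urllib.parse import urlencode

def pageprops_url(title, endpoint="https://en.wikipedia.org/w/api.php"):
    """Build an action API URL asking for the wikibase_item page prop."""
    params = {
        "action": "query",
        "prop": "pageprops",
        "ppprop": "wikibase_item",
        "titles": title,
        "format": "json",
    }
    return endpoint + "?" + urlencode(params)

def extract_item_id(response):
    """Pull the Q-id out of a decoded JSON response, or None if absent."""
    for page in response["query"]["pages"].values():
        return page.get("pageprops", {}).get("wikibase_item")
```

This asks the server rather than the page's JavaScript config, so it works the same whether the browser is on the article, its history, or anywhere else.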

List of all pages edited by a user
Hi, any suggestions for the following request? How can I get all page titles or page IDs (including or excluding older page versions) that a user X has edited so far, optionally limited to discussion pages?

I looked at and tried the API sandbox, but query-list-allpages does not have the relevant attributes.

Thank you for your help and all the best, --Liuniao (talk) 07:08, 6 July 2016 (UTC)


 * You'll probably want to look at API:Usercontribs. To limit to only discussion pages, you would need to first know the namespace IDs for all the talk namespaces, which you can get from API:Siteinfo (e.g., ). For argument's sake, let's say you have no custom namespaces on your wiki and you want to see my talk page contributions. Your ultimate query would look like this:


 * ?action=query&list=usercontribs&ucuser=RobinHood70&ucnamespace=1|3|5|7|9|11|13|15


 * If you wanted only the most recent edit on each page, then you'd add &ucshow=top to that (or use &uctoponly= if your wiki is 1.22 or older). – Robin Hood  (talk)  18:52, 6 July 2016 (UTC)
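 * The query above can be wrapped in a small helper; a Python sketch (the function names are mine), building the parameters rather than the full URL:

```python
# Default talk-namespace ids on a stock MediaWiki install; a wiki with
# custom namespaces would extend this (see API:Siteinfo).
TALK_NAMESPACES = "1|3|5|7|9|11|13|15"

def usercontribs_params(user, namespaces=TALK_NAMESPACES, top_only=False):
    """Build the parameters for a list=usercontribs query."""
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": user,
        "ucnamespace": namespaces,
        "uclimit": "max",
        "format": "json",
    }
    if top_only:
        params["ucshow"] = "top"  # on MW 1.22 and older, use uctoponly instead
    return params

def contrib_titles(response):
    """Collect (pageid, title) pairs from one decoded response batch."""
    return [(c["pageid"], c["title"])
            for c in response["query"]["usercontribs"]]
```

For example, usercontribs_params("RobinHood70", top_only=True) reproduces the query from the reply above with only the latest edit per page.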

What is maximum URL length I can use with Wikipedia API ?
Sometimes I get an HTTP 414 error (Request-URI Too Long) when I pass very long requests (with more than 50 page titles). I limit it roughly, but I'd like to know the exact limit. Does anyone know it? Нирваньчик (talk) 00:20, 17 July 2016 (UTC)


 * There's no byte limit that I'm aware of in the MediaWiki software; the limit usually comes from the servers themselves. It's often set to 8k (8192 bytes), but different servers could be using different sizes. I would hope that all the MW servers are the same, whatever that might be, but there are no guarantees. The only way to be sure would be to track the sizes that succeed or fail on whichever server(s) you're using. Unless, of course, you can get in touch with a server admin and ask them to find out directly.


 * You should also be aware that there is a limit built into MediaWiki on the number of titles you can put in a single query. That limit is typically 50 titles unless you have higher permissions (typically bot or admin permissions), in which case it rises to 500. There are also a couple of special cases where the number is lower, but those are noted on the individual API pages. Whether it's the normal limit or a smaller one, though, you shouldn't get a 414 error from MediaWiki; it should just emit a warning in the output. – Robin Hood  (talk)  01:36, 17 July 2016 (UTC)
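 * Putting the two limits together: batch the titles in groups of 50 and send each batch as a POST body so the URL stays short. A Python sketch (the 8k figure and the 50/500 caps are taken from the replies above):

```python
def batch_titles(titles, limit=50):
    """Yield pipe-joined batches of at most `limit` titles.

    50 is the normal per-query cap; accounts with higher permissions
    (typically bots and admins) may pass limit=500 instead.
    """
    for i in range(0, len(titles), limit):
        yield "|".join(titles[i:i + limit])

# Sending each batch as form data in a POST body keeps the request line
# short, so a server-side ~8k URL limit never comes into play:
#   POST /w/api.php
#   action=query&prop=info&format=json&titles=<one batch>
```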

Question
What does API stand for?--Mr. Guye (talk) 23:27, 6 April 2017 (UTC)


 * Application Programming Interface. It's a generic computer term for how other programs can interact with yours. In this case, the API is how you can control a wiki to do things like editing pages, retrieving information, etc. – Robin Hood  (talk)  04:27, 7 April 2017 (UTC)
 * See API. --Tacsipacsi (talk) 18:40, 7 April 2017 (UTC)

all pages that have new content or are deleted
I already asked this (but not very clearly). In order to create a fresh "pages-meta-current" dump to work with, one currently has to make many API requests just to get a list of recently changed pages. Is it possible to get all titles that have recent changes (after TS XXXX-XX-XXTXX:XX:XXZ), i.e. a list of all pages that were, for any reason, edited, created, moved from another title, uploaded, imported from another project, merged with another title, or even deleted, in one pass? The idea is to get all titles for which we will ask for the newer revision, as well as all titles to delete from the current dump. The returned XML for each title should include at least:
 * type="newedit" or "deleted"
   * "deleted" will include:
     * pages deleted using the delete command
     * pages deleted because they were moved and the user chose not to leave a redirect
   * "newedit" will include:
     * new pages (which will include the page an uploaded file creates)
     * edited pages
     * imported pages
     * pages that were moved where the user chose to leave a redirect (the original title, not the new one)
     * pages that were merged and now have new content
 * ns=
 * title=
 * timestamp=

This would be a good API for getting a fresh "pages-meta-current" dump to work with. It would be very useful to most bots, as they could have a more recent dump, in fewer steps, to work with. --Xoristzatziki (talk) 10:23, 16 June 2017 (UTC)


 * Have you tried Grabbers? If you have a working MediaWiki database with an imported dump, grabNewText.php will update it to the current state. It uses API:RecentChanges and API:Logevents and handles moves, deletions, restores and uploads (updating the page and revision tables; it doesn't actually upload files). Imports may not be handled, though, but I can add that for you. --Ciencia Al Poder (talk) 08:58, 17 June 2017 (UTC)
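 * Until such a single-pass API exists, the classification Xoristzatziki asks for can be approximated by merging list=recentchanges with list=logevents client-side. A rough Python sketch of just the merge step (the fetching and continuation are left out, and the exact layout of move log entry params is an assumption here):

```python
def classify_changes(recentchanges, logevents):
    """Fold API results into {title: "newedit" | "deleted"}.

    `recentchanges` is the result list of list=recentchanges (edits,
    creations, uploads); `logevents` is the result list of
    list=logevents (letype=delete|move). Pass both oldest-first so
    later events win.
    """
    changed = {}
    for rc in recentchanges:
        changed[rc["title"]] = "newedit"
    for le in logevents:
        if le["type"] == "delete":
            changed[le["title"]] = "deleted"
        elif le["type"] == "move":
            params = le.get("params", {})
            if "suppressredirect" in params:
                changed[le["title"]] = "deleted"   # old title is gone
            else:
                changed[le["title"]] = "newedit"   # old title is now a redirect
            changed[params["target_title"]] = "newedit"
    return changed
```

History merges and imports would need additional log types; this only covers the cases spelled out in the request above.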


 * Beyond the fact that I don't use an exact copy of the database, let alone a MediaWiki database, the point is (if, of course, this is easily done) that everyone should be able to get the changed titles (meaning edited in any way, created by any means, or deleted for any reason) from one single API call, rather than needing yet another script (plus another bunch of parameters, plus RDBMS maintenance). Apart from that, thanks for the information about the existence of Grabbers. I'll take a look at it in time. --Xoristzatziki (talk) 15:50, 18 June 2017 (UTC)

Getting history of titles for a page
I have old links (for example Car from 2014-03-03) and want to get the corresponding wikitext (as of 2014-03-03). It's simple to look up the revision history for the page currently titled Car, but that is not what I am after: I want the revision history for the page which was titled Car on 2014-03-03. (The content of Car could have been moved to Vehicle on 2015-03-10 and a new Car created since, so the current history is useless.)

As far as I can tell, there is no way to do this short of "replaying" all log actions involving page names and hoping the replay doesn't diverge from reality. The database does not seem to retain the old page titles.

Is there any (other) way? Loralepr (talk) 19:55, 25 August 2017 (UTC)
 * You can read logs to see renamings. --wargo (talk) 09:41, 26 August 2017 (UTC)
 * Yes, but it's not done with renamings alone; it would also require reapplying deletions, restores, page history merges, and possibly a few more things. On top of that, the log table only goes back to 2004, and there is no dump from that point in time to apply these log events to, so I would have to apply the log events backwards from the current state. To add to the fun, these old log events are missing the page id they applied to; that was only added later. Loralepr (talk) 10:07, 26 August 2017 (UTC)
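 * The move-replay part of that approach, ignoring deletions, restores and merges as noted, might look like this in Python (a sketch with invented names, using ISO timestamps so plain string comparison works):

```python
def resolve_title(title, as_of, moves):
    """Follow a page forward through the move log.

    `moves` is a list of (timestamp, old_title, new_title) tuples
    sorted oldest-first, with ISO 8601 timestamps so string comparison
    suffices. Every move of our page after `as_of` carries us to its
    new name. Deletions, restores and history merges are not modeled,
    which is exactly where this replay can diverge from reality.
    """
    current = title
    for ts, old, new in moves:
        if ts > as_of and old == current:
            current = new
    return current
```

With the Car example above: resolve_title("Car", "2014-03-03", moves) lands on Vehicle if the move log records the 2015-03-10 move, while a link captured after the move resolves to the new Car.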

It seems you've already answered your own question. MediaWiki is a very complex tool, and there is certainly no way to account for all its edge cases, such as database corruption or someone fiddling directly with the database and thereby completely breaking the history of pages. There are even more problems (https://phabricator.wikimedia.org/T39591, https://phabricator.wikimedia.org/T41007). For a naive scenario, looping through logevents will get the data in most cases.

There is certainly no API that can account for all such cases and actions related to the titles and moves of a page. That requires a separate analysis tool that goes through the database or dumps and builds a new database, which can then be queried for that data. The tool would also need to be aware of how the MediaWiki database schema has changed over time.

Even then, there will still be edge cases where the tool will not show the correct data. There are also hidden revisions that won't show up in the database, so it is truly impossible to get some revisions at a specific date. 10:57, 26 August 2017 (UTC)


 * Thanks, that is basically what I figured out. My main hope was that I had overlooked some API or some dump containing this data; no luck there. I also had the idea of taking all the old article dumps as (consistent?) snapshots, but they get very sparse very quickly (Archive.org: 2015: 9 dumps, 2014: 2 dumps, 2013: 0, 2012: 0, 2011: 1 dump). Loralepr (talk) 11:38, 26 August 2017 (UTC)

Some redirects are not being produced
Rdcheck for Libidibia paraguariensis shows "Caesalpinia paraguariensis" as the first #R, but it's the only #R absent from the output of the API call:



The missing #R isn't malformed, and it's 8 months old. Other pages with multiple #Rs don't have this problem: e.g. rdcheck & the corresponding API call for Fabales. ~ Tom.Reding (talk ⋅dgaf) 18:59, 15 March 2018 (UTC)
 * See API:Query. You might also use the  parameter to get more at once. Anomie (talk) 13:55, 16 March 2018 (UTC)
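 * If the redirect is simply being cut off by the default result limit, following the continuation tokens (and, presumably, asking for rdlimit=max) should surface it. A Python sketch of the continuation loop, with the HTTP fetch left as a caller-supplied function:

```python
def all_redirects(fetch, title):
    """Drain prop=redirects for `title`, following `continue` tokens.

    `fetch(params)` must perform the HTTP GET and return the decoded
    JSON; rdlimit=max asks for the largest allowed batch per request.
    """
    params = {"action": "query", "prop": "redirects", "rdlimit": "max",
              "titles": title, "format": "json"}
    redirects = []
    while True:
        data = fetch(params)
        for page in data["query"]["pages"].values():
            redirects += [r["title"] for r in page.get("redirects", [])]
        if "continue" not in data:         # no more batches
            return redirects
        params.update(data["continue"])    # e.g. adds rdcontinue=...
```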

Help with javascript
How do I get the first page's title out of query.pages, no matter what page it is? For reference, I'm writing this on Code.org, calling the code

, which, when run with the page Pi, gives "23601 undefined". It seems like I can only get the page id on its own, but can't get the title, and an array doesn't work... Please help. Bardic Wizard (talk) 22:27, 22 March 2018 (UTC)


 * pages is an object where the keys are the page ids and the values are the pages themselves. You need to access the page by the page id that you got before:


 * --Ciencia Al Poder (talk) 10:31, 23 March 2018 (UTC)
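 * The same access pattern, sketched in Python for clarity: pages maps page ids to page objects, so you take the entry by its key (or just the first one) and then read its title:

```python
def first_page(response):
    """Return (pageid, title) of the first page in a query response.

    Works no matter which page was queried, because `pages` is an
    object keyed by page id, not an array; "23601 undefined" happens
    when the id itself is treated as if it were the page.
    """
    pages = response["query"]["pages"]
    pageid, page = next(iter(pages.items()))
    return pageid, page["title"]
```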

Token placement documentation/error
I'm trying to use the API to apply an instance of (P31) value of to. Following the wbsetclaimvalue documentation, I'm using, with a csrftoken produced here. This error is the result:. Since the csrf tokens apparently only last 10 seconds, I don't feel like trying to troubleshoot this by hand. What's the problem here? ~ Tom.Reding (talk ⋅dgaf)


 * Are you using POST to submit the request? Because that's basically what the message is saying. --Ciencia Al Poder (talk) 09:22, 19 April 2018 (UTC)


 * I don't know what POST is (other than at computer startup...); I'm just using my browser.  ~ Tom.Reding (talk ⋅dgaf)  12:44, 19 April 2018 (UTC)


 * Well, that's why it requires the parameters to be in a POST body: to prevent it from being triggered by someone simply following a link in a browser. See POST_(HTTP) for information about what POST is. You can use ResourceLoader to do a POST request from JavaScript; documentation is in ResourceLoader/Core_modules. --Ciencia Al Poder (talk) 13:10, 19 April 2018 (UTC)
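 * For reference, the two-step flow looks like this; a Python sketch of just the request construction (parameter names per the wbsetclaimvalue documentation, helper names mine), with the second request sent as a POST body rather than in the URL:

```python
from urllib.parse import urlencode

def csrf_token_params():
    """Step 1: GET a fresh CSRF token (use it promptly)."""
    return {"action": "query", "meta": "tokens",
            "type": "csrf", "format": "json"}

def set_claim_body(claim_id, value_json, token):
    """Step 2: form-encode the wbsetclaimvalue parameters for the POST
    body; putting them in the URL is what triggers the error above."""
    return urlencode({
        "action": "wbsetclaimvalue",
        "claim": claim_id,
        "snaktype": "value",
        "value": value_json,
        "token": token,
        "format": "json",
    })
```

Form-encoding also takes care of the "+" and "\\" characters that CSRF tokens end with, which break if pasted into a URL by hand.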


 * , thanks, I figured queries & actions would use the same platform, given the short-lived token. Do you know if there is a way for AWB to do this, i.e. via a module? I have no experience with JavaScript.  ~ Tom.Reding (talk ⋅dgaf)  13:44, 19 April 2018 (UTC)
 * Scratch that, looks like I need to use Pywikibot.  ~ Tom.Reding (talk ⋅dgaf)  14:08, 19 April 2018 (UTC)

U
Y Nliz77 (talk) 18:12, 5 May 2018 (UTC)