API talk:Main page


Renaming this page, front doors to the APIs

This page is no longer a Main_page for APIs (plural); it's an introduction to the MediaWiki action API, our biggest and oldest API. The next step is to rename the page to more accurately reflect its role. Already the {{API}} navigation template links to it as the "MediaWiki APIs" heading and as "Introduction and quick start" (I just now renamed the latter from "Quick start guide"). My choice is:

API:Introduction to the action API

which accurately if incompletely describes its content, so unless someone has a better suggestion I'll rename the page to that. API:Main_page will of course redirect to the new name.

Alternatives:

  • trimming to "API:Action API" makes the page mysterious
  • expanding "API:Introduction to the MediaWiki action API" feels unnecessarily long
  • "API:Action API introduction and quick start" is even more informative and would match its item in the {{API}} navigation template, but is too wordy. :-)

Note that for certain developers, API:Web APIs hub is a better introduction to "the APIs that let you access free open knowledge on Wikimedia wikis." Eventually I think "Web APIs hub" should be the destination of the heading MediaWiki APIs in the {{API}} navigation template.

This is part of phab:T105133, "Organize current and new content in the API: namespace at mediawiki.org".

-- SPage (WMF) (talk) 20:50, 31 August 2015 (UTC)

The current aliases can be seen in the redirects. Cpiral (talk) 18:55, 19 January 2016 (UTC)

Server returned HTTP response code: 500 for URL: https://www.wikidata.org/w/api.php

When I try to login to Wikidata I keep getting a 500 error. I don't have any problems with logging in to Wikipedias. --jobu0101 (talk) 20:43, 20 January 2016 (UTC)

wgWikibaseItemId from history page

Hi. I would like to retrieve the Wikidata ID of a Wikipedia article from its revision history page. The command mw.config.get( 'wgWikibaseItemId' ) only works from the main article page (it returns null on the history page). Any advice or solution? Thanks in advance. --H4stings (talk) 09:00, 4 June 2016 (UTC)

This is not an API question. You might be able to reach the Wikidata people by filing a task or posting to their mailing list. --Krenair (talkcontribs) 18:08, 4 June 2016 (UTC)
Hello @H4stings: ! Please also report your experience here: phab:T185437 --Valerio Bozzolan (talk) 21:05, 2 August 2018 (UTC)

List of all pages edited by a user

Hi, any suggestions for the following request? How do I get:
- all page titles or page IDs (including or excluding older page versions)
- that a user X has edited so far,
- optionally limited to discussion pages?

I looked at and tried the API sandbox, but the query list=allpages does not have the relevant attributes.

Thank you for your help and all the best, --Liuniao (talk) 07:08, 6 July 2016 (UTC)

You'll probably want to look at API:Usercontribs. To limit to only discussion pages, you would need to first know the namespace IDs for all the talk namespaces, which you can get from API:Siteinfo (e.g., [1]). For argument's sake, let's say you have no custom namespaces on your wiki and you want to see my talk page contributions. Your ultimate query would look like this:
<url to api.php on your wiki>?action=query&list=usercontribs&ucuser=RobinHood70&ucnamespace=1|3|5|7|9|11|13|15
If you wanted only the most recent edit on each page, then you'd add &ucshow=top to that (or use &uctoponly= if your wiki is 1.22 or older). Robin Hood  (talk) 18:52, 6 July 2016 (UTC)
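To illustrate, putting those pieces together (assuming English Wikipedia as the target wiki, purely as an example), the combined request might look like this:
https://en.wikipedia.org/w/api.php?action=query&list=usercontribs&ucuser=RobinHood70&ucnamespace=1|3|5|7|9|11|13|15&ucshow=top&format=json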

What is the maximum URL length I can use with the Wikipedia API?

Sometimes I get an HTTP 414 error (Request-URI Too Long) when I pass very long requests (with more than 50 page titles). I limit it roughly, but I'd like to know the exact limit. Does anyone know it? Нирваньчик (talk) 00:20, 17 July 2016 (UTC)

There's no byte limit that I'm aware of in the MediaWiki software; the limit usually comes from the servers themselves. It's often set to 8k (8192 bytes), but different servers could be using different sizes. I would hope that all the MW servers are the same, whatever that might be, but there are no guarantees. The only way to be sure would be to track sizes that succeed or fail on whichever server(s) you're using. Unless, of course, you can get in touch with a server admin and ask them to find out directly.
You should also be aware that there is a limit built into MediaWiki for the number of titles you can put in a single query, and that limit is typically 50 titles unless you have higher permissions (typically bot or admin permissions), in which case it rises to 500. There are also a couple of special cases where the number is lower, but those are noted on the individual API pages. Whether it's the normal limit or a smaller one, though, you shouldn't get a 414 error from MediaWiki; it should just spit out a warning in the output. Robin Hood  (talk) 01:36, 17 July 2016 (UTC)
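If you do hit the server's URL limit, one workaround is to send the same parameters in a POST body instead of the query string; the action API accepts parameters either way. A sketch with curl (endpoint and titles are only examples):
curl "https://en.wikipedia.org/w/api.php" \
     --data "action=query&prop=info&format=json" \
     --data-urlencode "titles=Albert Einstein|Marie Curie|Isaac Newton"
curl joins the two --data options with &, so this is equivalent to the corresponding GET request without any URL length concerns.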

Question

What does API stand for?--Mr. Guye (talk) 23:27, 6 April 2017 (UTC)

Application Programming Interface. It's a generic computer term for how other programs can interact with yours. In this case, the API is how you can control a wiki to do things like editing pages, retrieving information, etc. Robin Hood  (talk) 04:27, 7 April 2017 (UTC)
See w:API. --Tacsipacsi (talk) 18:40, 7 April 2017 (UTC)

all pages that have new content or are deleted

I already asked this (but not very clearly). In order to create a fresh "pages-meta-current" dump to work with, one has to make many API requests just to get a list of recently "changed pages". Is it possible to get, in one pass, all titles that have recent changes (after TS XXXX-XX-XXTXX:XX:XXZ), i.e. a list of all pages that were, for any reason, edited, created, moved from another title, uploaded, imported from another project, merged with another title, or even deleted? The idea is to get all titles for which we will ask for the newer revision, as well as all titles to delete from the current dump. The returned XML for each title should include at least:

  • type="newedit" or "deleted"
    • "deleted" will include:
      • pages deleted using the delete command
      • pages deleted because they were moved and the user chose not to leave a redirect
    • "newedit" will include:
      • new pages (which will include the page an uploaded file creates)
      • edited pages
      • imported pages
      • pages that were moved where the user chose to leave a redirect (the original title, not the new one)
      • pages that were merged and now have new content
  • ns=
  • title=
  • timestamp=

This will be a good API for getting a fresh "pages-meta-current" dump to work with. This will be very useful to most of the bots as they can have a more recent dump, in fewer steps, to work with. --Xoristzatziki (talk) 10:23, 16 June 2017 (UTC)

Have you tried with Grabbers? If you have a working MediaWiki database with an imported dump, grabNewText.php will update it to the current state. It uses API:RecentChanges and API:Logevents and handles moves, deletions, restores and uploads (updating page and revision, it doesn't actually upload files). Imports may not be handled, though, but I can add it for you. --Ciencia Al Poder (talk) 08:58, 17 June 2017 (UTC)
Beyond the fact that I do not use an exact db - much less a MediaWiki db - the point is (if, of course, this is easily done) that everyone should be able to get the changed titles (meaning edited in any way, new by any means, or deleted for any reason) with one single API call, and not have to add one more script (plus another bunch of parameters, plus RDBMS maintenance). Apart from that, thanks for the information about the existence of Grabbers. I'll take a look at it in time. --Xoristzatziki (talk) 15:50, 18 June 2017 (UTC)

Getting history of titles for a page

I have old links (for example [[Car]] from 2014-03-03) and want to get the corresponding wikitext (as of 2014-03-03). It's simple to look up the revision history for the page currently titled [[Car]] - but that is not what I am after; I want the revision history for the page which was titled Car on 2014-03-03. (The content of [[Car]] could have been moved to [[Vehicle]] on 2015-03-10, and a new [[Car]] could have been created, so the current history is useless.)

As far as I can tell there is no way to do this short of "replaying" all log actions involving page names, hoping the replay doesn't diverge from reality? The database does not seem to retain the old page titles.

Is there any (other) way? Loralepr (talk) 19:55, 25 August 2017 (UTC)

You can read logs to see renamings. --wargo (talk) 09:41, 26 August 2017 (UTC)
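For illustration, a sketch of such a log query (English Wikipedia endpoint and the page name from the question are only examples) - list=logevents with letype=move lists the move log entries recorded for a title:
https://en.wikipedia.org/w/api.php?action=query&list=logevents&letype=move&letitle=Car&format=json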
Yes, but it's not just renamings; it would require reapplying deletions, restores, page history merges and possibly a few more things as well. On top of that, the log table only goes back to 2004, and there is no dump from that point in time to apply these log events to. So I would have to apply the log events backwards from the current state. To add to the fun, these old log events are missing the page id they applied to; that was only added later. Loralepr (talk) 10:07, 26 August 2017 (UTC)

It seems you've already answered your question. MediaWiki is a very complex tool, and there certainly is no way to account for all its edge cases, such as database corruption or someone fiddling directly with the database, thereby completely breaking the history of pages. There are even more problems (https://phabricator.wikimedia.org/T39591 , https://phabricator.wikimedia.org/T41007). For a naive scenario, looping through logevents will get the data in most cases.

There is certainly no API that can account for all such cases and actions related to the titles and moves of a page. That requires a separate analysis tool to go through the database or dumps to recreate a new database, which can then be queried to get that data. There would also be a need to be aware of how the MediaWiki database schema has changed.

Even then there will still be edge cases when the tool will not show the correct data. There are also hidden revisions that won't show up in the database, so it is truly impossible to get some revisions at a specific date. 10:57, 26 August 2017 (UTC)

Thanks, that is basically what I figured out - my main hope was that I overlooked some API or some dump containing this data - no luck here. I also had the idea to take all old article dumps as (consistent?) snapshots - but they get very sparse very soon (Archive.org: 2015: 9 dumps, 2014: 2 dumps, 2013: 0, 2012: 0, 2011: 1 dump) Loralepr (talk) 11:38, 26 August 2017 (UTC)

Some redirects are not being produced

Rdcheck for Libidibia paraguariensis shows "w:Caesalpinia paraguariensis" as the first #R, but it's the only #R absent from the output of the API call https://en.wikipedia.org//w/api.php?action=query&format=json&prop=redirects&titles=Libidibia%20paraguariensis&formatversion=1:

{"continue":{"rdcontinue":"54611033","continue":"||"},"query":{"pages":{"12177450":{"pageid":12177450,"ns":0,"title":"Libidibia paraguariensis","redirects":[{"pageid":12177452,"ns":0,"title":"Ibir\u00e1-Ber\u00e1"},{"pageid":12182563,"ns":0,"title":"Guayaca\u00da Negro"},{"pageid":12182575,"ns":0,"title":"Ibir\u00c1-Ber\u00c1"},{"pageid":12322863,"ns":0,"title":"Ibir\u00e1-ber\u00e1"},{"pageid":13464516,"ns":0,"title":"Guayaca\u00fa Negro"},{"pageid":16241672,"ns":0,"title":"IbirA-BerA"},{"pageid":16391122,"ns":0,"title":"Guayacau Negro"},{"pageid":16450966,"ns":0,"title":"Ibira-bera"},{"pageid":16500548,"ns":0,"title":"GuayacaU Negro"},{"pageid":16579276,"ns":0,"title":"Ibira-Bera"}]}}}}

The missing #R isn't malformed, and it's 8 months old. Other pages with multiple #Rs don't have this problem: ex rdcheck & corresponding API for w:Fabales.   ~ Tom.Reding (talkdgaf)  18:59, 15 March 2018 (UTC)

See API:Query#Continuing queries. You might also use the rdlimit parameter to get more at once. Anomie (talk) 13:55, 16 March 2018 (UTC)
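To illustrate, the "continue" block in the response above means the follow-up request repeats the original query with those values appended, e.g.:
https://en.wikipedia.org/w/api.php?action=query&format=json&prop=redirects&titles=Libidibia%20paraguariensis&rdcontinue=54611033&continue=||
Alternatively, adding &rdlimit=max (up to 500 results for most users) may return all of the redirects in a single response.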

Help with JavaScript

How do I get the title of the first page in query.pages, no matter what page it is? For reference, I'm writing this on Code.org, calling the code

var query = getText("webappurlenter");
var q2 = encodeURI(query);
var url = "https://en.wikipedia.org/w/api.php?action=query&prop=extracts&exchars=175&explaintext=true&titles="+q2+"&format=json&indexpageids=";
startWebRequest(url, function(status, type, content) {
var contentJson = JSON.parse(content);
var test = contentJson.query.pageids[0];
var title = contentJson.query.pages.title;
console.log(title);
console.log(test);
});

, which, when run with the page w:en:Pi, gives

23601
undefined

It seems like I can only get the page id on its own, but can't get the title. And an array doesn't work... Please help. Bardic Wizard (talk) 22:27, 22 March 2018 (UTC)

pages is an object, where the keys are the page id, and the values are the page itself. You need to access the page by its page id that you got before:
var title = contentJson.query.pages[test].title;
--Ciencia Al Poder (talk) 10:31, 23 March 2018 (UTC)
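Putting it together with the request from the question (a sketch reusing its variables), the callback body would become:
var contentJson = JSON.parse(content);
var pageid = contentJson.query.pageids[0];          // "23601", provided by indexpageids
var title = contentJson.query.pages[pageid].title;  // look the page up by its id
console.log(title);                                 // "Pi"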

Token placement documentation/error

I'm trying to use the API to apply an instance of (P31) value of taxon (Q16521) to Amauroderma albostipitatum (Q39049474). Following the wbsetclaimvalue documentation, I'm using https://www.wikidata.org/w/api.php?action=wbsetclaimvalue&claim=Q28431751$0e968588-444f-3892-6fa4-02f9abe12e34&snaktype=value&value={"entity-type":"item","numeric-id":16521}&token=f02853d6921b9a756dbbcc3c88f2ef4c5ad7a6e5+\\&baserevid=554233065, with a csrf token produced here. This error is the result: "code": "mustpostparams","info": "The following parameter was found in the query string, but must be in the POST body: token.". Since the csrf tokens apparently only last about 10 seconds, I don't feel like trying to troubleshoot this by hand. What's the problem here?   ~ Tom.Reding (talkdgaf) 

Are you using POST to submit the request? Because that's basically what the message is saying. --Ciencia Al Poder (talk) 09:22, 19 April 2018 (UTC)
@Ciencia Al Poder: I don't know what POST is (other than at computer startup...), I'm just using my browser.   ~ Tom.Reding (talkdgaf)  12:44, 19 April 2018 (UTC)
Well, that's why it requires the parameters to be in a POST body: to prevent it being triggered by someone following a link in a browser. See wikipedia:POST_(HTTP) for information about what POST is. You can use ResourceLoader for doing a POST request using JavaScript; documentation is in ResourceLoader/Core_modules#mediawiki.api --Ciencia Al Poder (talk) 13:10, 19 April 2018 (UTC)
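For illustration, a minimal sketch using the mediawiki.api module (run on www.wikidata.org, with the claim GUID and value taken from the question); postWithToken fetches a fresh csrf token and sends everything as a POST in one step, which also sidesteps the short token lifetime mentioned above:
mw.loader.using( 'mediawiki.api', function () {
	new mw.Api().postWithToken( 'csrf', {
		action: 'wbsetclaimvalue',
		claim: 'Q28431751$0e968588-444f-3892-6fa4-02f9abe12e34',
		snaktype: 'value',
		value: JSON.stringify( { 'entity-type': 'item', 'numeric-id': 16521 } )
	} ).done( function ( response ) {
		console.log( response ); // the updated claim on success
	} );
} );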
Ciencia Al Poder, thanks, I figured queries & action would use the same platform, given the short-lived token. Do you know if there is a way for AWB to do this, i.e. via module? I have no experience with JavaScript.   ~ Tom.Reding (talkdgaf)  13:44, 19 April 2018 (UTC)
Scratch that, looks like I need to use Pywikibot.   ~ Tom.Reding (talkdgaf)  14:08, 19 April 2018 (UTC)

Prevent MediaWiki API from removing boolean properties when displaying result data

Hello everyone,

I'm facing a quite important problem. I'm currently creating a custom API extension for my project, so I created a class that I called ApiCategories, which extends the ApiBase MediaWiki class. This custom API returns data that contains booleans, but they are removed by MediaWiki if false or replaced by an empty string if true.

I looked through many MediaWiki classes to find my problem, and I finally found that ApiResult::applyTransformations($path[], $transformations[]) removes booleans if the flag "nobool" is not set in the $transformations[] parameter, according to the following comment placed above the method:

BC: (array) This transformation does various adjustments to bring the output in line with the pre-1.25 result format. The value array is a list of flags: 'nobool', 'no*', 'nosub'. Boolean-valued items are changed to '' if true or removed if false, unless listed in META_BC_BOOLS. This may be skipped by including 'nobool' in the value array.

Now the problem is: how do I set this flag? How should I proceed to pass the flag "nobool" to the ApiResult::applyTransformations method? — Preceding unsigned comment added by TheBiigMiike (talkcontribs) 10:06, 26 July 2018 (UTC)

This isn't really the best place for your question, but to answer it: You're on the wrong path, your module can't set 'nobool'. You want to set the META_BC_BOOLS metadata item on your data array, as mentioned in the documentation you quoted. That might look something like $data[ApiResult::META_BC_BOOLS] = [ 'keys', 'of', 'your', 'bool', 'fields' ];. If $data contains subarrays, each subarray with boolean fields would get the same treatment.
But note that this conversion of boolean values is standard behavior for the JSON and PHP output formats with formatversion=1 and clients should be expecting it. To get real booleans from all modules, clients should specify formatversion=2. It might be better if you follow the convention and don't override it. Anomie (talk) 12:37, 26 July 2018 (UTC)

Restore revision API call

I cannot seem to find an appropriate API call for restoring a revision by its revision ID. Can anyone point me to this? Thank you. Gstupp (talk) 22:28, 30 July 2018 (UTC)

Look at API:Edit's undo and undoafter parameters. Robin Hood  (talk) 22:59, 30 July 2018 (UTC)
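For example, a sketch of the POST parameters for such a call (all values hypothetical): undoafter=<revision ID to restore back to> combined with undo=<current newest revision ID> undoes every edit between them, effectively restoring the older revision.
action=edit
format=json
title=Sandbox
undoafter=123450
undo=123457
token=<csrf token>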

Text fields input process ... could be better. (API sandbox)

Trying out the API sandbox, and seeing my typed-in "titles" field input being ignored,
... while not knowing you have to actually hit ENTER to make it stick/(become active input) ...
doesn't strike me as very intuitive.

Adding some default greyed-out text to those kinds of fields when they're empty, which at least gives some hint about this particular input process, would have been my solution.
--MvGulik (talk) 03:45, 1 August 2018 (UTC) :-/

See phab:T188886. Anomie (talk) 13:25, 16 August 2018 (UTC)

Reorganizing action API documentation

We are currently working to reorganize, update, and clarify the information on the MediaWiki Action API pages. Our goal is to make the information more accessible and usable by a variety of audiences. See phab:T198916

We believe these pages are visited mainly by API developers who need less context about how and why APIs work and more information about endpoints and specific actions. Because of this, we plan to move much of the contextualizing information to other pages, where folks who may not yet be ready to use APIs for their technical contributions can find additional resources about APIs in general.

The main changes we are planning are: renaming and redirecting the page to make it specifically about the action API, re-organizing and simplifying the main page (https://www.mediawiki.org/wiki/API:Main_page), moving contextual information to a new page, re-organizing the sidebar, updating and highlighting documentation for some of the most popular and frequently used actions, and providing templates for documentation so that other technical contributors can also update the documentation in a cohesive way.

If you have suggestions for things you would like to see or share, please let us know. It is our goal to make these pages useful to as many people as possible.

User:SSethi (WMF), User:SRodlund (WMF)

I'm not sure how my observations fit into your plans, so I'll just write my observations and let you decide how it fits into your plans. (:
The auto-generated documentation available from api.php IMO does a good job of documenting the basic technical details of each endpoint, such as which parameters exist and what they mean. I like to think of it as the "reference card" for the API for users who already know what it is they need to do and just need a reminder of the specific parameter names or values. Transclusion of Special:ApiHelp allows for including these reference cards in the on-wiki documentation, although as T201977 points out you can't do so for extensions that aren't installed locally.
The auto-generated documentation intentionally doesn't try to get into the more complicated "how" and "why", leaving that to this on-wiki documentation. For example, the auto-generated documentation doesn't describe generators or continuation or CORS in detail if at all. That's what I think is the kind of information that is particularly important to have here where it's easier to edit, this is the user guide to the auto-generated reference card. And, of course, what is here could be better organized, better written, and better focused to the audience (rather than mixing together information for client developers, extension developers, and wiki sysadmins).
I'll be happy to explain anything about the API that you might need to know in order to write better documentation. You can find me on IRC, email, or ping me from on-wiki talk page posts. BJorsch (WMF) (talk) 13:51, 24 August 2018 (UTC)
SSethi (WMF) removed the search box for the API from {{API}} in August.[2]
I think it should either be restored in {{API}} or added somewhere to API:Main page. You can get the same result by starting a standard search with API: but many people may not know that. PrimeHunter (talk) 23:02, 8 November 2018 (UTC)

Export table made with a template to join data

Good morning,

I created a table with a lot of columns thanks to a template. I used this template to join data from different pages, so it contains many different {{#ask:}} queries. I only found how to export each {{#ask:}} individually, when I actually want to export the entire table. Does anyone have a solution for me? Thank you in advance. AnaisBce (talk) 07:51, 31 August 2018 (UTC)

I'm not sure that I understand your question as stated, but if the use of the Extension:Semantic MediaWiki #ask parser function is important to the solution you may get better help at the Semantic MediaWiki community portal. --BDavis (WMF) (talk) 19:46, 31 August 2018 (UTC)

Code stewardship infobox

The code stewardship infobox is a nice addition to the API pages, but it has only been added to some of them. I propose editing the API namespace template to ensure this information appears on all pages related to the MediaWiki action API. In addition, we should make a distinction between code stewardship and documentation stewardship -- so folks know who to contact for support and questions. --SRodlund (WMF) (talk)

Question: How to get wikitext of an article and expand all templates in it through the API?

I'm learning to use the API through the documentation and I can't find a good solution for this issue. I'd like to write a script that will get the pure wikitext of every page on a wiki without templates. The problem is that by using action=expandtemplates with the article's whole wikitext in the `text=` parameter, I get a 414 error saying my request URI is too long.

It's a shame that action=parse can't expand templates by itself and return the pure wikitext without templates at all. — Preceding unsigned comment added by Doronbehar (talkcontribs) 16:23, 30 January 2019 (UTC)

@Doronbehar: Use POST requests always if you’re not 100% sure the parameters will be short enough. I don’t know what script you are using, but it should be pretty easy to set it to use POST (e.g. use the --post-data="text=wikitext" option in wget or --data "text=wikitext" in curl; you can keep the short parameters in the query string). —Tacsipacsi (talk) 21:55, 30 January 2019 (UTC)
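For example, a sketch with curl that posts wikitext from a local file (article.wikitext is a hypothetical file name) while keeping the short parameters in the query string:
curl "https://en.wikipedia.org/w/api.php?action=expandtemplates&prop=wikitext&format=json" \
     --data-urlencode "text@article.wikitext"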

API problem

I have a problem with, I think, a simple API query: Contribs of Paweł Ziemian Bot. The same query with other usernames works fine: Contribs of Paweł Ziemian and Contribs of MalarzBOT. Does anyone know what the problem is? The query worked fine for all users until November 2018. A few days ago I found that this query is not working now. Malarz pl (talk) 18:30, 19 February 2019 (UTC)

This is an issue that should be reported on Phabricator. --Ciencia Al Poder (talk) 10:39, 20 February 2019 (UTC)
phab:T216656. Malarz pl (talk) 20:45, 20 February 2019 (UTC)

Need clarity on where to post

Instructions say "Unless your comment is related to the whole API, please post it on one of the API subpages".

I am trying to refer people with questions and I do not see where they should go.

Is there no general place where I can send people? In this case, the question is about someone wanting to report intent to make lots of calls, and wondering about the etiquette of that. Blue Rasberry (talk) 13:32, 3 April 2019 (UTC)

Does API:Etiquette answer your concerns? If not, you can ask on the mediawiki-api mailing list. — Preceding unsigned comment added by Ciencia Al Poder (talkcontribs) 09:14, 4 April 2019 (UTC)

Get download URL of Commons file

How do I get the download URL of a Commons file from the filename? --jobu0101 (talk) 06:12, 1 June 2019 (UTC)

@Jobu0101: You can use query+imageinfo, e.g. https://commons.wikimedia.org/w/api.php?action=query&titles=File:Albert%20Einstein%20Head.jpg&prop=imageinfo&iiprop=url. —Tacsipacsi (talk) 15:47, 1 June 2019 (UTC)
@Tacsipacsi: Thank you very much. In case it is an SVG file, is there a way to generate the URL for a PNG preview of arbitrary size? --jobu0101 (talk) 13:58, 10 June 2019 (UTC)
@Jobu0101: You can use iiurlwidth/iiurlheight. This also works for non-vector images, but, of course, the resulting image may be of inferior quality. (By the way, it’s in the documentation I linked above.) —Tacsipacsi (talk) 14:48, 10 June 2019 (UTC)
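For example, extending the earlier query with a 300-pixel target width; the response then includes a thumburl alongside the original url:
https://commons.wikimedia.org/w/api.php?action=query&titles=File:Albert%20Einstein%20Head.jpg&prop=imageinfo&iiprop=url&iiurlwidth=300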
You can also get the image directly by using the Manual:Thumb.php script. Example: https://commons.wikimedia.org/w/thumb.php?f=Albert_Einstein_Head.jpg&w=180 --Ciencia Al Poder (talk) 09:26, 11 June 2019 (UTC)

Working Group Recommendation for Paid API Access

The Revenue Working Group is advising that payment be required for API usage by significant or major users of the APIs for Wikipedia and Wikidata.

The full details can be found on their project page.

Please give your views on this on its talk page. Nosebagbear (talk) 21:47, 21 September 2019 (UTC)

notoken

I had this error years ago already, but I have forgotten the solution (too long ago). When using nameGuzzler, for a few days an error has been appearing: notoken - the "token" parameter must be set. What do I have to change, and where, to get the functionality back? Florentyna (talk) 07:34, 12 October 2019 (UTC)

@Florentyna: Likely nameGuzzler should be fixed, but you haven’t provided any link, so I have no clue what this nameGuzzler is and how to fix it. —Tacsipacsi (talk) 14:09, 12 October 2019 (UTC)

Where are the API subpages, please?

The banner here says to go to them, but does not link that request to a list of them. Please link. — Preceding unsigned comment added by WhitWye (talkcontribs)

I don't see where in the main page there's a banner about subpages. Could you be more precise? --Ciencia Al Poder (talk) 16:38, 31 January 2020 (UTC)
I have a hunch that WhitWye is accessing this wiki from a mobile device and that Extension:MobileFrontend is "helping" by removing the Template:API navbox from the html delivered to that device. This is a similar problem to one we discovered on Wikitech. SRodlund (WMF) and I need to find a reasonable fix for this. A lot of work has been done to make the API pages better organized and more readable, but the mobile/small viewport display needs improvement. --BDavis (WMF) (talk) 16:43, 31 January 2020 (UTC)
Not a mobile device — Firefox 72.0.1 on Ubuntu 18, which shows a banner saying in part, "Unless your comment is related to the whole API, please post it on one of the API subpages." This being a wiki, it would be helpful to link "API subpages" to an index of API subpages, as it is not obvious (to me at least) where to find them. --WhitWye (talk) 18:08, 31 January 2020 (UTC) Update: Ah, I see - when I said "the banner here" you read that as on the main page, rather than this talk page. The main page does show that Action API box, with sections about writing to it. But what I'm trying to find is information about how the API is used between existing MediaWiki extensions. As the proximate example of what I'm trying to get enough background to understand: after a fresh 1.34 install with ElasticSearch, CirrusSearch, AdvancedSearch and Elastica, the date-order searches fail. At https://www.mediawiki.org/wiki/Help:CirrusSearch I read "Sorting options are currently available from the MediaWiki API by providing the srsort parameter." Where here may I find documentation to help diagnose why this API feature isn't functioning? Obviously this page isn't where to discuss it; but how might I find the right place?
I tried with Special:PermanentLink/3645228 to update the banner here. More edits welcome if anyone thinks they have a better place to send people.
The srsort parameter is part of the action=query&list=search endpoint. I would guess that Special:MyLanguage/API:Search would be a reasonable place to discuss issues with it. --BDavis (WMF) (talk) 22:00, 31 January 2020 (UTC)

Changing the rank

How do I change the rank of a Wikidata claim using the API? I looked through [3] but didn't find an appropriate action. --jobu0101 (talk) 16:59, 28 February 2020 (UTC)

Editing via API in [R]

There are a few packages to retrieve info from Wikidata via [R] (WikidataR and WikidataQueryServiceR). However, none of them can yet edit or create items. Would anyone be able to assist in building a wrapper in [R] to expand WikidataR with this functionality? If someone is able to put the core API aspects together, I can do all the data tidying and structuring in [R]. T.Shafee(Evo﹠Evo)talk 11:15, 14 March 2020 (UTC)

I've also made a note here. T.Shafee(Evo﹠Evo)talk 00:36, 15 March 2020 (UTC)
Initial successes using the Quickstatements API documented here. T.Shafee(Evo﹠Evo)talk 02:26, 5 April 2020 (UTC)

What is the fastest way to check if a page exists using the Action API?

I'm helping out with a client library and want to know what the fastest (and lightest) API call that checks the existence of a page is. Pywikibot uses prop=info and checks if the page ID is nonzero, for instance. Enterprisey (talk) 07:48, 17 April 2020 (UTC)
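For reference, the prop=info approach mentioned above boils down to a single request like this (the title is only an example); with formatversion=2, a nonexistent page comes back flagged "missing": true instead of carrying a positive pageid:
https://en.wikipedia.org/w/api.php?action=query&prop=info&titles=Pi&format=json&formatversion=2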

Disable API

How do I disable the API? Farvardyn (talk) 09:07, 9 May 2020 (UTC)

Create a rule in your Apache config that explicitly denies access to api.php --Ciencia Al Poder (talk) 21:15, 12 May 2020 (UTC)
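For example, a minimal sketch of such a rule for Apache 2.4 (assuming api.php sits in your wiki's web root; adjust to your layout):
<Files "api.php">
    Require all denied
</Files>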
But be aware that this breaks some built-in MediaWiki features such as the incremental search (the search suggestions displayed below the search field as you type), the watch star working without an extra page load, recent changes RSS feed etc. —Tacsipacsi (talk) 00:22, 14 May 2020 (UTC)

Empty extract

This page is returning an empty extract: fr.(...)?action=query&prop=extracts&titles=Alerte%20Rouge%20%28groupe%29&explaintext&rvprop=content&format=json

But the same query works for other pages: fr.(...)?action=query&prop=extracts&titles=B%C3%A9rurier_noir&explaintext&rvprop=content&format=json

(I can't post links to Wikipedia, so just add the French version of Wikipedia to the URLs above.)

Adding "&exlimit=max&exintro" as suggested in other topics didn't fix the issue.

Am I doing something wrong?

Machine-readable API spec?

I'm looking for something like an OpenAPI spec for the Action API. I'm pretty sure that doesn't exist yet (we may be incompatible with the schema), but I'll settle for anything in JSON, etc. Seems like I could modify the API help code to emit the file I'm imagining, but hopefully someone has already done this? Adamw (talk) 15:52, 20 May 2020 (UTC)

Hi @Adamw, I was looking for the same thing. I just found this, which might be useful for you, too. — Roj 13:35, 25 May 2022 (UTC)

Missing "A simple Example" section, which is pointed at by every wiki installation[edit]

In every wiki installation with the new API Sandbox that I've tested (which so far is 1.31.x through 1.34.x), the API Sandbox has a link at the top that directs to: https://www.mediawiki.org/wiki/API#A_simple_example

The API page on this wiki redirects to this page, API:Main_page, which does not have an "A simple example" target for the browser to go to.

According to the API Sandbox link, the simple example should show how to 'get the content of a Main Page'. I feel that since every installation directs here for that example, it should be restored here until the codebase gets an update to change the link on the Sandbox pages.

--Rayanth (talk) 21:39, 3 August 2020 (UTC)

This was removed in 2018 by SRodlund (WMF) * Pppery * it has begun 19:38, 16 September 2020 (UTC)

Rename to align names

The series uses two diverging logics for page titles. Some of these translatable pages may need to be renamed in English in order to align the pages' titles. See:

  • Create and edit a page
  • Get the contents of a page → RENAME: Get a page's contents
  • Upload a file
  • Import a page
  • Delete a page
  • Parse content of a page → RENAME: Parse a page's content
  • Watch or unwatch a page
  • Purge cache for page(s)
  • Rollback a page
  • Move a page
  • Patrol a page or revision
  • Restore revisions of a deleted page
  • Change a page's protection level ← ELSE: Change the protection level of a page
  • Change a page's language ← ELSE: Change the language of a page

Yug (talk) 19:18, 16 September 2020 (UTC)

Existence check

Hi, I would not ask here if I saw any chance to find things out by myself. I tried for the last 7 days and made many hundreds of tests without success. Without real knowledge of JS, I need somebody to tell me a code snippet that works - even without my understanding it.

Reading all the API hints, I wrote a function - but it does not work as expected

	function pageexist(pageid) {						// pageid = "namespace:pagename"
		mw.loader.using(['mediawiki.api'], function () 
		{	new mw.Api().get({
				action: 'query',
				format: 'json',
				prop:   'info',
				titles: pageid
			}).done(function (json)
			{	if (!json || !json.messages || !json.source || !json.source.code)
					return 'E';					// error ?
				let jlst = json.source.code;
				if	(/"invalid":/.test(jlst) || /"missing":/.test(jlst))
					return '0';					// does not exist
				else
					return '1';					// exists	
				});
		})}

There has been a contribution without an answer since April 2020, #What is the fastest way to check if a page exists using the Action API?; my request for help is similar - I just want to check whether a page exists or not, possibly in the cheapest way. Please help me! -- sarang사랑 11:30, 23 July 2021 (UTC)

You don’t even need the API; just load the normal page (using a HEAD request so that the actual page content isn’t transmitted just to be dropped):
/**
 * @param {string} pagename Title of the page to check
 * @param {function} callback Callback function: called with `true` if the page
 *  exists, `false` if it doesn’t, and `null` if an error occurs (e.g. invalid
 *  page name or network error).
 */
function pageexists( pagename, callback ) {
	$.ajax( mw.util.getUrl( pagename ), {
		method: 'HEAD',
		success: function () { callback( true ); },
		error: function ( xhr ) { callback( xhr.status === 404 ? false : null ); }
	} );
}
pageexists( 'API:Main page', console.log ); // logs `true`
pageexists( 'API:Main page that doesn’t exist', console.log ); // logs `false`
pageexists( '[[API:Main page]]', console.log ); // logs `null`
Tacsipacsi (talk) 17:35, 23 July 2021 (UTC)
Hi @Tacsipacsi: - that looks great, just as simple as I desired it! Thank you for your swift answer.
Unfortunately my JS knowledge is very poor. I understood your function to return a boolean value that can be used e.g.
	if (pageexists( 'London', success ))
		exist = 'yes';

	if (pageexists( 'London', success ) === true )
		exist = 'yes';

but it does not work that way. Nor does the coding sequence, e.g.

	let result = false;
	(pageexists( 'User:Example', result )
	if (result === true)		exist = 'yes'; 
Sorry, I am a complete novice in JS and need an explanation even for simple facts. So I would appreciate further help -- sarang사랑 15:06, 24 July 2021 (UTC)
@Sarang: The function takes an asynchronously invoked callback function, so you need to do:
pageexists("London", (success) => {
	if (success) {
		// Page “London” exists
	} else {
		// Page “London” doesn’t exist
	}
});

— ExE Boss (talk) 17:20, 24 July 2021 (UTC)

@Sarang: By the way, your ping didn’t work—you need to put at least one link to your user, talk or contributions page on mediawiki.org in your signature for it to be recognized as such and for pings to work. In the future, your custom signature will be disabled if you don’t include the link. —Tacsipacsi (talk) 22:13, 24 July 2021 (UTC)
The Ajax code does a good job - with two exceptions:
  1. access to a page which does not exist but has an orphaned talk page is claimed to be successful, which it is not at all; Wikimedia Commons has thousands of orphaned user talk pages, but this Ajax GET cannot detect them.
  2. it worked well while slowed down with activated alerts, but it is too fast without any wait option and does not pass the success value properly.
Therefore the Ajax GET cannot be used for my purpose. -- sarang사랑 11:28, 26 July 2021 (UTC)
@Sarang:
  1. Do you have a concrete example? I tried pageexists( 'User:Tacsipacsi/Archive 2', console.log ); on Commons (c:User talk:Tacsipacsi/Archive 2 exists, its User-namespace counterpart doesn’t), and it correctly logged false.
  2. Which alerts? Which wait? Sorry, I don’t understand this point at all.
Tacsipacsi (talk) 22:18, 26 July 2021 (UTC)
Thank you Tacsipacsi, for all your efforts.
I made a description of all the coding and the results at c:User talk:Sarang/simpleSVGcheck/sandbox.js. -- sarang사랑 11:27, 27 July 2021 (UTC)
@Sarang: Really, user and user talk pages of existing users seem to return 200 OK even if the user page doesn’t exist (but this is not the case for user/user talk subpages, which is why my test with my archive subpage worked as expected). Unfortunately this is intentional, so then only the API remains:
/**
 * @param {string} pagename Title of the page to check
 * @param {function} callback Callback function: called with `true` if the page
 *  exists, `false` if it doesn’t, and `null` if an error occurs (e.g. invalid
 *  page name or network error).
 */
function pageexists( pagename, callback ) {
	if ( pagename.indexOf( '|' ) > -1 ) {
		// `|` is a separator in the API request, so it could
		// lead to unexpected results; but it’s invalid anyways
		callback( null );
		return;
	}
	mw.loader.using( 'mediawiki.api', function () {
		( new mw.Api() ).get(
			{
				action: 'query',
				prop: 'info',
				titles: pagename,
				formatversion: 2
			},
			{
				success: function ( response ) {
					var page = response.query.pages[ 0 ];
					if ( page.invalid ) {
						callback( null );
					} else if ( page.missing ) {
						callback( false );
					} else {
						callback( true );
					}
				},
				error: function () { callback( null ); }
			}
		);
	} );
}
pageexists( 'User:HandigeHarry~commonswiki', console.log ); // logs `false`
By the way, it came to my mind that even though I’m all for backward compatibility, you may not want to support such ancient browsers like Internet Explorer or Chrome and Firefox versions from 2017. If you’re okay with code working only in browsers released in the last three-four years, you can use native async support, which simplifies the code quite a bit:
/**
 * @param {string} pagename Title of the page to check
 * @return {Promise} Promise resolved with `true` if the page exists,
 *  `false` if it doesn’t, and `null` if an error occurs (e.g. invalid
 *  page name or network error).
 */
async function pageexists( pagename ) {
	if ( pagename.indexOf( '|' ) > -1 ) {
		// `|` is a separator in the API request, so it could
		// lead to unexpected results; but it’s invalid anyways
		return null;
	}
	await mw.loader.using( 'mediawiki.api' );
	let response;
	try {
		response = await ( new mw.Api() ).get( {
			action: 'query',
			prop: 'info',
			titles: pagename,
			formatversion: 2
		} );
	} catch {
		return null;
	}
	const page = response.query.pages[ 0 ];
	if ( page.invalid ) {
		return null;
	} else if ( page.missing ) {
		return false;
	} else {
		return true;
	}
}
console.log( await pageexists( 'User:HandigeHarry~commonswiki' ) ); // logs `false`
(be aware of the await in the last line). The main gain is that you no longer need to pass a callback function, but await pageexists( ... ) returns a value, which you can further process in the same function. —Tacsipacsi (talk) 12:51, 29 July 2021 (UTC)

How are results from wbsearchentities ordered?

Hi everyone,

I cannot find information anywhere on the ordering of the results from wbsearchentities. For example, the request https://www.wikidata.org/w/api.php?action=wbsearchentities&search=Clinton&language=en returns Bill Clinton as the top result, which seems indicative of some ordering by importance or relevance. I would like to be able to have a reasonable expectation (a programmable criterion) as to when to stop looking further down the results for queries that yield too many results.

Does anyone know the formal criteria for this? Thanks! --Ofyalcin (talk) 01:55, 28 July 2021 (UTC)

@Ofyalcin: The results, I believe, are sorted by ID (Bill Clinton (Q1124) has the lowest ID of the results when searching for “Clinton”). — ExE Boss (talk) 11:20, 28 July 2021 (UTC)
@ExE Boss: Thanks for this answer, but I don't think that is the case. It does hold for the first item, but the rest are not ordered by ID. For example, the second result is Q676421 while the third is Q305349. --Ofyalcin (talk) 15:53, 28 July 2021 (UTC)

Small addition to the page (not sure how to use the translation tags)

I think it'd be helpful to include a link to Special:ApiSandbox, or at least mention it, but I'm not sure what the proper procedure is for the translation tags on this page. Is this something I should get an expert to do, or ought I to try and figure it out myself? JPxG (talk) 06:04, 10 August 2021 (UTC)

@JPxG: You don’t need to get an expert in advance. Try your best; an experienced translation administrator will need to approve the change (and fix it if needed) anyway before it goes live in the translation system. (It will be live immediately, of course, on this very page, but not on translated versions.) —Tacsipacsi (talk) 23:32, 10 August 2021 (UTC)
Cool, thanks (just got this notif now for some reason). JPxG (talk) 03:35, 23 August 2023 (UTC)

Get protection levels via API

Hi, how can I get local protection levels (e.g. Allow only autoconfirmed users, Allow only administrators etc.) via API? NguoiDungKhongDinhDanh (talk) 17:45, 5 February 2022 (UTC)

Yes. See en:WP:VPT. NguoiDungKhongDinhDanh (talk) 16:48, 12 February 2022 (UTC)
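For reference, protection levels are exposed through prop=info with inprop=protection, e.g. (the title is only an example):
https://en.wikipedia.org/w/api.php?action=query&prop=info&inprop=protection&titles=Main%20Page&format=json
Each page then carries a protection array listing the type of action (edit, move), the required level (autoconfirmed, sysop) and the expiry.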

2018 cleanup

This August 2018 edit removed a large amount of text. At least some of the text (like the Api-User-Agent part) was not added to other pages, meaning it was lost. The text should be added to some page, either here or elsewhere. For the time being, I'm considering simply re-adding it myself, if nobody objects here. Enterprisey (talk) 22:06, 3 March 2022 (UTC)

Alright (not waiting for objections), I made Special:Diff/5093066 and meta:Special:Diff/22928874. Perhaps more work remains, but those were the two items I was most interested in. Enterprisey (talk) 22:47, 3 March 2022 (UTC)

When to use rctoponly parameter

I am a bit confused about the rctoponly parameter. Can someone provide best practices on when this parameter should be used? 71.143.198.8 22:05, 8 September 2022 (UTC)

Wikitext of all matching pages

What is the best way to get the wikitext of all matching pages, as I have asked here: https://stackoverflow.com/questions/75305175/api-call-to-get-wikimedia-commons-users-uploads-with-categories-and-wikitext Thanks. Jidanni (talk) 08:29, 3 February 2023 (UTC)

You can get the categories if you use the second approach listed on c:Commons:API/MediaWiki#Get files uploaded by a particular user, and use allimages as a generator: https://commons.wikimedia.org/w/api.php?action=query&generator=allimages&gaiuser=FlickrLickr&gaisort=timestamp&prop=categories. If you don’t like how it’s grouped, just postprocess it client-side. I don’t think you can get wikitext in bulk. You might be able to get SDC data using the API, but SPARQL/Wikimedia Commons Query Service is the preferred way to get it, e.g.
SELECT * WHERE { VALUES ?item { sdc:M329387 sdc:M329389 }. ?item ?prop ?val. }
Try it! sdc:M329387 and sdc:M329389 are the page IDs, and you can put as many of them in the brackets as you want (so you should be able to get data about all 200 images in a single request, after you’ve got the page IDs in another single request). —Tacsipacsi (talk) 22:35, 4 February 2023 (UTC)

Action API vs. "action=" URL query parameters

Mention whether the Action API is related to the "action=" URL query parameters, e.g. in

https://www.mediawiki.org/wiki/API_talk:Main_page?action=raw

If not, then mention what API those belong to.

By mention I mean in the main article, not just here on the talk page. Thanks.

Jidanni (talk) 01:33, 6 February 2023 (UTC)

?action=raw is an index.php parameter. -- BDavis (WMF) (talk) 21:30, 6 February 2023 (UTC)