API talk:Parsing wikitext

Minor issues with expandtemplates
Hi,

I just want to report two issues with expandtemplates:
 * expandtemplates seems to work for only one level of templates (templates inside templates are not expanded)
 * expandtemplates doesn't take <includeonly> or <noinclude> into account

For example, on the French Wikipedia, expandtemplates for   gives


 * Possibly you should report this to http://bugzilla.wikimedia.org/, so the issues can be properly tracked and fixed. -- Tbleher 10:16, 22 January 2008 (UTC)
 * Ok, done. There's the same problem for Special:ExpandTemplates. --NicoV 14:54, 22 January 2008 (UTC)
 * Seems to be fixed --NicoV 17:54, 23 January 2008 (UTC)
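For reference, a minimal sketch of how such an expandtemplates call can be built from Python using only the standard library. The endpoint and template name are placeholders, and the actual HTTP request is left out so the sketch stays self-contained:

```python
from urllib.parse import urlencode

API = "https://fr.wikipedia.org/w/api.php"  # placeholder endpoint

def expandtemplates_url(wikitext, fmt="json"):
    """Build the URL for an action=expandtemplates request.

    The request itself (e.g. with urllib.request.urlopen) is omitted;
    this only shows which parameters the module expects.
    """
    params = {
        "action": "expandtemplates",
        "text": wikitext,
        "format": fmt,
    }
    return API + "?" + urlencode(params)

url = expandtemplates_url("{{SomeTemplate}}")  # hypothetical template
print(url)
```

Nested templates should be expanded recursively by the server once the bug above is fixed; the client only ever sends the outermost wikitext.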

Error with parse...
I am getting the following return...

My query string looks like this... /api.php?action=parse&page=Houdini

I am using MediaWiki 1.15.1. I get nothing of value from the MediaWiki debug log, and I have verified that my rewrite rules are not adding the title parameter. Any thoughts? 67.97.209.36 14:25, 18 November 2009 (UTC)


 * If I change my query string to this...
 * /api.php?action=parse&text=
 * I get the expected result.
 * 67.97.209.36 16:46, 18 November 2009 (UTC)

I'm seeing the same problems. I think we should write it up as a bug. --Duke33 01:58, 14 January 2010 (UTC)


 * I took the liberty of writing it up: 22684 --Duke33 17:16, 1 March 2010 (UTC)
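The two query forms discussed in this thread differ only in whether a stored page title or raw wikitext is supplied. A hedged sketch of both (the endpoint is a placeholder, and no request is actually sent):

```python
from urllib.parse import urlencode

API = "http://example.com/w/api.php"  # placeholder endpoint

def parse_url(**params):
    """Build an action=parse URL from keyword parameters."""
    return API + "?" + urlencode({"action": "parse", **params})

by_page = parse_url(page="Houdini")           # parse a stored page
by_text = parse_url(text="{{SomeTemplate}}")  # parse raw wikitext (hypothetical template)
print(by_page)
print(by_text)
```

If the page form fails while the text form works, something between the client and the API (e.g. rewrite rules) is usually mangling the parameters, which is what the reporter above was checking.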

Parsing into printable text?
Hi,

Is there a way to parse wikitext to get a simplified text (without HTML, with external and internal links replaced by their text, ...)?

My need is the following:
 * The project Check Wikipedia uses a configuration file for each wiki (for example: en)
 * It's used among other things to generate pages in Wiki format (for example: en)
 * In the configuration file, you can see for example a description of error no. 1: error_001_desc_script= This article has no bold title like <nowiki>Title</nowiki>, so it contains Wiki text.
 * I am writing a Java program (WikiCleaner) to help fix the errors reported by this tool. I'd like to display this text in my program as simple text: This article has no bold title like Title.

Thanks, NicoV --16:33, 28 March 2010 (UTC)
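One workaround (there is no dedicated "plain text" output mode, as far as I know) is to render the wikitext with action=parse and then strip the tags client-side. A minimal sketch using only the standard library, run here on a hard-coded sample instead of a live API response:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only character data, dropping all tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

    def text(self):
        return "".join(self.parts)

def strip_html(html):
    parser = TextExtractor()
    parser.feed(html)
    return parser.text()

# Stand-in for the HTML returned by action=parse&prop=text:
sample = "<p>This article has no bold title like <b>Title</b>.</p>"
print(strip_html(sample))  # → This article has no bold title like Title.
```

This keeps link labels and bold/italic text while discarding the markup, which is close to the "simplified text" asked for above.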

Error with DPL
DPL and DPL in templates are not expanded, e.g. these fail: Hamishwillee 00:02, 19 July 2010 (UTC)
 * api.php?action=query&prop=revisions&titles=yourtitle&rvprop=content&rvexpandtemplates
 * api.php?action=expandtemplates&text=

Simple template output wrapped in paragraph tags
Suppose I have Template:Foo, which contains this wikitext: This calls the hello world template:.

Template:Hello world contains this text: Hello, world!

I would expect the text output to look like this: This calls the hello world template: Hello, world!

This is my API call: http://example.com/w/api.php?action=parse&pst=1&disablepp=1&format=json&redirects=1&text=%7B%7BFoo%7D%7D

The problem
The json returned wraps the output in paragraph tags for some reason, which I don't want.


 * Templates that contain tables do not wrap their output in paragraph tags.
 * When the templates are used on a normal wiki page, no such paragraph-tag wrapping occurs.
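A client-side workaround is to peel off a single outer paragraph wrapper when the whole fragment is one. A hedged sketch (the exact whitespace MediaWiki emits may vary):

```python
import re

def unwrap_paragraph(html):
    """Remove one outer <p>...</p> wrapper, if the whole fragment is one."""
    m = re.fullmatch(r"\s*<p>(.*)</p>\s*", html, re.DOTALL)
    return m.group(1).strip() if m else html

wrapped = "<p>This calls the hello world template: Hello, world!\n</p>"
print(unwrap_paragraph(wrapped))
```

Fragments that are not a single paragraph (e.g. tables, as noted above) pass through unchanged, so the function is safe to apply unconditionally.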

"byteoffset" is a misnomer
In the "sections" array returned by this API call, the field named "byteoffset" holds the offset of the section within the wikitext markup. Contrary to its name, the offset is measured in code points, not bytes. (Note: non-BMP characters count as one. Beware of UTF-16, especially in JavaScript, where .) Keφr 22:11, 10 March 2014 (UTC)
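The difference matters as soon as a non-BMP character precedes the section. A small Python illustration (Python indexes strings by code point, so byteoffset can be used directly there, while a UTF-16 consumer such as JavaScript has to convert):

```python
# One non-BMP character (an emoji) before a section heading.
text = "\U0001F600== Section =="

codepoint_offset = text.index("==")  # what "byteoffset" actually counts
byte_offset = len(text[:codepoint_offset].encode("utf-8"))
utf16_offset = len(text[:codepoint_offset].encode("utf-16-le")) // 2

# The emoji is 1 code point, 4 UTF-8 bytes, 2 UTF-16 code units.
print(codepoint_offset, byte_offset, utf16_offset)  # → 1 4 2
```

In JavaScript terms: `String.prototype.indexOf` counts UTF-16 code units, so the field value cannot be used as a string index there without adjusting for surrogate pairs.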

action=parse should give a basetimestamp/starttimestamp when given an oldid
Say, I want to use action=parse to take the wikitext and some of the more "advanced" parser output, like the section list or the XML tree. If I want to transform the wikitext and save it back, with edit conflict resolution, I need a starttimestamp and/or basetimestamp. Without having it here, I have to round-trip the server twice. Kludgy. Keφr 11:11, 11 March 2014 (UTC)

Argument "disablepp" does not work
Hi, I hope that I am doing something wrong. Here's my query:

http://de.wikipedia.org/w/api.php?action=parse&format=json&prop=text|sections|revid&page=Diskussion:Die%20unendliche%20Geschichte&disablepp=true

As you can see in the output, there is still a comment with preprocessor stuff:

--2A02:8071:B486:2300:2210:7AFF:FEF8:7EEE 21:52, 9 October 2014 (UTC)


 * You're right. This is a known bug, as reported here and here. – RobinHood70 talk 08:35, 10 October 2014 (UTC)

Request Header Or Cookie Too Large
Hi, how do I avoid "400 Bad Request - Request Header Or Cookie Too Large"?

or "414 Request-URI Too Long"?

I'm trying to parse the content of this; I would like to make a few automated changes to the source and then show it in an HTML page. אור שפירא (talk) 07:10, 22 July 2015 (UTC)
 * Are you using GET requests or POST requests? Try using POST requests (for GET requests, parameters are in the URI; for POST requests, they are outside the URI). --NicoV (talk) 07:15, 22 July 2015 (UTC)
 * I am having the same issue. I don't see a way to use action=parse with the  parameter if the payload is larger than about 6KB, since action=parse seems to be limited to GET requests, and most webservers will limit the URL sizes to approximately 8192 bytes.
 * Is there a solution to this? There's a lot of speed-profiling I'd like to do on my wiki that is much easier if I can mix-and-match which parts I send, and then look at the PP report - but I don't have a way to do that if I'm limited to tiny payloads. 198.134.98.50 06:29, 4 August 2021 (UTC)
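To sidestep URI length limits, the same parameters can be sent in the request body. A sketch with the standard library (the endpoint is a placeholder, and the request is only built, not sent):

```python
from urllib.parse import urlencode
from urllib.request import Request

API = "https://example.org/w/api.php"  # placeholder endpoint

def build_parse_post(wikitext):
    """Build a POST request for action=parse.

    The payload goes in the request body, so it is not subject to
    the ~8 KB URI limits mentioned above.
    """
    body = urlencode({
        "action": "parse",
        "text": wikitext,
        "contentmodel": "wikitext",
        "format": "json",
    }).encode("utf-8")
    return Request(API, data=body, method="POST")

req = build_parse_post("x" * 100_000)  # far larger than any URI limit
print(req.get_method(), len(req.data))
```

Sending it is then `urllib.request.urlopen(req)`; since the parameters travel in the body, neither the 414 nor the oversized-header error should apply to the payload size.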

Import wikitext from parsetree
I wanted to edit wikitext automatically and easily, so I tried to edit it with parsetree. But I couldn't find a way to get wikitext back from a parsetree. Can't I convert a parsetree to wikitext with the API? --Gustmd7410 (talk) 23:40, 3 June 2018 (UTC)
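I am not aware of an API module that converts a parse tree back to wikitext, but for simple trees the round trip can be reassembled client-side. A hedged sketch that handles only text and <template> nodes (real parse trees also contain <ext>, <h>, <comment>, etc., and positional parameters use an index attribute rather than a literal name):

```python
import xml.etree.ElementTree as ET

def to_wikitext(node):
    """Reassemble wikitext from a (simplified) parsetree fragment."""
    out = [node.text or ""]
    for child in node:
        if child.tag == "template":
            title = to_wikitext(child.find("title"))
            parts = []
            for part in child.findall("part"):
                name = to_wikitext(part.find("name"))
                value = to_wikitext(part.find("value"))
                parts.append(f"{name}={value}" if name else value)
            out.append("{{" + "|".join([title] + parts) + "}}")
        out.append(child.tail or "")
    return "".join(out)

# Simplified stand-in for prop=parsetree output:
sample = ET.fromstring(
    "<root>Before <template><title>Q</title>"
    "<part><name>id</name><value>Q42</value></part>"
    "</template> after</root>"
)
print(to_wikitext(sample))  # → Before {{Q|id=Q42}} after
```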

Group categories by sections
Currently, it is possible to get the list of categories for a specific section. Is it possible to get a map (dict in Python) with the categories corresponding to each section? Example:

Soshial (talk) 13:43, 9 June 2019 (UTC)
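There is no single parameter for this, as far as I know, but the map can be assembled client-side: fetch the wikitext, split it on headings, and collect the category links per section. A hedged sketch on a hard-coded sample (the heading and category regexes are simplified; e.g. localized namespace names are not handled):

```python
import re

HEADING = re.compile(r"^=+\s*(.*?)\s*=+\s*$", re.MULTILINE)
CATEGORY = re.compile(r"\[\[Category:([^\]|]+)", re.IGNORECASE)

def categories_by_section(wikitext):
    """Map each section title (the lead section under "") to its categories."""
    result = {}
    title = ""  # lead section
    pos = 0
    for m in HEADING.finditer(wikitext):
        result[title] = CATEGORY.findall(wikitext[pos:m.start()])
        title, pos = m.group(1), m.end()
    result[title] = CATEGORY.findall(wikitext[pos:])
    return result

sample = "[[Category:Lead]]\n== One ==\n[[Category:A]]\n== Two ==\n[[Category:B]]"
print(categories_by_section(sample))
```

The same splitting could also be driven by the byteoffset values from prop=sections, which avoids re-parsing the headings yourself.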

How can the API be used to get the printable version?
Some context: archiving selected pages from our MediaWiki-based project wiki by using a browser to manually save them can become a bit tedious, so we have a script to do that. Now we want to switch to the current LTS version 1.31.5, which, together with other requirements, means the script has to use Bot passwords for logging in. But using Bot passwords means index.php isn't available:

Clients using bot passwords can only access the API, not the normal web interface.

Gluing together what action=parse and prop=text|headhtml return, I get an HTML document almost, but not quite, entirely unlike the result from index.php. Indeed it is good enough for the rest of the script to work without further changes -- except for the missing parameter "printable=1" in the links to the stylesheets. The stylesheets retrieved with these links result in very noticeable differences, e.g. the wrong font in a very small size.

Hacking the missing parameter into the links with brute force seems to work, but is not a solution I really like. Is there a way to ask action=parse to provide the printable version of the rendered HTML?

-- Cat's paw (talk) 17:19, 16 January 2020 (UTC)
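For what it's worth, the "brute force" rewrite can at least be confined to one place. A sketch that appends printable=1 to load.php stylesheet links in a headhtml fragment (the sample link is simplified; real headhtml differs):

```python
import re

def add_printable(headhtml):
    """Append printable=1 to each load.php stylesheet URL."""
    return re.sub(
        r'(href="[^"]*load\.php\?[^"]*)',
        r"\1&amp;printable=1",
        headhtml,
    )

sample = '<link rel="stylesheet" href="/w/load.php?modules=site.styles&amp;only=styles"/>'
print(add_printable(sample))
```

Links that do not point at load.php pass through untouched, so the function can be run over the whole headhtml string.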

"Gives the templates"
What does "gives the templates" mean?

Suppose I have a template named Q. It has something to do with Wikidata.

In my personal wiki, I can find all the pages transcluding Q easily enough with another API. And then I can fetch the page, requesting that it "gives the templates".

But what I really want to know are the template arguments for each instance (I expect many instances on most pages).

Suppose I want to do this as a stopgap measure before wading into Extension:Cargo. Will this API do the job for me or not? What does "gives the templates" mean: the names of all templates used, or the invocations of each template, with the full text of each invocation?

Undefined "give" likely applies to much else herein. MaxEnt (talk) 21:02, 27 November 2021 (UTC)
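As far as I can tell, the templates property only returns the names of transcluded templates; to get each invocation with its arguments, prop=parsetree is closer to what is wanted here. A sketch that walks a hard-coded, simplified parse tree and extracts template names and parameters (real trees mark positional parameters with an index attribute on an empty name element, which this does not handle):

```python
import xml.etree.ElementTree as ET

def template_calls(parsetree_xml):
    """Yield (name, {param: value}) for each template invocation."""
    root = ET.fromstring(parsetree_xml)
    for tpl in root.iter("template"):
        name = tpl.findtext("title", "").strip()
        args = {}
        for part in tpl.findall("part"):
            key = part.findtext("name", "").strip()
            val = part.findtext("value", "").strip()
            args[key] = val
        yield name, args

# Simplified stand-in for action=parse&prop=parsetree output:
sample = (
    "<root><template><title>Q</title>"
    "<part><name>1</name><value>Q42</value></part>"
    "</template></root>"
)
print(list(template_calls(sample)))
```

So for the use case above (all arguments of every {{Q}} instance on a page), the parse tree rather than the template list is the property to request.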