Help talk:Extension:ParserFunctions


31.147.156.166 (talkcontribs)

I've done a lot, but I can't seem to figure out why we can't just use


{{#explode:{{PAGENAME}}|:}}


It just doesn't work. On the other hand, writing the actual words (not substituting them):


{{#explode:Extension:ParserFunctions|:}}


will work. Why is that and what can be done?

Dinoguy1000 (talkcontribs)

{{PAGENAME}} only returns the name of the current page, without the namespace. For example, (on this wiki) for the page Extension:ParserFunctions, it returns ParserFunctions. Therefore, {{ #explode: {{PAGENAME}} | : }} will only return something different from {{PAGENAME}} if the page name itself contains a colon. If you want to get the namespace of a page, you should instead use {{NAMESPACE}}: for Extension:ParserFunctions (again, on this wiki), it returns Extension. Alternatively, you could use {{FULLPAGENAME}}, which will return the namespace and page name together, but there's no point in doing that just to #explode it to get the namespace.
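To illustrate, here is how each variable would render (values as they would appear on this wiki, for the page Extension:ParserFunctions):

{{PAGENAME}} → ParserFunctions
{{NAMESPACE}} → Extension
{{FULLPAGENAME}} → Extension:ParserFunctions
{{#explode: {{PAGENAME}} | : }} → ParserFunctions (there is no colon left to split on)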

31.147.156.166 (talkcontribs)

I was thinking without the namespace. It's more about removing extra characters from the title, like getting "car" from "John's car" if the page is named "John's car".

Dinoguy1000 (talkcontribs)

For that specific case, you can just do {{ #explode: {{PAGENAME}} || 1 }}: #explode will operate on spaces if you omit or leave blank the second parameter.
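For example, on a page named "John's car" (leaving the delimiter blank so #explode splits on spaces, and asking for piece 1, counting from 0):

{{#explode: John's car || 1 }} → car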

If this still doesn't help for what you're trying to do, please post an actual use case from your wiki and I'll see if I can give you better/more tailored advice.

Verdy p (talkcontribs)

Don't forget that {{NAMESPACE}}, {{FULLPAGENAME}} and {{PAGENAME}} can take an optional parameter after a colon and before the closing braces. This simplifies template design considerably, avoiding many explicit tests:

  • You can use {{PAGENAME:{{{1|}}}}} to strip the leading whitespace and colon, and the namespace prefix, from the value of {{{1|}}}. It also strips trailing whitespace, and other whitespace and underscores are normalized and compressed to their default presentation form (the form displayed in rendered page titles, and in links to category members).
  • Or {{FULLPAGENAME:{{{1|}}}}} to strip just the leading whitespace and colon (the namespace prefix is kept). It also strips trailing whitespace.
    A valid pagename (without the namespace or any interwiki/language prefix) may still contain a colon, if what precedes that colon is not recognized as a valid namespace or wiki prefix. {{#explode:}} does not make this distinction and may strip too much in that case.
  • As well, {{NAMESPACE:{{{1|}}}}} will strip the leading colon and the pagename (including any interwiki prefixes followed by a possible remote namespace), keeping only a leading namespace recognized on the local wiki (an empty string is returned for the default main namespace).
    The form of the namespace (when not empty) returned by {{NAMESPACE:}} or {{FULLPAGENAME:}} is canonicalized to its default letter case (a single capital initial), and recognized aliases are converted to the canonical name (e.g. the Image: alias is converted to the canonical File: namespace).
    This allows comparing namespaces directly, e.g. in the test labels of {{#switch:...}} or with {{#ifeq:...}}, without having to convert them to a single letter-case form (with {{UCFIRST:{{LC:...}}}}, or {{UC:...}}).
    This canonical form of a namespace may be localized in the default language of the local wiki. If you intend to write code that works the same on different wikis, e.g. on the English Wikipedia (which uses "Template:") and the French Wikipedia (which uses "Modèle:" and recognizes "Template:" only as an alias), you may want to compare against namespace names returned from namespace numbers (using {{NS:number}}, where {{NS:0}} is the empty string for the main namespace of the wiki), instead of comparing with explicit canonical namespace names: the namespace number of the "Template:" namespace in English is the same as that of the "Modèle:" namespace in French.
  • None of these three functions will touch, remove, or canonicalize namespaces and pagenames appearing after an "interwiki:" prefix recognized by the local wiki. Links to external wikis will still work, but are resolved and canonicalized by the target wiki itself when visited (the target wiki also resolves any resulting page redirect); such remote resolution may change over time (extremely rarely, hopefully, generally only during the early development of a new wiki, before its default-language localization is finalized; redirects on the target wiki, however, may appear and change quite frequently). Also, no test of the target page's existence on the target wiki is performed when an interwiki prefix is used: the page is assumed to exist, and the local wiki only validates the interwiki prefix it knows.
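As a sketch of the cross-wiki comparison described above (assuming {{{1|}}} is a hypothetical template parameter holding a page title; {{NS:10}} is the local name of the "Template:" namespace on any MediaWiki wiki):

{{#ifeq: {{NAMESPACE:{{{1|}}}}} | {{NS:10}} | it is a template | it is not a template }}

Because both sides resolve to the local wiki's own namespace name, this test works unchanged on wikis using "Template:" and on wikis using "Modèle:".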
Reply to "Explode PAGENAME"

ifexist a registered user

3
190.242.129.62 (talkcontribs)

The magic word #ifexist will always show the second parameter if the first parameter is "Media:" followed by an existing media file; if "File:" is used instead, it shows the third parameter when there is no description page. However, I don't know if something similar is possible with users. The user Google~mediawikiwiki is registered, but as you can see, the user page doesn't exist because it hasn't been created. I've tried the special pages (logs, listusers and contributions) with non-existent users, but it always gives the second parameter. Is there a way to make it show the third parameter if the user specified in the first parameter doesn't exist?

RobinHood70 (talkcontribs)

Currently, there's no way to determine whether a user is registered or not via a parser function, and I can't think of any tricky ways around that limitation, either.

Verdy p (talkcontribs)

Media files are a bit special because they may not exist on the local wiki but may exist on a shared wiki like Commons. #ifexist does not support testing other wikis (it is much more costly than testing the local wiki, as it would require using a REST API of another wiki). Usually, links to media files just render an "external" link to the media file, which is then not tested. Using "Media:" in #ifexist to conditionally use "File:" to render it is not guaranteed to work, except if the file is hosted on the local wiki (e.g. a file on the English Wikipedia that is not importable and replaceable by a file with the same name on Commons).
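For reference, the "Media:" trick being discussed looks like this (Example.jpg is a placeholder filename):

{{#ifexist: Media:Example.jpg | the file exists | no such file }}

As noted above, this is only guaranteed to be reliable for files hosted on the local wiki.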

When using external wikis, the parser will not invoke the thumbnail renderer of the local wiki, but will just request the "File:" from the external wiki. However, there is a delay before that external renderer provides a reply, whereas #ifexist can only perform synchronous requests (within a short completion time).

To support testing files on another wiki, #ifexist would need an optional parameter requesting an asynchronous check of that other wiki; the problem is that MediaWiki does not support asynchronous requests, whose completion time is unpredictable. The same would be true if one wanted to use "shared" templates or modules.

So #ifexist is not designed to allow asynchronous requests, which would block the rendering of the page being parsed; I have no idea how this could work. #ifexist is supposed to run on the same database as the page being rendered. If it did work, we could test external links to any page on any wiki, using wikilinks with interwiki prefixes. Worse, the result would not be cacheable in the parser cache: even if the external site has the shared resource, you don't really know how and when it will return the content (you don't even know its metadata, notably the media type and size, which may change at any time; there is no mechanism for cache synchronization between different wikis).

When you use a shared image, the parser assumes that the external file exists and that the external site will generate a thumbnail with the requested size and media type. The actual request to the external server is then not made by the parser and not cached, but made client-side by the visitor's web browser, which issues its own asynchronous requests; the external site performs all the needed parsing and transformation to HTML or a thumbnail image, so the local wiki never parses the result.

Reply to "ifexist a registered user"

Suggestion to state that {{#if: can check for multiple parameters

14
Snowyamur9889 (talkcontribs)

I discovered a while back through testing on another hosted wiki (MediaWiki 1.39.3, running PHP 8.1.19 (fpm-fcgi) ) that {{#if: can check for multiple parameters as the test string.


For example:

{{#if:{{{Param1|}}} {{{Param2|}}} {{{Param3|}}}|Param1, Param2, or Param3 exist|No parameter strings exist}}

{{{Param1|}}} is checked as the test string, but {{{Param2|}}} and {{{Param3|}}} are also checked. If at least one of the parameters exists and has a non-empty string argument, the string "Param1, Param2, or Param3 exist" will be displayed. Otherwise, "No parameter strings exist" will be displayed.


I think this note should be mentioned under the "if" section of this documentation. It's not explicitly mentioned you can do this, but the fact you can makes {{#if: significantly more useful for checking for multiple parameters in the test string.

RobinHood70 (talkcontribs)

Even faster for checking that sort of thing is: {{#if:{{{Param1|{{{Param2|{{{Param3|}}}}}}}}}...etc., though it has the disadvantage of being harder to read. The idea there is that if Param1 has a value, neither of the other two parameters needs to be evaluated.
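Written out with spaces between the brace runs, so the preprocessor cannot misgroup them (Param1 to Param3 are placeholder parameter names; this is the same nesting, only spaced for readability):

{{#if: {{{Param1| {{{Param2| {{{Param3|}}} }}} }}} | some parameter set | none set }}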

Verdy p (talkcontribs)

That's not correct. If Param1 is explicitly given an empty value, the result of #if: will always be false, independently of the values given (or not given) to the other parameters, which only provide a default value for Param1 when it is not passed at all. It is also not faster at all: the expression given as the default value of a missing parameter is still evaluated before that parameter is checked, and the recursive nesting of triple braces is very unfriendly (avoid it completely, it is very error-prone).


Currently, MediaWiki still does not perform "lazy evaluation" when expanding templates and parameters, where values would be expanded only when effectively used and needed (and fully expanded only at the end of parsing).

A true lazy expansion would mean that each node in the parser is in one of two states: unexpanded, unparsed text; or the expanded, cached value of its expansion. The cache would keep the unexpanded form as the key: if that key is not found in the cache, the string is still unexpanded and still needs to be parsed. Nodes would then point either to an unexpanded text, or to a cache entry containing the <unexpanded, expanded> pair of texts; such a cache would avoid unnecessary conditional expansions and allow reusing the results of prior evaluations of the same unexpanded text.

MediaWiki currently uses a cache only for transclusions of templates (after parsing and resolving the template name, with its parameters ordered and named in canonical form), but I'm not even sure it uses any cache for parser function calls, so in principle they could return different values (e.g. an "enumerator" parser function, or a parser function returning different random numbers). This is already false for parser functions returning time parts, which are all based on the same (cached) local time of the server: once you extract any part of that local time, the stored cache expiration of the parsed page is adjusted and reduced to match the precision of the date part requested, but this does not affect how the current page is rendered. For "lazy parsing", I was speaking about a transient cache used exclusively in memory by the parser (not stored in the database), which does not survive the full parsing and expansion of the current page.

RobinHood70 (talkcontribs)

I hadn't considered empty values. My mistake, and to be honest, that on its own probably makes the rest of this moot. But leaving that aside for the moment, though, as I recall, the parser will tokenize the values to PPNodes no matter what. So, all three get tokenized regardless of whether they're nested or sequential. During expansion, however, I thought the loop put the resultant value in the named/numbered expansion cache as soon as it found the relevant value and didn't fully expand the default values if it didn't need to (i.e., if one of the values was non-blank/non-whitespace). Am I wrong in that? It's been a while since I've looked at that code. I'll grant that even if it does so, the performance gain is minimal, but it's not nothing.

As far as recursive braces go, the only concern I'm aware of is brace expansion which, of course, is ambiguous if you have 5+ of them. Knowing that the preprocessor bases its decisions on the opening braces, unambiguous parsing is as simple as making sure that your braces use spaces as needed: {{{ {{, for example. Is there something I'm not thinking of that complicates triple-brace usage further? Or did I misunderstand your point?

I'm not sure if I've understood your last paragraph correctly, so I don't know if this is a helpful reply, but I believe parser functions are only cached in the sense that the entire page is cached. If you don't set the cache expiry in your PF, a so-called "random" number generator will happily return the same value on every view/refresh until edited or purged. If the PF does set it to a lower value, the page gets re-evaluated every X seconds. To the best of my knowledge, that means that something like a random number generator would cause the entire page to be re-parsed, most likely at every refresh unless they use a cache-friendlier value.

Verdy p (talkcontribs)

You did not understand: any parser function that is supposed to return a different random number each time it is called from the same rendered page will still not return different numbers. The invocation is made once and cached (in memory only, not stored). What is cached and stored in the database is the result of the full page parsing (that is where a purge/refresh is needed; but purge/refresh has no effect on the in-memory cache covering multiple invocations of the same parser function call or module call, which cannot return different values between calls, only between refreshes of the whole page).

In summary, MediaWiki uses purely functional programming: invocations can be executed in any order, and there should be no "side effects" via hidden "state variables". This also allows MediaWiki to delegate part of the work to multiple parallel workers (if supported), running without any kind of synchronization, and to reuse their results (synchronization would occur only on the in-memory cache). Hidden mutable state variables that influence the result would impose a strong performance penalty, forcing evaluation to be purely sequential and preventing a completely "lazy" evaluation with all its performance advantages.

RobinHood70 (talkcontribs)

If you re-read, I actually did say the same thing as your first paragraph. That's what I was getting at when I said that a random number generator that doesn't set a cache-expiry would only be re-evaluated when the cache is invalidated. If the PF does set the cache-expiry, however, it's quite possible to create a random number generator that changes at every refresh, for example, this implementation of #rand. If you edit the page, you'll see a cachetime parameter which you can adjust. The corresponding value will be set in the page's NewPP limit report if you look at the page source, and you can also confirm the time the page was actually cached. (PS, feel free to edit that page if you want to confirm what I'm saying...that's a development wiki, so nobody will mind.)

Are you sure that invocations can be executed in any order? I was under the impression that MediaWiki's parsing was still linear and invocations come in a fixed order (except maybe if you're using the Visual Editor, since that uses Parsoid). That's been a point of some contention, since allowing invocations to come in any order breaks extensions that rely on preservation of state during parsing, like Extension:Variables and those that do other forms of variable assignment (e.g., loading variable values from a database). I know they talked about breaking that in 1.35, but I saw no mention of it in the change list for that or any version thereafter and I was under the impression that it was only Parsoid that did that, not the still-current preprocessor/parser. I honestly haven't had time to look into it thoroughly.

Verdy p (talkcontribs)

Execution order in MediaWiki was made not to be significant; this will be even more critical for Wikifunctions, which must be purely functional and will run in arbitrary order, possibly over many backend servers running asynchronously (the first that replies fills the cache so that further invocations are avoided). Lazy evaluation is critical for Wikifunctions to succeed. Various things have been fixed in MediaWiki to make sure it can behave as a purely functional language, allowing parallelization. There are still things to do, and some third-party MediaWiki extensions still depend on sequential evaluation. Full lazy evaluation should not require fully processing arguments, except as needed from the head (optionally from the tail): this requires virtualization of string values, so that we don't need the full expansion of the wikicode, but can instead parse it lazily and partially from left to right, just as far as needed to take the correct decision in branches, and then eliminate unparsed parts that never need to be evaluated.

This is possible in PHP, as well as in Lua, or even in JavaScript, by using a string interface and avoiding, as much as possible, functions like "strlen" that require a full expansion of their string parameter (for example, code that matches only prefixes or suffixes just needs to expand as many initial or final nodes as necessary). Internally, the "string-like" object would actually be stored as a tree of nodes while unexpanded, or as a flat list of string nodes (not necessarily concatenated, to avoid costly copies and reallocations) once expanded. Even the HTML Tidy step is not required to take a single physical string argument: it can just as well read characters from a flat list of string nodes (many of them possibly identical and taking no extra memory, except for the node itself in the tree or list, represented in an integer-indexed table). This means that nowhere in the parsing, expansion and generation of the HTML would we really need the whole page in memory in large string buffers, and all steps of parsing/expansion/tidying could be parallelized so that they behave as though they operated linearly and incrementally. This would boost performance, even on a single server thread.

With real lazy evaluation, the number of nodes to evaluate would very frequently be reduced, notably for wiki pages that expand a lot of shared templates, parser functions with conditional results (#if and its variants, #switch), or functions using only part of their parameter (such as "substring" functions taking the start or end of the text). The in-memory cache for lazy evaluation can be the tree of nodes itself, whose evaluation and expansion is partial, and which can also be evaluated by delegates running asynchronously in parallel, possibly on multiple server/evaluator instances.

RobinHood70 (talkcontribs)

Thanks for all that. In what version of MW is linear parsing officially broken? I thought that was all being done in Parsoid and that we were safe from that kind of change until then, but from what you're saying, it sounds like even the legacy parser is being affected. That completely messes us up, as 75% of our wiki uses a custom extension that relies on linear parsing and state being maintained within any given frame (not to mention we have cross-frame data as well). The idea of it is that it can modify or create variables as well as returning them or inheriting them across frames, not to mention loading data from other pages. The dev team have promised us a linear parsing model as well, but there's been no information anywhere that I've found, so we're really in a holding pattern until we know what's going on.

Is this all documented anywhere? It would be nice to be able to keep abreast of these changes, but I haven't found anything that even remotely touches on these changes the way you just did.

RobinHood70 (talkcontribs)

Oh and apologies to Snowyamur9889 for this getting so far afield.

Till Kraemer (talkcontribs)

Thank you! I was looking exactly for this. Would be great to have it in the documentation. Cheers and all the best!

DocWatson42 (talkcontribs)
Verdy p (talkcontribs)

You're not even required to use space separators between the parameter names in the #if: condition. This works only because, if all parameters are empty or just whitespace (SPACE, TAB, CR, LF), you get a string of whitespace in the first parameter, and #if: discards leading and trailing whitespace (but not whitespace in the middle) from all its parameters. Note that #if: also discards HTML comments everywhere in all its parameters (so you can freely insert whitespace or HTML comments just after "#if:", around the pipes, or before the closing double braces, and get the same result).
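For instance, these two sketches should render identically (Param1 and Param2 are placeholder parameter names; the comment and the surrounding whitespace are discarded by #if:):

{{#if: {{{Param1|}}}{{{Param2|}}} | something given | nothing given }}
{{#if: {{{Param1|}}} <!-- a comment --> {{{Param2|}}} | something given | nothing given }}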

However, #if: does not discard leading or trailing whitespace if it is HTML-encoded, e.g. as &#32;: used in the #if: condition, such a string evaluates as a true (non-empty) condition, so "{{#if: &#32; | true | &#32;false&#32; }}" returns "true". The expansion of HTML-encoded character entities into actual spaces is not performed during the evaluation of parser functions or template expansion, but only in the final phase that generates the HTML from the fully expanded wikitext and cleans it with HTML Tidy (which may compress whitespace, and may move leading and trailing whitespace in an element's content outside that element, where it may be further compressed, except in "preformatted" elements like HTML "<pre> </pre>", which are left intact). Note also that this Tidy step may re-encode some characters using numerical character entities (hexadecimal or decimal) or predefined named entities like "lt", "gt" or "quot", where needed to preserve valid HTML syntax in text content or attribute values; exactly which encoding is used at this step does not matter, as they are fully equivalent in HTML, do not affect the generated HTML DOM, and are not detectable at all in templates, parser functions, or client-side JavaScript. Note that "pre" elements are treated much like "nowiki": their content is hidden behind a "uniq" tag and not parsed, but regenerated from the cache that stores its actual value after the Tidy step (so inner whitespace in the content is left intact, merely "HTMLized" with character entities as needed).

Note also that if there is any "nowiki" pseudo-element in the condition string of #if:, the condition always evaluates to true, even if that "nowiki" pseudo-element is completely empty. E.g. "{{#if: <nowiki/> | true | false }}" returns "true". Effectively, "nowiki" elements are replaced during early parsing by a "uniq" tag (starting with a special character forbidden in HTML and containing a numeric identifier for the content), and are replaced by the actual content at the end of template expansion and parser function calls, just before the HTML Tidy step (which may strip parts of the content that start or end with whitespace).

RobinHood70 (talkcontribs)

I just tried {{#if: <nowiki/> | true | false }} on both my testing wiki and WP to be sure, and as I'd thought, it returns "true", not "false", though your reasoning is pretty much correct, otherwise. It sees <nowiki/> as a non-empty value because of the uniq tags that you mention.

Verdy p (talkcontribs)

You're correct (that's what I described, but I made a bad copy-paste from the former code just above). I just fixed my comment above.

Reply to "Suggestion to state that {{#if: can check for multiple parameters"

Tracking pages with expr errors

6
Amire80 (talkcontribs)

If a page is published with the code {{#expr: 1+ }}, it will show a localizable error message in the page body. Here's an example: User:Amire80/oneplus.

Is there a way to find a list of pages that have such errors, e.g. a tracking category or a special page?

As far as I know, there is no such page at the moment.

In the Hebrew Wikipedia, the administrators changed the messages that show these errors so that they display the message and add a page to a category. It works, but it's hacky. Is there currently no other way to do it? (That is, other than modifying the code that implements {{#expr.)

Verdy p (talkcontribs)

Amire, you are clearly abusing your admin privileges by administratively deleting an answer (and hiding its content in the history) that was completely on topic, and not as long as you state. It also explained what can be done for now, and what may eventually be done in MediaWiki to solve this problem.

You say it was not replying to the question. But what was really the question? The current lack of support in MediaWiki, the way it works, and how error tracking in categories may (or may not) be done in the result of a parser function that is not supposed to return such a thing in the plain-text format (excluding MediaWiki and HTML tags) it is expected to return.

As your question has no definitive answer for now, I explained a workaround currently used in many templates (and its possible caveat, minor in most existing use cases, using "#iferror"). And it was properly formatted. It's not my answer that is too long; it's your question that is badly formulated and in fact exposes a problem, seeking solutions or workarounds (which is what I gave). If you don't want opinions on this case, why post here in public? I did not violate any rule here, but you just did, with your privileges.

Amire80 (talkcontribs)

I stand by what I've written in the deletion comment: your response was very long, and it didn't answer the question. I asked a simple yes-no question that can be answered clearly and briefly, as was done in another response. If you don't understand the question or don't have the knowledge to answer it, you don't have to write anything. I deleted your response because it could make people think the question was answered, even though it was not.

In fact, a very large number of the responses you write on Phabricator and on discussion pages in all the wikis in which I saw you writing are too long and off-topic, and I'm really not the only person who openly complains about that.

Verdy p (talkcontribs)

Seeking solutions (this was clearly not a yes-no question) and discussing them appropriately (because you instantly replied "no" to your own question, without discussing possible solutions) is on topic, and does not justify your administrative deletion at all. You have abused your rights.

Matěj Suchánek (talkcontribs)

I keep track of these errors using simple search: .

And yes, we have Manual:Tracking categories, but we definitely don't track all errors. Something needs to be changed in the code.

Verdy p (talkcontribs)

I suggested (in the message abusively deleted by Amire80) implementing some code in MediaWiki to let a parser function post error-tracking messages to an alternate "stream", not returned in the single string of the function call, but generated after the main content. For now, there is no easy way to correctly implement error tracking in parser functions like "#expr:" without causing further problems for the parsing (it can potentially break the page layout or HTML syntax). Amire80 thinks this is off topic, but it is not. The pseudo-solution he gave above is not one (described by himself as "hacky", so the question was explicitly seeking better solutions), so for now we use workarounds (like "#iferror:" in templates, to detect errors in expressions), because there is still no other way to do that without modifying the wikicode of pages using "#expr:".

Reply to "Tracking pages with expr errors"

Request time null formatting

4
SamuelRiv (talkcontribs)

Doing some testing, I found that #time will return the formatting string unmodified if the string has no recognized code (or throw an error if the date object is bad). However, it does not appear that #time has a null code to return the date/time object unmodified (or throw an error if the date object is bad). This behavior would be useful for several templates using #time for data validation that would otherwise want to defer to the user (or source) for date formatting. It also seems to me like it would be an expected feature for any parser function, unless I'm missing something. SamuelRiv (talk) 22:26, 5 April 2024 (UTC)

Matěj Suchánek (talkcontribs)

Could you please demonstrate the current and desired behavior using an example?

SamuelRiv (talkcontribs)

Unrecognized or blank formatting string: {{#time:q|1999-01-01}} ; {{#time:|1999-01-01}} ;

q ;  ;

What I request is a formatting string code that validates the date/time object and returns the date/time object string unmodified if valid. Something like {{#time:NULL|1999-01-01}} ; {{#time:NULL|1999-99-99}} would return output 1999-01-01 ; ERROR_CODE.

The reason for requesting this is it seems like null behavior is missing in either case -- the current behavior seems to indicate that some null date would output the formatting string unmodified, but that function is not present either (I don't know what the use case is for outputting unrecognized characters in lieu of an error code). Either way, some kind of null function behavior should be expected unless there's a good reason. (Alternatively, validation should be separable from reformatting.) SamuelRiv (talk) 03:30, 10 April 2024 (UTC)

Tacsipacsi (talkcontribs)

Can’t you use #iferror?

{{#iferror:{{#time:|{{{1}}} }}|Error|{{{1}}} }}
Reply to "Request time null formatting"
Sdkb (talkcontribs)

Does anyone know what's happening when I run {{#time:d M Y|2017 + 13 months}} and get 09 May 2025? It doesn't read 2017 as a year and adds 13 months to the current date.

Verdy p (talkcontribs)

Months are 30.436875 days long on average (in the Gregorian calendar, whose years are 365.2425 days on average), but their actual length on calendars is variable: adding months depends on the starting month and possibly the day, and sometimes on the year if it falls in February of a leap year (notably when starting in the last few days of a month). Calendar months within a year (counted from 1 March to the next year) are aligned as if their average length were 30.6 days before rounding to whole days, so individual calendar months vary in duration.

When you specify 2017 alone, there's no way to add an exact number of months and get a date with a precision to the day. The function takes the missing month and day from the current date (9 April 2024), and then adds the relevant number of months. The specification "2017" is ignored.

The function could instead use 1 January 2017 at 00:00:00 as the base (independently of the current year) and then add 13 months: 12 months to increase the year from 2017 to 2018, then 1 month to shift from 1 January to 1 February, resulting in 1 February 2018 (at 00:00:00, though you did not ask the function to show the time).

When you add a number of months, it is most simply converted to years and months by dividing by 12. The function would then add 1 year and 1 month, but to which day of the month? Some rounding must occur, and other interpretations are possible.

But generally it is not a good idea to add months to a date without specifying it with a precision of at least the day (independently of the display format "d M Y", which plays no role here).

In no case can you expect a resulting date in 2017.
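Part of the surprise comes from how the underlying PHP date parser (which #time relies on) reads the input string: a bare four-digit number that forms a valid 24-hour time can be read as HH:MM rather than as a year, so "2017" becomes the time 20:17 on the current day before the 13 months are added. A minimal Python sketch of that HH:MM reading (an illustration of the parsing rule, not of ParserFunctions itself):

```python
from datetime import datetime

# "2017" matches the HH:MM pattern (hours 0-23, minutes 0-59),
# so a strtotime-style parser can treat it as a time of day.
t = datetime.strptime("2017", "%H%M")
print(t.hour, t.minute)  # 20 17
```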

Sdkb (talkcontribs)

Thanks for the reply! I'm working on a template where neither the precision of the date nor the units of time in the addition are known in advance. To get around this issue, I had to implement a workaround converting the date to the more precise DMY format (which borrows the current day/month when not specified) before putting it through w:Template:After, which uses unix time. If you know of a better solution, feel free to lmk or tweak the template yourself!

Reply to "Time issue"

Replacing Consecutive Characters

70.160.223.43 (talkcontribs)

I'm using replace to remove certain characters like "&" from strings like "One & Two". I'm replacing the character with nothing but I'm left with two consecutive spaces. This is causing issues with file names. Is there a way to replace multiple consecutive characters with one?

This post was hidden by Verdy p (history)
Verdy p (talkcontribs)

If you try using just the "#replace:" function, use it a second time to replace two spaces with one space, after replacing the "&" (assuming that it may or may not have a single leading space and/or a single trailing space).

Note that "#replace:" only performs simple substitution of literal substrings; it does not support regular-expression matching.

However, I think it is a bad idea to silently drop the ampersand when generating filenames; it would be better to replace it with a hyphen and keep spaces where they are. Beware also that such a substitution may break strings containing numerical or named character references (frequently needed and present in the wikitext, or automatically added by the MediaWiki text editors): you should only replace "&amp;", not "&" alone.

As well, the ampersand character itself is allowed in MediaWiki filenames, but it must often be encoded as a character reference (in HTML text or wikilinks) or URL-encoded (when used in a URL or a query parameter). For URL-encoding, see the parser function "urlencode:" (and select the correct form depending on the syntactic type of the target: PATH, QUERY...)

For stripping extra spaces (or underscores) in filenames, you can use the "#titleparts:" or "PAGENAME" functions (the latter also strips a leading ":" or a recognized namespace prefix, but not a leading interwiki prefix).

Dinoguy1000 (talkcontribs)

The best method here would probably be to rewrite your template(s) as Scribunto modules, but if that isn't an option or practical for some reason, I'd probably approach it with multiple #replaces: remove the target character(s); replace all spaces with some character you're sure won't appear in the input; replace multiple consecutive stand-in characters with a single one; and finally replace the stand-in characters with spaces. (These replaces can be reordered a little to your liking; e.g. removing the target characters can happen before or after replacing the spaces with a stand-in character.) I might code this something like:

{{ #replace: {{ #replace: {{ #replace: {{ #replace: {{{1}}} | & }} | <!-- space --> | ¬ }} | ¬¬ | ¬ }} | ¬ | <nowiki/> <nowiki/> }}

For filenames specifically, speaking from personal experience, I'd recommend not to remove characters if you don't have to (though you might not have much choice if you're already dealing with a large collection of files that are named that way, of course).
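The same idea can be sketched in plain Python for comparison (an illustration, not the template code). One detail worth noting: a single pass replacing two spaces with one only halves a run, so a run of three or more spaces is not fully collapsed in one pass; the same applies to a single ¬¬ to ¬ #replace, which is why the collapsing step below loops.

```python
def strip_and_collapse(s: str, target: str = "&") -> str:
    """Remove the target character, then collapse runs of spaces to one.

    A plain-Python sketch of the nested-#replace approach; the loop is
    needed because one "two spaces -> one space" pass does not reduce a
    run of three or more spaces to a single space.
    """
    s = s.replace(target, "")
    while "  " in s:
        s = s.replace("  ", " ")
    return s

print(strip_and_collapse("One & Two"))  # One Two
```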

Reply to "Replacing Consecutive Characters"

how to process loops ?

Wladek92 (talkcontribs)

Hi all, is there an equivalent to 'for' or 'while' to process a list of items, rather than applying a statement to a single item? Thanks. --Christian 🇫🇷 FR (talk) 10:31, 25 June 2023 (UTC)

Cavila (talkcontribs)
Tacsipacsi (talkcontribs)
Reply to "how to process loops ?"

#replace multiple strings?

Summary last edited by Tacsipacsi 12:48, 17 June 2023

Use nested #replaces.

V G5001 (talkcontribs)

Is it possible to replace multiple different strings within one string?

For example, I would want to do {{#replace:The dog is jumping.|dog,jumping|cat,walking}} or something similar, to receive the output "The cat is walking."

Is this possible in any way?

Dinoguy1000 (talkcontribs)

Yes, just nest #replaces: {{#replace:{{#replace:The dog is jumping.|dog|cat}}|jumping|walking}}

Just be aware of the expansion depth limit (to say nothing of code readability); if you need a lot of separate replaces on the same string, it will probably be better to write it in Lua, as a Scribunto module. (You could also use Extension:Variables, but that extension unfortunately has an uncertain future given the direction the parser is headed in.)
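The nesting is just two literal substitutions applied one after the other; in plain Python terms (an illustration, not how MediaWiki runs it), it behaves like chained string replacement:

```python
s = "The dog is jumping."
result = s.replace("dog", "cat").replace("jumping", "walking")
print(result)  # The cat is walking.
```

Order can matter if one replacement's output contains another's search string; the innermost #replace runs first.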

V G5001 (talkcontribs)

Thanks, this worked

What is ParserFunctions programming language?

Sokote zaman (talkcontribs)

Which programming language do the functions in ParserFunctions use?

Keyacom (talkcontribs)

The #time function uses PHP's date() format characters, except that it also defines extra functionality through x-prefixed format codes.

The #expr function uses some custom language. Its operators are similar to the ones used in SQL (hence a single equals sign for equality).
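For example (results assumed from the documented behaviour of #expr; the usual arithmetic precedence applies, and true is rendered as 1):

{{#expr: 2 + 3 * 4}}

returns 14, while

{{#expr: 2 = 2}}

returns 1, because #expr uses a single = for equality, much as SQL does.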

Sokote zaman (talkcontribs)

Thank you for your reply. What language do the other functions use? Thanks

Keyacom (talkcontribs)

Also:

  • #timel uses the same syntax as #time
  • #ifexpr expressions use the same syntax as #expr
  • all of these functions are coded in PHP.
Sokote zaman (talkcontribs)

Thank you for your reply.

Reply to "What is ParserFunctions programming language?"