Manual:Pywikibot/imageuncat.py

Program to add the uncat template to images without categories on Wikimedia Commons.

This script works on the given site, so if Commons should be handled, the Commons site must be given, not a Wikipedia or similar project.

Parameters
-yesterday       Go through all uploads from yesterday.

-recentchanges   Go through the changes made between 120 minutes and 70 minutes ago. (This overrides the default '-recentchanges' generator.)

-cat             Work on all pages which are in a specific category. Argument can also be given as "-cat:categoryname" or as "-cat:categoryname|fromtitle" (using # instead of | is also allowed in this one and the following).

-catr            Like -cat, but also recursively includes pages in subcategories, sub-subcategories etc. of the given category. Argument can also be given as "-catr:categoryname" or as "-catr:categoryname|fromtitle".

-subcats         Work on all subcategories of a specific category. Argument can also be given as "-subcats:categoryname" or as "-subcats:categoryname|fromtitle".

-subcatsr        Like -subcats, but also includes sub-subcategories etc. of the given category. Argument can also be given as "-subcatsr:categoryname" or as "-subcatsr:categoryname|fromtitle".
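The "categoryname|fromtitle" syntax above (with # accepted in place of |) can be sketched in plain Python. This is a hypothetical illustration of how such an argument might be split, not Pywikibot's actual implementation; the function name split_cat_arg is made up for this example.

```python
import re

def split_cat_arg(value):
    """Split a category argument into (category, fromtitle or None).

    Both "|" and "#" are accepted as the separator, as described above.
    """
    parts = re.split(r'[|#]', value, maxsplit=1)
    category = parts[0]
    fromtitle = parts[1] if len(parts) > 1 else None
    return category, fromtitle

print(split_cat_arg('Maps of Europe|M'))  # ('Maps of Europe', 'M')
print(split_cat_arg('Maps of Europe#M'))  # ('Maps of Europe', 'M')
print(split_cat_arg('Maps of Europe'))    # ('Maps of Europe', None)
```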

-uncat           Work on all pages which are not categorised.

-uncatcat        Work on all categories which are not categorised.

-uncatfiles      Work on all files which are not categorised.

-file            Read a list of pages to treat from the named text file. Page titles in the file may be either enclosed with brackets, or be separated by new lines. Argument can also be given as "-file:filename".

-filelinks       Work on all pages that use a certain image/media file. Argument can also be given as "-filelinks:filename".

-search          Work on all pages that are found in a MediaWiki search across all namespaces.

-logevents       Work on articles that were on a specified Special:Log. The value may be a comma separated list of three values:

logevent,username,total

To use the default value, use an empty string. Options exist for every type of log; the log event parameter can be one of the following:

block, protect, rights, delete, upload, move, import, patrol, merge, suppress, review, stable, gblblock, renameuser, globalauth, gblrights, abusefilter, newusers

The default number of pages is 10.

Examples:

-logevents:move gives pages from the move log (usually redirects)
-logevents:delete,,20 gives 20 pages from the deletion log
-logevents:protect,Usr gives pages from the protect log by user Usr
-logevents:patrol,Usr,20 gives 20 pages patrolled by user Usr

In some cases it must be written as -logevents:"patrol,Usr,20"
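The "logevent,username,total" triple with empty-string defaults can be sketched as follows. This is a hypothetical parser written for illustration (the function name parse_logevents is made up); the real handling is done inside pywikibot.pagegenerators.

```python
def parse_logevents(value, default_total=10):
    """Parse 'logevent,username,total'; empty fields fall back to defaults."""
    fields = value.split(',')
    logtype = fields[0]
    # An empty username field (as in "delete,,20") means "any user".
    user = fields[1] if len(fields) > 1 and fields[1] else None
    # A missing or empty total falls back to the default of 10 pages.
    total = int(fields[2]) if len(fields) > 2 and fields[2] else default_total
    return logtype, user, total

print(parse_logevents('delete,,20'))     # ('delete', None, 20)
print(parse_logevents('patrol,Usr,20'))  # ('patrol', 'Usr', 20)
print(parse_logevents('move'))           # ('move', None, 10)
```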

-namespaces      Filter the page generator to only yield pages in the specified namespaces. Separate multiple namespace numbers or names with commas. (Also available as -namespace or -ns.) Examples:

-ns:0,2,4 -ns:Help,MediaWiki

If used with -newpages, -namespace/-ns must be provided before -newpages. If used with -recentchanges, efficiency is improved if -namespace/-ns is provided before -recentchanges.

If used with -start, -namespace/-ns shall contain only one value.
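Resolving a comma-separated -ns value to namespace numbers can be sketched like this. The parser and its name-to-number map are hypothetical simplifications written for this example; the real mapping comes from the wiki itself (Help is 12 and MediaWiki is 8 on standard MediaWiki installations).

```python
# Assumed subset of the canonical namespace map of a standard MediaWiki site.
CANONICAL = {'': 0, 'Talk': 1, 'User': 2, 'MediaWiki': 8, 'Help': 12}

def parse_namespaces(value):
    """Turn '0,2,4' or 'Help,MediaWiki' into a list of namespace numbers."""
    result = []
    for item in value.split(','):
        item = item.strip()
        result.append(int(item) if item.isdigit() else CANONICAL[item])
    return result

print(parse_namespaces('0,2,4'))           # [0, 2, 4]
print(parse_namespaces('Help,MediaWiki'))  # [12, 8]
```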

-interwiki       Work on the given page and all equivalent pages in other languages. This can, for example, be used to fight multi-site spamming. Attention: this will cause the bot to modify pages on several wiki sites; this is not well tested, so check your edits!

-limit:n         When used with any other argument that specifies a set of pages, work on no more than n pages in total.

-links           Work on all pages that are linked from a certain page. Argument can also be given as "-links:linkingpagetitle".

-liverecentchanges Work on pages from the live recent changes feed. If used as -liverecentchanges:x, work on x recent changes.

-imagesused      Work on all images that are contained on a certain page. Argument can also be given as "-imagesused:linkingpagetitle".

-newimages       Work on the 100 newest images. If given as -newimages:x, will work on the x newest images.

-newpages        Work on the most recent new pages. If given as -newpages:x, will work on the x newest pages.

-recentchanges   Work on the pages with the most recent changes. If given as -recentchanges:x, will work on the x most recently changed pages.

-unconnectedpages Work on the most recent pages that are not connected to the Wikibase repository. Given as -unconnectedpages:x, will work on the x most recent unconnected pages.

-ref             Work on all pages that link to a certain page. Argument can also be given as "-ref:referredpagetitle".

-start           Specifies that the robot should go alphabetically through all pages on the home wiki, starting at the named page. Argument can also be given as "-start:pagetitle".

You can also include a namespace. For example, "-start:Template:!" will make the bot work on all pages in the template namespace.

The default value is "-start:!".

-prefixindex     Work on pages commencing with a common prefix.

-step:n          When used with any other argument that specifies a set of pages, only retrieve n pages at a time from the wiki server.

-subpage:n       Filters pages to only those that have depth n, i.e. a depth of 0 filters out all pages that are subpages, and a depth of 1 filters out all pages that are subpages of subpages.
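The depth rule above can be sketched in a few lines: in MediaWiki titles, each "/" introduces one subpage level. This is an illustrative simplification (the function names are made up for this example), assuming -subpage:n keeps pages whose depth does not exceed n.

```python
def subpage_depth(title):
    """Depth 0 means no subpage; each '/' in the title adds one level."""
    return title.count('/')

def within_depth(title, n):
    """True if the page would pass an assumed '-subpage:n' filter."""
    return subpage_depth(title) <= n

print(within_depth('Page', 0))           # True
print(within_depth('Page/Sub', 0))       # False
print(within_depth('Page/Sub', 1))       # True
print(within_depth('Page/Sub/Deep', 1))  # False
```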

-titleregex      A regular expression that needs to match the article title otherwise the page won't be returned. Multiple -titleregex:regexpr can be provided and the page will be returned if title is matched by any of the regexpr provided. Case insensitive regular expressions will be used and dot matches any character.
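The matching behaviour described for -titleregex (any of several patterns, case-insensitive, dot matches any character) corresponds to Python's re.IGNORECASE and re.DOTALL flags. A hypothetical sketch, with a made-up helper name:

```python
import re

def title_matches(title, patterns):
    """True if any pattern matches the title, case-insensitively,
    with '.' matching any character (re.DOTALL)."""
    return any(re.search(p, title, re.IGNORECASE | re.DOTALL)
               for p in patterns)

print(title_matches('Map of Berlin.svg', [r'^map of', r'\.png$']))  # True
print(title_matches('Photo.jpg', [r'^map of', r'\.png$']))          # False
```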

-transcludes     Work on all pages that use a certain template. Argument can also be given as "-transcludes:Title".

-unusedfiles     Work on all description pages of images/media files that are not used anywhere. Argument can be given as "-unusedfiles:n" where n is the maximum number of articles to work on.

-lonelypages     Work on all articles that are not linked from any other article. Argument can be given as "-lonelypages:n" where n is the maximum number of articles to work on.

-unwatched       Work on all articles that are not watched by anyone. Argument can be given as "-unwatched:n" where n is the maximum number of articles to work on.

-usercontribs    Work on all articles that were edited by a certain user. (Example: -usercontribs:DumZiBoT)

-weblink         Work on all articles that contain an external link to a given URL; may be given as "-weblink:url".

-withoutinterwiki Work on all pages that don't have interlanguage links. Argument can be given as "-withoutinterwiki:n" where n is the total to fetch.

-mysqlquery      Takes a MySQL query string like "SELECT page_namespace, page_title FROM page WHERE page_namespace = 0" and works on the resulting pages.

-wikidataquery   Takes a WikidataQuery query string like claim[31:12280] and works on the resulting pages.

-searchitem      Takes a search string and works on Wikibase pages that contain it. Argument can be given as "-searchitem:text", where text is the string to look for, or "-searchitem:lang:text", where lang is the language to search items in.

-random          Work on random pages returned by Special:Random. Can also be given as "-random:n" where n is the number of pages to be returned, otherwise the default is 10 pages.

-randomredirect  Work on random redirect pages returned by Special:RandomRedirect. Can also be given as "-randomredirect:n" where n is the number of pages to be returned, else 10 pages are returned.

-untagged        Work on image pages that don't have any license template on a site given in the format "<language>.<project>.org", e.g. "ja.wikipedia.org" or "commons.wikimedia.org". Uses an external Toolserver tool.

-google          Work on all pages that are found in a Google search. You need a Google Web API license key. Note that Google doesn't give out license keys anymore. See google_key in config.py for instructions. Argument can also be given as "-google:searchstring".

-yahoo           Work on all pages that are found in a Yahoo search. Depends on the Python module pYsearch. See yahoo_appid in config.py for instructions.

-page            Work on a single page. Argument can also be given as "-page:pagetitle", and supplied multiple times for multiple pages.

-grep            A regular expression that needs to match the article otherwise the page won't be returned. Multiple -grep:regexpr can be provided and the page will be returned if content is matched by any of the regexpr provided. Case insensitive regular expressions will be used and dot matches any character, including a newline.

-onlyif          A claim the page needs to contain, otherwise the item won't be returned. The format is property=value,qualifier=value. Multiple (or no) qualifiers can be passed, separated by commas. Examples: P1=Q2 (property P1 must contain value Q2), P3=Q4,P5=Q6,P6=Q7 (property P3 with value Q4 and qualifiers: P5 with value Q6 and P6 with value Q7). Value can be a page ID, a coordinate in the format latitude,longitude[,precision] (all values in decimal degrees), a year, or a plain string. The argument can be provided multiple times and the item page will be returned only if all of the claims are present. Argument can also be given as "-onlyif:expression".

-onlyifnot       A claim the page must not contain, otherwise the item won't be returned. For usage and examples, see -onlyif above.
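The property=value,qualifier=value format used by -onlyif and -onlyifnot can be sketched as follows. This is a hypothetical parser for illustration only (the function name is made up), and it deliberately ignores the complication that coordinate values themselves contain commas.

```python
def parse_onlyif(expression):
    """Split 'P1=Q2,P5=Q6,P6=Q7' into the main claim and its qualifiers.

    Simplification: comma-containing values such as coordinates
    (latitude,longitude[,precision]) are not handled here.
    """
    pairs = [tuple(item.split('=', 1)) for item in expression.split(',')]
    claim = pairs[0]           # the property=value claim itself
    qualifiers = pairs[1:]     # zero or more qualifier=value pairs
    return claim, qualifiers

print(parse_onlyif('P1=Q2'))
# (('P1', 'Q2'), [])
print(parse_onlyif('P3=Q4,P5=Q6,P6=Q7'))
# (('P3', 'Q4'), [('P5', 'Q6'), ('P6', 'Q7')])
```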

-intersect       Work on the intersection of all the provided generators.