Manual talk:Pywikibot/pagefromfile.py

limitation?
I've used pagefromfile.py, and when I try to import an article that is longer than a certain number (250?) of characters, it fails. Does anyone know if there is a limit on the length of the article? And if so, what can you do about it? --BB70 13:20, 10 May 2006 (UTC)


 * I have not experienced anything like that error. Do you mean the page title is over 250 characters?  --Connel MacKenzie 13:56, 20 July 2006 (UTC)


 * The article itself is over 250 characters, including the title. BB70 12:30, 28 July 2006 (UTC)


 * Doesn't work: it uploads only part of each page.

title
I don't like having the title automatically inserted into the page, so I added the line contents = re.search(".*?([^\Z]*)",contents).group(1)

before the line page.put(contents, comment = commenttext, minorEdit = False)

--Liso 07:26, 25 June 2006 (UTC)
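Liso's change above edits pagefromfile.py directly. A standalone sketch of the same idea (the function name and the exact regular expression are mine, not from the script): drop a leading title line from the page contents before uploading.

```python
import re

def strip_title(contents):
    # Remove everything up to and including the first newline, i.e.
    # a leading title line; if there is no newline, leave the text alone.
    match = re.search(r".*?\n([\s\S]*)", contents)
    return match.group(1) if match else contents
```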

What didn't work?
Looking at a recent edit to this page, I see a complaint from someone that the proper technique didn't work. I know that on the English Wiktionary, the alternate technique now listed does not work correctly. Could someone please explain where/when/how that technique doesn't work, and what unexpected results were encountered? --Connel MacKenzie 14:00, 20 July 2006 (UTC)


 * The Python script which I downloaded (I don't know if there is a newer version or not) does not find the title of the article, because it searches for it between xxxx and yyyy ... --Liso 21:04, 26 July 2006 (UTC)
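For illustration, a sketch of the title lookup being discussed (function name and exact pattern are my assumption): the script expects the page title between two marker strings, the xxxx/yyyy mentioned in this thread, and returning None models the "title not found" failure Liso hit.

```python
import re

def find_title(line, start="xxxx", end="yyyy"):
    # Look for text between the start and end markers, trimming the
    # surrounding whitespace; None means no title was found on this line.
    match = re.search(re.escape(start) + r"\s*(.*?)\s*" + re.escape(end), line)
    return match.group(1) if match else None
```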

Creating Redirects
I tried creating redirects using pagefromfile.py, but each redirect became an ordinary article.

My text file looks like this:

xxxx Article yyyy
#REDIRECT [[Article]]

The page the script created looked exactly like that, except for the xxxx and yyyy. What I need is for the title "Article" not to be included, so that the end result is just


#REDIRECT [[Article]]

What can be added to the script to achieve this? I do not know Python programming. -- Hiong-eng


 * So you need to strip the title from the contents of the inserted article? Please see Talk:Pagefromfile.py! (You need to edit the pagefromfile.py file.) :) --Liso 21:01, 26 July 2006 (UTC)


 * oops, sorry, I missed that :) thanks! -- Hiong-eng

This has been fixed now: there is a -notitle keyword. To make a redirect, make a file dict.txt that looks like this:

Article title
#REDIRECT [[New title]]

and type: python pagefromfile.py -notitle
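A tiny helper for generating such entries programmatically (the helper name and the exact layout, title line followed by a MediaWiki redirect, are my assumption based on the format quoted above):

```python
def make_redirect_entry(source_title, target_title):
    # Build one -notitle entry: the page title on its own line,
    # followed by a wiki redirect pointing at the target page.
    return "%s\n#REDIRECT [[%s]]\n" % (source_title, target_title)
```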

redirected page
Let's say there are two pages, A and B, where B is a redirect to A (B --> A).

I tried the following:

 xxxx B bla-bla yyyy

and as a result I got this error:

 Traceback (most recent call last):
   File "/export1/wiki/pywikipedia/pagefromfile.py", line 324, in <module>
     main()
   File "/export1/wiki/pywikipedia/pagefromfile.py", line 320, in main
     bot.run()
   File "/export1/wiki/pywikipedia/pagefromfile.py", line 141, in run
     self.put(title, contents)
   File "/export1/wiki/pywikipedia/pagefromfile.py", line 170, in put
     contents = page.get() + contents
   File "/export1/wiki/pywikipedia/wikipedia.py", line 638, in get
     raise IsRedirectPage, self._redirarg

I would like the following behavior: if B is a redirect, then the script should use A (the redirect target) as the target page.

How can I do this? Can anybody help me? --Dnikitin 03:00, 22 December 2008 (UTC)
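The error above is the IsRedirectPage exception raised by page.get(); one approach is to catch it and retry on the redirect's target. Below is a standalone sketch of just the follow-the-redirect logic, using a plain dict so it can run on its own; in the real script the map would come from the wiki itself (in modern pywikibot, via Page.isRedirectPage() and Page.getRedirectTarget()). The function name and the hop limit are mine.

```python
def resolve_redirect(title, redirect_map, max_hops=5):
    # Follow {source: target} entries until a non-redirect title is
    # reached, guarding against redirect loops and over-long chains.
    seen = set()
    while title in redirect_map:
        if title in seen or len(seen) >= max_hops:
            raise ValueError("redirect loop or chain too long at %r" % title)
        seen.add(title)
        title = redirect_map[title]
    return title
```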

Error: invalid characters
Sometimes I get the following error:

 'utf8' codec can't decode byte 0xab in position 88757: unexpected code byte
 ERROR: Invalid characters found on http://...wiki.ru/wiki/index.php5?useskin=monobook&title=Special:Allmessages&redirect=no&ot=xml, replaced by \ufffd.

What does it mean, and how can I fix it? --Dnikitin 04:25, 22 December 2008 (UTC)
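The byte 0xab is not valid UTF-8 on its own, so a .ru wiki may well be serving a legacy Cyrillic encoding instead. A sketch of a lenient decoder along the lines of what the framework falls back to (the function name and the candidate encodings, UTF-8 then cp1251, are my guesses):

```python
def decode_wiki_bytes(raw, encodings=("utf-8", "cp1251")):
    # Try each candidate encoding in turn; if none decodes cleanly,
    # fall back to UTF-8 with U+FFFD replacement characters, which is
    # exactly what the "replaced by \ufffd" message describes.
    for enc in encodings:
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    return raw.decode("utf-8", errors="replace")
```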

Error: Unable to add unicode
I am trying to add pages containing Unicode characters, but pagefromfile.py doesn't work for them. My file contents are like this .. any help would be highly appreciated.

<< >>

ನಾಮಪದ



 * 1) ನಂಬಿಕೆಯಳಿತ
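pagefromfile.py reads its input file with a fixed encoding, so the usual cause of this is the text file not being saved as UTF-8. A quick standalone check (the function name is mine) that the file decodes as UTF-8:

```python
import codecs

def read_pages_file(path):
    # Read the input file strictly as UTF-8; a UnicodeDecodeError here
    # means the file was saved in some other encoding, and the script
    # will choke on it for the same reason.
    with codecs.open(path, "r", encoding="utf-8") as handle:
        return handle.read()
```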

Editing Existing Pages Despite -safe?
I have a list of articles that I want to create (automatically creating wiki pages from a non-wiki source), but some of the pages have already been created manually. The -safe switch is supposed to make the bot ignore existing pages, but it doesn't seem to be doing so: it overwrites them. Oddly, when I stopped the script after it did this to two pages and tried again, it did skip the pages it had created on previous runs, but not pages created by others. Any idea why this is? --DragoonWraith 00:59, 9 August 2010 (UTC)
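Until the -safe behaviour is sorted out, a defensive workaround is to filter the title list yourself before uploading anything. A sketch with the existence check factored out so it runs standalone (on a live wiki the check would be pywikibot's page.exists(); the function name is mine):

```python
def filter_new_pages(candidate_titles, existing_titles):
    # Keep only titles that are not already on the wiki, preserving
    # the original order of the candidates.
    existing = set(existing_titles)
    return [t for t in candidate_titles if t not in existing]
```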