The following discussion has been transferred from [[meta:|Meta-Wiki]].
Any user names refer to users of that site, who are not necessarily users of MediaWiki.org (even if they share the same username).
Standalone library for converting Mediawiki-style WikiText into (eventually) xhtml
I'm currently building a web application which allows users to publish structured text. I'd like Mediawiki-style WikiText (exactly as used on wikipedia.org) to be one of the accepted input formats.
Also, I love the way WikiText is rendered to html here on Wikipedia, so I'd like to emulate that as much as possible. I've looked over the Alternative Parsers page and my impression is that WikiText parsing and html rendering is still performed by some PHP modules in the MediaWiki software.
Are there any ongoing efforts with regards to separating these out into a general-purpose C/C++ library? I am aware of the problems associated with having binary extensions, but perhaps MediaWiki could fall back to the (hopefully 'deprecated') php-based parser/renderer if it is not able to load the binary parser/renderer.
So far the most interesting code I've found is flexbisonparser and wiki2xml in the MediaWiki SVN trunk, but neither seems to do exactly what I want.
I understand that there are ongoing efforts to standardise Mediawiki-style WikiText. A standard library like I've described briefly here would go a long way in ensuring that such a standard spreads.
Hope to hear from you if you have any solutions.. and sorry for posting on this page if it is wrong. :-) 220.127.116.11 14:14, 19 July 2006 (UTC)
Converting the text to C++ likely isn't going to happen any time soon. We have certain performance-critical portions of MediaWiki (such as diffing) ported to these languages, but the parsing and rendering of wikitext is an extremely complicated process with lots of legacy cruft and a hideous Parser.php. — Ambush Commander(Talk) 23:42, 19 July 2006 (UTC)
I came across this page looking for a Java API/library for converting wikitext to and from HTML or XML. The ones listed in the table are not what I was looking for, as I want an API for wikitext rendering, whilst those listed are Java wiki engines. Obviously some of the listed software must be able to do this internally, but the code isn't exposed through an API. Does anybody know of anything that can do this? If not, I'll just make one myself. --18.104.22.168 14:11, 3 September 2006 (UTC)
I'm currently reworking my parser to be more usable through a Java API. I need this to create plugins for Java blog (Roller), forum (JForum) and wiki (JAMWiki) software. It would be nice if you could test it and give feedback (please mail axelclk gmail com if you are interested; a first beta: http://plog4u.org/index.php/Wikipedia_API ).
End of content from meta.wikimedia.org.
Note that the above conversation may have been edited or added to since the transfer.
If in doubt, check the [//www.mediawiki.org/w/index.php?title=Talk:Alternative_parsers/Archive_1&action=history edit history].
Here is a dump of an early version of some documentation User:Djbclark wrote surveying options for offline MediaWiki use (Sat Jul 12, 2008). I don't have time to incorporate it into a nice separate page at the moment, but I thought it might be useful to some people as-is, and I also need to reference it from a mailing list thread :-)
It would be really nice if this supported some form of recursion... All these tools are way too "you are only going to use this with Wikipedia, so we can't possibly provide features that would be useful for smaller wikis" oriented...
Where USERNAME is your username (note that MediaWiki capitalizes the first letter, so for example this would be Dclark, not dclark) and PASSWORD is your MediaWiki password. Note that passing a password on the command line like this is very insecure, and should only be done on systems where you are the only user or you trust all other users.
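Since the notes warn that passing PASSWORD on the command line is insecure (arguments are visible to every local user via `ps` and may land in shell history), here is a sketch of a safer pattern. It assumes the downloader can take the password from the environment; the fetch command named in the comment is a placeholder, not a real tool.

```shell
# Read the password from stdin instead of taking it as an argument.
# Terminal echo is disabled only when stdin is actually a terminal,
# so the function also works when input is piped or redirected in.
ask_password() {
  if [ -t 0 ]; then stty -echo; fi
  read -r PASSWORD
  if [ -t 0 ]; then stty echo; printf '\n'; fi
  export PASSWORD
}

# Usage (the fetch command is hypothetical):
#   printf 'Password for Dclark: '
#   ask_password
#   ./offline-wiki-fetch --user Dclark   # reads $PASSWORD from the environment
```

The password then reaches the tool through the environment of a single process rather than through its argument list.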
After creating the local store and waiting for it to finish downloading, you will be able to go offline and browse the wiki - however search and "Special:" pages will not work in Google Gears offline mode, and you will not be able to edit pages in offline mode.
The directions at Building a (fast) Wikipedia offline reader produce an environment that takes more time to set up than Google Gears, but is arguably a bit nicer (it includes local search of page titles, and it shouldn't be hard to extend that to full-text search).
# THESE ARE NOT STEP-BY-STEP INSTRUCTIONS... interpretation is required.
sudo aptitude install apt-xapian-index xapian-tools libxapian-dev php5-cli
populate wiki-splits with raw .xml.bz2 dump
mv mediawiki_sa offline.wikipedia
Edit Makefile to have line "XMLBZ2 = PICKNAME-articles.xml.bz2"
Edit mywiki/gui/view.py 4th line to: return article(request, "Main Page")
# (Then follow directions it spews)
TODO: Set up cron job to produce rsync-able PICKNAME-articles.xml.bz2 on a regular basis. Package this up.
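The steps above can be consolidated into one script. This is a sketch, not a drop-in tool: PICKNAME and the directory layout are taken from the notes, the sed edits are one guess at how to automate the "Edit Makefile"/"Edit view.py" steps, and the run wrapper only prints each command so the plan can be reviewed first (change its body to "$@" to actually execute).

```shell
#!/bin/sh
# Dry-run consolidation of the offline-reader setup steps above.
set -e
PICKNAME=mywiki                      # placeholder dump name

run() { echo "+ $*"; }               # change to:  run() { "$@"; }  to execute

run sudo aptitude install apt-xapian-index xapian-tools libxapian-dev php5-cli
run mv mediawiki_sa offline.wikipedia
# populate wiki-splits with the raw .xml.bz2 dump
run mkdir -p offline.wikipedia/wiki-splits
run cp "${PICKNAME}-articles.xml.bz2" offline.wikipedia/wiki-splits/
# point the Makefile at the dump
run sed -i "s/^XMLBZ2 = .*/XMLBZ2 = ${PICKNAME}-articles.xml.bz2/" \
    offline.wikipedia/Makefile
# make the GUI land on the wiki's front page (4th line of view.py)
run sed -i '4s/.*/    return article(request, "Main Page")/' \
    mywiki/gui/view.py
run make -C offline.wikipedia        # then follow the directions it spews
```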
sudo aptitude install dh-make-perl fakeroot dpkg-dev build-essential
sudo aptitude install libwww-perl libhtml-tree-perl
sudo apt-file update
tar -pzxvf HTML-Extract-0.25.tar.gz
dh-make-perl HTML-Extract-0.25        # generates the debian/ directory
cd HTML-Extract-0.25
fakeroot dpkg-buildpackage -uc -us
cd ..
sudo dpkg -i libhtml-extract-perl_0.25-1_all.deb
tar -pzxvf Mediawiki-Spider-0.31.tar.gz
dh-make-perl Mediawiki-Spider-0.31    # generates the debian/ directory
cd Mediawiki-Spider-0.31
fakeroot dpkg-buildpackage -uc -us
cd ..
sudo dpkg -i libmediawiki-spider-perl_0.31-1_all.deb
You need a script like this to use it (the retrieval calls between the two print statements depend on your Mediawiki::Spider version; check the module's POD for the exact method names):
#!/usr/bin/perl
use strict;
use warnings;
use Mediawiki::Spider;

my $spider2 = new Mediawiki::Spider;
print "Now getting wikiwords\n";
# ... invoke the module's page-list and download methods here ...
print "Got wikiwords: proceeding with d/l\n";
However, it only seems to work with older versions of MediaWiki (or our MediaWiki instance is "weird" in some way it doesn't expect).
WikipediaFS - View and edit Wikipedia articles as if they were real files
sudo aptitude install gvfs-fuse fuse-utils fuse-module python-fuse python-all-dev
sudo easy_install stdeb
tar xvfz wikipediafs-0.3.tar.gz
cd wikipediafs-0.3
vi setup.py # Edit so version is correct
python setup.py --command-packages=stdeb.command sdist_dsc  # generates deb_dist/
cd deb_dist/wikipediafs-0.3
dpkg-buildpackage -rfakeroot -uc -us
sudo dpkg -i ../python-wikipediafs_0.3-1_all.deb
This is sort of useless for the purpose of this section, as it requires the user to get a specific set of pages before going offline. Didn't spend enough time with it to see if it worked as advertised.
Kiwix (http://www.kiwix.org) is an offline reader designed specifically to make Wikipedia available offline. It does this by reading the project's content stored in the ZIM file format (see http://www.openzim.org), a highly compressed open format with additional metadata.
Pure ZIM reader
case- and diacritics-insensitive full-text search engine
Bookmarks & Notes
kiwix-serve: ZIM HTTP server
Zim index capacity
Support for Linux / Windows
Solutions to build DVD with Windows Installer and DVD launcher (autorun)
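As a concrete example of the kiwix-serve entry in the feature list above: its basic invocation is `kiwix-serve --port=PORT FILE.zim`. The helper below only assembles that command line (the ZIM filename is a placeholder), so the sketch can be inspected without a Kiwix install.

```shell
# Build (but don't run) a kiwix-serve command line for a given ZIM file.
# kiwix-serve then exposes the ZIM's articles over plain HTTP on that port.
serve_cmd() {
  port=$1
  zim=$2
  echo "kiwix-serve --port=${port} ${zim}"
}

serve_cmd 8080 wikipedia_en_all.zim
```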