Extension:Html2Wiki

This extension officially lives at https://www.mediawiki.org/wiki/Extension:Html2Wiki. It is a MediaWiki extension used to import HTML content (including images) into the wiki.

Imagine having dozens, hundreds, maybe thousands of pages of HTML. And you want to get that into your wiki. Maybe you've got a website, or perhaps a documentation system that is in HTML format. You'd love to be able to use your wiki platform to edit, annotate, organize, and publish this content. That's where the Html2Wiki extension comes into play. You simply install the extension in your wiki, and then you are able to import entire zip files containing all the HTML + image content. Instead of months of work, you could be done in minutes.

Using Git
cd to your 'extensions' directory and clone the project there. This way you can git pull to get the latest enhancements and bug-fixes.

Additional Steps
Composer is used to manage the dependencies on 3rd-party code. (In fact, the extension itself should shortly be installable via Composer.) Once you have the Html2Wiki code in your 'extensions' directory, run composer install there to pull in those dependencies. Note: if you do not have Composer on your system, you can either install it (it's good to have) or use the phar file and run php composer.phar install instead.

To add the "Import HTML" link to your Tools sidebar, see the  file distributed with the extension.

Configuration parameters
COMING SOON

User rights
Right now the extension is restricted to Admins.

Requirements or Dependencies
This extension was built on MediaWiki version 1.25alpha. It may not be compatible with earlier releases since there are a number of external libraries such as jQuery which have changed over time. Contact Us if you have version compatibility issues.

Since parsing the DOM is problematic when using PHP's native DOM manipulation (which is itself based on libxml), we use the QueryPath project to provide a more flexible parsing platform. The best tutorial on QueryPath is an IBM DeveloperWorks article about it. The most recent list of documentation for QueryPath is at this bug: https://github.com/technosophos/querypath/issues/151. The API docs contain a CSS selector reference.

Html2Wiki can import entire document sets and maintain a hierarchy of those documents. The $wgNamespacesWithSubpages variable will allow you to create a hierarchy in your wiki's 'main' namespace; and even automatically create navigation links to parent article content. Taking this further, the SubPageList extension creates navigation blocks for subpages.
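
For example, in LocalSettings.php:

 // Enable subpages in the main namespace so imported document sets
 // keep their hierarchy (e.g. MyCollection/docs/intro).
 $wgNamespacesWithSubpages[NS_MAIN] = true;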

The document sets we were importing were based on generated source code documentation (coming from an open source documentation generator called Natural Docs) which creates DHTML "mouseovers" for glossary terms. To create similar functionality in the wiki environment, we will rely on the Lingo extension to create a Glossary of terms.

System Elements
Once installed, the Html2Wiki extension makes a new form available to Administrators of your wiki. Simply choose a file, click import and watch as your HTML is magically transformed into Wiki text.

You access the import HTML form at the Special:Html2Wiki page (similar to Special:Upload for regular media). The Html2Wiki extension also adds a convenient Import HTML link to the Tools panel of your wiki for quick, easy access to the importer.

Single File
Enter a comment in the Comment field; the comment is logged in 'Recent Changes' as well as in the Special:Log area.

You can optionally specify a "Collection Name" for your content. The Collection Name represents where the content is coming from (e.g. the book or website). Any unique identifier will do. The Collection Name is used to tag (categorize) all content that is part of that collection, and all content that is part of a Collection is organized "under" that Collection Name in a hierarchy. This lets you have two or more articles in your wiki named "Introduction" if they belong to separate Collections. Specifying an existing Collection Name + article title will update the existing content. In fact, to re-import a single file and maintain its 'position' in a collection, you would specify the full path to the file.

Zip Files
Choose a zip file to import. The zip file can contain any type of file, but only HTML and image files will be processed.

Import a file from Google Drive
When you create a document on Google Drive (aka Google Docs), the hyperlinks in those files become polluted to make them pass through google.com. That's annoying. But more importantly, it makes it that much harder to re-use or move this content into your wiki. Html2Wiki to the rescue! You can "save" your Google Drive document in the form of a complete webpage (including images) by selecting "File -> Download as -> Web page (.html; zipped)". Then, import that zip. Html2Wiki will import the images, the content, and will also "decode" the link references in the document.
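
For the curious, the "decoding" amounts to unwrapping Google's redirect URLs. A sketch (the sample href is illustrative):

 // Google Drive exports wrap each href in a google.com/url?q=... redirect;
 // the original target is recovered from the q parameter.
 $href  = 'https://www.google.com/url?q=https%3A%2F%2Fexample.com%2Fpage&sa=D';
 $query = parse_url( $href, PHP_URL_QUERY );
 parse_str( $query, $params );  // parse_str url-decodes the values
 $clean = isset( $params['q'] ) ? $params['q'] : $href;  // https://example.com/page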

Import a blog post / webpage complete
You can easily create a local zip file of a blog post or web page using wget and zip, and then import it into your wiki. Let's use the example of the article "European Commission endorses CC licenses as best practice for public sector content and data" found at http://creativecommons.org/weblog/entry/43316. Mirroring that page with wget and zipping the result will leave you with both a directory named cc.org and a zip archive named cc.zip. You can now import the zip file with Html2Wiki.

Mechanics
Importing a file works like this:

Select -> Upload -> Normalize (Tidy) -> Clean (QueryPath) -> Convert (Pandoc) -> Save

Dave Raggett's HTML Tidy was, and still is, a venerable tool for validating and ensuring well-formed HTML documents. We use Tidy to try to get source HTML into good enough shape that it can be further processed. As of early 2015, an effort has begun to bring HTML5 support to HTML Tidy; see https://github.com/htacg/tidy-html5. The Tidy documentation is still at http://tidy.sourceforge.net/, and since the new project has not yet made any releases, we're obviously using the Tidy that is built in to PHP5 (see the PHP manual, ref.tidy.php). If for some reason that is not installed on your platform, a locally installed tidy binary should work.

There was a problem where the PHP extension was not available in MediaWiki-Vagrant due to its use of HHVM instead of the Zend PHP interpreter; however, that is no longer the case, and you can use PHP Tidy in MediaWiki-Vagrant. If you want to do validation/tidy tests, try the W3C's validator and step-by-step guide, although I confess to preferring https://validator.nu/
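
As a minimal sketch of the Normalize step, here is how the built-in PHP Tidy extension can repair a string (the configuration values are illustrative, not the extension's actual settings):

 // Repair raw HTML with PHP's tidy extension before further processing.
 $config = array(
     'output-xhtml'   => true,
     'show-body-only' => true,
 );
 $tidy = new tidy();
 $tidy->parseString( $rawHtml, $config, 'utf8' );
 $tidy->cleanRepair();
 $normalized = tidy_get_output( $tidy );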

The "Clean" part is where we do the dirty work. This is the hardest part, and perhaps where you will need to spend time coding, to get the perfect functionality out of Html2Wiki. Although we've successfully imported thousands of documents with Html2Wiki, your source content may need to be manipulated in ways we haven't seen, and that means DOM parsing. There are a number of ways to do this including PHP DOM. We decided to move up a level to use Matt Butcher's QueryPath project QueryPath implements a CSS parser (in PHP) so that you can manipulate the DOM using CSS selectors just like you would in jQuery. This is also where we're focusing development so that we might be able to simply set directives in configuration variables so that there is no coding required -- even for new situations.

John MacFarlane's Pandoc is a fantastic document converter that is able to read and write to MediaWiki syntax (among other formats). See the README file to understand what it can do. Pandoc gives us the final step before we can save content into the wiki... namely converting HTML to wiki markup. Because Pandoc does HTML to Wiki conversion right out of the box, you may want to give that a try in addition to this extension.
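
For example, the Convert step can be as small as a shell-out (this invocation is a sketch, not necessarily the extension's exact call):

 // Turn cleaned HTML into MediaWiki markup; assumes pandoc is on the PATH.
 $cmd = 'pandoc -f html -t mediawiki ' . escapeshellarg( $cleanHtmlFile );
 $wikiText = shell_exec( $cmd );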

Zip archive handling
In order to handle the zip upload, we'll have to traverse all files and index hrefs as they exist. We'll need to map those to safe titles and rewrite the source to use those safe URLs. This has to be done for both anchors and images.
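
A sketch of that indexing pass using PHP's ZipArchive ($uploadPath and $collectionName are assumed inputs, and the title scheme is illustrative):

 // Walk the uploaded zip and map each internal path to a wiki-safe title
 // under the Collection Name; anchors and images are rewritten from $map.
 $zip = new ZipArchive();
 if ( $zip->open( $uploadPath ) === true ) {
     $map = array();
     for ( $i = 0; $i < $zip->numFiles; $i++ ) {
         $path = $zip->getNameIndex( $i );
         // e.g. "docs/intro.html" => "MyCollection/docs/intro"
         $map[$path] = $collectionName . '/' . preg_replace( '/\.html?$/i', '', $path );
     }
     $zip->close();
 }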

Practically speaking, MediaWiki titles are probably more flexible than we need, but we'll want to check the legal title characters (as reported by the siteinfo API):

[legaltitlechars] => %!"$&'*,\-.\/0-9:;=?@A-Z\\^_`a-z~\x80-\xFF+

Since MediaWiki uses (by default) first-letter capitals, you would normally need to account for that when rewriting all hrefs within the source. However, in practice we use a Collection Name as the first path element, and MediaWiki will seamlessly redirect foo to Foo.

Styles and Scripts
Cascading Style Sheets (CSS) and JavaScript (JS) are not kept as part of the transformation, although we are working on including CSS.

Wiki Text markup
The fundamental requirement for this extension is to transform input (HTML) into Wiki Text (see http://www.mediawiki.org/wiki/Help:Formatting) because that is the format stored by the MediaWiki system. Originally, it was envisioned that we would make API calls to the Parsoid service which is used by the Visual Editor extension. However, Parsoid is not very flexible in the HTML that it will handle. To get a more flexible converter, we use the Pandoc project which is able to (read and) write to MediaWiki Text format.

For each source type, we will need to survey the content to identify the essential content and remove navigation, JavaScript, presentational graphics, etc. We should have a "fingerprint" that we can use to sniff out the type of document set that the user is uploading to the wiki. Work is also underway to allow the user to create special "recipe" articles in the wiki that instruct Html2Wiki on how to transform content. The user will be able to iteratively run a recipe in a test "dry-run" mode to see the results on a sampling of content, perfect the recipe, and then use it on a larger Collection.

As a result of sniffing the source type, we can properly index and import content only, while discarding the dross. We can likewise apply the correct transformation to the source.

Form file content is saved to the server (tmp), and that triggers a conversion attempt. A title is proposed from the text (and checked in the database), and the user can override the naming. The HTML is converted to wiki text, which becomes the content of the article.

Image references are assumed to be either relative, and therefore contained in the zip file, or absolute, in which case they are not local to the wiki.

Want to check your source for a list of image files? A quick grep for img tags in the unzipped source will give you one.

For each of the image files (png, jpg, gif) contained in the zip archive, the image asset is saved into the wiki with automatic file naming based on the "Collection Name" + path in the zip file.
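
Illustratively, the automatic name might be derived like this (the exact scheme used by the extension may differ):

 // Derive an in-wiki file name from the Collection Name plus the image's
 // path inside the zip; "/" is not legal in MediaWiki file names.
 $pathInZip = 'images/figure-1.png';  // hypothetical archive entry
 $fileName  = $collectionName . '-' . str_replace( '/', '-', $pathInZip );
 // e.g. File:UserManual-images-figure-1.png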

Also, each image is tagged with the collection name for easier identification.

Image references in the HTML source are automatically updated to reference the in-wiki images.

@todo document the $wgEliminateDuplicateImages option

Database
The extension currently does not make any schema changes to the MediaWiki system.

What, if any, additional tables could we want in the database?

We may need to store checksums for zip uploads: we don't want to store the zip itself, but we may want to recognize a re-upload attempt.
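
One possible shape for that, sketched with MediaWiki's database layer (the table and field names are hypothetical):

 // Store a checksum of each uploaded zip so a re-upload can be recognized
 // without keeping the archive itself.
 $checksum = sha1_file( $uploadPath );
 $dbw = wfGetDB( DB_MASTER );
 $dbw->insert( 'html2wiki_upload_log', array(  // hypothetical table
     'h2w_sha1'      => $checksum,
     'h2w_timestamp' => $dbw->timestamp(),
 ) );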

Logging
Logging is provided at Special:Log/html2wiki. The facility for logging taps into MediaWiki's core logging system as outlined at https://www.mediawiki.org/wiki/Manual:Logging_to_Special:Log

Interestingly, SpecialUpload must do its logging from its hooks, while SpecialImport calls a logging method which itself invokes the log-entry machinery (see includes/logging).


 * @todo publish the extension upstream, starting with http://www.mediawiki.org/wiki/Template:Extension

Use Parsoid?
In order to use Parsoid at all, we need to have the content conform to the MediaWiki DOM spec, which is based on HTML5 and RDFa: https://www.mediawiki.org/wiki/Parsoid/MediaWiki_DOM_spec#Ref_and_References

We would need to parse the incoming content, validate and possibly transform the document type to HTML5 (http://www.w3.org/TR/html5/syntax.html#html-parser), and then transform the HTML5 to the MediaWiki DOM spec.

Parsoid offers an API with basically two actions: POST and GET. You can test the API at http://parsoid-lb.eqiad.wikimedia.org/_html/

You can also test it locally on the VM through port 8000.

Variables we care about

 * 1) We probably want a variable that can interact with the max upload size
 * 2) $wgMaxUploadSize[*] = 104857600 bytes (100 MB); see the snippet just after this list
 * 3) $wgFileBlacklist we don't care about because we use our own file upload and mime detection
 * 4) $wgVisualEditorParsoidURL we can use for API requests to Parsoid
 * 5) $wgLegalTitleChars we use to check for valid file naming
 * 6) $wgMaxArticleSize default is 2048 KB, which may be too small?
 * 7) $wgMimeInfoFile we don't yet use
 * 8) Also, how do imagelimits come into play?  http://localhost:8080/w/api.php?action=query&meta=siteinfo&format=txt
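
For instance, a LocalSettings.php tweak for item 2 might look like this (the value shown is illustrative):

 // Raise the maximum upload size to 100 MB for all upload types.
 $wgMaxUploadSize = array( '*' => 100 * 1024 * 1024 );  // bytes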

Features
  Handles Google Drive documents. In Google Drive, select "File -> Download as -> Web page (.html, zipped)"

Import the zip file

Links in the original document will be stripped of their "Google Tracking Virus" and urldecoded back to their human-readable value.

Adds an "Import HTML" link to the Toolbox panel. You have to edit MediaWiki:Common.js See the modules/MediaWiki:Common.js file included with Html2Wiki

Content is automatically categorized according to the "Collection Name" provided.

Handles images. In fact, although there are other ways to import images into your wiki, you could use this extension to bulk import images. 

Internationalization
The extension's i18n files show the interface messages. You can see most of the messages in Special:AllMessages if you filter by the prefix 'Html2Wiki'.

Error handling

 * 1) Submitting the form with no file: "There was an error handling the file upload: No file sent."
 * 2) Choosing a file that is too big: the limit is set to 100 MB.
 * 3) Choosing a file of the wrong type: "There was an error handling the file upload: Invalid file format."
 * 4) Choosing a file with completely broken HTML: you could end up with no wiki markup, but it tries hard to be generous.

Developing
This extension was originally written by and is maintained by Greg Rundlett of eQuality Technology. Additional developers, testers, documentation helpers, and translators welcome!

The project code is hosted on both GitHub and Wikimedia Foundation servers; see the Html2Wiki Extension page. You should use git to clone the project and submit pull requests. The code is simultaneously updated on MediaWiki servers and GitHub, so feel free to fork it, or pull it from either location (with gerrit auth you can also push changes through Gerrit).

The best way to set up a full development environment is to use MediaWiki-Vagrant. This handy bit of wizardry will create a full LAMP stack for you and package it into a VirtualBox virtual machine (among other providers).