Extension:CommonsMetadata

This extension is an experimental attempt at extracting metadata from Commons file description pages.

See also http://lists.wikimedia.org/pipermail/wikitech-l/2013-August/071593.html

The assumptions of this extension are:
 * At some point in the future, Wikidata will take over handling metadata at Commons. To avoid disruptive changes that would soon need to be changed again, the extension should work with Commons metadata as it currently is (so not introducing new parser functions). Hence screen scraping (see the first sketch after this list).
 * The content of many of the fields on a Commons description page includes rich formatting (in particular links, italics and bold; in some cases more complex things like embedded images).
 * As a result, the extension outputs parsed HTML (wikitext sucks, and plain text doesn't capture the data).
 * Furthermore, the data tends to be formatted for human display rather than as (for example) machine-readable dates. When the date field says something like "circa 1600s", it's hard to convert that to a precise date (on the other hand, many cases can be).
 * To carry that forward, formatting is also applied to EXIF metadata, which is controlled on-wiki (for example, Commons links the camera name to a Wikipedia article).
 * If we can't extract info from the description page, but the file has the author tagged in EXIF/XMP/IPTC metadata, we should use that as a fallback.
 * Ideally such a system would be as Commons-unspecific as possible, with the Commons-specific and generic parts separated.
 * Commons description pages have multilingual descriptions. Lots of users probably just want one language.
 * In this implementation, per-language conventions are applied to dates and the like. Additionally, for explicitly multilingual fields (such as the description), there is an option to return all languages or just a single one (see the second sketch after this list). Even in single-language mode, some things are still language-specific (like the thousands separator in numbers).
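
For illustration, here is a rough sketch of the screen-scraping idea, in Python rather than the extension's PHP. It relies on the fact that the {{Information}} template on Commons emits its field labels as table cells with ids like fileinfotpl_desc; the specific ids and the names they map to here follow current Commons conventions but should be treated as assumptions, not as the extension's actual code.

    # Sketch only: walk the parsed HTML of a file description page and pull
    # out fields by the ids that Commons' {{Information}} template emits.
    # The real extension does the equivalent in PHP.
    from bs4 import BeautifulSoup

    # Label-cell ids used by the Information template, mapped to
    # EXIF-style field names (mapping illustrative).
    FIELDS = {
        'fileinfotpl_desc': 'ImageDescription',
        'fileinfotpl_date': 'DateTimeOriginal',
        'fileinfotpl_src': 'Credit',
        'fileinfotpl_aut': 'Artist',
    }

    def scrape_information_template(html):
        """Return a dict of metadata fields, keeping the values as HTML,
        since they often contain links, italics or embedded images."""
        soup = BeautifulSoup(html, 'html.parser')
        data = {}
        for field_id, name in FIELDS.items():
            label = soup.find(id=field_id)
            if label is None:
                continue
            # The field value sits in the cell following the label cell.
            value = label.find_next_sibling('td')
            if value is not None:
                data[name] = value.decode_contents()
        return data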
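
On the consumer side, the extracted data is meant to be exposed through the MediaWiki API. Assuming it ends up as an extmetadata property on prop=imageinfo with a parameter for single-language mode (the iiextmetadatalanguage name below is an assumption), fetching the metadata for a file might look like this:

    # Sketch of consuming the extracted metadata via the API. The
    # 'extmetadata' property and 'iiextmetadatalanguage' parameter are
    # assumed names for what the extension will expose.
    import json
    import urllib.parse
    import urllib.request

    API = 'https://commons.wikimedia.org/w/api.php'

    def get_extmetadata(title, lang='en'):
        params = urllib.parse.urlencode({
            'action': 'query',
            'prop': 'imageinfo',
            'iiprop': 'extmetadata',
            'iiextmetadatalanguage': lang,  # single-language mode
            'titles': title,
            'format': 'json',
        })
        with urllib.request.urlopen(API + '?' + params) as resp:
            result = json.load(resp)
        page = next(iter(result['query']['pages'].values()))
        return page['imageinfo'][0]['extmetadata']

    # Values come back as parsed HTML; e.g. the Artist field may contain
    # a link to the photographer's user page.
    meta = get_extmetadata('File:Example.jpg', lang='de')
    print(meta['Artist']['value'])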

Whether or not these are good assumptions will probably become clear once the extension gets further review.