Northern Sami Wikipedia notes 2011

These notes about small projects cover identified troubles and possibilities at Northern Sami Wikipedia. The project is very small, and its contributors have identified a number of problems with the current software. The reasoning in the following is to make a few editors more productive by removing time-consuming, unnecessary steps and by cannibalizing material from larger projects. Hopefully this can be sufficient aid for a small group to produce a basic lexicon in a small language from larger lexicons in other languages.

Small projects need as much help as possible to create initial content and backbone structure. Initial content is such things as data for infoboxes, content reflecting infoboxes and content used inside infoboxes. Structures are for example categories and templates. Construction of such content and structures is somewhat difficult for beginners, and it should be possible to reuse work across projects.

Rewrite rules for years
At this project years are often written in the form 1997:as, 1997:is, etc. These suffixed forms fail to make proper links: the suffix is not absorbed into the rendered link, so the whole string has to be put inside the link markup, even though it should be possible to link only the year and let the :as or :is suffix follow as part of the link text. If this is to be solved, some of the regex patterns (the per-language link trail) must be updated on a per-language basis. For examples see the article Ole Henrik Magga at Northern Sami Wikipedia.

This could be implemented as a simple list of accepted strings added as prefix or postfix to a link, possibly with optional checks of whether the link itself is of a specific form.

Closely related to this is the problem of letters outside the defined set of legal letters. In the previously mentioned article about Ole Henrik Magga there is the string 1993rájes, where the letter á makes the link fail to incorporate the whole postfix string. To make this work as expected it should be possible to define the set of legal letters, at least somewhere in the settings.
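
As an illustration only, the following Python sketch shows how a per-language trail rule could combine a list of accepted suffix strings with an extended set of legal letters. The suffix list and letter set are assumptions for the example, not an actual MediaWiki configuration.

 import re

 SAMI_LETTERS = "a-zA-Záčđŋšŧž"                  # assumed set of legal letters
 ACCEPTED_SUFFIXES = ["rájes", ":as", ":is"]     # assumed per-language suffix list

 # whole accepted suffixes are tried first, then any run of legal letters
 trail_re = re.compile(
     "^(" + "|".join(map(re.escape, ACCEPTED_SUFFIXES)) + "|[" + SAMI_LETTERS + "]+)")

 def absorb_trail(trail):
     """Return (absorbed, rest): the part of the trailing text that should be
     shown as part of the link, and the remainder that stays plain text."""
     m = trail_re.match(trail)
     absorbed = m.group(1) if m else ""
     return absorbed, trail[len(absorbed):]

 print(absorb_trail("rájes eret"))   # ('rájes', ' eret')
 print(absorb_trail(":as dilli"))    # (':as', ' dilli')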

Rewrite rules for link targets
Words are often changed according to more or less well-defined prefix, infix and suffix rules. A link should be checked in a successive fallback model whereby more and more aggressive rules are applied. If a run of a particular regex pattern, or a combination of such patterns, produces a match on an existing page then that page is used. All such rules add up, and if the total seems too radical the rewrite is dropped altogether. Such rules will sometimes fail, but they will save a lot of typing. For an example see the article Oppland at Norwegian (bokmål) Wikipedia, which becomes Opplándda at Northern Sami. Note that this is about whether it is possible to find a legal sequence of transformations from one known form into another known form; it is not a free-form translation, and it is not necessarily a rewrite that is right given the surrounding context. As such it is far easier to get right.

A possible and rather efficient method that succeeds more often than not is to use a set of patterns to rewrite the link entry into a base form. Each of the patterns is given a weight, and the weights add up when the patterns are chained together. A sequence of rewrites is deemed better if its sum of weights is lower than that of some other sequence of rewrites that also succeeds in reaching a possible target page. Typically the weights are adjusted edit distances, where the weights follow legal transforms in the given language. Note that both the name of the target page and the link name must be reduced to one or several base forms, and it is this common base form that is used to connect the entries.
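
A minimal sketch of this weighting scheme could look like the following; the rules, weights and page names are invented for the example, and a real rule set would encode legal Northern Sami transforms.

 import heapq

 def best_rewrites(word, rules, max_cost=4, max_results=20):
     """Best-first search over chained rewrites.  Each rule is
     (pattern, replacement, weight); weights add up along a chain, and a
     chain is abandoned once the sum passes max_cost."""
     best = {word: 0}
     heap = [(0, word)]
     while heap and len(best) < max_results:
         cost, form = heapq.heappop(heap)
         for pat, repl, weight in rules:
             if pat in form and cost + weight <= max_cost:
                 new = form.replace(pat, repl, 1)
                 if new not in best or best[new] > cost + weight:
                     best[new] = cost + weight
                     heapq.heappush(heap, (cost + weight, new))
     return best

 # invented rules: (pattern, replacement, weight as an adjusted edit distance)
 RULES = [("ánddas", "ándda", 1), ("ándda", "and", 2), ("as", "a", 1)]
 EXISTING_PAGES = {"Opplándda"}          # pages assumed to exist on the wiki

 link = "Opplánddas"                     # inflected form used in running text
 candidates = best_rewrites(link, RULES)
 hits = sorted((c, f) for f, c in candidates.items() if f in EXISTING_PAGES)
 print(hits[0] if hits else "no match, keep the red link " + link)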

Note especially that the Snowball algorithm can be used as an ultimate, and very simplistic, rewrite solution for written languages that rely heavily on suffix rules. This includes English and Norwegian, but not Northern Sami.
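
As a quick illustration, assuming the NLTK package with its Snowball stemmers is installed, stems can connect an inflected link text with an existing page title for a suffix-heavy language such as Norwegian; no such stemmer exists for Northern Sami, which is the limitation noted above.

 from nltk.stem.snowball import SnowballStemmer

 stemmer = SnowballStemmer("norwegian")
 page_titles = ["Oppland"]
 stems = {stemmer.stem(t): t for t in page_titles}   # stem -> existing page

 link_text = "Opplands"                  # inflected form in running text
 target = stems.get(stemmer.stem(link_text))
 print(target or "no match, keep the red link")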

If a sequence of rewrite attempts still does not end up at a target page, the process is terminated and the initial red link is used as is.

If a page uses a name with a parenthesis it is not obvious how the name should be rewritten, but probably the string within the parenthesis should be kept in both the link and the target page. Both should therefore have the same parenthesized string in the name.
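
A small sketch of that rule, where the rewrite function itself is only a stand-in, might look like this:

 import re

 def split_qualifier(name):
     """Split a trailing parenthesized qualifier off a page name, if any."""
     m = re.match(r"^(.*?)\s*\(([^)]*)\)$", name)
     return (m.group(1), m.group(2)) if m else (name, None)

 def rewrite_with_qualifier(name, rewrite):
     """Rewrite only the base part and keep the same qualifier on the result."""
     base, qualifier = split_qualifier(name)
     new = rewrite(base)
     return "%s (%s)" % (new, qualifier) if qualifier else new

 # the lambda stands in for whatever rewrite rules are actually applied
 print(rewrite_with_qualifier("Oppland (fylke)", lambda s: s + "da"))
 # -> Opplandda (fylke)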

Bumping category members
If it were possible to declare common categories and rules for initializing them, it would greatly help such small projects. Perhaps this could be a page where category names can be declared and connected to other, more general names in larger projects, with fallbacks if the given granularity isn't useful for the specific project. If there are too few articles about some geographical feature at the municipality level, then bump them up to the county. If the county is still too granular, then bump them up to the country level. The editor could categorize Máze at Northern Sami Wikipedia as a small place in Guovdageaidnu, but any entry to the municipality-level category Guovdageainnu báikkit would be bumped to the county-level category Finnmárkku báikkit as long as the municipality category contains too few members.

If it is possible to bump placement in categories in this way, it will also be possible to reuse categorization from another project. If Ole Henrik Magga at Norwegian (bokmål) Wikipedia is categorized at too fine a granularity, then the article is bumped upwards at Northern Sami Wikipedia until the categories work out. If subcategories are defined, then articles move down to lower ones as appropriate.
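
A sketch of the bumping rule, with an invented fallback chain and threshold, could be as simple as the following:

 MIN_MEMBERS = 10                                  # assumed threshold

 PARENT = {                                        # declared fallback chain
     "Guovdageainnu báikkit": "Finnmárkku báikkit",
     "Finnmárkku báikkit": "Norgga báikkit",
 }
 MEMBER_COUNT = {"Guovdageainnu báikkit": 3, "Finnmárkku báikkit": 40}

 def bump(category):
     """Walk up the declared fallback chain until a category is large enough."""
     while MEMBER_COUNT.get(category, 0) < MIN_MEMBERS and category in PARENT:
         category = PARENT[category]
     return category

 print(bump("Guovdageainnu báikkit"))              # -> Finnmárkku báikkit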

Generalized reusable templates
Templates are very difficult for beginners to make. It would be very helpful if a set of carefully crafted and localizable templates were available from the very beginning. This includes navigational aids, infoboxes and maintenance templates. Some of these are pretty easy to make reusable, while others are very difficult. If possible such templates could be stored at Commons, Meta or MediaWiki. For an example of reusable maintenance templates see $maintenance at Norwegian (bokmål) Wikipedia. That template is built for both localization and reusability.

Most of what is necessary is already partly implemented as part of image transclusion. What is necessary in addition is to be able to distribute system messages in an efficient manner. If a template shall be really useful, it must be possible to construct the call locally and use locally defined data, not only to construct the call on an external site.

Import infobox entries
Construction of articles from infoboxes on other projects could be very interesting, but right now it is very difficult even to reuse information across projects through cut and paste. The whole discussion about a data commons is a little too involved for this; what is necessary to get this to work is a simple syntax that makes importing data a straightforward task. Instead of an ordinary local transclusion there should be some kind of cross-project import syntax, possibly even acting only like a subst statement while the page is being stored. It is, though, more easily understandable to simply use it as a normal but expensive parser function.

The most interesting thing about this is that it will make it possible to create special templates that construct articles from infoboxes defined at other projects. An infobox like the one at Sør-Aurdal in Norwegian (bokmål) Wikipedia can be reused to create the article Mátta-Aurdála gielda at Northern Sami Wikipedia. This won't solve the maintainability problem, but it will make it easier to create new articles from a baseline.
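
The in-wiki syntax does not exist, but as an external illustration the same data can already be fetched through the standard MediaWiki API. The parameter parsing below is deliberately naive and ignores nested templates, and the sketch assumes the Python requests package.

 import re
 import requests

 def fetch_wikitext(title, site="no.wikipedia.org"):
     """Fetch the current wikitext of a page through the MediaWiki API."""
     reply = requests.get(
         "https://%s/w/api.php" % site,
         params={"action": "query", "prop": "revisions", "rvprop": "content",
                 "rvslots": "main", "titles": title,
                 "format": "json", "formatversion": "2"})
     page = reply.json()["query"]["pages"][0]
     return page["revisions"][0]["slots"]["main"]["content"]

 def infobox_fields(wikitext):
     """Very rough extraction of top-level "|name = value" lines."""
     fields = {}
     for m in re.finditer(r"^\|\s*([^=|{}]+?)\s*=\s*(.*)$", wikitext, re.M):
         fields[m.group(1)] = m.group(2).strip()
     return fields

 print(infobox_fields(fetch_wikitext("Sør-Aurdal")))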

List management
Because infoboxes may include lists of entries, there must be some method to transform such lists into readable sentences. In particular, a list of items should be transformed into something that has a head, a middle part and an end. Before, after, and in between all of those there should be joiners. In Northern Sami the joiner between the middle and end parts is ja, in English it is and, and in Norwegian it is og. All other joiners are set to comma. Other languages may have other joiners. On a per-instance basis it should be possible to override this, for example through an extra parameter or a more XPath-like selector. It could also be interesting to have an import function that switches between XPath for parsed content and an alternative for direct traversal of the wiki markup.
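
A sketch of such a joiner, with the per-language table reduced to the three examples above:

 LAST_JOINER = {"se": " ja ", "en": " and ", "nb": " og "}

 def join_items(items, lang="se", joiner=", "):
     """Join a list of entries into a readable phrase with a per-language
     joiner between the middle and end parts; everything else gets a comma."""
     if not items:
         return ""
     if len(items) == 1:
         return items[0]
     return joiner.join(items[:-1]) + LAST_JOINER.get(lang, joiner) + items[-1]

 print(join_items(["Kárášjohka", "Guovdageaidnu", "Deatnu"], "se"))
 # -> Kárášjohka, Guovdageaidnu ja Deatnu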

Note that today it is not easily possible to use lists inside templates; if they are used, one has to use tricks or prepare the template for such special markup. The problem is basically that parsing of the template eats the initial spaces. This should be fixed somehow. Probably the template parameters should be checked for their first character, and if a list identifier is found then a list should be generated.

It should be possible to use a list as the first argument to plural, thereby switching between alternate containing strings. Possibly there should be a parser function that counts the number of list entries and returns that number. This is rather simple for ordered and unordered lists, but not so simple for definition lists. In that case it is not obvious whether we should count the dt or dd elements, or perhaps both of them.
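
A counting helper along those lines might, as a sketch, report the dt and dd lines separately and leave the choice to the caller:

 def count_list_entries(wikitext):
     """Count list lines in a piece of wiki markup.  Definition lists are
     reported as separate dt (";") and dd (":") counts."""
     counts = {"ul": 0, "ol": 0, "dt": 0, "dd": 0}
     for line in wikitext.splitlines():
         if line.startswith("*"):
             counts["ul"] += 1
         elif line.startswith("#"):
             counts["ol"] += 1
         elif line.startswith(";"):
             counts["dt"] += 1
         elif line.startswith(":"):
             counts["dd"] += 1
     return counts

 print(count_list_entries("* Kárášjohka\n* Guovdageaidnu\n* Deatnu"))
 # -> {'ul': 3, 'ol': 0, 'dt': 0, 'dd': 0}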

Constrained translations
Sometimes a parameter to an infobox will have a specific form, but this form does not fit a new role in running text. In Northern Sami this will for example be the situation where a place name is imported and used in an aggregate to name a church. In Norwegian we write Aurdal kirke, while in English this becomes Aurdal church; the name of the place does not change. In Northern Sami this becomes Aurdalas girku. This kind of transformation is defined as part of tools created by Sámi giellatekno, a language project at the University of Tromsø. The syntax is pretty simple and straightforward, and we could do something like it in a parser function. In this specific example the call Aurdal+N+Prop+Plc+Sg+Loc will generate Aurdalas, and we may write this as a parser function call where the word is Aurdal and the parameters are +N+Prop+Plc+Sg+Loc. We can even slightly redefine this to make some sort of guided translation of fixed strings. If it fails, the translation can simply be left to the editor.
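
The shape of such a parser function can be sketched with a toy generator standing in for the real giellatekno tools; the lookup table below holds only the single example.

 # toy stand-in for a morphological generator; a real implementation would
 # call the giellatekno tools instead of using a lookup table
 GENERATOR = {("Aurdal", "+N+Prop+Plc+Sg+Loc"): "Aurdalas"}

 def generate(word, tags):
     """Return the generated surface form, or None so the translation can be
     left to the editor when generation fails."""
     return GENERATOR.get((word, tags))

 print(generate("Aurdal", "+N+Prop+Plc+Sg+Loc"))    # -> Aurdalas
 print(generate("Oppland", "+N+Prop+Plc+Sg+Loc"))   # -> None, left to the editor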

This can also be generalized to translate by example, and it should also be possible to change the parameter set generated from an example. With such a call we simply say "translate Aurdal like Alvdalas". Alvdalas will then be analyzed and will produce +N+Prop+Plc+Sg+Loc, which is then used as the parameters for generating Aurdalas from Aurdal.

Sometimes the result of the analysis will be insufficient, and we will have to refine it by adding or removing switches, whereby Aurdal becomes for example Aurdala and not Aurdalas. In addition there are times when we don't know if we have a complete match. In those circumstances we want to get as close as possible to a given example, and a variant of the same call could express that.
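
Translate-by-example and tag refinement can be sketched the same way, again with toy analyse and generate tables; the +Gen tag used in the refinement below is only an assumed illustration.

 ANALYSES = {"Alvdalas": ("Alvdal", "+N+Prop+Plc+Sg+Loc")}
 GENERATOR = {
     ("Aurdal", "+N+Prop+Plc+Sg+Loc"): "Aurdalas",
     ("Aurdal", "+N+Prop+Plc+Sg+Gen"): "Aurdala",
 }

 def translate_like(word, example, add="", remove=""):
     """Analyse the example, optionally adjust the tag string, then generate."""
     if example not in ANALYSES:
         return None
     _, tags = ANALYSES[example]
     if remove:
         tags = tags.replace(remove, "")
     tags += add
     return GENERATOR.get((word, tags))

 print(translate_like("Aurdal", "Alvdalas"))                            # -> Aurdalas
 print(translate_like("Aurdal", "Alvdalas", add="+Gen", remove="+Loc")) # -> Aurdala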

It should be possible to combine the function with other parser functions, especially functions for system messages like plural.

Final notes
It is important to note that this kind of article production is not about translating existing articles; it is about creating articles from well-defined infoboxes in other languages. It seems like statistical translation will not work very well, but rule-based translation like that produced by Apertium can work.