Help:Extension:Translate/Translation memories/da

TTMServer is a translation memory server bundled with the Translate extension. It requires no external dependencies. It is enabled by default and replaces the support for tmserver from translatetoolkit, which was hard to set up. TTMServer is a simple translation memory and does not use any advanced algorithms; it does, however, take advantage of MediaWiki's excellent language support and of the database's retrieval capabilities.

There are three different ways to use TTMServer:

Configuration
All translation aids, including the translation memory, are configured with the $wgTranslateTranslationServices configuration setting. Example configuration of TTMServers:
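A minimal sketch of what such a configuration could look like for the default database backend is shown below; the keys and values are illustrative and may differ from the defaults shipped with your version of the extension:

  $wgTranslateTranslationServices['TTMServer'] = array(
      'type' => 'ttmserver',   // the default database backend (assumption)
      'database' => false,     // false = use the local wiki database (assumption)
      'cutoff' => 0.75,        // minimum quality for suggestions (assumption)
      'public' => false,       // whether the service is exposed through the web API (assumption)
  );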

Possible keys and values are:

Currently only MySQL is supported for the databases.

TTMServer API
If you want to use your own TTMServer service, here are the specifications.

Query parameters:

Your service must accept the following parameters:

Your service must return a JSON object that has the key ttmserver containing an array of objects. These objects must contain the following data. Example:


 * URL: http://translatewiki.net/w/api.php?action=ttmserver&sourcelanguage=en&targetlanguage=fi&text=january&format=jsonfm
 * Result:
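The actual reply is not reproduced here; purely for illustration, assuming the ttmserver key and typical suggestion fields (source, target, context, quality) with invented values, it could look roughly like this:

  {
      "ttmserver": [
          {
              "source": "January",
              "target": "tammikuu",
              "context": "MediaWiki:Jan",
              "quality": 0.85
          }
      ]
  }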

TTMServer architecture
The backend contains three tables, which correspond to sources, targets and fulltext. You can find the table definitions in the SQL files shipped with the extension. The sources contain all the message definitions. Even though these are usually in the same language, i.e. English, the language of the text is also stored for the rare cases where this is not true.

Each entry has a unique id and two extra fields, length and context. The length is used as a first-pass filter, so that when querying we do not need to compare the text we are searching for with every entry in the database. The context stores the title of the page the text comes from, for example "MediaWiki:Jan/en". From this information we can link the suggestions back to "MediaWiki:Jan/da", which lets translators quickly fix things, or simply see where that kind of translation has been used.
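The length works as a cheap first pass because the edit distance between two strings is never smaller than the difference between their lengths, so entries whose stored length is too far from the length of the query text can be skipped without any comparison. A rough sketch of such a condition, with hypothetical column names and bounds (the real bounds are derived from the configured cutoff):

  $len = mb_strlen( $searchText );
  // Hypothetical column names (tms_lang, tms_len) and bounds, for illustration only.
  $conds = array(
      'tms_lang' => $sourceLanguage,
      'tms_len >= ' . intval( $len * 0.5 ),
      'tms_len <= ' . intval( $len * 2 ),
  );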

The second filtering pass comes from the fulltext search. The definitions are processed with an ad hoc algorithm. First the text is split into segments (words) using MediaWiki's language-aware segmentation. If there are enough segments, we basically strip everything that is not a letter and normalise the case. Then we take the first 10 unique words that are at least 5 bytes long (5 letters in English, but shorter words for languages with multibyte code points). These words are then stored in the fulltext index and used for further filtering of longer strings.
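A rough sketch of that extraction step in plain PHP; the function name is invented, and MediaWiki's language-aware segmentation is approximated with a simple whitespace split:

  function extractIndexWords( $definition ) {
      // Crude stand-in for MediaWiki's word segmentation.
      $segments = preg_split( '/\s+/u', $definition );
      $words = array();
      foreach ( $segments as $segment ) {
          // Strip everything that is not a letter and normalise the case.
          $word = mb_strtolower( preg_replace( '/\P{L}+/u', '', $segment ) );
          // Keep at most ten unique words of at least five bytes.
          if ( strlen( $word ) >= 5 && !in_array( $word, $words, true ) ) {
              $words[] = $word;
              if ( count( $words ) >= 10 ) {
                  break;
              }
          }
      }
      return $words;
  }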

When we have filtered the list of candidates, we fetch the matching targets from the targets table. Then we apply the Levenshtein edit distance algorithm to do the final filtering and ranking. Let's define:


 * E : edit distance
 * S : the text we are searching suggestions for
 * Tc : the suggestion text
 * To : the original text of which Tc is a translation

The quality of a suggestion Tc is calculated as E/min(length(Tc), length(To)). Depending on the length of the strings, we use either PHP's native levenshtein function or, if either of the strings is longer than 255 bytes, a PHP implementation of the Levenshtein algorithm. It has not been tested whether the native implementation of levenshtein handles multibyte characters correctly. This might be another weak point when the source language is not English (the others being the fulltext search and the segmentation).
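A minimal sketch of that calculation; the function names are invented and the pure-PHP fallback is only hinted at:

  function suggestionQuality( $searchText, $original, $suggestion ) {
      // PHP's native levenshtein() is limited to strings of at most 255 bytes.
      if ( strlen( $searchText ) > 255 || strlen( $original ) > 255 ) {
          $distance = pure_php_levenshtein( $searchText, $original ); // hypothetical fallback
      } else {
          $distance = levenshtein( $searchText, $original );
      }
      // E / min( length( Tc ), length( To ) ), as defined above.
      return $distance / min( strlen( $suggestion ), strlen( $original ) );
  }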

There is a script which fills the translation memory with translations from the active message groups. Even big sites should be able to bootstrap the memory in half an hour when using multiple threads (the script has a parameter for this). The time depends heavily on how complete the message group statistics are (incomplete ones will be calculated during the bootstrap). New translations are automatically added by a hook. New sources (definitions) are added when the first translation is added.

Old translations which are no longer used and do not belong to any message group are not purged automatically unless you rerun the bootstrap script. When the translation of a message is updated, the previous translation is removed from the memory. When the definition is updated, nothing happens immediately. When translations are updated against the new definition, a new entry is added. The old definition and its old translations remain in the database until purged by rerunning the bootstrap script. Fuzzy translations are not added to the translation memory, but existing translations are not removed from the memory when they are fuzzied either.

Solr backend
Much of the above also applies to TTMServer when using the Solr search platform as the backend, except for the details of the database layout and queries. By default the results are ranked with the Levenshtein algorithm on the Solr side, but other available string matching algorithms, such as ngram matching, can also be used.

In Solr there are no tables. Instead we have documents with fields; each translation has its own document, and the message documentation has one too. To actually get suggestions, we first perform a search over all documents in the source language, sorted by the string similarity algorithm. Then we do another query to fetch the translations, if any, for those messages. Here is an example document:
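The field names and values below are illustrative only; apart from globalid and the separate wiki, message and language fields described under "Identifier fields" further down, they are not taken from the actual schema:

  $doc = array(
      'globalid'  => '...',            // combination of the identifier fields listed below
      'wiki'      => 'examplewiki',    // wiki identifier (MediaWiki database id)
      'messageid' => 'MediaWiki:Jan',  // title of the base page
      'language'  => 'fi',
      'content'   => 'tammikuu',       // the text of this translation
  );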

We are using lots of hooks to keep the translation memory database updated in almost real time. If a user translates similar messages one after another, the previous translation can (in the best case) already be displayed as a suggestion for the next message.

Initial import
 * 1) Execute the ttmserver-export.php command line script for each wiki using the shared translation memory.

New translation (if not fuzzy)
 * 1) Create document

Updated translation (if not fuzzy)
 * 1) Delete wiki:X language:Y message:Z
 * 2) Create document

Updated message definition
All existing documents for the message stay around, because the globalid is different.
 * 1) Create new document

Translation is fuzzied
 * 1) Delete wiki:X language:Y message:Z

Message changes group membership
 * 1) Delete wiki:X message:Z
 * 2) Create document (for all languages)

Message goes out of use
Any further changes to definitions or translations are not updated to the TM.
 * 1) Delete wiki:X message:Z
 * 2) Create document (for all languages)

Translation memory query
 * 1) Collect similar messages with strdist("message definition",content)
 * 2) Collect translation with globalid:[A,B,C]

Search query
Can be narrowed down further by facets on the language or group field.
 * 1) Find all matches with text:"search query"

Identifier fields
The globalid field uniquely identifies the translation or message definition; its value combines the following fields:
 * wiki identifier (MediaWiki database id)
 * message identifier (Title of the base page)
 * message version identifier (Revision id of the message definition page)
 * message language

In addition we have separate fields for wiki id, message id and language to make the delete queries listed above possible.

Installation
Here are the general quick steps for installing and configuring Solr for TTMServer; you should adapt them to your situation. To use the Solr backend you also need the Solarium library. The easiest way is to install the Solarium MediaWiki extension. See the example configuration for the Solr backend in the configuration section of this page. You can pass extra configuration to Solarium via a key in the service configuration, as is done for example in the Wikimedia configuration.
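As a rough sketch only (the type value, the name of the pass-through key and the shape of the Solarium options are assumptions; the configuration section mentioned above has the authoritative example):

  $wgTranslateTranslationServices['TTMServer'] = array(
      'type' => 'solr',            // assumption: selects the Solr backend
      'cutoff' => 0.75,
      'config' => array(           // assumption: extra configuration passed through to Solarium
          'endpoint' => array(
              'localhost' => array( 'host' => 'localhost', 'port' => 8983, 'path' => '/solr' ),
          ),
      ),
  );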

And finally we can populate the translation memory with content.
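For example, something along these lines; the path depends on where the extension is installed, and the threads option is an assumption:

  php extensions/Translate/scripts/ttmserver-export.php --threads 2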