Help:Extension:Translate/Translation memories/zh

TTMServer is the translation memory server bundled with the Translate extension. It has no external dependencies and is enabled by default, replacing the earlier support for the tmserver provided by translatetoolkit (which was difficult to set up). TTMServer is a simple translation memory: it does not use any advanced algorithms, but it takes advantage of MediaWiki's good language support and database abstraction.

TTMServer can be used in three different ways:

Configuration
All translation aids, including the translation memory, are configured with the $wgTranslateTranslationServices setting. Example TTMServer configuration:
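A minimal sketch of what a database-backend configuration in LocalSettings.php could look like; the key names and values below are based on the default DatabaseTTMServer setup and may differ between versions of the extension:

```php
// Hypothetical example; adjust the values to your own wiki.
$wgTranslateTranslationServices['TTMServer'] = array(
	'database' => false, // false means the local wiki's database
	'cutoff' => 0.75,    // minimum quality for a suggestion to be shown
	'type' => 'ttmserver',
	'public' => false,   // whether other wikis may query this service
);
```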

The possible keys and values are:

Currently only the MySQL database is supported.

TTMServer API
If you want to implement your own TTMServer service, see the detailed specification below.

Query parameters:

Your service must accept the request parameters shown in the example query below: the text to look up, its source language and the wanted target language. It must reply with a JSON object that contains an array of suggestion objects; each suggestion carries data such as the matched source text, the suggested translation and a quality value. For example:


 * Request: http://translatewiki.net/w/api.php?action=ttmserver&sourcelanguage=en&targetlanguage=fi&text=january&format=jsonfm
 * Response:
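An illustrative response; the field names below approximate what the translatewiki.net service returns and are not captured output:

```json
{
	"ttmserver": [
		{
			"source": "January",
			"target": "tammikuu",
			"context": "MediaWiki:Jan",
			"location": "//translatewiki.net/wiki/MediaWiki:Jan/fi",
			"quality": 0.85
		}
	]
}
```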

TTMServer architecture
The backend contains three database tables, corresponding to sources, targets and fulltext. You can find the table definitions in the SQL files shipped with the extension. The sources table contains all the message definitions. Even though they are usually all in the same language, say English, the language of the text is also stored for the rare cases where this is not true.

Each entry has a unique id and two extra fields, length and context. Length is used as the first pass filter, so that when querying we don't need to compare the text we're searching with every entry in the database. The context stores the title of the page where the text comes from, for example "MediaWiki:Jan/en". From this information we can link the suggestions back to "MediaWiki:Jan/de", which makes it possible for translators to quickly fix things, or just to determine where that kind of translation was used.
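A sketch of the idea behind this first-pass length filter, using MediaWiki's database abstraction; the table and column names and the tolerance value are illustrative, not the extension's actual schema:

```php
// Only fetch source entries whose length is close enough to the search text
// that a sufficiently similar match is still possible.
$searchLength = mb_strlen( $searchText );
$tolerance = (int) ceil( $searchLength * 0.5 ); // illustrative threshold

$candidates = $dbr->select(
	'sources', // illustrative table name
	array( 'id', 'text', 'context' ),
	array(
		'language' => $sourceLanguage,
		'length BETWEEN ' . ( $searchLength - $tolerance ) .
			' AND ' . ( $searchLength + $tolerance ),
	),
	__METHOD__
);
```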

The second pass of filtering comes from the fulltext search. The definitions are mangled with an ad hoc algorithm. First the text is segmented into words using MediaWiki's language-specific segmentation. If there are enough segments, we strip basically everything that is not word letters and normalize the case. Then we take the first ten unique words that are at least 5 bytes long (5 letters in English, but even shorter words for languages with multibyte code points). Those words are then stored in the fulltext index for further filtering of longer strings.
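A sketch of the word-extraction step just described; plain whitespace splitting stands in for MediaWiki's language-aware segmentation, and the constants follow the text above (at most ten unique words, each at least five bytes long):

```php
// Reduce a message definition to the words stored in the fulltext index.
function extractIndexWords( string $text ): array {
	// Stand-in for MediaWiki's language-specific segmentation.
	$segments = preg_split( '/\s+/u', $text, -1, PREG_SPLIT_NO_EMPTY );

	$words = array();
	foreach ( $segments as $segment ) {
		// Strip everything that is not a word character and normalise the case.
		$word = mb_strtolower( preg_replace( '/[^\p{L}\p{N}]+/u', '', $segment ) );
		// Keep only unique words that are at least 5 bytes long.
		if ( strlen( $word ) < 5 || isset( $words[$word] ) ) {
			continue;
		}
		$words[$word] = true;
		if ( count( $words ) >= 10 ) {
			break;
		}
	}

	return array_keys( $words );
}
```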

When we have filtered the list of candidates, we fetch the matching targets from the targets table. Then we apply the levenshtein edit distance algorithm to do the final filtering and ranking. Let's define:


 * E : the edit distance between S and To
 * S : the text we are searching suggestions for
 * Tc : the suggestion text
 * To : the original text that Tc is a translation of

The quality of suggestion Tc is calculated as E/min(length(Tc),length(To)). Depending on the length of the strings, we use either PHP's native levenshtein function or, if either of the strings is longer than 255 bytes, a levenshtein implementation written in PHP. It has not been tested whether the native implementation handles multibyte characters correctly; this might be another weak point when the source language is not English (the others being the fulltext search and the segmentation).
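A sketch of this ranking step as described above, not the extension's actual code; the function name is made up and the switch to a PHP-level levenshtein for long strings is only indicated in a comment:

```php
// Score a candidate: $search is S, $original is To, $suggestion is Tc.
// Lower scores mean closer matches.
function suggestionScore( string $search, string $original, string $suggestion ): float {
	// PHP's built-in levenshtein() only handles strings up to 255 bytes;
	// longer strings would need a levenshtein implementation written in PHP.
	$e = levenshtein( $search, $original ); // E: edit distance between S and To

	// Normalise by the shorter of the suggestion and the original text,
	// as in the formula E/min(length(Tc), length(To)).
	return $e / min( strlen( $suggestion ), strlen( $original ) );
}
```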

There is a maintenance script, ttmserver-export.php, which fills the translation memory with translations from the active message groups. Even big sites should be able to bootstrap the memory in half an hour when running it with multiple threads (see the sketch below). The time depends heavily on how complete the message group statistics are (incomplete ones will be calculated during the bootstrap). New translations are automatically added by a hook. New sources (definitions) are added when the first translation is added.
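A sketch of how the bootstrap could be invoked from the command line; the script path and the threads option are assumptions based on the usual layout of the extension's maintenance scripts:

```
php extensions/Translate/scripts/ttmserver-export.php --threads=4
```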

Old translations which are no longer used and do not belong to any message group are not purged automatically unless you rerun the bootstrap script. When the translation of a message is updated, the previous translation is removed from the memory. When the definition is updated, nothing happens immediately; when translations are updated against the new definition, a new entry is added. The old definition and its old translations remain in the database until purged by rerunning the bootstrap script. Fuzzy translations are not added to the translation memory, but neither are existing translations removed from the memory when they are fuzzied.

Solr backend
Much of the above also applies to TTMServer when the Solr search platform is used as the backend, except for the details of the database layout and queries. The results are by default ranked with the levenshtein algorithm on the Solr side, but other available string matching algorithms, such as ngram matching, can also be used.

In Solr there are no tables; instead there are documents with fields. Each translation has its own document, and the message documentation has one too; an example document is sketched below. To actually get suggestions, we first perform a search sorted by the string similarity algorithm over all documents in the source language. Then we do another query to fetch the translations, if any, for those messages.
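A sketch of what such a document could look like; the field names follow the fields discussed later on this page (wiki id, message id, language, globalid, content, group) but are not the exact schema, and the values are made up:

```json
{
	"globalid": "examplewiki-MediaWiki:Jan-123456-fi",
	"wiki": "examplewiki",
	"messageid": "MediaWiki:Jan",
	"language": "fi",
	"content": "tammikuu",
	"group": ["core"]
}
```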

We use lots of hooks to keep the translation memory database updated in almost real time. If a user translates similar messages one after another, the previous translation can (in the best case) already be shown as a suggestion for the next message.

Initial import
 * 1) Execute the ttmserver-export.php command line script for each wiki using the shared translation memory.

New translation (if not fuzzy)
 * 1) Create document

Updated translation (if not fuzzy)
 * 1) Delete wiki:X language:Y message:Z
 * 2) Create document

Updated message definition (all existing documents for the message stay around, because the globalid is different)
 * 1) Create new document

Translation is fuzzied
 * 1) Delete wiki:X language:Y message:Z

Message changes group membership
 * 1) Delete wiki:X message:Z
 * 2) Create document (for all languages)

Message goes out of use (any further changes to definitions or translations are no longer updated to the TM)
 * 1) Delete wiki:X message:Z
 * 2) Create document (for all languages)

Translation memory query
 * 1) Collect similar messages with strdist("message definition",content)
 * 2) Collect translation with globalid:[A,B,C]
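A sketch of what these two queries could look like as raw Solr requests; the field names and the use of Solr's strdist function for sorting are assumptions consistent with the steps above, not the extension's exact queries:

```
# 1) Rank source-language documents by similarity to the message definition
q=language:en
sort=strdist("message definition",content,edit) desc

# 2) Fetch the translations for the best matches in the wanted target language
q=globalid:(A OR B OR C) AND language:fi
```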

Search query (can be narrowed further with facets on the language or group field)
 * 1) Find all matches with text:"search query"
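A sketch of such a search request with facets; standard Solr facet parameters are used, but the field names are assumptions:

```
q=text:"search query"
facet=true
facet.field=language
facet.field=group
```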

Identifier fields The globalid field uniquely identifies a translation or message definition by combining the following pieces of information:
 * wiki identifier (MediaWiki database id)
 * message identifier (Title of the base page)
 * message version identifier (Revision id of the message definition page)
 * message language

In addition we have separate fields for wiki id, message id and language to make the delete queries listed above possible.

Installation
Here are the general quick steps for installing and configuring Solr for TTMServer; you should adapt them to your situation. To use the Solr backend you also need the Solarium library; the easiest way to get it is to install the Solarium MediaWiki extension. See the example configuration for the Solr backend in the configuration section of this page. Extra configuration can also be passed through to Solarium in the service configuration.

And finally we can populate the translation memory with content.