Unicode normalization considerations

Why normalize?
Since version 1.4, MediaWiki has applied normalization form C (NFC) to Unicode text input. There are good reasons for this normalization:


 * It avoids collisions between page titles that look identical but are built from different component characters (see the sketch after this list)
 * Browsers other than Safari submit text with graphemes precomposed, so pages display normally; Safari, however, submits decomposed sequences, so the names of uploaded media files (audio, video, and so on) arrive with their graphemes decomposed, and page titles built from them end up displayed in decomposed form as well
 * It ensures that search behaves predictably regardless of the composition form of the query
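
To make the collision point concrete, here is a minimal Python sketch (the title string is invented for illustration; unicodedata is Python's standard module):

```python
import unicodedata

# Two visually identical titles built from different code points:
precomposed = "Caf\u00e9"    # 'é' as the single precomposed character U+00E9
decomposed  = "Cafe\u0301"   # 'e' followed by U+0301 COMBINING ACUTE ACCENT

print(precomposed == decomposed)    # False: two distinct titles without normalization
print(unicodedata.normalize("NFC", precomposed)
      == unicodedata.normalize("NFC", decomposed))    # True: NFC makes them collide
```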

Form C was chosen because:

 * A huge amount of data had already been entered in form C, using precomposed characters
 * Form C is supposed to be relatively lossless, with the only changes being invisible transformations between base character + combining character sequences and precomposed characters. In theory, text should never change appearance because it's been normalized to form C.
 * And further, the W3C recommends it.

The problem
However, as time has gone by, a few issues have shown up.


 * Some Arabic, Persian, and Hebrew combining vowel markers sort incorrectly.
   * Some of these are just buggy fonts or renderers and only affect some platforms.
   * A few cases, however, can produce incorrect text, because the defined combining classes don't include enough distinctions to produce semantically correct ordering. This affects primarily older texts such as Biblical Hebrew (see the sketch after this list).
 * A surprising composition exclusion in Bangla.
   * The result doesn't render right with some tools, probably again a platform-specific bug.
 * Some third-party search tools apparently don't know how to normalize and fail to locate texts so normalized.
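
Both failure modes can be reproduced directly; a small Python illustration (the code points are examples I have chosen, not necessarily the exact ones reported):

```python
import unicodedata

# Canonical ordering: NFC sorts combining marks by canonical combining class,
# which can scramble the semantic order of Biblical Hebrew vowel points.
s = "\u05dc\u05b8\u05b4"    # lamed, qamats (class 18), hiriq (class 14)
print([hex(ord(c)) for c in unicodedata.normalize("NFC", s)])
# ['0x5dc', '0x5b4', '0x5b8'] -- hiriq now precedes qamats

# Composition exclusion: Bengali RRA (U+09DC) is decomposed by NFC and,
# being excluded from recomposition, stays as a two-character sequence.
print([hex(ord(c)) for c in unicodedata.normalize("NFC", "\u09dc")])
# ['0x9a1', '0x9bc'] -- DDA followed by the combining nukta
```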

The rendering and third-party search problems are annoying, though if we stay on our high horse we can try to ignore them and let the other parties fix their broken software over time.

The canonical ordering problems are a harder issue; you simply can't get these right by following the current specs. Unicode won't change the ordering definitions because it would break their compatibility rules, so unless they introduce *new* characters with the correct values... Well, it's not clear this is going to happen.

What can we do about it?
We can either ignore it and hope it goes away (easy, but entails dealing with ongoing complaints from particular linguistic groups), or we can give up on comprehensive normalization and change how we use it to maximize the benefits while minimizing the problems.

If we consider normalization form C (NFC) to be destructive (though not as much as its evil little sister NFKC), one possible plan might look like this:


 * Remove the normalization check on all web input; replace it with a more limited check for UTF-8 validity, but allow funny composition forms through as-is.
 * Apply NFC directly in the places where it's most needed (see the sketch after this list):
   * Page title normalization in Title::secureAndSplit
   * Search engine index generation
   * Search engine queries
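
A rough Python sketch of that division of labor (the function names are invented; in MediaWiki itself the title step is PHP code in Title::secureAndSplit):

```python
import unicodedata

def check_web_input(raw: bytes) -> str:
    """Validate UTF-8 only; unusual composition forms pass through as-is."""
    return raw.decode("utf-8")    # raises UnicodeDecodeError on invalid bytes

def normalize_title(title: str) -> str:
    """NFC-normalize page titles so visually identical titles link together."""
    return unicodedata.normalize("NFC", title)

def normalize_for_search(text: str) -> str:
    """NFC-normalize both indexed text and incoming queries, so matches
    don't depend on the composition form the author happened to type."""
    return unicodedata.normalize("NFC", text)
```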

This is minimally invasive, allowing page text to contain arbitrary composition forms while ensuring that linking and internal search continue to work. It requires no database format changes, and could be switched on without service disruption.

However, it does leave visible page titles in the normalized, potentially ugly or incorrect form.

Longer term
A further possibility would be to allow page titles to be displayed in non-normalized forms. This might be done in concert with allowing arbitrary case forms ('iMonkey' instead of 'IMonkey').

In this case, the page table might be changed to include a display title form:

page_title:         'IMonkey'
page_display_title: 'iMonkey'

or perhaps even scarier case-folded stuff:

page_title:         'imonkey'
page_display_title: 'iMonkey'

The canonical and display titles would always be transformable to one another to maintain purity of wiki essence; you should be able to copy the title with your mouse and paste it into a link and expect it to work.
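
As a sketch of that invariant (the first-letter case rule here is hypothetical, mirroring only the 'iMonkey' example above):

```python
import unicodedata

def canonical_title(display_title: str) -> str:
    """Map a display title to its canonical page_title: NFC plus
    upper-casing the first letter (a hypothetical canonicalization rule)."""
    t = unicodedata.normalize("NFC", display_title)
    return t[:1].upper() + t[1:]

# Pasting a copied display title into a link must reach the same page:
assert canonical_title("iMonkey") == "IMonkey"
```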

These kinds of changes could be more disruptive, requiring changes to the database structure and possibly massive swapping of data around in the tables from one form to another, so we might avoid them unless there are big benefits to be gained.

Other normalization forms
NFC was originally chosen because it's supposed to be semantically lossless, but experience has shown that that's not quite as true as we'd hoped.

We may then consider NFKC, the compatibility composition form, for at least some purposes. It is more explicitly lossy; the compatibility forms are recommended for performing searches since they fold additional distinctions, mapping characters such as "full-width" Latin letters onto their plain Latin equivalents.
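
For instance, a quick demonstration of the extra folding NFKC performs (Python's unicodedata again):

```python
import unicodedata

fullwidth = "ＭｅｄｉａＷｉｋｉ"    # "MediaWiki" in full-width compatibility letters

print(unicodedata.normalize("NFC", fullwidth) == "MediaWiki")     # False: NFC keeps them
print(unicodedata.normalize("NFKC", fullwidth) == "MediaWiki")    # True: NFKC folds them
```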

It would likely be appropriate to use NFKC for building the search index and to run on search input to get some additional matches on funny stuff. I'm not sure if it's safe enough for page titles, though; perhaps with a display title, but probably not without.

Normalization and unicodification can both be done by bots. While no bot has yet been known to normalize, the function is possible. The "Curpsbot-unicodify" bot has unicodified various articles on Wikipedia, and this should not be undone.