
I'm fond of Unicode and of what romanization attempts. My goal is to get the first Latin letter (for sorting purposes) of a non-Latin character - so far I have succeeded in transcribing Cyrillic, Greek, Hebrew, Katakana, Hiragana, Hangul (including all its syllables), Berber, Thai and Arabic letters by assigning the most appropriate starting letter to each case.

I also know that multiple systems for transliteration, transcription and romanization exist - so far their differences are almost irrelevant for my needs. I'm not familiar with Japanese itself - at most I might be able to recognize English terms written in katakana.

My problem is: how do I assign Latin letters to Unicode code points U+4E00 through U+9FFF by algorithm? For Hangul syllables this is quite easy: U+AC00 through U+B097 => K (as all of them start with that sound); U+B098 through U+B2E3 => N. I've looked at JS solutions like https://github.com/hexenq/kuroshiro/ and https://github.com/WaniKani/WanaKana/, but I only find code for processing hiragana and katakana (which I already handle), never kanji (although all of their demos succeed in processing them).
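For what it's worth, the Hangul ranges mentioned above fall out of the standard arithmetic layout of the Hangul Syllables block (19 initials × 21 vowels × 28 finals = 588 code points per initial consonant, starting at U+AC00). A minimal Python sketch - the function name and the exact Latin letter chosen per initial are my own, following the K/N assignment above:

```python
# One Latin letter per initial consonant (jamo), in Unicode order:
# ㄱ ㄲ ㄴ ㄷ ㄸ ㄹ ㅁ ㅂ ㅃ ㅅ ㅆ ㅇ ㅈ ㅉ ㅊ ㅋ ㅌ ㅍ ㅎ
# (the question files both ㄱ and ㄲ under K; romanization choices vary)
INITIAL_TO_LATIN = [
    "K", "K", "N", "D", "T", "R", "M", "B", "P", "S",
    "S", "",  "J", "J", "C", "K", "T", "P", "H",
]

def hangul_first_latin(ch: str) -> str:
    cp = ord(ch)
    if not 0xAC00 <= cp <= 0xD7A3:
        raise ValueError("not a precomposed Hangul syllable")
    # 588 syllables share each initial consonant, so integer
    # division recovers the initial's index.
    return INITIAL_TO_LATIN[(cp - 0xAC00) // 588]
```

The empty string for ㅇ reflects that it is silent as an initial; a vowel-initial syllable would need its own rule.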

Is there a table or dictionary? If romanization of kanji is achieved by first converting each kanji into katakana, then how is that conversion done?

AmigoJack
    This question is probably off topic at this forum but for what it's worth, kanji in Japanese have multiple pronunciations and you will not be able to assign a definitive pronunciation to most of them. The best you could do is find a downloadable kanji database and choose a pronunciation (it will probably be in hiragana) arbitrarily. Perhaps the database will also include some form of probability that you could use to your advantage. – G-Cam Nov 08 '18 at 13:31
    I'm curious, **why**? Granted, you _can_ assign a Latin letter to each kanji, but as others have noted, that is mostly an arbitrary process. I'm left scratching my head as to what use this would be. I suppose as a general coding project, it might be a fun puzzle, but as to final utility, I'm baffled. – Eiríkr Útlendi Nov 08 '18 at 14:07
  • The goal is to have both "Takkyu" and "卓球" listed under "T", instead of having all non-Latin words/names listed under "#", appealing to users who would rather deal with Latin letters. It may not make sense for kanji/CJK, but it does for e.g. Hangul and Cyrillic ("Yulia" and "Ю́лия" both under "Y") - most probably I'll realize it makes too little sense for CJK ideographs. – AmigoJack Nov 08 '18 at 14:39
    @AmigoJack Wait, so you're trying to deal with **words** (卓球) rather than **characters** (卓)? Then the character-based approach described in my answer will make almost no sense for Japanese. One kanji can have many readings, and Unihan_Readings.txt is almost useless for determining the reading of an individual **word**. For example 生命 is **S**eimei, 生地 is **K**iji, 生卵 is **N**amatamago and 生霊 is **I**kiryo. What you may need is a morphological analyzer introduced [here](https://japanese.stackexchange.com/q/56640/5010). – naruto Nov 08 '18 at 14:58
  • That would require me to have a word dictionary, too. So far I thought "卓" would always result in something similar to "Taku" (my goal is even lower: starting with "T" when being spoken) - with these new insights I'll rather stop my journey and consider (words made of) Kanjis to be too difficult to romanize, although I still don't know how other programs achieve this without accessing online resources. – AmigoJack Nov 08 '18 at 15:23
    [This](https://www.tofugu.com/japanese/onyomi-kunyomi/) is a must-read if you want to go any further. Perfect romanization is incredibly difficult to implement from scratch, but there are several free projects. And the dictionary you need is not large unless you have to handle very rare words. Have you tried [this](https://github.com/ikawaha/kagome)? – naruto Nov 08 '18 at 16:01
  • https://github.com/ikawaha/kagome/blob/master/internal/dic/data/ alone has ~200 MiB of code and I suppose it's a GZip stream - that's everything but "not large". :D Great on/kun article - I'm really a novice at this. – AmigoJack Nov 08 '18 at 16:35
  • 200 MiB is indeed large, and I suppose it's machine-generated corpus data or something like that ([here](https://japanese.stackexchange.com/a/56645/5010) is why it's necessary). But your requirement is much simpler, and an MS-DOS machine in the 1980s could hold a decent hiragana-kanji dictionary in RAM :) – naruto Nov 08 '18 at 17:34
  • I'm voting to close this question as off-topic because it's about coding and not about the Japanese language – Flaw Nov 09 '18 at 18:46
    I oppose: there's no section suited _better_ than this one. It's also not about coding (unless you think Unicode is a language). – AmigoJack Nov 11 '18 at 20:54

1 Answer


EDIT: It turned out that the OP did not know that most kanji have multiple readings in Japanese. He is actually trying to get the readings of words (e.g., 生地 → K, 生卵 → N, 生命 → S, 生霊 → I). For this purpose, a word-based dictionary is needed. There are several open source morphological analyzers that can do the job and more (kuromoji, MeCab, Kagome, etc.). See also: Is it possible to algorithmically convert Japanese text to Romaji?
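As a toy illustration of why word-level lookup is needed, here is a longest-prefix match over a word dictionary in Python. The function name and the four-entry dictionary are mine (readings taken from the examples above); a real analyzer such as MeCab uses a full lexicon plus context, not just prefix matching:

```python
# Hypothetical toy dictionary; real ones have 100k+ entries.
READINGS = {
    "生地": "kiji",
    "生卵": "namatamago",
    "生命": "seimei",
    "生霊": "ikiryo",
}

def first_latin(word: str) -> str:
    # Try the longest prefix first, so a whole-word entry like 生卵
    # wins over any shorter (e.g. single-character) entry.
    for length in range(len(word), 0, -1):
        reading = READINGS.get(word[:length])
        if reading:
            return reading[0].upper()
    return "#"  # fall back to the catch-all non-Latin bucket
```

This is exactly where a character-based table fails: all four words start with the same character 生, yet they sort under K, N, S and I.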


Original Answer

CJK unified ideographs are sorted based on radicals, not readings, because it is impossible to determine the reading of those characters uniquely. The Unicode Consortium provides the Unihan Database, which can display representative readings of CJK ideographs written in the plain Latin alphabet. For example, here is the result for a very basic ideograph 日 (U+65E5; "day", "date", "sun", etc.):

[Screenshot: Unihan Database readings of 日]

The table says the "first roman letter of 日" is J in Cantonese, R in Mandarin, H-or-K-or-N-or-J in Japanese and I in Korean. To understand what's going on here, please keep in mind that the 'CJK Unified Ideographs' block has characters used in Chinese, Korean and Japanese jumbled together. Each character is read differently in different languages. Especially in Japanese, one character can have many readings depending on the context. To make matters worse, there are some characters whose readings are totally unknown. If you can accept all those limitations and still want the Latin readings anyway, go ahead and use the database according to your needs. If you're only interested in Japanese kanji, a reasonable method would be to pick the first letter of the kJapaneseOn field (or the first letter of kJapaneseKun if there is no kJapaneseOn).
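A minimal sketch of that last method, assuming the tab-separated format of Unihan_Readings.txt (`U+XXXX<TAB>property<TAB>value`, with `#` comment lines); the function name is mine:

```python
def load_first_letters(path="Unihan_Readings.txt"):
    """Map each kanji to the first Latin letter of its reading,
    preferring kJapaneseOn and falling back to kJapaneseKun."""
    on, kun = {}, {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue  # skip comments and blank lines
            cp, prop, value = line.rstrip("\n").split("\t", 2)
            ch = chr(int(cp[2:], 16))  # "U+65E5" -> 日
            if prop == "kJapaneseOn":
                on[ch] = value[0].upper()
            elif prop == "kJapaneseKun":
                kun[ch] = value[0].upper()
    # Merge so that an on'yomi entry overrides a kun'yomi one.
    return {**kun, **on}
```

For 日 this yields "J" (from kJapaneseOn "JITSU NICHI"), with all the caveats about multiple readings noted above.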

naruto
    This is a huge step forward for me - through *Unihan_Readings.txt* in http://www.unicode.org/Public/UCD/latest/ucd/Unihan.zip I now have something to start automating with, despite it covering all of CJK (and more). Great! – AmigoJack Nov 08 '18 at 13:25
  • Oh, I didn't know it was available for download! – naruto Nov 08 '18 at 13:44
  • I recently released the Unihan database equivalent data in JSON format, either as a single file (unihan-data-json.zip) or as a set of files for each property (unihan-data.zip); they are available for download [here](https://github.com/tonton-pixel/unicode-plus/releases). HTH... –  Nov 08 '18 at 14:39