This article falls within the scope of WikiProject Writing systems, a WikiProject interested in improving the encyclopaedic coverage and content of articles relating to writing systems on Wikipedia. If you would like to help out, you are welcome to drop by the project page and/or leave a query at the project's talk page.
This article is within the scope of WikiProject Japan, a collaborative effort to improve the coverage of Japan-related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the project, participate in relevant discussions, and see lists of open tasks.
This article has been given a rating which conflicts with the project-independent quality rating in the banner shell. Please resolve this conflict if possible.
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article is within the scope of WikiProject Software, a collaborative effort to improve the coverage of software on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Direct kana input is on the verge of extinction, although it is still widely supported.
This statement is either self-contradictory or ambiguous. By "extinction", is it meant that soon no one will want to use kana input? Or is it meant that soon software will not support it?
In relation to the Japanese language and computers, unique adaptation issues arise.
Most problems are not unique to Japanese but are common to other DBCS languages, although the specific solutions are unique to Japanese.
Romanization has little to do with the problem; it is just one input method.
There are several standard methods to
encode characters for use on a computer, including
JIS,
SJIS,
EUC, and
Unicode.
Strictly speaking, Unicode is not a character encoding; it is a coded character set.
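To illustrate the distinction, here is a minimal Python sketch (illustrative only): the same coded character, one abstract code point, is serialized into different byte sequences by each encoding.

<syntaxhighlight lang="python">
# One coded character, several encodings: the code point is abstract,
# while each encoding maps it to its own byte sequence.
ch = "漢"                              # U+6F22

print(f"U+{ord(ch):04X}")              # the code point itself: U+6F22
print(ch.encode("utf-8").hex())        # e6bca2 (one Unicode encoding form)
print(ch.encode("utf-16-be").hex())    # 6f22   (another Unicode encoding form)
print(ch.encode("shift_jis").hex())    # the Shift JIS byte sequence
print(ch.encode("euc_jp").hex())       # the EUC-JP byte sequence
</syntaxhighlight>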
While mapping the set of
kana is a simple matter, kanji has proven more difficult. Because the Japanese kanji differ slightly or significantly from the corresponding characters in Chinese, it has proven both challenging and controversial to construct an encoding system which encompasses both Chinese and Japanese characters equitably.
Whether it corresponds to a Chinese character is not the problem (unless it is in relation to Unicode).
Unicode has been criticized in Japan (as well as in
China and
Korea) because it assigns the same code to similar characters from various East Asian languages, even though the character may vary in terms of form and
pronunciation[1].
Pronunciation has nothing to do with the problem.
Unicode is also criticized for failing to allow for older and alternate forms of kanji.
This has nothing to do with the problem, since Unicode contains the entire JIS character set. The problem is that Unicode uses different criteria for its coding rules.
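For what it's worth, a quick Python check (illustrative) shows that a modern form and its corresponding older form are separate code points, consistent with the point that the JIS repertoire itself is covered:

<syntaxhighlight lang="python">
# The shinjitai (new) and kyujitai (old) forms of "country" each have
# their own code point, so neither form is missing from Unicode.
for ch in ("国", "國"):
    print(ch, f"U+{ord(ch):04X}")      # U+56FD and U+570B
</syntaxhighlight>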
Though Japanese computer users have almost no trouble handling contemporary text, ancient Japanese language research has been considerably handicapped by this limitation.
This problem has led to the continued wide use of many encoding standards, despite increased Unicode use in other countries. For example, most Japanese
e-mail and
web pages are encoded in
SJIS or
JIS rather than Unicode. This has led to the problem of mojibake (misconverted characters) and much unreadable Japanese text on computers.
This paragraph doesn't make sense, since it has nothing to do with the ancient Japanese language but rather with the problem of supporting legacy data.
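As a concrete illustration of the mojibake mentioned in the quoted paragraph, a minimal Python sketch (standard codecs only, nothing article-specific):

<syntaxhighlight lang="python">
# Mojibake in miniature: bytes produced under one encoding, decoded as another.
text = "文字化け"                      # "mojibake"

sjis = text.encode("shift_jis")
utf8 = text.encode("utf-8")
print(sjis.hex(), utf8.hex())          # different byte sequences for the same text

# Decoding the Shift JIS bytes under the wrong assumption (as a mislabelled
# web page or mail client might) yields unreadable garbage.
print(sjis.decode("cp1252", errors="replace"))
</syntaxhighlight>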
Japanese text input is a complicated matter not only because of the
encoding problems discussed above
Text input has little to do with encoding; it is a matter of selecting a character.
but also because it is practically impossible to type all of the characters used in the Japanese writing system with the finite set of keys on a keyboard. On modern computers, Japanese is input on a standard keyboard
What does "standard" in "standard keyboard" mean? Perhaps a standard roman-alphabet keyboard? The mobile phone keypad is another way of input, by the way.
Agree that this is poorly worded, but I'm sure the point is "on a conventional keyboard with only about 100 keys."
adamrice 20:35, 15 Jul 2004 (UTC)
combined with an
Input Method Editor which allows the user to choose the correct characters from a list. There is also another method, known as
Oyayubi shift, developed by
Fujitsu, which allows direct
kana input, but this method is now obsolete.
I don't understand why Oyayubi shift has to be mentioned here, while kana input is not mentioned at all.
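To make the "choose the correct characters from a list" step concrete, here is a toy Python sketch of the idea (not how any real IME works; the dictionary entry is made up for illustration):

<syntaxhighlight lang="python">
# Toy illustration of conversion: a kana reading maps to several homophonous
# kanji candidates, and the user picks one from the list.
candidates = {
    "かんじ": ["漢字", "感じ", "幹事", "監事"],   # all read "kanji"
}

def convert(reading: str, choice: int = 1) -> str:
    """Return the choice-th candidate (1-based) for a kana reading."""
    options = candidates.get(reading, [reading])
    for i, option in enumerate(options, 1):
        print(i, option)               # the list the user would choose from
    return options[choice - 1]

print(convert("かんじ", choice=1))     # 漢字
</syntaxhighlight>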
Because a number of often-used characters are omitted in a standard
character set such as
JIS or even
Unicode, gaiji (外字 external character) is sometimes used to supplement the character set.
Is gaiji really used along with Unicode? I'm curious, since I'm not sure about this.
Unicode allows for a "private use" area that is analogous to the concept of gaiji. As I understand it, JIS proponents wanted to shove the entire JIS character space into the private-use area, but this obviously didn't happen.
adamrice 20:35, 15 Jul 2004 (UTC)
However, with the spread of
computer networking and the
Internet, gaiji is no longer used as frequently. As a result, omitted characters are written with similar or simpler characters in their place.
omitted characters are written with similar or simpler characters in their place. Is this correct? Shouldn't it be "As a result, those characters need to be replaced with similar or simpler characters"?
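On the gaiji/private-use point above, a small Python sketch showing how a Private Use Area character can be detected (the PUA range is standard Unicode; the rest is illustrative):

<syntaxhighlight lang="python">
import unicodedata

# U+E000..U+F8FF is the BMP Private Use Area, the part of Unicode
# conventionally used for gaiji-style vendor- or user-defined characters.
gaiji = chr(0xE000)                    # an arbitrary private-use code point

def is_private_use(ch: str) -> bool:
    return unicodedata.category(ch) == "Co"   # "Co" = Other, private use

print(f"U+{ord(gaiji):04X}", is_private_use(gaiji))   # U+E000 True
print("字", is_private_use("字"))                      # 字 False
</syntaxhighlight>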
The hastingsresearch unicode page
[2] misrepresents the issues a lot. There's a rebuttal
[3]. The article should be adjusted to remove the anti-Unicode bias which is wholly without basis. --
130.233.18.89 03:41, 13 Mar 2004 (UTC)
The rebuttal comes from a member of the Unicode standard committee. How can we expect fairness from such a person? Anyway, this page might help us.
Also, I am going to merge that confusion and controversy into the Unicode article. It doesn't say much about the controversy over Unicode. I believe the Unicode issues are political and cultural matters, not technical ones. As a programmer, I don't think Unicode is worse than Shift-JIS. But no encoding scheme is good by nature anyway. --
Taku 04:47, Mar 13, 2004 (UTC)
Ignore the above; the hastingsresearch page and rebuttal are referenced in
Han unification, where they are clearly more appropriate as well. Anyway, I modified the article a little based on input from a native speaker, for example on the scarcity of kana input. I also put gaiji back because it's relevant to JIS, and frankly the paragraph made no sense previously. Indeed, gaiji is not used as frequently with Unicode because Unicode contains over 70,000 Han characters, whereas JIS only contains approximately 8,000. --
130.233.18.89 00:39, 15 Mar 2004 (UTC)
I would love to see statistical data on keyboard choice.
The article text mentions Kunrei-shiki and Hepburn style romanization. In fact, the romaji input method is not equivalent to either; it is a superset of both and adds a little more.
agreed. I am going to edit.
adamrice 20:35, 15 Jul 2004 (UTC)
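As a small illustration of that superset point, a hedged Python sketch (the mappings shown are typical of common IMEs, not a definitive table):

<syntaxhighlight lang="python">
# Typical romaji-to-kana tables accept both Kunrei-shiki and Hepburn
# spellings, plus input-only conveniences belonging to neither system.
romaji_to_kana = {
    "si": "し", "shi": "し",     # Kunrei-shiki and Hepburn, same kana
    "ti": "ち", "chi": "ち",
    "tu": "つ", "tsu": "つ",
    "xa": "ぁ", "ltu": "っ",     # input-only sequences for small kana
    "nn": "ん",                  # explicit syllabic n
}

print(romaji_to_kana["si"] == romaji_to_kana["shi"])   # True
</syntaxhighlight>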
Speed
Perhaps a frivolous inquiry, but entering Japanese seems cumbersome compared to English. Can someone familiar with both forms tell me which is faster to input? I'm guessing that Japanese takes much longer to input than the equivalent English; does this have effects on society, e.g. are Japanese school children's assignments hand-written rather than typed on a computer? --
Commander Keane 11:44, 10 Feb 2005 (UTC)
I have written some
kana entry programs for various platforms. Maybe I should release these as GPL software. We're not all able to use Microsoft's Japanese
IME.
-- Uncle Ed(talk) June 29, 2005 22:48 (UTC)
Japanese input is slower than English (or any alphabetic writing system), because it is a two-step (or in the best case, a 1.5-step) process, but it is not as much slower as you might think. And taking things a step further, I've seen some people in Japan sending e-mail from phones that apparently have very sophisticated predictive text input that probably makes it faster than T9 input. Japan was unquestionably late to the bandwagon of typing and word-processing, but that is more because the base level of technology required to even get started typing Japanese productively is higher. We're well past that point now.
adamrice 30 June 2005 16:32 (UTC)
History and Development
I'd like to see a section on the history of Japanese computing. I've seen a (copyrighted, sadly) photo of an early kanji-capable keyboard, which had to be used with a stylus because there were so many keys.
This article may be helpful. —
Gwalla |
Talk 20:57, 3 July 2007 (UTC)
FEP
The link to FEP goes to an internal Wikipedia disambiguation page, and the meanings listed there don't include one that matches what FEP should mean.
Should the reference to that page be removed?
In this context, I guess
FEP has to mean
front end processor. In 1980s Japan, typical early Japanese input methods were a kind of front-end processor (FEP), so they were called Japanese-language FEPs (or simply FEPs). --
Rija 17:24, 5 July 2007 (UTC)
Unsuitable Bias?
The following line can be found in
this revision of this article (latest revision as of writing, link provided for reference): There has been resistance against Unicode in Japan since it is said to be an American invention not Japanese.
Are there any reliable sources to support such an argument? And even if there were reliable sources, is such an argument even necessary in this article? Coming from a Japanese family living in the US, I can see both sides of the issue and I feel such a statement detracts from the educational and, most importantly, neutral nature of an encyclopedia.
King Arthur6687 (
talk)
23:19, 5 October 2009 (UTC)
Unicode has left-to-right and right-to-left control characters. I'm pretty sure it does not yet offer a top-to-bottom control character, but maybe some coverage of this would be useful for completeness. -- I have browsed 5 pages so far on Wikipedia searching for coverage of the topic, and none of them have been sufficient, which to me shows that at least somewhere there should be more complete coverage of the topic. -- robbiemuffin (talk) 12:35, 28 March 2011 (UTC)
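For reference, a short Python listing of the directional controls being referred to (these are standard Unicode characters; the absence of a top-to-bottom counterpart is the point made above, vertical layout being left to higher-level protocols such as CSS writing-mode):

<syntaxhighlight lang="python">
import unicodedata

# The bidirectional control characters Unicode provides; there is no
# analogous control character for top-to-bottom (vertical) layout.
for cp in (0x200E, 0x200F, 0x202A, 0x202B, 0x202C):
    print(f"U+{cp:04X}", unicodedata.name(chr(cp)))
</syntaxhighlight>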
in order to write English
This is true if writing in modern English for native English readers.
As soon as you add the need for IPA-equivalent phonetics, is 256 still sufficient?
The assertion seems to trade on an ambiguity: it holds on the printed page of a typical published novel which itself contains no mathematics and no foreign names ... which is NOT the case for internet documents.
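To make the point concrete, a minimal Python check (the example transcription is arbitrary): plain modern English fits in a 256-value single-byte repertoire, but IPA symbols do not.

<syntaxhighlight lang="python">
# Plain English round-trips through a single-byte encoding; IPA does not.
plain = "color / colour"
ipa = "ˈkʌlə(r)"                       # an IPA transcription, for illustration

print(plain.encode("latin-1"))         # fine: every character fits in one byte

try:
    ipa.encode("latin-1")
except UnicodeEncodeError as err:
    print("does not fit in 256 characters:", err)
</syntaxhighlight>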