UTF-9 and UTF-18

UTF-9 and UTF-18 (9- and 18-bit Unicode Transformation Format, respectively) were two April Fools' Day RFC joke specifications for encoding Unicode on systems where the nonet (nine-bit group) is a better fit for the native word size than the octet, such as the 36-bit PDP-10. Both encodings were specified in RFC 4042, written by Mark Crispin (the inventor of IMAP) and released on April 1, 2005. The encodings suffer from a number of flaws, and their author has confirmed that they were intended as a joke (Mark Crispin's Web Page, http://staff.washington.edu/mrc/, retrieved 2006-09-17, which points out the April Fools' Day date of two of his RFCs).

However, unlike some of the "specifications" given in other April 1st RFCs, they are technically possible to implement, and they have in fact been implemented in PDP-10 assembly language. They are not endorsed by the Unicode Consortium.

Technical details

UTF-9 uses a system of putting an octet in the low 8 bits of each nonet and using the high bit to indicate continuation. This means that ASCII and Latin-1 characters take one nonet each, the rest of the BMP characters take two nonets each, and non-BMP code points take three. Code points that require multiple nonets are stored starting with the most significant non-zero octet.
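A minimal Python sketch of that rule (an illustration only, not a conforming implementation; the function name utf9_encode is ours) splits the code point into octets starting with the most significant non-zero octet and sets the high bit on every nonet except the last:

```python
def utf9_encode(code_point: int) -> list[int]:
    """Encode one code point as a list of 9-bit nonet values."""
    # Split the code point into octets, most significant non-zero octet first.
    octets = [code_point & 0xFF]
    code_point >>= 8
    while code_point:
        octets.insert(0, code_point & 0xFF)
        code_point >>= 8
    # The ninth bit (0x100) marks continuation on all but the last nonet.
    return [o | 0x100 for o in octets[:-1]] + [octets[-1]]

for cp in (0x41, 0xFF, 0x100, 0x1D11E):
    print(f"U+{cp:04X} -> {[f'0x{n:03X}' for n in utf9_encode(cp)]}")
# U+0041  -> ['0x041']                      one nonet (ASCII/Latin-1)
# U+00FF  -> ['0x0FF']                      one nonet
# U+0100  -> ['0x101', '0x000']             two nonets (rest of the BMP)
# U+1D11E -> ['0x101', '0x1D1', '0x01E']    three nonets (non-BMP)
```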

UTF-18 is a fixed-length encoding that uses an 18-bit integer per code point. This allows representation of four planes, which are mapped to the four planes currently used by Unicode (planes 0-2 and 14). This means that the two private-use planes (15 and 16) and the currently unused planes (3-13) are not supported. The UTF-18 specification does not say why surrogates were not allowed for these code points, though when discussing UTF-16 earlier the RFC remarks that "This transformation format requires complex surrogates to represent code points outside the BMP"; having complained about their complexity, it would have looked hypocritical to use surrogates in the new format. It is unlikely that planes 3-13 will be assigned by Unicode in the foreseeable future. Thus UTF-18, like UCS-2 and UCS-4, guarantees a fixed width for all characters (although not for all glyphs).
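The mapping can be sketched as follows, assuming (from the description above, not quoted from the RFC) that planes 0-2 are carried unchanged and plane 14 is shifted down into the remaining 2^16 block; the function names are hypothetical:

```python
def utf18_encode(code_point: int) -> int:
    """Map a Unicode code point to an 18-bit UTF-18 value."""
    if 0x00000 <= code_point <= 0x2FFFF:   # planes 0-2: stored unchanged
        return code_point
    if 0xE0000 <= code_point <= 0xEFFFF:   # plane 14 -> 0x30000..0x3FFFF
        return code_point - 0xB0000
    raise ValueError(f"U+{code_point:04X} is in a plane UTF-18 cannot represent")

def utf18_decode(value: int) -> int:
    """Invert the mapping above."""
    if not 0 <= value <= 0x3FFFF:
        raise ValueError("a UTF-18 value occupies exactly 18 bits")
    return value + 0xB0000 if value >= 0x30000 else value

assert utf18_decode(utf18_encode(0xE0001)) == 0xE0001   # a plane-14 tag character
assert utf18_encode(0x2F800) == 0x2F800                 # plane 2 is stored as-is
```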

Problems

Both specifications suffer from the problem that standard communication protocols are built around octets rather than nonets, so it would not be possible to exchange text in these formats without further encoding or specially designed protocols. This alone would probably be sufficient reason to consider their use impractical in most cases. It would, however, be less of a problem over pure bit-stream communication protocols.

Furthermore, both UTF-9 and UTF-18 have specific problems of their own. UTF-9 requires special care when searching, because the encoding of a shorter sequence can appear at the end of the encoding of a longer one; it is therefore necessary to scan backwards to find the start of a character, as the sketch below illustrates. UTF-18 cannot represent all Unicode code points (although, unlike UCS-2, it can represent all the planes that currently have non-private-use code point assignments), making it a bad choice for a system that may need to support new languages (or rare CJK ideographs added after the SIP fills up) in the future.
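For example, the single nonet 0x041 encodes U+0041 ("A"), but it is also the final nonet of [0x101, 0x041], the encoding of U+0141, so a naive forward search reports a false hit. A boundary-aware search (a sketch only; utf9_find is a name we invented) must check that the preceding nonet does not carry the continuation bit:

```python
def utf9_find(haystack: list[int], needle: list[int]) -> int:
    """Return the index of `needle` in a UTF-9 nonet stream, rejecting
    matches that begin in the middle of a multi-nonet character."""
    n = len(needle)
    for i in range(len(haystack) - n + 1):
        if haystack[i:i + n] == needle:
            # A character starts at i only if the preceding nonet, if any,
            # does not have its continuation bit (0x100) set.
            if i == 0 or not (haystack[i - 1] & 0x100):
                return i
    return -1

stream = [0x101, 0x041, 0x041]      # U+0141 followed by U+0041
print(utf9_find(stream, [0x041]))   # 2 -- the naive match at offset 1 is rejected
```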

See also

* Comparison of Unicode encodings
* UTF-8
* IP over Avian Carriers

External links

* RFC 4042: UTF-9 and UTF-18 Efficient Transformation Formats of Unicode
