UTF-32

UTF-32 (or UCS-4) stands for Unicode Transformation Format, 32 bits. It is an encoding of Unicode code points that uses exactly 32 bits (four bytes) per code point, making UTF-32 a fixed-length encoding, in contrast to all other Unicode transformation formats, which are variable-length encodings. The UTF-32 form of a code point is a direct representation of that code point's numerical value.[1]
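
As a brief illustration, here is a minimal C sketch (the code point U+1F600 and the choice of big-endian byte order are just examples): encoding a code point in UTF-32 amounts to writing out its scalar value as a 32-bit integer.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* U+1F600: its UTF-32 code unit is simply its scalar value, 0x0001F600. */
        uint32_t code_point = 0x1F600;

        /* UTF-32BE byte sequence: most significant byte first. */
        unsigned char be[4] = {
            (code_point >> 24) & 0xFF,
            (code_point >> 16) & 0xFF,
            (code_point >>  8) & 0xFF,
             code_point        & 0xFF
        };

        printf("UTF-32BE: %02X %02X %02X %02X\n", be[0], be[1], be[2], be[3]);
        /* Prints: UTF-32BE: 00 01 F6 00 */
        return 0;
    }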

The main advantage of UTF-32 over variable-length encodings is that the Unicode code points are directly indexable: examining the n-th code point is a constant-time operation.[2] In contrast, a variable-length encoding requires sequential access to find the n-th code point. This makes UTF-32 a simple replacement in code that uses integers to index characters out of strings, as was commonly done for ASCII.
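
The difference can be sketched in C as follows (the function names are illustrative, not from any particular library; the UTF-8 routine assumes well-formed input):

    #include <stddef.h>
    #include <stdint.h>

    /* UTF-32: the n-th code point is a plain array access, O(1). */
    uint32_t nth_utf32(const uint32_t *s, size_t n) {
        return s[n];
    }

    /* UTF-8: finding the n-th code point requires scanning from the start, O(n).
       Continuation bytes have the bit pattern 10xxxxxx. */
    const uint8_t *nth_utf8(const uint8_t *s, size_t n) {
        while (n > 0 && *s) {
            s++;                           /* step over the lead byte */
            while ((*s & 0xC0) == 0x80)    /* skip continuation bytes */
                s++;
            n--;
        }
        return s;                          /* now points at the n-th code point */
    }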

The main disadvantage of UTF-32 is that it is space-inefficient, using four bytes per code point. Characters outside the BMP are rare enough in most texts[citation needed] that they can effectively be ignored for sizing purposes, making UTF-32 up to twice the size of UTF-16 and up to four times the size of UTF-8.

History

The original ISO 10646 standard defines a 31-bit encoding form called UCS-4, in which each encoded character in the Universal Character Set (UCS) is represented by a 32-bit code value in the code space of integers between 0 and hexadecimal 7FFFFFFF.

Because Unicode code points are limited to 17 planes, all code points lie between 0 and 0x10FFFF; UTF-32 is the subset of UCS-4 that uses only this range. Since the Principles and Procedures document of JTC1/SC2/WG2 states that all future assignments of characters will be constrained to the BMP or the first 14 supplementary planes, UTF-32 will be able to represent all Unicode characters. Accordingly, UCS-4 and UTF-32 are now identical except that the UTF-32 standard has additional Unicode semantics.[clarification needed]

Analysis

Though a fixed number of bytes per code point appears convenient, it is not as useful as it seems. It makes truncation easier, but not significantly so compared to UTF-8 and UTF-16, both of which can search backwards for a truncation point by examining at most 2–4 code units.
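
For example, a truncation point in a UTF-8 buffer can be moved back to a code point boundary by skipping at most three continuation bytes, as in this C sketch (the function name is illustrative; well-formed input, with limit inside the buffer, is assumed):

    #include <stddef.h>
    #include <stdint.h>

    /* Back up a desired byte-truncation point past any continuation bytes
       (10xxxxxx) so the cut falls on a code point boundary.  For well-formed
       UTF-8 at most three bytes are examined. */
    size_t utf8_truncation_point(const uint8_t *s, size_t limit) {
        while (limit > 0 && (s[limit] & 0xC0) == 0x80)
            limit--;
        return limit;
    }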

It is extremely rare for code to need the N-th code point without first examining the code points 0 to N−1[citation needed]. This means an integer index that is incremented by 1 for each character can be replaced with an integer offset, measured in code units and incremented by the number of code units in each character as it is examined. This removes any speed advantage of working with UTF-32. The few cases where N is generated without looking at the earlier code points, such as some hashing and high-speed search algorithms, do not require that N be exact; like truncation, they can be made to work on UTF-8 or UTF-16 by adjusting the position to the nearest code point boundary, a fixed-time operation.
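
A C sketch of the offset-based approach (illustrative, not from any particular library): the loop advances a byte offset through UTF-8 and still visits every code point exactly once, just as a UTF-32 loop would.

    #include <stddef.h>
    #include <stdint.h>

    /* Count the code points of a UTF-8 string by advancing a code-unit offset,
       rather than indexing "the n-th character" directly.  The pass is linear,
       exactly as it would be for UTF-32. */
    size_t utf8_count_code_points(const uint8_t *s) {
        size_t count = 0, i = 0;
        while (s[i]) {
            if ((s[i] & 0xC0) != 0x80)   /* lead byte, i.e. not 10xxxxxx */
                count++;
            i++;                         /* offset in code units, not characters */
        }
        return count;
    }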

UTF-32 does not make calculating the displayed width of a string any easier, since even with a “fixed width” font there may be more than one code point per character position (combining marks) or more than one character position per code point (for example CJK ideographs). Editors that limit themselves to left-to-right languages and precomposed characters can take advantage of fixed-size code units, but such editors are unlikely to support non-BMP characters and thus can work equally well with the 16-bit code units of UTF-16.
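
For instance, the POSIX wcwidth() function reports zero columns for a combining mark and two columns for a typical CJK ideograph, regardless of how the text is stored (a minimal sketch; the exact values depend on the platform and the active locale):

    #define _XOPEN_SOURCE 700
    #include <locale.h>
    #include <stdio.h>
    #include <wchar.h>

    int main(void) {
        setlocale(LC_ALL, "");   /* results depend on the active locale */

        /* One code point is not one column: a combining acute accent occupies
           zero columns, a CJK ideograph occupies two. */
        printf("U+0301 combining acute accent: %d columns\n", wcwidth(L'\u0301'));
        printf("U+4E2D CJK ideograph:          %d columns\n", wcwidth(L'\u4E2D'));
        return 0;
    }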

Use

The main use of UTF-32 is in internal APIs where the data is single code points or glyphs, rather than strings of characters. For instance, in modern text rendering it is common for the final step to build a list of structures, each containing an x,y position, attributes, and a single UTF-32 code point identifying the glyph to draw. Non-Unicode information is often stored in the "unused" upper 11 bits of each 32-bit word.
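
A hypothetical record of this kind might look as follows in C (the field layout and flag names are invented for illustration and do not come from any real rendering API). Unicode needs only 21 bits per code point, which is what leaves the top 11 bits of a 32-bit word free:

    #include <stdint.h>

    typedef struct {
        float    x, y;          /* pen position                         */
        uint16_t attributes;    /* e.g. style or colour index           */
        uint32_t code_point;    /* bits 0-20: Unicode scalar value      */
                                /* bits 21-31: renderer-private flags   */
    } positioned_glyph;

    #define GLYPH_CP_MASK  0x001FFFFFu   /* low 21 bits hold the code point  */
    #define GLYPH_FLAG_LIG (1u << 21)    /* e.g. "part of a ligature" flag   */

    static inline uint32_t glyph_code_point(const positioned_glyph *g) {
        return g->code_point & GLYPH_CP_MASK;
    }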

On Unix systems, UTF-32 strings are sometimes used for storage, because the type wchar_t is defined there as 32 bits. Python versions up to 3.2 can be compiled to use UTF-32 instead of UTF-16 for strings; from version 3.3 onward, UTF-16 support is dropped and strings are stored in UTF-32, but with leading zero bytes optimized away where they are unnecessary.[3] Seed7 and Lasso encode all characters and strings with UTF-32. Use of UTF-32 strings on Windows (where wchar_t is 16 bits) is almost non-existent.
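
For example, on a typical Unix C library where wchar_t is 32 bits, converting a multibyte string with the standard mbstowcs() yields what is effectively a UTF-32 string (a minimal sketch assuming a UTF-8 locale; on Windows, where wchar_t is 16 bits, the result would be UTF-16 instead):

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <wchar.h>

    int main(void) {
        setlocale(LC_ALL, "");                  /* assume a UTF-8 locale    */

        const char *utf8 = "\xF0\x9F\x98\x80";  /* U+1F600 encoded in UTF-8 */
        wchar_t wide[8];

        /* With a 32-bit wchar_t the converted string is effectively UTF-32:
           one wide character per code point. */
        size_t n = mbstowcs(wide, utf8, 8);
        if (n != (size_t)-1)
            printf("code point U+%04lX, %zu wide character(s)\n",
                   (unsigned long)wide[0], n);
        return 0;
    }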

Non-use in HTML5

HTML5 states that "authors should not use UTF-32, as the encoding detection algorithms described in this specification intentionally do not distinguish it from UTF-16."[4]

See also

References

External links