decimal64 floating-point format


In computing, decimal64 is a decimal floating-point computer numbering format that occupies 8 bytes (64 bits) in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations.

Decimal64 supports 16 decimal digits of significand and an exponent range of −383 to +384, i.e. ±0.000000000000000×10^−383 to ±9.999999999999999×10^384. (Equivalently, ±0000000000000000×10^−398 to ±9999999999999999×10^369.) In contrast, the corresponding binary format, which is the most commonly used type, has an approximate range of ±0.000000000000001×10^−308 to ±1.797693134862315×10^308. Because the significand is not normalized, most values with fewer than 16 significant digits have multiple possible representations; 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Zero has 768 possible representations (1536 if both signed zeros are counted).
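
As an illustration only (not any library's API), the following C sketch enumerates these alternative representations for the value 100, starting from 1×10^2 and trading a factor of ten in the significand for one step of the exponent until the 16-digit coefficient limit or the minimum exponent of −398 is reached.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Alternative decimal64 representations of the value 100:
     * coefficient x 10^exponent, with at most 16 coefficient digits
     * and an exponent no smaller than -398. */
    uint64_t coeff = 1;
    int      exponent = 2;
    while (coeff <= UINT64_C(9999999999999999) && exponent >= -398) {
        printf("%llu x 10^%d\n", (unsigned long long)coeff, exponent);
        if (coeff > UINT64_C(9999999999999999) / 10)
            break;                /* one more digit would exceed 16 digits */
        coeff *= 10;
        exponent -= 1;
    }
    return 0;                     /* prints 16 lines, 1 x 10^2 ... 1000000000000000 x 10^-13 */
}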

Decimal64 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754[1] and in ISO/IEC/IEEE 60559:2011.[2]

Representation of decimal64 values

Sign    Combination    Exponent continuation    Coefficient continuation
1 bit   5 bits         8 bits                   50 bits
s       mmmmm          xxxxxxxx                 cccccccccccccccccccccccccccccccccccccccccccccccccc
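
Read as a 64-bit unsigned integer with the sign in the most significant bit, these four fields can be picked out with shifts and masks. The following C helpers are a minimal sketch; the names are illustrative assumptions, not taken from any library.

#include <stdint.h>

/* Field extraction for a decimal64 bit pattern held in a uint64_t:
 * bit 63 = sign, bits 62-58 = combination field, bits 57-50 = exponent
 * continuation, bits 49-0 = coefficient continuation. */
static inline unsigned dec64_sign(uint64_t x)        { return (unsigned)(x >> 63); }
static inline unsigned dec64_combination(uint64_t x) { return (unsigned)((x >> 58) & 0x1F); }
static inline unsigned dec64_exp_cont(uint64_t x)    { return (unsigned)((x >> 50) & 0xFF); }
static inline uint64_t dec64_coeff_cont(uint64_t x)  { return x & ((UINT64_C(1) << 50) - 1); }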

IEEE 754 allows two alternative representation methods for decimal64 values: one encodes the significand as a binary integer (the binary integer significand field described below), the other encodes it as densely packed decimal (the DPD significand field described below). The standard does not specify how to signify which representation is used, for instance in a situation where decimal64 values are communicated between systems.

Both alternatives provide exactly the same range of representable numbers: 16 digits of significand and 3 × 2^8 = 768 possible decimal exponent values. (All the possible decimal exponent values storable in a binary64 number are representable in decimal64, and most bits of the significand of a binary64 are stored keeping roughly the same number of decimal digits in the significand.)

In both cases, the most significant 4 bits of the significand (which actually only have 10 possible values) are combined with the most significant 2 bits of the exponent (3 possible values) to use 30 of the 32 possible values of a 5-bit field. The remaining combinations encode infinities and NaNs.

Combination field   Exponent MSBs   Significand MSBs   Other
00mmm               00              0mmm
01mmm               01              0mmm
10mmm               10              0mmm
1100m               00              100m
1101m               01              100m
1110m               10              100m
11110                                                  ±Infinity
11111                                                  NaN. Sign bit ignored. First bit of the exponent continuation field determines if the NaN is signaling.

In the cases of Infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value.
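
A short C sketch of this classification and of the byte-fill technique follows; the enum and function names are illustrative assumptions, not part of any standard library.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Classify a decimal64 bit pattern by its 5-bit combination field
 * (bits 62-58): 11110 is an infinity, 11111 a NaN; for NaNs the first
 * bit of the exponent continuation (bit 57) selects signaling. */
enum d64_class { D64_FINITE, D64_INF, D64_QNAN, D64_SNAN };

static enum d64_class d64_classify(uint64_t x) {
    unsigned comb = (unsigned)((x >> 58) & 0x1F);
    if (comb == 0x1E) return D64_INF;
    if (comb == 0x1F) return ((x >> 57) & 1) ? D64_SNAN : D64_QNAN;
    return D64_FINITE;
}

int main(void) {
    /* Every byte of 0x7C is 0111 1100: sign 0, combination 11111,
     * quiet bit 0 - so repeating that single byte yields quiet NaNs
     * (the remaining bits are ignored).  0x78 would give +Infinity. */
    uint64_t buf[4];
    memset(buf, 0x7C, sizeof buf);
    printf("%d\n", d64_classify(buf[0]));   /* prints 2 = D64_QNAN */
    return 0;
}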

Binary integer significand field

This format uses a binary significand from 0 to 10^16 − 1 = 9999999999999999 = 2386F26FC0FFFF₁₆ = 100011100001101111001001101111110000001111111111111111₂.

The encoding, completely stored on 64 bits, can represent binary significands up to 10 × 2^50 − 1 = 11258999068426239 = 27FFFFFFFFFFFF₁₆, but values larger than 10^16 − 1 are illegal (and the standard requires implementations to treat them as 0, if encountered on input).

As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000₂ to 0111₂), or higher (1000₂ or 1001₂).

If the 2 bits after the sign bit are "00", "01", or "10", then the exponent field consists of the 10 bits following the sign bit, and the significand is the remaining 53 bits, with an implicit leading 0 bit:

s 00eeeeeeee   (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 01eeeeeeee   (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 10eeeeeeee   (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt

This includes subnormal numbers where the leading significand digit is 0.

If the 2 bits after the sign bit are "11", then the 10-bit exponent field is shifted 2 bits to the right (after both the sign bit and the "11" bits thereafter), and the represented significand is in the remaining 51 bits. In this case there is an implicit (that is, not stored) leading 3-bit sequence "100" for the most significant bits of the true significand (in the remaining lower bits ttt...ttt of the significand, not all possible values are used).

s 1100eeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 1101eeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 1110eeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt

The 2-bit sequence "11" after the sign bit indicates that there is an implicit 3-bit prefix "100" to the significand; compare this with the implicit 1-bit prefix "1" in the significand of normal values in the binary formats. Note also that the 2-bit sequences "00", "01", or "10" after the sign bit are part of the exponent field.

Note that the leading bits of the significand field do not encode the most significant decimal digit; they are simply part of a larger pure-binary number. For example, a significand of 8000000000000000 is encoded as binary 011100011010111111010100100110001101000000000000000000₂, with the leading 4 bits encoding 7; the first significand that requires a 54th bit is 2^53 = 9007199254740992. The highest valid significand is 9999999999999999, whose binary encoding is (100)011100001101111001001101111110000001111111111111111₂ (with the 3 most significant bits (100) not stored but implicit, as shown above; the next bit is always zero in valid encodings).

In the above cases, the value represented is

(−1)^sign × 10^(exponent − 398) × significand
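
For example, a bit pattern with the sign bit clear, a stored exponent field of 398 and a binary significand of 25 represents (−1)^0 × 10^(398 − 398) × 25 = 25; raising the exponent field to 399 gives 250, and lowering it to 397 gives 2.5.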

If the four bits after the sign bit are "1111" then the value is an infinity or a NaN, as described above:

s 11110 xx...x    ±infinity
s 11111 0x...x    a quiet NaN
s 11111 1x...x    a signalling NaN
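
Putting the above together, a decoder for the binary integer significand form might look like the following C sketch. It assumes the encoding is supplied as a uint64_t; the type and function names are illustrative, not taken from any library.

#include <stdint.h>
#include <stdbool.h>

/* Decoded pieces of a decimal64 value in the binary integer
 * significand encoding. */
typedef struct {
    bool     negative;
    bool     is_inf, is_nan;
    int      exponent;      /* unbiased: stored exponent field minus 398 */
    uint64_t significand;   /* integer in 0 ... 9999999999999999 */
} dec64_parts;

static dec64_parts dec64_decode_bid(uint64_t x) {
    dec64_parts p = {0};
    p.negative = (x >> 63) & 1;

    unsigned comb = (unsigned)((x >> 58) & 0x1F);   /* 5-bit combination field */
    if (comb == 0x1E) { p.is_inf = true; return p; }
    if (comb == 0x1F) { p.is_nan = true; return p; }

    unsigned expfield;
    uint64_t sig;
    if (((x >> 61) & 0x3) != 0x3) {
        /* "00", "01" or "10" after the sign: 10-bit exponent in bits 62-53,
         * 53-bit significand in bits 52-0 with an implicit leading 0. */
        expfield = (unsigned)((x >> 53) & 0x3FF);
        sig      = x & ((UINT64_C(1) << 53) - 1);
    } else {
        /* "11" after the sign: 10-bit exponent in bits 60-51, 51-bit
         * significand in bits 50-0 with an implicit leading "100". */
        expfield = (unsigned)((x >> 51) & 0x3FF);
        sig      = (UINT64_C(4) << 51) | (x & ((UINT64_C(1) << 51) - 1));
    }
    if (sig > UINT64_C(9999999999999999))
        sig = 0;             /* out-of-range significands are read as zero */
    p.exponent    = (int)expfield - 398;
    p.significand = sig;
    return p;
}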

Densely packed decimal significand field

In this version, the significand is stored as a series of decimal digits. The leading digit is between 0 and 9 (3 or 4 binary bits), and the rest of the significand uses the densely packed decimal (DPD) encoding.

Unlike the binary integer significand version, where the exponent changed position and came before the significand, this encoding combines the leading 2 bits of the exponent and the leading digit (3 or 4 bits) of the significand into the five bits that follow the sign bit.

The eight bits after that are the exponent continuation field, providing the less-significant bits of the exponent.

The last 50 bits are the significand continuation field, consisting of five 10-bit declets.[3] Each declet encodes three decimal digits[3] using the DPD encoding.
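
For instance, the five declets can be pulled out of the low 50 bits of the encoding with a few shifts; a minimal C sketch (the helper name is illustrative):

#include <stdint.h>

/* Split the 50-bit significand continuation (bits 49-0) of a decimal64
 * bit pattern into its five 10-bit declets, most significant first. */
static void dec64_declets(uint64_t x, unsigned declet[5]) {
    for (int i = 0; i < 5; i++)
        declet[i] = (unsigned)((x >> (10 * (4 - i))) & 0x3FF);
}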

If the first two bits after the sign bit are "00", "01", or "10", then those are the leading bits of the exponent, and the three bits "TTT" after that are interpreted as the leading decimal digit (0 to 7):

s 00 TTT (00)eeeeeeee (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 01 TTT (01)eeeeeeee (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 10 TTT (10)eeeeeeee (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]

If the first two bits after the sign bit are "11", then the next two bits are the leading bits of the exponent, and the following bit "T" is prefixed with the implicit bits "100" to form the leading decimal digit (8 or 9):

s 1100 T (00)eeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 1101 T (01)eeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 1110 T (10)eeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]

The remaining two combinations (11 110 and 11 111) of the 5-bit field after the sign bit are used to represent ±infinity and NaNs, respectively.
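
These rules can be folded into a short decoder for the leading digit and the 10-bit biased exponent. The following C sketch is illustrative only; it assumes the encoding is supplied as a uint64_t.

#include <stdint.h>

/* Decode the DPD combination field (bits 62-58) together with the 8-bit
 * exponent continuation (bits 57-50).  Returns 0 and fills in the leading
 * decimal digit and the biased exponent, or -1 for infinity/NaN patterns. */
static int dec64_dpd_lead(uint64_t x, unsigned *lead_digit, unsigned *biased_exp) {
    unsigned comb     = (unsigned)((x >> 58) & 0x1F);
    unsigned exp_cont = (unsigned)((x >> 50) & 0xFF);
    unsigned exp_msb, digit;

    if ((comb >> 3) != 0x3) {            /* 00TTT, 01TTT, 10TTT */
        exp_msb = comb >> 3;
        digit   = comb & 0x7;            /* leading digit 0-7 */
    } else if ((comb >> 1) != 0xF) {     /* 1100T, 1101T, 1110T */
        exp_msb = (comb >> 1) & 0x3;
        digit   = 8 + (comb & 0x1);      /* leading digit 8 or 9 */
    } else {
        return -1;                       /* 11110 = infinity, 11111 = NaN */
    }
    *lead_digit = digit;
    *biased_exp = (exp_msb << 8) | exp_cont;   /* subtract 398 for the true exponent */
    return 0;
}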

The DPD/3BCD transcoding for the declets is given by the following table. b9...b0 are the bits of the DPD, and d2...d0 are the three BCD digits.

Densely packed decimal encoding rules[4]
DPD encoded value                Decimal digits
b9 b8 b7 b6 b5 b4 b3 b2 b1 b0    d2   d1   d0      Values encoded       Description
a  b  c  d  e  f  0  g  h  i     0abc 0def 0ghi    (0–7) (0–7) (0–7)    Three small digits
a  b  c  d  e  f  1  0  0  i     0abc 0def 100i    (0–7) (0–7) (8–9)    Two small digits, one large
a  b  c  g  h  f  1  0  1  i     0abc 100f 0ghi    (0–7) (8–9) (0–7)
g  h  c  d  e  f  1  1  0  i     100c 0def 0ghi    (8–9) (0–7) (0–7)
g  h  c  0  0  f  1  1  1  i     100c 100f 0ghi    (8–9) (8–9) (0–7)    One small digit, two large
d  e  c  0  1  f  1  1  1  i     100c 0def 100i    (8–9) (0–7) (8–9)
a  b  c  1  0  f  1  1  1  i     0abc 100f 100i    (0–7) (8–9) (8–9)
x  x  c  1  1  f  1  1  1  i     100c 100f 100i    (8–9) (8–9) (8–9)    Three large digits

The 8 decimal values whose digits are all 8s or 9s have four codings each. The bits marked x in the table above are ignored on input, but will always be 0 in computed results. (The 8 × 3 = 24 non-standard encodings fill in the gap between 10^3 = 1000 and 2^10 = 1024.)
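
The table translates directly into code. The following C sketch (the function name is an illustrative assumption) decodes one 10-bit declet into its three digits; the non-standard codings come out with the same digit values, since the bits marked x are simply ignored.

/* Decode one 10-bit declet (b9..b0) into three decimal digits
 * d[2], d[1], d[0], following the DPD table above. */
static void dpd_decode_declet(unsigned declet, unsigned d[3]) {
    unsigned b[10];
    for (int i = 0; i < 10; i++)
        b[i] = (declet >> i) & 1;            /* b[0] = b0 ... b[9] = b9 */

    if (b[3] == 0) {                         /* row: a b c d e f 0 g h i */
        d[2] = (b[9] << 2) | (b[8] << 1) | b[7];
        d[1] = (b[6] << 2) | (b[5] << 1) | b[4];
        d[0] = (b[2] << 2) | (b[1] << 1) | b[0];
        return;
    }
    switch ((b[2] << 1) | b[1]) {            /* b3 == 1: select on b2 b1 */
    case 0:                                  /* b3 b2 b1 = 1 0 0 */
        d[2] = (b[9] << 2) | (b[8] << 1) | b[7];
        d[1] = (b[6] << 2) | (b[5] << 1) | b[4];
        d[0] = 8 | b[0];
        break;
    case 1:                                  /* b3 b2 b1 = 1 0 1 */
        d[2] = (b[9] << 2) | (b[8] << 1) | b[7];
        d[1] = 8 | b[4];
        d[0] = (b[6] << 2) | (b[5] << 1) | b[0];
        break;
    case 2:                                  /* b3 b2 b1 = 1 1 0 */
        d[2] = 8 | b[7];
        d[1] = (b[6] << 2) | (b[5] << 1) | b[4];
        d[0] = (b[9] << 2) | (b[8] << 1) | b[0];
        break;
    default:                                 /* b3 b2 b1 = 1 1 1: select on b6 b5 */
        switch ((b[6] << 1) | b[5]) {
        case 0:
            d[2] = 8 | b[7];
            d[1] = 8 | b[4];
            d[0] = (b[9] << 2) | (b[8] << 1) | b[0];
            break;
        case 1:
            d[2] = 8 | b[7];
            d[1] = (b[9] << 2) | (b[8] << 1) | b[4];
            d[0] = 8 | b[0];
            break;
        case 2:
            d[2] = (b[9] << 2) | (b[8] << 1) | b[7];
            d[1] = 8 | b[4];
            d[0] = 8 | b[0];
            break;
        default:                             /* three large digits; b9, b8 ignored */
            d[2] = 8 | b[7];
            d[1] = 8 | b[4];
            d[0] = 8 | b[0];
            break;
        }
        break;
    }
}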

In the above cases, with the true significand as the sequence of decimal digits decoded, the value represented is

(−1)^signbit × 10^(exponentbits − 398) × truesignificand, where the exponent bits are read as a binary integer and the true significand is read as a decimal integer.


References

  1. IEEE Computer Society (2008). IEEE Standard for Floating-Point Arithmetic. IEEE Std 754-2008.
  2. ISO/IEC/IEEE 60559:2011 – Information technology – Microprocessor Systems – Floating-Point arithmetic. 2011.
  3. Muller, Jean-Michel; et al. (2010). Handbook of Floating-Point Arithmetic. Birkhäuser. ISBN 978-0-8176-4704-9.
  4. Cowlishaw, Michael F. (2007). "A Summary of Densely Packed Decimal encoding". IBM.