Standard RAID levels

In computer storage, the standard RAID levels comprise a basic set of RAID configurations that employ the techniques of striping, mirroring, or parity to create large reliable data stores from multiple general-purpose computer hard disk drives (HDDs). The most common types are RAID 0 (striping), RAID 1 and its variants (mirroring), RAID 5 (distributed parity), and RAID 6 (dual parity). RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard.[1]

RAID 0

Diagram of a RAID 0 setup

RAID 0 (also known as a stripe set or striped volume) splits ("stripes") data evenly across two or more disks, without parity information, redundancy, or fault tolerance. Since RAID 0 provides no fault tolerance or redundancy, the failure of one drive will cause the entire array to fail; because data is striped across all disks, the failure results in total data loss. This configuration is typically implemented with speed as the intended goal.[2][3] RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical volume out of two or more physical disks.[4]

A RAID 0 setup can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 120 GB disk is striped together with a 320 GB disk, the size of the array will be 120 GB × 2 = 240 GB.

The diagram in this section shows how the data is distributed into Ax stripes on two disks, with A1:A2 as the first stripe, A3:A4 as the second one, etc. Once the stripe size is defined during the creation of a RAID 0 array, it needs to be maintained at all times. Since the stripes are accessed in parallel, an n-drive RAID 0 array appears as a single large disk with a data rate n times higher than the single-disk rate.
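
The mapping from logical blocks to member disks can be sketched in a few lines of Python; the two-disk layout and one-block stripe unit below are illustrative choices that mirror the diagram, not properties required by RAID 0.

```python
def raid0_location(logical_block: int, n_disks: int) -> tuple[int, int]:
    """Map a logical block number to (disk index, block offset on that disk)."""
    return logical_block % n_disks, logical_block // n_disks

# With two disks, logical blocks 0..3 (A1..A4 in the diagram) land as:
for block in range(4):
    disk, offset = raid0_location(block, n_disks=2)
    print(f"A{block + 1} -> disk {disk}, offset {offset}")
# A1 -> disk 0, offset 0    A2 -> disk 1, offset 0
# A3 -> disk 0, offset 1    A4 -> disk 1, offset 1
```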

Performance

A RAID 0 array of n drives provides data read and write transfer rates up to n times higher than the individual drive rates, but with no data redundancy. As a result, RAID 0 is primarily used in applications that require high performance and are able to tolerate lower reliability, such as in scientific computing[5] or computer gaming.[6]

Some benchmarks of desktop applications show RAID 0 performance to be marginally better than a single drive.[7][8] Another article examined these claims and concluded that "striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance".[9] Synthetic benchmarks show different levels of performance improvements when multiple HDDs or SSDs are used in a RAID 0 setup, compared with single-drive performance. However, some synthetic benchmarks also show a drop in performance for the same comparison.[10][11]

RAID 1

Diagram of a RAID 1 setup

RAID 1 consists of an exact copy (or mirror) of a set of data on two or more disks; a classic RAID 1 mirrored pair contains two disks. This configuration offers no parity, striping, or spanning of disk space across multiple disks, since the data is mirrored on all disks belonging to the array, and the array can only be as big as the smallest member disk. This layout is useful when read performance or reliability is more important than write performance or the resulting data storage capacity.[12][13]

The array will continue to operate so long as at least one member drive is operational.[14]

Performance

Any read request can be serviced by any drive in the array; thus, depending on the nature of I/O load, random read performance of a RAID 1 array may equal up to the sum of each member's performance,[lower-alpha 1] while the write performance remains at the level of a single disk. However, if disks with different speeds are used in a RAID 1 array, overall write performance is equal to the speed of the slowest disk.[13][14]
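
This read/write asymmetry can be illustrated with a small Python sketch in which every write is duplicated to all mirrors while reads rotate among them; the round-robin read policy and in-memory "disks" are illustrative assumptions rather than a description of any particular controller.

```python
class MirroredArray:
    """Toy RAID 1: every write is duplicated, reads rotate across members."""

    def __init__(self, n_disks: int, n_blocks: int) -> None:
        self.disks = [[b""] * n_blocks for _ in range(n_disks)]
        self._next_read = 0

    def write(self, block: int, data: bytes) -> None:
        for disk in self.disks:              # the write hits every mirror,
            disk[block] = data               # so it is limited by the slowest disk

    def read(self, block: int) -> bytes:
        disk = self.disks[self._next_read]   # any surviving mirror can serve a read
        self._next_read = (self._next_read + 1) % len(self.disks)
        return disk[block]

array = MirroredArray(n_disks=2, n_blocks=8)
array.write(0, b"hello")
assert array.read(0) == array.read(0) == b"hello"   # both mirrors return the same data
```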

Synthetic benchmarks show varying levels of performance improvements when multiple HDDs or SSDs are used in a RAID 1 setup, compared with single-drive performance. However, some synthetic benchmarks also show a drop in performance for the same comparison.[10][11]

RAID 2

Diagram of a RAID 2 setup

RAID 2, which is rarely used in practice, stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. The disks are synchronized by the controller to spin at the same angular orientation, reaching their index marks at the same time, so the array generally cannot service multiple requests simultaneously. Extremely high data transfer rates are possible.[15][16]

With all hard disk drives implementing internal error correction, the complexity of an external Hamming code offered little advantage over parity, so RAID 2 has rarely been implemented; it is the only original level of RAID that is not currently in use.[15][16]
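
For illustration, the following Python sketch shows the Hamming(7,4) code underlying this scheme: three parity bits protect four data bits and allow a single flipped bit to be located and corrected. RAID 2 applied the same idea across drives at the bit level; the nibble-sized example here is purely illustrative.

```python
def hamming74_encode(d: list[int]) -> list[int]:
    """Encode data bits [d1, d2, d3, d4]; returns the codeword for positions 1..7."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4       # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4       # parity over positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4       # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(code: list[int]) -> list[int]:
    """Locate and flip a single-bit error; returns the corrected codeword."""
    syndrome = 0
    for parity_pos in (1, 2, 4):
        covered = [i for i in range(1, 8) if i & parity_pos]
        if sum(code[i - 1] for i in covered) % 2:   # parity check failed
            syndrome += parity_pos
    if syndrome:
        code[syndrome - 1] ^= 1                     # syndrome is the error position
    return code

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                                    # flip one bit (position 5)
assert hamming74_correct(codeword) == hamming74_encode([1, 0, 1, 1])
```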

RAID 3

Diagram of a RAID 3 setup of six-byte blocks and two parity bytes; two blocks of data, in different colors, are shown.

RAID 3, which is rarely used in practice, consists of byte-level striping with a dedicated parity disk. One of the characteristics of RAID 3 is that it generally cannot service multiple requests simultaneously, because any single block of data is, by definition, spread across all members of the set and resides at the same physical location on each disk. Therefore, any I/O operation requires activity on every disk and usually requires synchronized spindles.

This makes it suitable for applications that demand the highest transfer rates in long sequential reads and writes, for example uncompressed video editing. Applications that make small reads and writes from random disk locations will get the worst performance out of this level.[16]

The requirement that all disks spin synchronously (in lockstep) added design complexity to a level that provided no significant advantages over other RAID levels, so it quickly fell out of use and is now obsolete.[15] Both RAID 3 and RAID 4 were quickly replaced by RAID 5.[17] RAID 3 was usually implemented in hardware, and its performance issues were addressed by using large disk caches.[16]

RAID 4

Diagram of a RAID 4 setup with dedicated parity disk with each color representing the group of blocks in the respective parity block (a stripe)

RAID 4 consists of block-level striping with a dedicated parity disk. As a result of its layout, RAID 4 provides good random read performance, while random write performance is low because all parity data must be written to a single disk.[18]

In the example diagram, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.

RAID 5

Diagram of a RAID 5 setup with distributed parity with each color representing the group of blocks in the respective parity block (a stripe). This diagram shows left asymmetric algorithm

RAID 5 consists of block-level striping with distributed parity. Unlike in RAID 4, parity information is distributed among the drives. It requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.[5] RAID 5 requires at least three disks.[19]

In comparison to RAID 4, RAID 5's distributed parity evens out the stress of a dedicated parity disk among all RAID members. Additionally, read performance is increased since all RAID members participate in serving of the read requests.[20]
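
The parity used by RAID 4 and RAID 5 is a plain XOR across the data blocks of a stripe, so a missing block can be regenerated by XOR-ing the surviving blocks with the parity. A minimal Python sketch (block contents and sizes are illustrative):

```python
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equally sized blocks together; the result is the parity block."""
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), blocks)

# Writing a stripe: the parity block is the XOR of the data blocks.
data_blocks = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
parity = xor_blocks(data_blocks)

# After a single drive fails, XOR-ing the surviving data blocks with the
# parity regenerates the missing block.
lost = 1
survivors = [b for k, b in enumerate(data_blocks) if k != lost]
assert xor_blocks(survivors + [parity]) == data_blocks[lost]
```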

RAID 6

Diagram of a RAID 6 setup, which is identical to RAID 5 other than the addition of a second parity block

RAID 6 extends RAID 5 by adding another parity block; thus, it uses block-level striping with two parity blocks distributed across all member disks.[21]

According to the Storage Networking Industry Association (SNIA), the definition of RAID 6 is: "Any form of RAID that can continue to execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures. Several methods, including dual check data computations (parity and Reed-Solomon), orthogonal dual parity check data and diagonal parity, have been used to implement RAID Level 6."[22]

Performance

RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer's storage architecture—in software, firmware, or by using firmware and specialized ASICs for intensive parity calculations. It can be as fast as a RAID 5 system with one fewer drive (same number of data drives).[23]

Parity computation

Two different syndromes need to be computed in order to allow the loss of any two drives. One of them, P, can be the simple XOR of the data across the stripes, as with RAID 5. A second, independent syndrome is more complicated and requires the assistance of field theory.

To deal with this, the Galois field GF(m) is introduced with m=2^k, where GF(m) \cong F_2[x]/(p(x)) for a suitable irreducible polynomial p(x) of degree k. A chunk of data can be written as d_{k-1}d_{k-2}...d_0 in base 2 where each d_i is either 0 or 1. This is chosen to correspond with the element d_{k-1}x^{k-1} + d_{k-2}x^{k-2} + ... + d_1x + d_0 in the Galois field. Let D_0,...,D_{n-1} \in GF(m) correspond to the stripes of data across hard drives encoded as field elements in this manner (in practice they would probably be broken into byte-sized chunks). If g is some generator of the field and \oplus denotes addition in the field while concatenation denotes multiplication, then \mathbf{P} and \mathbf{Q} may be computed as follows (n denotes the number of data disks):


\mathbf{P} = \bigoplus_i{D_i} = \mathbf{D}_0 \;\oplus\; \mathbf{D}_1 \;\oplus\; \mathbf{D}_2 \;\oplus\; ... \;\oplus\; \mathbf{D}_{n-1}

\mathbf{Q} = \bigoplus_i{g^iD_i} = g^0\mathbf{D}_0 \;\oplus\; g^1\mathbf{D}_1 \;\oplus\; g^2\mathbf{D}_2 \;\oplus\; ... \;\oplus\; g^{n-1}\mathbf{D}_{n-1}

For a computer scientist, a good way to think about this is that \oplus is a bitwise XOR operator and g^i is the action of a linear feedback shift register on a chunk of data. Thus, in the formula above,[24] the calculation of P is just the XOR of each stripe. This is because addition in any characteristic two finite field reduces to the XOR operation. The computation of Q is the XOR of a shifted version of each stripe.

Mathematically, the generator is an element of the field such that g^i is different for each nonnegative i satisfying i < n.
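
A minimal Python sketch of the P and Q computation is given below. It assumes GF(2^8) with the reduction polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d) and the generator g = 2; actual implementations may use a different irreducible polynomial and generator.

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes as elements of GF(2^8), reducing by 0x11d."""
    product = 0
    while b:
        if b & 1:
            product ^= a
        a <<= 1
        if a & 0x100:        # degree reached 8: subtract (XOR) the field polynomial
            a ^= 0x11d
        b >>= 1
    return product

def gf_pow(a: int, e: int) -> int:
    result = 1
    for _ in range(e):
        result = gf_mul(result, a)
    return result

def pq_syndromes(stripes: list[bytes]) -> tuple[bytes, bytes]:
    """Compute P (plain XOR) and Q (weighted by powers of g = 2), byte by byte."""
    p = bytearray(len(stripes[0]))
    q = bytearray(len(stripes[0]))
    for i, stripe in enumerate(stripes):
        g_i = gf_pow(2, i)                   # multiplying by g is one LFSR step
        for k, byte in enumerate(stripe):
            p[k] ^= byte
            q[k] ^= gf_mul(g_i, byte)
    return bytes(p), bytes(q)

# Three data drives, each contributing a 4-byte chunk of the stripe:
P, Q = pq_syndromes([b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"])
```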

If one data drive is lost, the data can be recomputed from P just as with RAID 5. If two data drives are lost, or a data drive and the drive containing P are lost, the data can be recovered from P and Q, or from just Q, respectively, using a more complex process. The details can be worked out using field theory; suppose that D_i and D_j are the lost values with i \neq j. Then, using the other values of D, constants A and B may be found so that D_i \oplus D_j = A and g^iD_i \oplus g^jD_j = B:


A = \bigoplus_{\ell:\;\ell\not=i\;\mathrm{and}\;\ell\not=j}{D_\ell} = \mathbf{P} \;\oplus\; \mathbf{D}_0 \;\oplus\; \mathbf{D}_1 \;\oplus\; \dots \;\oplus\; \mathbf{D}_{i-1} \;\oplus\;  \mathbf{D}_{i+1} \;\oplus\;  \dots \;\oplus\; \mathbf{D}_{j-1}  \;\oplus\; \mathbf{D}_{j+1} \;\oplus\;  \dots \;\oplus\;  \mathbf{D}_{n-1}

B = \bigoplus_{\ell:\;\ell\not=i\;\mathrm{and}\;\ell\not=j}{g^{\ell}D_\ell} = \mathbf{Q} \;\oplus\; g^0\mathbf{D}_0 \;\oplus\; g^1\mathbf{D}_1 \;\oplus\; \dots \;\oplus\; g^{i-1}\mathbf{D}_{i-1} \;\oplus\;  g^{i+1}\mathbf{D}_{i+1} \;\oplus\;  \dots \;\oplus\; g^{j-1}\mathbf{D}_{j-1}  \;\oplus\; g^{j+1}\mathbf{D}_{j+1} \;\oplus\;  \dots \;\oplus\; g^{n-1}\mathbf{D}_{n-1}

Multiplying both sides of the equation for B by g^{-i} and adding the equation for A yields (g^{j-i}\oplus1)D_j = g^{-i}B\oplus A and thus a solution for D_j, which may be used to compute D_i.
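
Under the same illustrative assumptions (GF(2^8) modulo 0x11d, g = 2, and i < j), the following Python sketch strips the surviving terms out of P and Q to obtain A and B and then solves for the two lost bytes; the field helpers are repeated so the example stays self-contained.

```python
def gf_mul(a: int, b: int) -> int:
    product = 0
    while b:
        if b & 1:
            product ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return product

def gf_pow(a: int, e: int) -> int:
    result = 1
    for _ in range(e):
        result = gf_mul(result, a)
    return result

def gf_inv(a: int) -> int:
    """Multiplicative inverse by exhaustive search; fine for a 256-element field."""
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def recover_two(surviving: dict[int, int], p: int, q: int, i: int, j: int) -> tuple[int, int]:
    """Recover bytes D_i and D_j (with i < j) from P, Q and the surviving bytes."""
    g = 2
    a, b = p, q
    for l, d in surviving.items():           # strip the known terms out of P and Q:
        a ^= d                               #   A = D_i xor D_j
        b ^= gf_mul(gf_pow(g, l), d)         #   B = g^i D_i xor g^j D_j
    # (g^(j-i) xor 1) D_j = g^(-i) B xor A
    rhs = gf_mul(gf_inv(gf_pow(g, i)), b) ^ a
    d_j = gf_mul(rhs, gf_inv(gf_pow(g, j - i) ^ 1))
    d_i = a ^ d_j
    return d_i, d_j

# Example: data bytes 0x01, 0x10, 0xaa on drives 0..2; drives 0 and 2 are lost.
data = [0x01, 0x10, 0xaa]
P = data[0] ^ data[1] ^ data[2]
Q = data[0] ^ gf_mul(2, data[1]) ^ gf_mul(gf_pow(2, 2), data[2])
assert recover_two({1: data[1]}, P, Q, i=0, j=2) == (0x01, 0xaa)
```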

The computation of Q is CPU intensive compared to the simplicity of P. Thus, RAID 6 implemented in software will have a more significant effect on system performance, and a hardware solution will be more complex.

Comparison

The following table provides an overview of some considerations for standard RAID levels. In each case:

  • Array space efficiency is given as an expression in terms of the number of drives, n; this expression designates a fractional value between zero and one, representing the fraction of the sum of the drives' capacities that is available for use. For example, if three drives are arranged in RAID 3, this gives an array space efficiency of 1 − 1/n = 1 − 1/3 = 2/3 ≈ 67%; thus, if each drive in this example has a capacity of 250 GB, then the array has a total capacity of 750 GB but the capacity that is usable for data storage is only 500 GB.
  • Array failure rate is given as an expression in terms of the number of drives, n, and the drive failure rate, r (which is assumed identical and independent for each drive). For example, if each of three drives has a failure rate of 5% over the next three years, and these drives are arranged in RAID 3, then this gives an array failure rate over the next three years of:

\begin{align} 1 - (1 - r)^{n} - nr(1 - r)^{n - 1} & = 1 - (1 - 5\%)^{3} - 3 \times 5\% \times (1 - 5\%)^{3 - 1} \\
& = 1 - 0.95^{3} - 0.15 \times 0.95^{2} \\
& = 1 - 0.857375 - 0.135375 \\
& = 0.00725 \\
& \approx 0.7\% \end{align}

| Level | Description | Minimum number of drives[lower-alpha 2] | Space efficiency | Fault tolerance | Array failure rate[lower-alpha 3] | Read performance | Write performance |
|---|---|---|---|---|---|---|---|
| RAID 0 | Block-level striping without parity or mirroring | 2 | 1 | None | 1 − (1 − r)^n | n× | n× |
| RAID 1 | Mirroring without parity or striping | 2 | 1/n | n − 1 drive failures | r^n | n×[lower-alpha 1][14] | 1×[lower-alpha 4][14] |
| RAID 2 | Bit-level striping with Hamming code for error correction | 3 | 1 − (1/n) log2(n − 1) | One drive failure[lower-alpha 5] | Depends | Depends | Depends |
| RAID 3 | Byte-level striping with dedicated parity | 3 | 1 − 1/n | One drive failure | 1 − (1 − r)^n − nr(1 − r)^(n−1) | (n − 1)× | (n − 1)×[lower-alpha 6] |
| RAID 4 | Block-level striping with dedicated parity | 3 | 1 − 1/n | One drive failure | 1 − (1 − r)^n − nr(1 − r)^(n−1) | (n − 1)× | (n − 1)×[lower-alpha 6][citation needed] |
| RAID 5 | Block-level striping with distributed parity | 3 | 1 − 1/n | One drive failure | 1 − (1 − r)^n − nr(1 − r)^(n−1) | n×[lower-alpha 6] | (n − 1)×[lower-alpha 6][citation needed] |
| RAID 6 | Block-level striping with double distributed parity | 4 | 1 − 2/n | Two drive failures | 1 − (1 − r)^n − nr(1 − r)^(n−1) − (n(n − 1)/2) r^2 (1 − r)^(n−2) | n×[lower-alpha 6] | (n − 2)×[lower-alpha 6][citation needed] |
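
As a numerical check of the failure-rate expressions in the table, the following Python sketch computes the probability that more drives fail than the array can tolerate; the binomial form is equivalent to the per-level expressions above and reproduces the 0.7% figure from the three-drive RAID 3 example.

```python
from math import comb

def array_failure_rate(n: int, r: float, tolerated: int) -> float:
    """Probability that more than `tolerated` of n independent drives fail,
    each with failure rate r over the period considered."""
    survives = sum(comb(n, k) * r**k * (1 - r)**(n - k) for k in range(tolerated + 1))
    return 1 - survives

print(array_failure_rate(n=3, r=0.05, tolerated=1))   # RAID 3/4/5 example: ~0.00725
print(array_failure_rate(n=4, r=0.05, tolerated=2))   # RAID 6 with four drives
print(array_failure_rate(n=2, r=0.05, tolerated=0))   # RAID 0 with two drives
```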

Non-standard RAID levels and non-RAID drive architectures

Alternatives to the above designs include nested RAID levels, non-standard RAID levels, and non-RAID drive architectures. Non-RAID drive architectures are referred to by similar terms and acronyms, notably JBOD ("just a bunch of disks"), SPAN/BIG, and MAID ("massive array of idle disks").

Notes

  1. Theoretical maximum, as low as single-disk performance in practice
  2. Assumes a non-degenerate minimum number of drives
  3. Assumes independent, identical rate of failure amongst drives
  4. If disks with different speeds are used in a RAID 1 array, overall write performance is equal to the speed of the slowest disk.
  5. RAID 2 can recover from one drive failure or repair corrupt data or parity when a corrupted bit's corresponding data and parity are good.
  6. Assumes hardware capable of performing associated calculations fast enough

