What is RAID?

RAID (Redundant Array of Independent Disks, originally Redundant Array of Inexpensive Disks) is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both.

Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word “RAID” followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives.



Many RAID levels employ an error protection scheme called “parity”, a widely used method in information technology to provide fault tolerance in a given set of data.
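As a concrete illustration, parity in RAID is typically computed as a bitwise XOR across the corresponding blocks of a stripe. The sketch below uses made-up block values, not any particular implementation's on-disk format, to show how the parity block lets a single missing block be recomputed:

```python
from functools import reduce

# Hypothetical 4-byte blocks stored on three data drives.
d0 = bytes([0x10, 0x22, 0x34, 0x46])
d1 = bytes([0x0F, 0x0E, 0x0D, 0x0C])
d2 = bytes([0xAA, 0xBB, 0xCC, 0xDD])

def xor_blocks(*blocks):
    """XOR corresponding bytes of the given blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple)
                 for byte_tuple in zip(*blocks))

# Parity is the XOR of all data blocks in the stripe.
parity = xor_blocks(d0, d1, d2)

# If drive 1 fails, its contents equal the XOR of the survivors and the parity.
recovered_d1 = xor_blocks(d0, d2, parity)
assert recovered_d1 == d1
```

The same property works symmetrically: XOR-ing the parity with any N-1 of the N data blocks reproduces the missing one, which is exactly how a degraded array keeps serving reads.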

RAID can also provide data security with solid-state drives (SSDs) without the expense of an all-SSD system. For example, a fast SSD can be mirrored with a mechanical drive. For this configuration to provide a significant speed advantage, an appropriate controller is needed that uses the fast SSD for all read operations. Adaptec calls this “hybrid RAID”.


Different Levels of RAID

A number of standard schemes have evolved. These are called levels. Originally, there were five RAID levels, but many variations have evolved, notably several nested levels and many non-standard levels (mostly proprietary). RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard.

RAID 0: consists of striping, but no mirroring or parity. Compared to a spanned volume, the capacity of a RAID 0 volume is the same; it is the sum of the capacities of the disks in the set. But because striping distributes the contents of each file among all disks in the set, the failure of any disk causes all files, the entire RAID 0 volume, to be lost. A broken spanned volume at least preserves the files on the unfailing disks. The benefit of RAID 0 is that the throughput of read and write operations to any file is multiplied by the number of disks because, unlike spanned volumes, reads and writes are done concurrently, and the cost is complete vulnerability to drive failures.
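The round-robin chunk distribution described above can be sketched in a few lines. Disk count and chunk size here are illustrative; real implementations stripe at configurable chunk sizes:

```python
def stripe(data: bytes, n_disks: int, chunk: int = 4):
    """Distribute data across disks round-robin in fixed-size chunks (RAID 0)."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % n_disks].extend(data[i:i + chunk])
    return disks

disks = stripe(b"ABCDEFGHIJKLMNOP", n_disks=2, chunk=4)
# disks[0] holds chunks 0 and 2 (b"ABCDIJKL"),
# disks[1] holds chunks 1 and 3 (b"EFGHMNOP").
```

Because every file larger than one chunk spans multiple disks, reads and writes proceed in parallel, but losing either disk destroys the data on both.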

RAID 1: consists of data mirroring, without parity or striping. Data is written identically to two drives, thereby producing a “mirrored set” of drives. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first (depending on its seek time and rotational latency), improving performance. Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning.

RAID 2: consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive. This level is of historical significance only; although it was used on some early machines (for example, the Thinking Machines CM-2), as of 2014 it is not used by any commercially available system.

RAID 3: consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive. Although implementations exist, RAID 3 is not commonly used in practice.

RAID 4: consists of block-level striping with dedicated parity. This level was previously used by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP. The main advantage of RAID 4 over RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the whole group of data drives, while in RAID 4 one I/O read operation does not have to spread across all data drives. As a result, more I/O operations can be executed in parallel, improving the performance of small transfers.

RAID 5: consists of block-level striping with distributed parity. Unlike RAID 4, parity information is distributed among the drives, requiring all drives but one to be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks. Like all single-parity concepts, large RAID 5 implementations are susceptible to system failures because of trends regarding array rebuild time and the chance of drive failure during rebuild. Rebuilding an array requires reading all data from all disks, opening a chance for a second drive failure and the loss of the entire array.
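The "distributed" part of RAID 5 can be modeled as a simple rotation of the parity chunk from stripe to stripe. The function below sketches one common placement scheme (often called left-asymmetric); actual controllers and software implementations use several variants, so treat the exact formula as an assumption:

```python
def parity_disk(stripe_index: int, n_disks: int) -> int:
    """Left-asymmetric rotation: the parity chunk moves to a
    different disk on each successive stripe."""
    return (n_disks - 1) - (stripe_index % n_disks)

# For a 4-disk array, parity lands on disk 3, then 2, then 1, then 0,
# and the pattern repeats.
layout = [parity_disk(s, 4) for s in range(4)]
# → [3, 2, 1, 0]
```

Rotating the parity avoids the dedicated-parity-drive bottleneck of RAID 4: no single disk absorbs every parity update.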

RAID 6: consists of block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5. RAID 10 also minimizes these problems.
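The capacity trade-offs of the levels above can be summarized in a small calculator. This is a rough sketch: it assumes identical disks, ignores filesystem and metadata overhead, and models RAID 10 as two-way mirrors.

```python
def usable_capacity(n_disks: int, disk_tb: float, level: int) -> float:
    """Approximate usable capacity in TB for common RAID levels."""
    if level == 0:
        return n_disks * disk_tb            # no redundancy
    if level == 1:
        return disk_tb                      # every disk holds the same data
    if level == 5:
        return (n_disks - 1) * disk_tb      # one disk's worth of parity
    if level == 6:
        return (n_disks - 2) * disk_tb      # two disks' worth of parity
    if level == 10:
        return (n_disks // 2) * disk_tb     # half the disks mirror the other half
    raise ValueError(f"unsupported level {level}")

usable_capacity(6, 4.0, 6)  # → 16.0 TB usable from six 4 TB drives
```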


Implementations: not all RAIDs are the same

The distribution of data across multiple drives can be managed either by dedicated computer hardware or by software. A software solution may be part of the operating system, part of the firmware and drivers supplied with a standard drive controller (so-called “hardware-assisted software RAID”), or it may reside entirely within the hardware RAID controller.

Pros of Hardware RAID

  • Easy to set up – Most controllers have a menu-driven wizard to guide you through building your array, or even come set up automatically right out of the box.
  • Easy to replace disks – If a disk fails, just pull it out and replace it with a new one.
  • Performance improvements (sometimes) – If you are running tremendously intense workloads or using an underpowered CPU, hardware RAID can offer a performance improvement.


Cons of Hardware RAID

  • Proprietary – Minimal or complete lack of detailed hardware and software specifications.
  • On-disk metadata can make it nearly impossible to recover data without a compatible RAID card – If your controller goes casters-up, you’ll have to find a compatible model to replace it; your disks won’t be usable without the controller. This is especially bad when working with a discontinued model that has failed after years of operation.
  • Monitoring implementations vary wildly – Depending on the vendor and model, the ability and interface to monitor the health and performance of your array vary greatly. Often you are tied to a specific piece of software that the vendor provides.
  • Additional cost – Hardware RAID cards cost more than standard disk controllers. High end models can be very expensive.
  • Lack of standardization between controllers (configuration, management, etc.) – The keystrokes and software that you use to manage and monitor one card likely won’t work on another.
  • Inflexible – Your ability to reshape, split and perform other maintenance on arrays varies tremendously with each card.


Pros of Software RAID

  • Hardware independent – RAID is implemented in the OS, which keeps things consistent regardless of the hardware manufacturer.
  • Standardized RAID configuration (for each OS) – The management toolkit is OS specific rather than specific to each individual type of RAID card.
  • Standardized monitoring (for each OS) – Since the toolkit is consistent you can expect to monitor the health of each of your servers using the same methods.
  • Good performance – CPUs just keep getting faster and faster. Unless you are performing tremendous amounts of IO the extra cost just doesn’t seem worthwhile.
  • Very flexible – Software RAID allows you to reconfigure your arrays in ways that I have not found possible with hardware controllers.


Cons of Software RAID

  • Need to learn the software RAID tool set for each OS. – These tools are often well documented but not as quick to get off the ground as their hardware counterparts.
  • Typically need to incorporate the RAID build into the OS install process. – Servers won’t come mirrored out of the box, you’ll need to make sure this happens yourself.
  • Slower performance than dedicated hardware – A high end dedicated RAID card will match or outperform software.
  • Additional load on CPU – RAID operations have to be calculated somewhere and in software this will run on your CPU instead of dedicated hardware. I have yet to see a real-world performance degradation introduced by this, however.
  • Disk replacement sometimes requires prep work – You typically should tell the software RAID system to stop using a disk before you yank it out of the system. I’ve seen systems panic when a failed disk was physically removed before being logically removed.



Nested RAID levels

Nested RAID levels, also known as hybrid RAID, combine two or more of the standard RAID levels to gain performance, additional redundancy, or both. Nested RAID levels include RAID 01, RAID 10, RAID 100, RAID 50 and RAID 60.
For example, RAID 50 stripes data (RAID 0) across multiple RAID 5 sets.

One drive from each of the RAID 5 sets can fail without loss of data; for example, a RAID 50 configuration comprising three RAID 5 sets can tolerate at most three drive failures, one per set. Because the reliability of the system depends on quick replacement of the bad drive so the array can rebuild, it is common to include hot spares that can immediately start rebuilding the array upon failure. However, this does not address the issue that the array is put under maximum strain, reading every bit to rebuild, at the very time it is most vulnerable.
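That fault-tolerance rule (at most one failure per RAID 5 group) can be checked with a short sketch; the drive indexing and group size here are illustrative, assuming drives are assigned to groups consecutively:

```python
from collections import Counter

def raid50_survives(failed: list[int], group_size: int) -> bool:
    """A RAID 50 array survives as long as no RAID 5 group loses
    more than one member drive. `failed` holds failed-drive indices."""
    per_group = Counter(idx // group_size for idx in failed)
    return all(count <= 1 for count in per_group.values())

# Nine drives arranged as three RAID 5 groups of three:
raid50_survives([0, 4, 8], group_size=3)  # → True  (one failure per group)
raid50_survives([0, 1], group_size=3)     # → False (two failures in group 0)
```

This also shows why "three maximum potential drive failures" is a best case: two failures are already fatal if they land in the same group.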

