Saturday, February 28, 2026

Server RAID Configurations for Data Protection

RAID (Redundant Array of Independent Disks) is one of the most misunderstood topics in server storage. It appears frequently in hosting specifications without explanation, and the most common misunderstanding, that RAID replaces backup, leads to data loss in situations where the configuration provides no protection.

InMotion Hosting dedicated servers use mdadm software RAID 1 (mirroring) across dual NVMe drives as the default configuration. This article explains what that means, what it protects against, what it does not protect against, and when different RAID configurations make sense for different workloads.

RAID Fundamentals

What RAID Does and Does Not Do

RAID distributes data across multiple physical drives to achieve one or both of two goals: performance improvement through parallelism, and fault tolerance through redundancy. The RAID level determines which goal is prioritized.

What RAID does not do: protect against accidental deletion, software corruption, ransomware, or hardware failures that affect both drives simultaneously (fire, flood, or a power surge that damages both drives). These failure modes require backup, not RAID.

RAID Levels on Dual NVMe Drives

RAID 0: Striping

RAID 0 splits data across both drives in alternating blocks. A 100MB write becomes 50MB to Drive 1 and 50MB to Drive 2 simultaneously, completing in roughly half the time of a single-drive write.

  • Usable capacity: Full combined capacity (7.68TB on dual 3.84TB drives).
  • Read performance: Up to 2x sequential read throughput.
  • Write performance: Up to 2x sequential write throughput.
  • Redundancy: None. A single drive failure destroys all data on the array.

RAID 0 is appropriate for scratch storage, render caches, and non-critical temporary data where maximum throughput matters and the data can be regenerated. It is not appropriate for production databases, application data, or any data that cannot be reconstructed from an external source.
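The alternating-block behavior can be sketched in a few lines of Python. The 512 KiB chunk size is an illustrative value (a common mdadm default), not InMotion's documented setting, and the dict-of-lists "drives" are purely in-memory stand-ins:

```python
# Minimal sketch of RAID 0 striping: data is split into fixed-size
# chunks and dealt to the two drives in alternation. Chunk size is
# illustrative, not InMotion's actual mdadm setting.
CHUNK = 512 * 1024  # 512 KiB stripe chunk (a common mdadm default)

def stripe(data: bytes, n_drives: int = 2) -> list[list[bytes]]:
    """Distribute data across drives round-robin, chunk by chunk."""
    drives: list[list[bytes]] = [[] for _ in range(n_drives)]
    for i in range(0, len(data), CHUNK):
        drives[(i // CHUNK) % n_drives].append(data[i:i + CHUNK])
    return drives

payload = bytes(100 * 1024 * 1024)           # a 100 MB write
d1, d2 = stripe(payload)
print(sum(map(len, d1)), sum(map(len, d2)))  # half lands on each drive
```

Because both halves are written in parallel, the elapsed time approaches that of a single 50MB write; and because every file's chunks are interleaved across both devices, losing either device loses part of every file.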

RAID 1: Mirroring (InMotion Hosting Default)

RAID 1 writes identical data to both drives simultaneously. Each drive contains a complete copy of all data. If one drive fails, the array continues operating from the surviving drive with no data loss.

  • Usable capacity: 50% of combined capacity (3.84TB on dual 3.84TB drives).
  • Read performance: Can read from either drive; software RAID can distribute reads for a modest improvement.
  • Write performance: Must write to both drives; write throughput is limited to single-drive write speed.
  • Redundancy: Survives one complete drive failure with no data loss.

RAID 1 is InMotion Hosting's default configuration for dedicated servers with dual NVMe drives. For production databases, application data, and any workload where data loss is unacceptable, RAID 1 provides the right baseline protection. The 50% capacity trade-off is the cost of redundancy.
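A minimal sketch of the mirroring behavior, with in-memory dicts standing in for the two drives (illustrative only, not mdadm's on-disk layout):

```python
# Minimal sketch of RAID 1 mirroring: every write lands on both drives,
# so either drive alone holds a complete copy. The dicts below are
# illustrative stand-ins for physical devices.
drive_a: dict[int, bytes] = {}
drive_b: dict[int, bytes] = {}

def mirror_write(block: int, data: bytes) -> None:
    # The write completes only after BOTH drives have it -- this is why
    # RAID 1 write throughput is capped at single-drive speed.
    drive_a[block] = data
    drive_b[block] = data

for blk in range(4):
    mirror_write(blk, f"block-{blk}".encode())

# Simulate losing drive A: drive B still serves every block.
drive_a.clear()
print(all(drive_b[blk] == f"block-{blk}".encode() for blk in range(4)))  # True
```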

RAID 10: Striped Mirrors

RAID 10 requires four or more drives: drives are paired into RAID 1 mirrors, and those mirrors are then striped together as RAID 0. This combines the performance of striping with the redundancy of mirroring.

  • Usable capacity: 50% of total drive capacity across all drives.
  • Read performance: 2x sequential read throughput (striping across mirror pairs).
  • Write performance: Matches single-drive write speed (must write to both drives in each mirror pair).
  • Redundancy: Survives one drive failure per mirror pair; can survive multiple failures if they occur in different pairs.

RAID 10 on InMotion Hosting servers would require four NVMe drives, which is not the standard dual-drive configuration. For workloads requiring both maximum throughput and redundancy, a multi-server architecture with application-level replication (database primary/replica) often delivers better results than a single server with four drives.
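The pair-wise fault tolerance rule can be checked with a short sketch; the drive numbering and pairing below are assumptions chosen for illustration:

```python
# Sketch of RAID 10 fault tolerance on four drives: drives are paired
# into mirrors (0,1) and (2,3); the array survives any failure set that
# leaves at least one working drive in EVERY pair.
from itertools import combinations

PAIRS = [(0, 1), (2, 3)]  # hypothetical mirror pairing

def survives(failed: set[int]) -> bool:
    return all(not set(pair) <= failed for pair in PAIRS)

print(survives({0}))     # True  -- one drive down, its mirror intact
print(survives({0, 2}))  # True  -- one failure in each pair
print(survives({0, 1}))  # False -- both halves of one mirror lost

# Of the 6 possible two-drive failures, 4 are survivable:
two_drive = [set(c) for c in combinations(range(4), 2)]
print(sum(survives(f) for f in two_drive), "of", len(two_drive))  # 4 of 6
```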

Software RAID vs. Hardware RAID

How InMotion’s mdadm RAID Works

InMotion Hosting uses mdadm (Multiple Device Administration), the Linux kernel's software RAID implementation. This is a crucial distinction from hardware RAID controllers, which use a dedicated processor on a RAID controller card to manage array operations.

Software RAID operations (parity calculation for RAID 5/6, mirroring writes for RAID 1) run on the server's main CPU. On modern multi-core processors, this overhead is minimal for RAID 1: a mirror write requires no parity calculation, just writing to two devices simultaneously. The CPU overhead for mdadm RAID 1 on an NVMe array is typically below 1% on a 16-core AMD EPYC processor.
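To make the contrast concrete, here is a sketch of the work each level asks of the CPU: a RAID 1 write copies bytes with no computation, while RAID 5-style parity requires an XOR pass over the data blocks (shown in pure Python for clarity; the kernel uses optimized routines):

```python
# Why RAID 1 is cheap for the CPU: a mirror write is a plain copy,
# while RAID 5 must XOR data blocks to compute parity. The 4 KiB
# block size is illustrative.
import os

block_a = os.urandom(4096)
block_b = os.urandom(4096)

# RAID 1: no computation at all -- the same bytes go to both drives.
mirror_copy = block_a

# RAID 5: the parity block is the XOR of the data blocks. Losing a
# data block lets you rebuild it from the other block plus parity.
parity = bytes(x ^ y for x, y in zip(block_a, block_b))
rebuilt_a = bytes(x ^ y for x, y in zip(parity, block_b))
print(rebuilt_a == block_a)  # True -- parity recovers the lost block
```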

Advantages of Software RAID

  • No controller failure mode: Hardware RAID controllers can fail. When a proprietary RAID controller fails, the array is often unreadable without an identical replacement controller. mdadm arrays can be read on any Linux system with the same version of mdadm.
  • No battery-backed write cache requirement: Hardware RAID controllers use battery-backed write cache to safely defer writes to disk. That cache is a failure point. mdadm RAID 1 writes directly to NVMe, and enterprise-grade NVMe drives have built-in power-loss protection (PLP).
  • Portability: An mdadm RAID 1 array can be moved to another server and reassembled. On-drive metadata makes reassembly automatic.

Hardware RAID Advantages (and Why They Matter Less on NVMe)

Hardware RAID controllers historically provided two advantages over software RAID: battery-backed write cache for safe write acceleration, and dedicated processing to avoid CPU overhead on complex RAID levels (RAID 5, RAID 6).

NVMe drives with power-loss protection (enterprise NVMe, which InMotion uses) have built-in capacitors that flush write buffers to non-volatile storage on power loss. This eliminates the primary safety concern that battery-backed RAID cache addressed. And the CPU overhead argument was relevant when servers ran single-core or dual-core processors handling large parity calculations. On a 16-core EPYC doing RAID 1, the overhead is negligible.

NVMe RAID Performance Characteristics

Approximate figures per configuration:

  • Single 3.84TB NVMe: ~5,500 MB/s sequential read, ~4,000 MB/s sequential write, ~500,000 random read IOPS, no fault tolerance.
  • RAID 0 (2x 3.84TB NVMe): ~7,000 MB/s sequential read, ~6,000 MB/s sequential write, ~800,000 random read IOPS, no fault tolerance.
  • RAID 1 (2x 3.84TB NVMe): ~5,500 MB/s sequential read, ~4,000 MB/s sequential write, ~500,000 random read IOPS, tolerates a single drive failure.

RAID 1 sequential read performance can be slightly higher than a single drive if the software RAID driver distributes consecutive reads across both drives. In practice, mdadm RAID 1 read performance is roughly single-drive speed for sequential workloads and slightly better for random reads under concurrent access.

RAID 1 write performance matches single-drive write speed because both drives must receive the write before it is considered complete. On NVMe drives rated at 4GB/s sequential write, RAID 1 write throughput is roughly 4GB/s. That is fast enough for virtually any single-server workload.

Drive Failure and the Rebuild Process

What Happens When a Drive Fails

When one drive in an mdadm RAID 1 array fails, the array continues operating in a degraded state from the surviving drive. Performance may drop slightly during degraded operation because all reads now come from a single drive, but the server stays online and the data remains intact.

InMotion Hosting's monitoring detects drive failures and initiates hardware replacement. Once the replacement drive is installed, mdadm rebuilds the array by copying all data from the surviving drive to the new drive.
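As an illustration of how a degraded array shows up, this sketch parses the status-line format that /proc/mdstat uses, where a healthy two-drive mirror reports [2/2] [UU] and a degraded one [2/1] [U_]. The sample text, device names, and block counts below are hypothetical:

```python
# Sketch: detect a degraded mdadm array from /proc/mdstat-style text.
# The sample below is illustrative; real device names and sizes differ.
import re

MDSTAT = """\
md0 : active raid1 nvme1n1p1[1] nvme0n1p1[0]
      3750606144 blocks super 1.2 [2/1] [U_]
"""

def degraded_arrays(mdstat: str) -> list[str]:
    """Return array names where the [total/active] counts differ."""
    out, current = [], None
    for line in mdstat.splitlines():
        m = re.match(r"^(md\d+) :", line)
        if m:
            current = m.group(1)
        counts = re.search(r"\[(\d+)/(\d+)\]", line)
        if current and counts and counts.group(1) != counts.group(2):
            out.append(current)
    return out

print(degraded_arrays(MDSTAT))  # ['md0'] -- one of two mirrors missing
```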

Rebuild Times on NVMe

NVMe rebuild times are significantly faster than SATA SSD or spinning disk:

  • Spinning disk RAID 1 rebuild: 12-24 hours for a 3-4TB drive at typical rebuild speeds of 50-100 MB/s.
  • SATA SSD RAID 1 rebuild: 2-4 hours for a 1.92TB drive at 150-200 MB/s.
  • NVMe RAID 1 rebuild: Under 1 hour for a 3.84TB drive at sustained 1-2 GB/s rebuild speeds.

Rebuild speed matters because during the rebuild, the surviving drive handles both production I/O and rebuild I/O simultaneously. The shorter the rebuild window, the less time the array spends in a degraded state where a second drive failure would cause data loss.
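The figures above follow from simple arithmetic: capacity divided by sustained rebuild speed. This sketch uses mid-range speeds from the list; real rebuilds vary with concurrent production load:

```python
# Back-of-envelope rebuild-time estimates for the drive types above.
# Speeds are mid-range figures from the text; actual rebuilds vary.
def rebuild_minutes(capacity_tb: float, speed_mb_s: float) -> float:
    """Minutes to copy capacity_tb terabytes at speed_mb_s MB/s."""
    return capacity_tb * 1_000_000 / speed_mb_s / 60

print(f"HDD  3.84TB @ 75 MB/s:   {rebuild_minutes(3.84, 75):6.0f} min")
print(f"SATA 1.92TB @ 175 MB/s:  {rebuild_minutes(1.92, 175):6.0f} min")
print(f"NVMe 3.84TB @ 1500 MB/s: {rebuild_minutes(3.84, 1500):6.0f} min")
```

The NVMe case comes out to roughly 43 minutes, which is why the degraded-state window shrinks from half a day or more on spinning disk to well under an hour.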

RAID Is Not Backup: The Critical Distinction

This distinction deserves explicit emphasis because the confusion is widespread and the consequences are severe.

RAID 1 protects against a single physical drive failure. It does not protect against:

  • Accidental file deletion (both drives delete the file simultaneously)
  • Database corruption from a software bug (both drives store the corrupted data)
  • Ransomware (both drives are encrypted simultaneously)
  • Multiple simultaneous drive failures from a power surge or fire
  • Server theft or data center disaster

Protection against these failure modes requires backup to a separate physical location. InMotion Premier Care includes 500GB of automated backup storage kept off-server. That is the backup layer that complements RAID's drive failure protection.

A complete data protection strategy uses both: RAID 1 for continuous drive fault tolerance with no downtime, and off-server backups for everything RAID cannot protect against. Neither replaces the other.

Choosing the Right Configuration for Your Workload

  • Production database: RAID 1 (default). Data integrity comes first; a drive failure cannot cause data loss.
  • Render cache / scratch: RAID 0 or no RAID. Data is regenerable; performance and capacity matter more.
  • Application + database on the same server: RAID 1 (default). The application and the database both need protection.
  • Development environment: No RAID is acceptable. Data loss is inconvenient, not catastrophic; use version control.
  • File server / archive: RAID 1 + offsite backup. Both drive fault tolerance and disaster protection are needed.
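The recommendations above, expressed as a simple lookup; the workload key names are hypothetical labels chosen for illustration:

```python
# The workload matrix above as a lookup table. Keys are hypothetical
# labels, not an InMotion API -- purely an illustration.
RAID_FOR = {
    "production_database": "RAID 1 (default)",
    "render_cache": "RAID 0 or no RAID",
    "app_plus_database": "RAID 1 (default)",
    "development": "No RAID acceptable",
    "file_archive": "RAID 1 + offsite backup",
}

print(RAID_FOR["production_database"])  # RAID 1 (default)
```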
