The mdadm utility can be used to create and manage storage arrays using Linux’s software RAID capabilities. Administrators have great flexibility in coordinating their individual storage devices and creating logical storage devices that have greater performance or redundancy characteristics.
In this guide, we will go over a number of different RAID configurations that can be set up using a Debian 9 server.
In order to complete the steps in this guide, you should have:
sudo privileges on a Debian 9 server: The steps in this guide will be completed with a sudo user. To learn how to set up an account with these privileges, follow our Debian 9 initial server setup guide.
Info: Due to the inefficiency of RAID setups on virtual private servers, we don’t recommend deploying a RAID setup on DigitalOcean Droplets. The efficiency of datacenter disk replication makes the benefits of a RAID setup negligible relative to a setup on bare-metal hardware. This tutorial aims to be a reference for a conventional RAID setup.
Before we begin, we need to install mdadm, the tool that allows us to set up and manage software RAID arrays in Linux. It is available in Debian’s default repositories.
Update the local package cache to retrieve an up-to-date list of available packages and then download and install the package:
- sudo apt update
- sudo apt install mdadm
This will install mdadm and all of its dependencies. Verify that the utility is installed by typing:
- sudo mdadm -V
Output
mdadm - v3.4 - 28th January 2016
The application version should be displayed, indicating that mdadm is installed and ready to use.
Throughout this guide, we will be introducing the steps to create a number of different RAID levels. If you wish to follow along, you will likely want to reuse your storage devices after each section. This section can be referenced to learn how to quickly reset your component storage devices prior to testing a new RAID level. Skip this section for now if you have not yet set up any arrays.
Warning: This process will completely destroy the array and any data written to it. Make sure that you are operating on the correct array and that you have copied off any data you need to retain prior to destroying the array.
Find the active arrays in the /proc/mdstat file by typing:
- cat /proc/mdstat
Output
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdc[1] sdd[0]
209584128 blocks super 1.2 512k chunks
unused devices: <none>
Unmount the array from the filesystem:
- sudo umount /dev/md0
Then, stop and remove the array by typing:
- sudo mdadm --stop /dev/md0
Find the devices that were used to build the array with the following command:
Warning: Keep in mind that the /dev/sd* names can change any time you reboot! Check them every time to make sure you are operating on the correct devices.
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G linux_raid_member disk
sdd 100G linux_raid_member disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
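If you want to confirm that a disk really carries RAID metadata before wiping it, you can inspect its superblock directly. For example, using /dev/sdc from the output above:
- sudo mdadm --examine /dev/sdc
A device that was part of an array will report details such as the array UUID and its role in the array; a device with no superblock will produce an error instead.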
After discovering the devices used to create an array, zero their superblocks to remove the RAID metadata and reset them to normal:
- sudo mdadm --zero-superblock /dev/sdc
- sudo mdadm --zero-superblock /dev/sdd
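Zeroing the superblock removes only the RAID metadata. If the devices previously held a filesystem and you want to clear those signatures as well, you can optionally wipe them with wipefs (part of util-linux); note that this is a destructive, optional step:
- sudo wipefs --all /dev/sdc
- sudo wipefs --all /dev/sdd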
You should also remove any persistent references to the array. Edit the /etc/fstab file and comment out or remove the reference to your array:
- sudo nano /etc/fstab
. . .
# /dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
Also, comment out or remove the array definition from the /etc/mdadm/mdadm.conf file:
- sudo nano /etc/mdadm/mdadm.conf
. . .
# ARRAY /dev/md0 metadata=1.2 name=mdadmwrite:0 UUID=7261fb9c:976d0d97:30bc63ce:85e76e91
Finally, update the initramfs again so that the early boot process does not try to bring an unavailable array online:
- sudo update-initramfs -u
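If you plan to repeat this teardown between the sections below, the device-level steps can be condensed into a single command line. This is a minimal sketch, assuming the same /dev/md0 array and the /dev/sdc and /dev/sdd components used above:
- sudo umount /dev/md0 && sudo mdadm --stop /dev/md0 && sudo mdadm --zero-superblock /dev/sdc /dev/sdd
The edits to /etc/fstab and /etc/mdadm/mdadm.conf still need to be undone by hand, since those entries depend on what you added.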
At this point, you should be ready to reuse the storage devices individually, or as components of a different array.
The RAID 0 array works by breaking up data into chunks and striping it across the available disks. This means that each disk contains a portion of the data and that multiple disks will be referenced when retrieving information.
To get started, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have two disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda and /dev/sdb identifiers for this session. These will be the raw components we will use to build the array.
To create a RAID 0 array with these components, pass them in to the mdadm --create command. You will have to specify the device name you wish to create (/dev/md0 in our case), the RAID level, and the number of devices:
- sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
You can ensure that the RAID was successfully created by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [raid0]
md0 : active raid0 sdb[1] sda[0]
209584128 blocks super 1.2 512k chunks
unused devices: <none>
As the md0 line of the output shows, the /dev/md0 device has been created in the RAID 0 configuration using the /dev/sda and /dev/sdb devices.
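If you want a more detailed view than /proc/mdstat provides, mdadm can print a full report on the array, including its RAID level, chunk size, and the state of each component device:
- sudo mdadm --detail /dev/md0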
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
- df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1003M 23G 5% /
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf file. You can automatically scan the active array and append the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
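You can sanity-check the new entry without rebooting by unmounting the array and remounting everything listed in /etc/fstab; if this returns no errors, the entry is valid:
- sudo umount /dev/md0 && sudo mount -a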
Your RAID 0 array should now automatically be assembled and mounted each boot.
The RAID 1 array type is implemented by mirroring data across all available disks. Each disk in a RAID 1 array gets a full copy of the data, providing redundancy in the event of a device failure.
To get started, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have two disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda and /dev/sdb identifiers for this session. These will be the raw components we will use to build the array.
To create a RAID 1 array with these components, pass them in to the mdadm --create command. You will have to specify the device name you wish to create (/dev/md0 in our case), the RAID level, and the number of devices:
- sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
If the component devices you are using are not partitions with the boot flag enabled, you will likely see the following warning. It is safe to type y to continue:
Output
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 104792064K
Continue creating array? y
The mdadm tool will start to mirror the drives. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the mirroring by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb[1] sda[0]
104792064 blocks super 1.2 [2/2] [UU]
[>....................] resync = 1.5% (1629632/104792064) finish=8.4min speed=203704K/sec
unused devices: <none>
As the md0 line of the output shows, the /dev/md0 device has been created in the RAID 1 configuration using the /dev/sda and /dev/sdb devices. The resync line shows the progress of the mirroring. You can continue the guide while this process completes.
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
- df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1003M 23G 5% /
/dev/md0 98G 61M 93G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf file. You can automatically scan the active array and append the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 1 array should now automatically be assembled and mounted each boot.
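Once the initial sync has finished, you can optionally verify that the mirror provides the expected redundancy by deliberately failing, removing, and re-adding one device. This is a sketch for testing only, using the /dev/sdb device from the example above; avoid running it on an array holding data you care about:
- sudo mdadm /dev/md0 --fail /dev/sdb
- sudo mdadm /dev/md0 --remove /dev/sdb
- sudo mdadm /dev/md0 --add /dev/sdb
The array continues serving data from the remaining disk while degraded, and re-adding the device triggers a resync that you can follow in /proc/mdstat.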
The RAID 5 array type is implemented by striping data across the available devices. One component of each stripe is a calculated parity block. If a device fails, the parity block and the remaining blocks can be used to calculate the missing data. The device that receives the parity block is rotated so that each device has a balanced amount of parity information.
To get started, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have three disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda, /dev/sdb, and /dev/sdc identifiers for this session. These will be the raw components we will use to build the array.
To create a RAID 5 array with these components, pass them in to the mdadm --create command. You will have to specify the device name you wish to create (/dev/md0 in our case), the RAID level, and the number of devices:
- sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
The mdadm tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 0.9% (1031612/104792064) finish=10.0min speed=171935K/sec
unused devices: <none>
As the md0 line of the output shows, the /dev/md0 device has been created in the RAID 5 configuration using the /dev/sda, /dev/sdb, and /dev/sdc devices. The recovery line shows the progress of the build.
Warning: Due to the way that mdadm builds RAID 5 arrays, while the array is still building, the number of spares in the array will be inaccurately reported. This means that you must wait for the array to finish assembling before updating the /etc/mdadm/mdadm.conf file. If you update the configuration file while the array is still building, the system will have incorrect information about the array state and will be unable to assemble it automatically at boot with the correct name.
You can continue the guide while this process completes.
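If you are scripting this setup and need to block until the build has finished (for instance, before updating the configuration file as described above), mdadm can wait on the array for you:
- sudo mdadm --wait /dev/md0
This command returns once any resync, recovery, or reshape activity on /dev/md0 is done.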
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
- df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1003M 23G 5% /
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf file.
As mentioned above, before you adjust the configuration, check again to make sure the array has finished assembling. Completing this step before the array is built will prevent the system from assembling the array correctly on reboot:
- cat /proc/mdstat
Output
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
The output above shows that the rebuild is complete. Now, we can automatically scan the active array and append the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 5 array should now automatically be assembled and mounted each boot.
The RAID 6 array type is implemented by striping data across the available devices. Two components of each stripe are calculated parity blocks. If one or two devices fail, the parity blocks and the remaining blocks can be used to calculate the missing data. The devices that receive the parity blocks are rotated so that each device has a balanced amount of parity information. This is similar to a RAID 5 array, but allows for the failure of two drives.
To get started, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have four disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd identifiers for this session. These will be the raw components we will use to build the array.
To create a RAID 6 array with these components, pass them in to the mdadm --create command. You will have to specify the device name you wish to create (/dev/md0 in our case), the RAID level, and the number of devices:
- sudo mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
The mdadm tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdd[3] sdc[2] sdb[1] sda[0]
209584128 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
[>....................] resync = 0.3% (353056/104792064) finish=14.7min speed=117685K/sec
unused devices: <none>
As the md0 line of the output shows, the /dev/md0 device has been created in the RAID 6 configuration using the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd devices. The resync line shows the progress of the build. You can continue the guide while this process completes.
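Rather than re-running cat to check on the sync, you can keep a live view of its progress with watch, exiting with CTRL-C when you are done:
- watch -n1 cat /proc/mdstat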
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
- df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1003M 23G 5% /
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf file. We can automatically scan the active array and append the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 6 array should now automatically be assembled and mounted each boot.
The RAID 10 array type is traditionally implemented by creating a striped RAID 0 array composed of sets of RAID 1 arrays. This nested array type gives both redundancy and high performance, at the expense of large amounts of disk space. The mdadm utility has its own RAID 10 type that provides the same type of benefits with increased flexibility. It is not created by nesting arrays, but has many of the same characteristics and guarantees. We will be using the mdadm RAID 10 here.
mdadm-style RAID 10 is configurable. By default, two copies of each data block will be stored in what is called the “near” layout. The possible layouts that dictate how each data block is stored are:
near: The default arrangement. Copies of each chunk are written consecutively when striping, so copies of a data block end up at roughly the same offset on multiple disks.
far: The first and subsequent copies are written to different parts of the storage devices in the array. This can improve read performance on traditional spinning disks at the expense of write performance.
offset: Each stripe is copied, offset by one drive, so the copies are distributed across devices while remaining close together on disk.
You can find out more about these layouts by checking out the “RAID10” section of this man page:
- man 4 md
You can also find this man page online.
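Later, once your array exists, you can confirm which layout it was actually created with; mdadm reports it in its detailed output:
- sudo mdadm --detail /dev/md0 | grep -i layout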
To get started, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have four disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd identifiers for this session. These will be the raw components we will use to build the array.
To create a RAID 10 array with these components, pass them in to the mdadm --create command. You will have to specify the device name you wish to create (/dev/md0 in our case), the RAID level, and the number of devices.
You can set up two copies using the near layout by not specifying a layout and copy number:
- sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
If you want to use a different layout, or change the number of copies, you will have to use the --layout= option, which takes a layout and copy identifier. The layouts are n for near, f for far, and o for offset. The number of copies to store is appended afterwards.
For instance, to create an array that has 3 copies in the offset layout, the command would look like this:
- sudo mdadm --create --verbose /dev/md0 --level=10 --layout=o3 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
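Similarly, a far layout with two copies of each block would use --layout=f2:
- sudo mdadm --create --verbose /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd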
The mdadm tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid10 sdd[3] sdc[2] sdb[1] sda[0]
209584128 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
[>....................] resync = 1.3% (2832768/209584128) finish=15.8min speed=217905K/sec
unused devices: <none>
As the md0 line of the output shows, the /dev/md0 device has been created in the RAID 10 configuration using the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd devices. The blocks line shows the layout that was used for this example (2 copies in the near configuration), and the resync line shows the progress of the build. You can continue the guide while this process completes.
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
- df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1003M 23G 5% /
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf file. We can automatically scan the active array and append the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 10 array should now automatically be assembled and mounted each boot.
In this guide, we demonstrated how to create various types of arrays using Linux’s mdadm software RAID utility. RAID arrays offer some compelling redundancy and performance enhancements over using multiple disks individually.
Once you have settled on the type of array needed for your environment and created the device, you will need to learn how to perform day-to-day management with mdadm. Our guide on how to manage RAID arrays with mdadm can help get you started.