The mdadm utility can be used to create and manage storage arrays using Linux’s software RAID capabilities. Administrators have great flexibility in coordinating their individual storage devices and creating logical storage devices that have greater performance or redundancy characteristics.
In this guide, you will set up several different RAID configurations on an Ubuntu server.
To follow the steps in this guide, you will need:
sudo privileges on an Ubuntu server. To learn how to set up an account with these privileges, follow our Ubuntu initial server setup guide.
Info: Due to the inefficiency of RAID setups on virtual private servers, we don’t recommend deploying a RAID setup on DigitalOcean Droplets. The efficiency of data center disk replication makes the benefits of a RAID negligible relative to a setup on bare-metal hardware. This tutorial aims to be a reference for a conventional RAID setup.
You can skip this section for now if you have not yet set up any arrays. This guide will introduce a number of different RAID levels, and if you wish to follow along and complete each one, you will likely want to reuse your storage devices after each section. You can return to this section, Resetting Existing RAID Devices, whenever you need to reset your component storage devices before testing a new RAID level.
Warning: This process will completely destroy the array and any data written to it. Make sure that you are operating on the correct array and that you have copied any data you need to retain prior to destroying the array.
Begin by finding the active arrays in the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdc[1] sdd[0]
209584128 blocks super 1.2 512k chunks
unused devices: <none>
Then unmount the array from the filesystem:
- sudo umount /dev/md0
Now stop and remove the array:
- sudo mdadm --stop /dev/md0
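To confirm that the array is no longer active, you can check the /proc/mdstat file again; the md0 entry should be gone:
- cat /proc/mdstat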
Find the devices that were used to build the array with the following command:
Warning: Keep in mind that the /dev/sd* names can change any time you reboot. Check them every time to make sure you are operating on the correct devices.
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G linux_raid_member disk
sdb 100G linux_raid_member disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
After discovering the devices used to create an array, zero their superblocks, which hold metadata for the RAID setup. Zeroing the superblock removes the RAID metadata and resets each device to normal:
- sudo mdadm --zero-superblock /dev/sda
- sudo mdadm --zero-superblock /dev/sdb
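If you want to verify that the metadata was removed, mdadm --examine should now report that no md superblock is detected on each device (the exact wording varies by version):
- sudo mdadm --examine /dev/sda /dev/sdb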
It’s recommended to also remove any persistent references to the array. Edit the /etc/fstab file and comment out or remove the reference to your array. You can comment it out by inserting a hash symbol (#) at the beginning of the line, using nano or your preferred text editor:
- sudo nano /etc/fstab
. . .
# /dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
Also, comment out or remove the array definition from the /etc/mdadm/mdadm.conf file:
- sudo nano /etc/mdadm/mdadm.conf
. . .
# ARRAY /dev/md0 metadata=1.2 name=mdadmwrite:0 UUID=7261fb9c:976d0d97:30bc63ce:85e76e91
Finally, update the initramfs again so that the early boot process does not try to bring an unavailable array online:
- sudo update-initramfs -u
From here, you should be ready to reuse the storage devices individually, or as components of a different array.
The RAID 0 array works by breaking up data into chunks and striping it across the available disks. This means that each disk contains a portion of the data and that multiple disks will be referenced when retrieving information.
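Because the data is striped rather than duplicated, the capacity of a RAID 0 array is roughly the sum of its members: the two 100G disks used in this example yield about 200G of usable space (the ~196G filesystem shown later in this section). The trade-off is that the failure of any single disk destroys the entire array.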
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
In this example, you have two disks without a filesystem, each 100G in size. These devices have been given the /dev/sda and /dev/sdb identifiers for this session and will be the raw components used to build the array.
To create a RAID 0 array with these components, pass them into the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this command example, you will name the device /dev/md0 and include the two disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
Confirm that the RAID was successfully created by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdb[1] sda[0]
209584128 blocks super 1.2 512k chunks
unused devices: <none>
This output reveals that the /dev/md0 device was created in the RAID 0 configuration using the /dev/sda and /dev/sdb devices.
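For a more detailed view of the array, including the chunk size and the state of each component device, you can also query it directly (the exact output varies by system):
- sudo mdadm --detail /dev/md0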
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Then, create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem with the following command:
- sudo mount /dev/md0 /mnt/md0
After, check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is now mounted and accessible.
To make sure that the array is reassembled automatically at boot, you will have to adjust the /etc/mdadm/mdadm.conf file. You can automatically scan the active array and append the file with the following:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
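The line appended to /etc/mdadm/mdadm.conf will resemble the commented-out example from the reset section, but with a name and UUID specific to your array. Using placeholder values for illustration, it looks something like the following:
ARRAY /dev/md0 metadata=1.2 name=yourhostname:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx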
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 0 array will now automatically assemble and mount each boot.
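If you’d like to confirm the /etc/fstab entry is valid without rebooting, you can unmount the array and remount everything defined in the file:
- sudo umount /mnt/md0
- sudo mount -a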
You’re now finished with your RAID setup. If you want to try a different RAID, follow the resetting instructions at the beginning of this tutorial to proceed with creating a new RAID array type.
The RAID 1 array type is implemented by mirroring data across all available disks. Each disk in a RAID 1 array gets a full copy of the data, providing redundancy in the event of a device failure.
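Since every disk holds a complete copy of the data, the usable capacity of a RAID 1 array equals the size of a single member: the two 100G disks in this example yield roughly 100G of usable space (the ~99G filesystem shown later in this section), and the array survives the failure of either disk.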
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
In this example, you have two disks without a filesystem, each 100G in size. These devices have been given the /dev/sda and /dev/sdb identifiers for this session and will be the raw components you use to build the array.
To create a RAID 1 array with these components, pass them into the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this command example, you will name the device /dev/md0 and include the disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
If the component devices you are using are not partitions with the boot flag enabled, you will likely receive the following warning. It is safe to respond with y and continue:
Output
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 104792064K
Continue creating array? y
The mdadm tool will start to mirror the drives. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the mirroring by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb[1] sda[0]
104792064 blocks super 1.2 [2/2] [UU]
[====>................] resync = 20.2% (21233216/104792064) finish=6.9min speed=199507K/sec
unused devices: <none>
This output shows that the /dev/md0 device was created in the RAID 1 configuration using the /dev/sda and /dev/sdb devices. The resync line reveals the progress of the mirroring. You can continue to the next step while this process completes.
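If you’d rather watch the resync progress update in place, and the watch utility is available (it is installed by default on Ubuntu), you can refresh the file every few seconds:
- watch -n5 cat /proc/mdstat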
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Then, create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by running the following:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 99G 60M 94G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, you have to adjust the /etc/mdadm/mdadm.conf file. You can automatically scan the active array and append the file with the following:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterward, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 1 array will now automatically assemble and mount each boot.
You’re now finished with your RAID setup. If you want to try a different RAID, follow the resetting instructions at the beginning of this tutorial to proceed with creating a new RAID array type.
The RAID 5 array type is implemented by striping data across the available devices. One component of each stripe is a calculated parity block. If a device fails, the parity block and the remaining blocks can be used to calculate the missing data. The device that receives the parity block is rotated so that each device has a balanced amount of parity information.
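Since one disk’s worth of space holds parity, the usable capacity of a RAID 5 array is (number of disks − 1) × member size: the three 100G disks in this example yield roughly 200G of usable space (the ~197G filesystem shown later in this section), while tolerating the failure of any single disk.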
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
You have three disks without a filesystem, each 100G in size. These devices have been given the /dev/sda, /dev/sdb, and /dev/sdc identifiers for this session and will be the raw components you use to build the array.
To create a RAID 5 array with these components, pass them into the mdadm --create command. You will have to specify the device name you wish to create, the RAID level, and the number of devices. In this command example, you will name the device /dev/md0 and include the disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
The mdadm tool will start to configure the array. It uses the recovery process to build the array for performance reasons. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209582080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 0.9% (957244/104791040) finish=18.0min speed=95724K/sec
unused devices: <none>
This output shows that the /dev/md0 device was created in the RAID 5 configuration using the /dev/sda, /dev/sdb, and /dev/sdc devices. The recovery line shows the progress of the build.
Warning: Due to the way that mdadm builds RAID 5 arrays, while the array is still building, the number of spares in the array will be inaccurately reported. This means that you must wait for the array to finish assembling before updating the /etc/mdadm/mdadm.conf file. If you update the configuration file while the array is still building, the system will have incorrect information about the array state and will be unable to assemble it automatically at boot with the correct name.
You can continue the guide while this process completes.
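Alternatively, if you would rather block until the build finishes before making any configuration changes, mdadm can wait on the array for you using its --wait option, which returns once any resync or recovery activity completes:
- sudo mdadm --wait /dev/md0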
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem with the following:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 197G 60M 187G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, you have to adjust the /etc/mdadm/mdadm.conf file.
Warning: As mentioned previously, before you adjust the configuration, check again to make sure the array has finished assembling. Completing the following steps before the array is built will prevent the system from assembling the array correctly on reboot.
You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
This output reveals that the rebuild is complete. Now, you can automatically scan the active array and append the file:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 5 array will now automatically assemble and mount each boot.
You’re now finished with your RAID setup. If you want to try a different RAID, follow the resetting instructions at the beginning of this tutorial to proceed with creating a new RAID array type.
The RAID 6 array type is implemented by striping data across the available devices. Two components of each stripe are calculated parity blocks. If one or two devices fail, the parity blocks and the remaining blocks can be used to calculate the missing data. The devices that receive the parity blocks are rotated so that each device has a balanced amount of parity information. This is similar to a RAID 5 array, but allows for the failure of two drives.
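With two parity blocks per stripe, two disks’ worth of space is consumed by parity, so usable capacity is (number of disks − 2) × member size: the four 100G disks in this example yield roughly 200G of usable space (the ~197G filesystem shown later in this section), while tolerating any two simultaneous disk failures.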
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
In this example, you have four disks without a filesystem, each 100G in size. These devices have been given the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd identifiers for this session and will be the raw components used to build the array.
To create a RAID 6 array with these components, pass them into the mdadm --create command. You have to specify the device name you wish to create, the RAID level, and the number of devices. In the following command example, you will name the device /dev/md0 and include the disks that will build the array:
- sudo mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
The mdadm tool will start to configure the array. It uses the recovery process to build the array for performance reasons. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdd[3] sdc[2] sdb[1] sda[0]
209584128 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
[>....................] resync = 0.6% (668572/104792064) finish=10.3min speed=167143K/sec
unused devices: <none>
This output shows that the /dev/md0 device has been created in the RAID 6 configuration using the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd devices. The resync line shows the progress of the build. You can continue the guide while this process completes.
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem with the following:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 197G 60M 187G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, you will have to adjust the /etc/mdadm/mdadm.conf file. You can automatically scan the active array and append the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 6 array will now automatically assemble and mount each boot.
You’re now finished with your RAID setup. If you want to try a different RAID, follow the resetting instructions at the beginning of this tutorial to proceed with creating a new RAID array type.
The RAID 10 array type is traditionally implemented by creating a striped RAID 0 array composed of sets of RAID 1 arrays. This nested array type gives both redundancy and high performance, at the expense of large amounts of disk space. The mdadm utility has its own RAID 10 type that provides the same type of benefits with increased flexibility. It is not created by nesting arrays, but has many of the same characteristics and guarantees. You will be using the mdadm RAID 10 here.
mdadm-style RAID 10 is configurable. By default, two copies of each data block are stored in what is called the near layout. The possible layouts that dictate how each data block is stored are as follows:
- near: The default arrangement. Copies of each chunk are written consecutively, so copies of a data block land at roughly the same position on multiple disks.
- far: Copies are written to distant parts of the devices, for instance one copy near the beginning of one disk and another halfway through a different disk. This can improve sequential read performance on spinning disks at the expense of write performance.
- offset: Each stripe is copied, offset by one drive, keeping copies close together on disk while reducing seeking for some workloads.
You can find out more about these layouts by checking out the RAID10 section of this man page:
- man 4 md
You can also find this man page online.
To start, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
vdb 466K iso9660 disk
In this example, you have four disks without a filesystem, each 100G in size. These devices have been given the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd identifiers for this session and will be the raw components used to build the array.
To create a RAID 10 array with these components, pass them into the mdadm --create command. You have to specify the device name you wish to create, the RAID level, and the number of devices. In the following command example, you will name the device /dev/md0 and include the disks that will build the array.
You can set up two copies using the near layout by not specifying a layout and copy number:
- sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
If you want to use a different layout or change the number of copies, you will have to use the --layout= option, which takes a layout and copy identifier. The layouts are n for near, f for far, and o for offset. The number of copies to store is appended afterward.
For instance, to create an array that has three copies in the offset layout, the command would include the following:
- sudo mdadm --create --verbose /dev/md0 --level=10 --layout=o3 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
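Capacity for an mdadm RAID 10 array is the total raw space divided by the number of copies: with four 100G disks, the default two-copy layout yields roughly 200G of usable space (the ~197G filesystem shown later in this section), while the three-copy offset example above would yield about 133G.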
The mdadm tool will start to configure the array. It uses the recovery process to build the array for performance reasons. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
- cat /proc/mdstat
Output
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid10 sdd[3] sdc[2] sdb[1] sda[0]
209584128 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
[===>.................] resync = 18.1% (37959424/209584128) finish=13.8min speed=206120K/sec
unused devices: <none>
This output shows that the /dev/md0 device has been created in the RAID 10 configuration using the /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd devices. The line with the block count also notes the layout used for this example (two copies in the near configuration), and the resync line shows the progress of the build. You can continue the guide while this process completes.
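Once the array exists, you can confirm which layout it is using; mdadm’s detailed report includes a Layout line (for this example it should show the near layout with two copies):
- sudo mdadm --detail /dev/md0 | grep -i layout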
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem with the following:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
- df -h -x devtmpfs -x tmpfs
Output
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1.4G 23G 6% /
/dev/vda15 105M 3.4M 102M 4% /boot/efi
/dev/md0 197G 60M 187G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, you will have to adjust the /etc/mdadm/mdadm.conf file. You can automatically scan the active array and append the file by running the following:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 10 array will now automatically assemble and mount each boot.
In this guide, you learned how to create various types of arrays using Linux’s mdadm software RAID utility. RAID arrays offer some compelling redundancy and performance enhancements over using multiple disks individually.
Once you have settled on the type of array needed for your environment and created the device, you can learn how to perform day-to-day management with mdadm. Our guide on how to manage RAID arrays with mdadm on Ubuntu can help get you started.