CentOS / RHEL – Managing Software RAID1 arrays


mdadm on CentOS or RHEL

The tool that we are going to use to create, assemble, manage, and monitor our software RAID-1 is called mdadm (short for multiple device admin). On Linux distros such as CentOS and RHEL, mdadm comes pre-installed.

To start the RAID monitoring service and configure it to start automatically at boot:

# service mdmonitor start
# chkconfig mdmonitor on
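
To confirm that the service is running and set to start at boot (output varies slightly by release):

# service mdmonitor status
# chkconfig --list mdmonitor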

Partitioning Hard Drives

In this example, we use two 8 GB USB sticks, detected as /dev/sdb and /dev/sdc:

# dmesg | less
[   60.014863] sd 3:0:0:0: [sdb] 15826944 512-byte logical blocks: (8.10 GB/7.54 GiB)
[   75.066466] sd 4:0:0:0: [sdc] 15826944 512-byte logical blocks: (8.10 GB/7.54 GiB)

Let’s use fdisk to create a primary partition on each device. Use GREAT CARE with partitioning commands; they can cause data loss if run against the wrong device.

# fdisk /dev/sdb

Press ‘p’ to print the current partition table:

(If one or more partitions are found, they can be deleted with the ‘d’ option; the ‘w’ option is then used to write the changes.)

Since no partitions are found, we will create a new partition [‘n’] as a primary partition [‘p’], assign partition number [‘1’] to it, and then indicate its size. You can press the Enter key to accept the proposed default values, or enter a value of your choosing, as in the example session below.
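
An illustrative fdisk dialog for /dev/sdb (prompts abbreviated; the exact wording and units vary between fdisk versions):

Command (m for help): n
Partition type: p (primary)
Partition number (1-4, default 1): 1
First sector (...): <press Enter to accept the default>
Last sector, +sectors or +size{K,M,G} (...): <press Enter to use the whole device>
Command (m for help): w   (writes the new partition table and exits)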

Now repeat the same process for /dev/sdc.

If we have two drives of different sizes, say 750 GB and 1 TB, we should create a 750 GB primary partition on each of them and use the remaining space on the larger drive for another purpose, independent of the RAID array.
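
For example, when fdisk asks for the end of the partition on the 1 TB drive, you could enter an explicit size instead of accepting the default (the +size shorthand shown here is supported by the fdisk versions shipped with CentOS/RHEL 6 and 7):

Last sector, +sectors or +size{K,M,G} (...): +750G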

Create a RAID-1 Array

Once you have created the primary partition on each drive, use the following command to create a RAID-1 array:

# mdadm -Cv /dev/md0 -l1 -n2 /dev/sdb1 /dev/sdc1

Where:

  • -Cv: creates an array and produces verbose output.
  • /dev/md0: is the name of the array.
  • -l1 (l as in “level”): indicates that this will be a RAID-1 array.
  • -n2: indicates that we will add two partitions to the array, namely /dev/sdb1 and /dev/sdc1.

The above command is equivalent to:

# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

Alternatively, if you want to include a spare device that can replace a faulty disk in the future, add '--spare-devices=1 /dev/sdd1' to the above command.
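
For instance, assuming a third partition /dev/sdd1 prepared in the same way, the full command with a spare would be:

# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1 --spare-devices=1 /dev/sdd1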

Answer “y” when prompted if you want to continue creating an array, then press Enter:

You can check the progress with the following command:
# cat /proc/mdstat
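
While the initial synchronization is running, the output looks roughly like this (device order, block counts, and speeds below are illustrative):

Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      7903680 blocks super 1.2 [2/2] [UU]
      [====>................]  resync = 21.7% (1716800/7903680) finish=5.9min speed=17422K/sec

unused devices: <none>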

Another way to obtain more information about a RAID array (both while it’s being assembled and after the process is finished) is:

# mdadm --query /dev/md0
# mdadm --detail /dev/md0 (or mdadm -D /dev/md0)

Of the information provided by 'mdadm -D', perhaps the most useful is the state of the array. The active state means that there is currently I/O activity happening. Other possible states are clean (all I/O activity has been completed), degraded (one of the devices is faulty or missing), resyncing (the system is recovering from an unclean shutdown such as a power outage), or recovering (a new drive has been added to the array, and data is being copied from the other drive onto it).
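
For reference, an abridged 'mdadm -D' listing after the initial sync has completed might look like this (values illustrative):

# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
     Raid Level : raid1
     Array Size : 7903680 (7.54 GiB 8.09 GB)
   Raid Devices : 2
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0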

Formatting and Mounting a RAID Array

Format the array (with ext4 in this example):

# mkfs.ext4 /dev/md0

Mount the array, and verify that it was mounted correctly:

# mount /dev/md0 /mnt
# mount
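
You can also confirm the size and mount point with df (figures illustrative):

# df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        7.4G   17M  7.0G   1% /mnt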

Monitor a RAID Array

The mdadm tool comes with RAID monitoring capability built in. When mdadm is set to run as a daemon (which is the case with our RAID setup), it periodically polls existing RAID arrays, and reports on any detected events via email notification or syslog logging. Optionally, it can also be configured to invoke contingency commands (e.g., retrying or removing a disk) upon detecting any critical errors.

By default, mdadm scans all existing partitions and MD arrays, and logs any detected event via syslog (/var/log/messages on CentOS/RHEL). Alternatively, you can specify the devices and RAID arrays to scan in mdadm.conf, located at /etc/mdadm/mdadm.conf (Debian-based) or /etc/mdadm.conf (Red Hat-based), in the following format. If mdadm.conf does not exist, create one.

DEVICE /dev/sd[bcde]1 /dev/sd[ab]1
ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md1 devices=/dev/sdd1,/dev/sde1
.....
# optional email address to notify events
MAILADDR your@email.com
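
Instead of typing the ARRAY lines by hand, you can let mdadm generate them from the running arrays and append the result to the file (review it afterwards):

# mdadm --detail --scan >> /etc/mdadm.conf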

After modifying the mdadm configuration, restart the mdadm daemon:

On CentOS/RHEL 7:

# systemctl restart mdmonitor

On CentOS/RHEL 6:

# service mdmonitor restart

Auto-mount a RAID Array

Now we will add an entry to /etc/fstab to mount the array at /mnt automatically during boot (you can specify any other mount point):

# echo "/dev/md0 /mnt ext4 defaults 0 2" >> /etc/fstab
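
Because md device names can occasionally change between boots, an optional and more robust variant is to reference the filesystem UUID reported by blkid instead of /dev/md0 (the UUID below is a placeholder):

# blkid /dev/md0
/dev/md0: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"
# echo "UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt ext4 defaults 0 2" >> /etc/fstab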

To verify that mount is working, we unmount the array, restart mdadm, and remount. We can see that /dev/md0 has been mounted as per the entry added to /etc/fstab:

# umount /mnt
# systemctl restart mdmonitor    (on CentOS/RHEL 7)
# service mdmonitor restart      (on CentOS/RHEL 6)
# mount -a

To test the array, copy any file into /mnt:
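
For example:

# cp /etc/passwd /mnt
# ls -l /mnt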


Simulating Drive Failures

Unmount the array:

# umount /mnt

Note how 'mdadm -D /dev/md0' reflects the changes after each of the commands below is run.

# mdadm /dev/md0 --fail /dev/sdb1      # marks /dev/sdb1 as faulty
# mdadm --remove /dev/md0 /dev/sdb1    # removes /dev/sdb1 from the array

After replacing a bad drive with a new drive, re-add the drive:

# mdadm /dev/md0 --add /dev/sdb1

The data is then rebuilt onto /dev/sdb1 from /dev/sdc1:
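
You can follow the rebuild in /proc/mdstat; while it runs, a recovery line similar to the one below appears (figures illustrative) and disappears once the rebuild completes:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[2] sdc1[1]
      7903680 blocks super 1.2 [2/1] [_U]
      [======>..............]  recovery = 31.4% (2482176/7903680) finish=4.5min speed=19856K/sec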

Note that the steps detailed above apply to servers with hot-swappable drives. If the server does not support hot-swapping, stop the RAID-1 array and shut down the server before replacing the drive:

# mdadm --stop /dev/md0
# shutdown -h now

After installing the new drive, re-assemble the array from the surviving member and then add the replacement partition (the array must be running before a new member can be added; use --run if mdadm refuses to start the degraded array):

# mdadm --assemble /dev/md0 /dev/sdc1
# mdadm /dev/md0 --add /dev/sdb1
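
Once the array is running again and the new member has been added, confirm its state and remount:

# cat /proc/mdstat
# mdadm -D /dev/md0
# mount -a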
