RAID 10 with mdadm

If I had to pick one fault of Linux, it would be that for almost everything, the Linux user is inundated with hundreds of possible solutions. This is both a blessing and a curse – for the veterans, it means that we can pick the tool that most matches how we prefer to operate; for the uninitiated, it means that we’re so overwhelmed with options it’s hard to know where to begin.

One exception is software RAID – there’s really only one option: mdadm. I can already hear the LVM advocates screaming at me; no, I don’t have any problem with LVM, and in fact I do use it as well – I just see it as filling a different role than mdadm. I won’t go into the nuances here – just trust me when I say that I use and love both.

There are quite a few how-tos, walkthroughs, and tutorials out there on using mdadm. None that I found, however, came quite close enough to what I was trying to do on my newest computer system. And even when I did get it figured out, the how-tos I read failed to mention what turned out to be a very critical piece of information, the lack of which almost led to me destroying my newly-created array.

So without further ado, I will walk you through how I created a storage partition on a RAID 10 array using 4 hard drives (my system boots off of a single, smaller hard drive).

The first thing you want to do is make sure you have a plan of attack: What drives/partitions are you going to use? What RAID level? Where is the finished product going to be mounted?

One method that I’ve seen used frequently is to create a single array that’s used for everything, including the system. There’s nothing wrong with that approach, but here’s why I decided on having a separate physical drive for my system to boot from: simplicity. If you want to use a software RAID array for your boot partition as well, there are plenty of resources telling you how you’ll need to install your system and configure your boot loader.

For my setup, I chose a lone 80 GB drive to house my system. For my array, I selected four 750 GB drives. All 5 are SATA. After I installed Ubuntu 9.04 on my 80 GB drive and booted into it, it was time to plan my RAID array.

kromey@vmsys:~$ ls -1 /dev/sd*
/dev/sda
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde
/dev/sde1
/dev/sde2
/dev/sde5

As you can probably tell, my system is installed on sde. While I would have been happier with it being labeled sda, it doesn’t really matter. sda through sdd then are the drives that I want to combine into a RAID.
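
If you want to be absolutely sure which physical disk is which before you start partitioning, the symlinks under /dev/disk/by-id map each device name to a drive model and serial number (this is a general udev feature rather than anything specific to my setup; the grep just hides the per-partition entries):

kromey@vmsys:~$ ls -l /dev/disk/by-id/ | grep -v part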

mdadm can operate on raw devices directly, but I prefer to give it partitions, so the next step is to create them on my drives. Since I want to use each entire drive, I'll just create a single partition spanning each disk. Using fdisk, I set the partition type to fd (Linux raid autodetect). When I'm done, each drive looks like this:

kromey@vmsys:~$ sudo fdisk -l /dev/sda

Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       91201   732572001   fd  Linux raid autodetect
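
For reference, the interactive fdisk session that produces that layout boils down to a handful of keystrokes per drive. This is only a rough sketch with the prompts trimmed away, and you repeat it for sdb, sdc, and sdd:

kromey@vmsys:~$ sudo fdisk /dev/sda
  n    (new partition)
  p    (primary)
  1    (partition number 1; accept the defaults for first and last cylinder to span the whole disk)
  t    (change the partition type)
  fd   (Linux raid autodetect)
  w    (write the table and exit)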

Now that my partitions are in place, it’s time to pull out mdadm. I won’t re-hash everything that’s in the man pages here, and instead just demonstrate what I did. I’ve already established that I want a RAID 10 array, and setting that up with mdadm is quite simple:

kromey@vmsys:~$ sudo mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

A word of caution: mdadm --create will return immediately, and for all intents and purposes will look like it’s done and ready. It’s not – it takes time for the array to be synchronized. It’s probably usable before then, but to be on the safe side wait until it’s done. My array took about 3 hours (give or take – I was neither watching it closely nor holding a stopwatch!). Wait until your /proc/mdstat looks something like this:

kromey@vmsys:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid10 sdb1[1] sda1[0] sdc1[2] sdd1[3]
      1465143808 blocks 64K chunks 2 near-copies [4/4] [UUUU]

Edit: As Jon points out in the comments, you can run watch cat /proc/mdstat to get near-real-time status and know when your array is ready.
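
If you'd like a more verbose view than /proc/mdstat while the array builds, mdadm --detail on the array reports its state and sync progress as well; either of these will do:

kromey@vmsys:~$ watch cat /proc/mdstat
kromey@vmsys:~$ sudo mdadm --detail /dev/md0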

That’s it! All that’s left to do now is create a partition, throw a filesystem on there, and then mount it.

kromey@vmsys:~$ sudo fdisk /dev/md0
kromey@vmsys:~$ sudo mkfs -t ext4 /dev/md0p1
kromey@vmsys:~$ sudo mkdir /srv/hoard
kromey@vmsys:~$ sudo mount /dev/md0p1 /srv/hoard/

Ah, how sweet it is!

kromey@vmsys:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sde1              71G  3.6G   64G   6% /
tmpfs                 3.8G     0  3.8G   0% /lib/init/rw
varrun                3.8G  116K  3.8G   1% /var/run
varlock               3.8G     0  3.8G   0% /var/lock
udev                  3.8G  184K  3.8G   1% /dev
tmpfs                 3.8G  104K  3.8G   1% /dev/shm
lrm                   3.8G  2.5M  3.8G   1% /lib/modules/2.6.28-14-generic/volatile
/dev/md0p1            1.4T   89G  1.2T   7% /srv/hoard
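
To have the array mounted automatically at boot, add a matching line to /etc/fstab; something along these lines works (adjust the filesystem type and mount options to suit your setup):

/dev/md0p1   /srv/hoard   ext4   defaults   0   2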

Now comes the gotcha that nearly sank me. Well, it wouldn't have been a total loss; at that point I'd only copied data from an external hard drive to my new array, and could easily have done so again.

Everything I read told me that Debian-based systems (of which Ubuntu is, of course, one) were set up to automatically detect and activate your mdadm-created arrays on boot, and that you don't need to do anything beyond what I've already described. Now, maybe I did something wrong (and if so, please leave a comment correcting me!), but that wasn't the case for me: after a reboot I was left without an assembled array, while something was somehow keeping sdb busy so I couldn't even assemble the array manually except in a degraded state! So I had to edit my /etc/mdadm/mdadm.conf file like so:

kromey@vmsys:~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
#DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
DEVICE /dev/sd[abcd]1

ARRAY /dev/md0 super-minor=0

# This file was auto-generated on Mon, 03 Aug 2009 21:30:49 -0800
# by mkconf $Id$

With the default configuration it certainly looked like my array should have been detected and started when I rebooted, but it wasn't. I commented out the default DEVICE line and added an explicit one, then added an explicit declaration for my array; now it's properly assembled when my system reboots, which means the fstab entry no longer provokes a boot-stopping error, and life is all-around happy!
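
Incidentally, if you ever find yourself staring at a missing or partially-assembled array after a reboot, there's no need to re-create anything; once mdadm.conf is straightened out, the array can usually be brought back by hand along these lines (stop any half-assembled remnant first):

kromey@vmsys:~$ sudo mdadm --stop /dev/md0
kromey@vmsys:~$ sudo mdadm --assemble --scan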

Update 9 April 2011: In preparation for a server rebuild, I’ve been experimenting with mdadm quite a bit more, and I’ve found a better solution to adding the necessary entries to the mdadm.conf file. Actually, two new solutions:

  1. Configure your RAID array during the Ubuntu installation. Your mdadm.conf file will be properly updated with no further action necessary on your part, and you can even have those nice handy fstab entries to boot!
  2. Run the command mdadm --examine --scan --config=mdadm.conf >> /etc/mdadm/mdadm.conf as root (see the note just after this list). Then open up mdadm.conf in your favorite editor to move the added line(s) into a more reasonable location.
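
A note on that second solution: the >> redirection is performed by your shell, not by mdadm, so a plain sudo in front of the command isn't enough; the simplest workaround is to do it from a root shell. And since Debian and Ubuntu keep a copy of mdadm.conf inside the initramfs, regenerating that afterwards shouldn't hurt either. Something like:

kromey@vmsys:~$ sudo -i
root@vmsys:~# mdadm --examine --scan --config=mdadm.conf >> /etc/mdadm/mdadm.conf
root@vmsys:~# update-initramfs -u
root@vmsys:~# exit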

On my new server, I’ll be following solution (1), but on my existing system described in this post, I have taken solution (2); my entire file now looks like this:

kromey@vmsys:~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid10 num-devices=4 UUID=46c6f1ed:434fd8b4:0eee10cd:168a240d

# This file was auto-generated on Mon, 03 Aug 2009 21:30:49 -0800
# by mkconf $Id$

Notice that I’m again using the default DEVICE line, and notice the new ARRAY line that’s been added. This seems to work a lot better — since making this change, I no longer experience the occasional (and strange) “device is busy” errors during boot (always complaining about /dev/sdb for some reason), making the boot-up process just that much smoother!


15 Responses to RAID 10 with mdadm

  1. Jon says:

    Kromus,

    I have a suggestion. When going through my 4 TB mdadm RAID setup, I used:

    $ watch cat /proc/mdstat

    To literally ‘watch’ my drive get built. Check it out. Otherwise, solid post. Greetz.

  2. Kromey says:

    In point of fact, I did use ‘watch’, I just didn’t watch it closely!

  3. Jon says:

    The suggestion is for the other peepz lookin at this article then. Put it in there tho!

  4. n8v says:

    TL;DR. But nice syntax highlighting!

    Hey no seriously, this is great stuff to have out there. Thank you for adding to the “good quality tech howto” part of the Internet.

  5. lanky says:

    I’m gonna try this out when I get my new hard drives

  6. pingtoft says:

    AFAIK it’s a bug in mdadm that causes mdadm.conf not to be updated.
    There are two simple workarounds here: http://ubuntuforums.org/showpost.php?p=7220050&postcount=19

  7. emqueuekay says:

    Great HowTo, thanks a lot! By lucky coincidence I needed to set up pretty much exactly what you’ve described here. Thanks to your instructions I’ve now got four 2TB Caviar Black disks in a RAID-10. Sweet!! One comment: it doesn’t seem necessary to create the partition on /dev/md0. I just did ‘mkfs -t ext3 /dev/md0’. I wonder if that has anything to do with your array initially not being recognized upon reboot.

    • Kromey says:

      Glad you found it useful! 🙂

      All the documentation I’d found said that I did need to create the partition on /dev/md0, but I’ll admit that I didn’t try to just toss a filesystem on there anyway. I doubt, however, that that had anything to do with it not being found on reboot – that seems to lie fully upon mdadm.conf not being correctly updated.

  8. Jonathan says:

    For me the command mdadm --examine --scan --config=mdadm.conf >> /etc/mdadm/mdadm.conf didn’t work; the mdadm.conf didn’t change. I had to add the entry with the UUID manually. Use the command mdadm --detail /dev/md0 to find the right UUID.

    • Kromey says:

      Strange that it didn’t change the file at all; did you try running the command without the output redirection to see what it’s actually generating? Possibly something unexpected there.

      Good tip on the ‘mdadm --detail’ bit, though; I didn’t include it in my post because I’m not aware of any situation where the command I used wouldn’t work. Good to have a fallback if it does fail, though!

  9. Nitestick says:

    Brilliant post, this helped me a lot. I’m currently building a 4x2TB RAID 10 for my home server and your instructions made it simple and smooth (aside from me having a blonde moment with fdisk).

  10. Nitestick says:

    All good again; it was pretty much what you warned about with mdadm.conf and assembling the array. When I rebooted it didn’t recognise the array. I did follow your instructions on using mdadm --examine --scan --config=mdadm.conf to generate the conf line, however I had to manually add the raid level and number of devices for it to work. I’m glad I didn’t try writing anything to it :p. Added that line, rebooted again, and now I’m a happy man.

  11. Gavin says:

    Nice post. One cute feature of mdadm Raid10 that you didn’t mention is that it doesn’t require an even number of drives (Raid10 w/N=2 works fine on 3 or 5 drives, for instance, giving available space of 3/2 or 5/2 of a drive in each case), and it also works nicely for values of N>2, which are reasonably unusual in traditional Raid1+0 layouts.

    For instance, with today’s large disks you can get extra safety by using N=3, which would allow you to lose 2 disks without losing your data. If you used 4 * 2TB disks with N=3, for instance, you’d get 8/3TB of usable space, and be able to survive the loss of 2 disks. Gives you some interesting tradeoffs to explore.

  12. Brian says:

    mdadm has no problem with using raw devices. My RAID 1 consists of the raw devices of two identical 1.5TB SATA drives. I’ve also never used mdadm.conf, since debian comes with nice init scripts that scan all block devices and start the array automatically. Grub also detects my array automatically, so I could boot off of the RAID.
