
Setting up a RAID 1 Array in Ubuntu

by Ben Hepworth on October 6th, 2009

For this tutorial, I already have a functioning Ubuntu 9.04 server with the OS installed on a separate drive. I will be adding two hard drives of the same size (1.5 TB) and mirroring them using RAID 1, so the data stored on the array is always mirrored on a second disk in case one of the drives dies. This tutorial has 3 sections:

  1. Setting up the Array
  2. Simulating a Drive Failure
  3. Setting up Monitoring

Much of the information in this guide comes from reading many times through the Software RAID HOWTO at http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html, along with answers scoured from various sites and forums. I highly recommend keeping the tldp.org site open as a reference as you go through this tutorial. The purpose of this guide is to go from start to finish setting up a RAID 1 array in an easy-to-read fashion for someone who has never done RAID before and doesn’t know much about it.

Setting up the RAID 1 Array

  1. Plug in both hard drives and boot up your system.
  2. System –> Administration –> Partition Editor or from a command line run:
  3. $ sudo gparted
            
  4. In the upper right hand corner, you’ll see a drop down box. You should see all of your devices there. Find the two devices that you will be using for the raid array (mine are /dev/sdb and /dev/sdc)
  5. (Screenshot: GParted showing the device drop-down in the upper right)

  6. Select the drive in the drop down box in the upper right that you want to work on first (I’ll start with /dev/sdb in my example)
  7. Right-click on the unpartitioned space and select “New”
  8. (Screenshot: GParted “Create new Partition” dialog)

  9. For File System select “unformatted”
  10. (Screenshot: selecting “unformatted” as the file system)

  11. Click “Add”. You’ll now notice that it says “1 operation pending” in the status bar:
    (Screenshot: GParted showing “1 operation pending” in the status bar)

  12. Click “Apply”
  13. Once it is done, right click on the partition and select “Manage Flags”
  14. (Screenshot: the “Manage Flags” option)

  15. Check the “raid” checkbox
  16. (Screenshot: the “raid” flag checked)

  17. Click “Close” and repeat steps 4-10 for the second drive
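    If you prefer to do the partitioning from a terminal instead of GParted, a reasonably recent parted can create the same unformatted, raid-flagged partition. This is only a sketch: it assumes the disk is /dev/sdb and that it is safe to wipe it, and you would repeat the same commands for /dev/sdc:
    $ sudo parted /dev/sdb mklabel msdos            # new empty MBR partition table (destroys existing data)
    $ sudo parted /dev/sdb mkpart primary 0% 100%   # one unformatted partition spanning the whole disk
    $ sudo parted /dev/sdb set 1 raid on            # same as ticking the "raid" flag in GParted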
  18. Run sudo fdisk -l to confirm the names of the devices that will be used for the raid:
  19. (Screenshot: fdisk -l output)

  20. As you can see from the fdisk output, the device names I’ll be using are /dev/sdb1 and /dev/sdc1.
  21. Open up a terminal and run the following command:
  22. $ sudo mdadm --create /dev/md0 --chunk=64 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1
            
  23. If you get “command not found”, then you probably don’t have mdadm installed. Run the following command to install mdadm, then rerun the command above to create the array:
  24. $ sudo apt-get install mdadm
            
  25. To check the status of the array that is building, run the following command:
  26. $ cat /proc/mdstat
            
  27. Here is an example of what the output should look like. It will take a while – mine took over 5 hours to build:
    ben@Sokka:~$ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdc1[1] sdb1[0]
          1465135936 blocks [2/2] [UU]
          [=>..................]  resync = 6.2% (92082752/1465135936) finish=266.7min speed=85788K/sec
    unused devices: <none>
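    Rather than re-running the command by hand, you can let watch refresh the status every few seconds (Ctrl-C to exit):
    $ watch -n 10 cat /proc/mdstat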
            
  28. When it is complete, it will look something like this:
  29. ben@Sokka:~$ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdc1[1] sdb1[0]
          1465135936 blocks [2/2] [UU]
    unused devices: <none>
            
  30. The array has now been created, but it is still just a raw, unformatted device. Format the array as ext4 using the following command:
  31. ben@Sokka:~$ sudo mkfs.ext4 /dev/md0
    mke2fs 1.41.4 (27-Jan-2009)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    91578368 inodes, 366283984 blocks
    18314199 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=4294967296
    11179 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
            102400000, 214990848
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
    
    This filesystem will be automatically checked every 39 mounts or
    180 days, whichever comes first.  Use tune2fs -c or -i to override.
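    As the mkfs output says, the filesystem will be force-checked every 39 mounts or 180 days. On an array this size those checks take a long time, so you can optionally turn them off with tune2fs and just run fsck.ext4 by hand when you want to (see the next steps):
    $ sudo tune2fs -c 0 -i 0 /dev/md0   # -c 0 disables the mount-count check, -i 0 the time-based check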
            
  32. Now the array needs to be assembled. This step only needs to be run by hand this once; later we’ll set it up to auto-assemble upon reboot. Run the following command:
  33. ben@Sokka:~$ sudo mdadm --assemble --scan        
  34. Now, let’s run a filesystem check to make sure everything is good. You can just run fsck.ext4 on it, or use the -f flag for a more in-depth test:
  35. ben@Sokka:~$ sudo fsck.ext4 /dev/md0
    e2fsck 1.41.4 (27-Jan-2009)
    /dev/md0: clean, 11/91578368 files, 5798255/366283984 blocks
    
    ben@Sokka:~$ sudo fsck.ext4 -f /dev/md0
    e2fsck 1.41.4 (27-Jan-2009)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/md0: 11/91578368 files (0.0% non-contiguous), 5798255/366283984 blocks        
  36. Create the directory where you want to mount the raid array. I chose /mnt/water. After creating this directory, mount the filesystem:
  37. ben@Sokka:~$ sudo mkdir /mnt/water
    ben@Sokka:~$ sudo mount -t ext4 /dev/md0 /mnt/water
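    The new mount point is owned by root, so if you want your regular user to be able to write to it, hand over ownership (substitute your own username for ben):
    ben@Sokka:~$ sudo chown ben:ben /mnt/water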
            
  38. After I mounted mine I could see it by running “df -h”
  39. ben@Sokka:~$ df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda6             274G   15G  246G   6% /
    tmpfs                 2.0G     0  2.0G   0% /lib/init/rw
    varrun                2.0G  120K  2.0G   1% /var/run
    varlock               2.0G     0  2.0G   0% /var/lock
    udev                  2.0G  180K  2.0G   1% /dev
    tmpfs                 2.0G  288K  2.0G   1% /dev/shm
    lrm                   2.0G  2.5M  2.0G   1% /lib/modules/2.6.28-15-generic/volatile
    /dev/md0              1.4T  198M  1.3T   1% /mnt/water
            
  40. The array should auto-assemble upon boot, but let’s set up the mdadm.conf file anyway:
  41. echo 'DEVICE /dev/sd*[0-9]' > tt
    sudo mdadm --examine --scan --config=tt | sudo tee -a /etc/mdadm/mdadm.conf
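    You can check what was appended (there should be an ARRAY line for /dev/md0 at the end) and then delete the temporary file:
    cat /etc/mdadm/mdadm.conf
    rm tt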
  42. Now we will set up an entry in /etc/fstab so that the filesystem auto-mounts upon boot. Run the command “blkid” to get the UUID for /dev/md0:
  43. ben@Sokka:~$ blkid
    /dev/sda5: TYPE="swap" UUID="06d47ff2-86ec-46aa-b4fd-0f8d0da3d313"
    /dev/sda6: UUID="3c4bed0f-2d4b-4010-a54c-2ff1f98b4403" TYPE="ext4"
    /dev/sdc1: UUID="bad9ea84-b130-7ce0-6c8a-d07b2d832e71" TYPE="mdraid"
    /dev/sdb1: UUID="bad9ea84-b130-7ce0-6c8a-d07b2d832e71" TYPE="mdraid"
    /dev/md0: UUID="0aa1316f-1548-488d-a553-996fdc83591e" TYPE="ext4"        
  44. Edit /etc/fstab and put the following entry into it (using the UUID from the blkid command):
  45. UUID=0aa1316f-1548-488d-a553-996fdc83591e /mnt/water ext4 defaults 0 1
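    Before rebooting, you can sanity-check the new entry by unmounting the array and letting mount -a remount everything listed in /etc/fstab:
    ben@Sokka:~$ sudo umount /mnt/water
    ben@Sokka:~$ sudo mount -a
    ben@Sokka:~$ df -h /mnt/water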
            
  46. Reboot once to ensure that the raid array assembles cleanly and is auto-mounted.
  47. ben@Sokka:~$ sudo reboot
            

Simulating a Drive Failure

If you’re like me when I first started this, all of this raid and mdadm stuff was very foreign to me. I didn’t want to just run a bunch of commands and not understand what they were doing. Also, I was concerned about my data and what I would do if one of the disks crashed. I wanted to ensure that I knew how to add another hard drive to the array if one went bad. Most of this information comes from http://www.devil-linux.org/documentation/1.0.x/ch01s05.html, so thank you devil-linux:

  1. Get the details of the array:
  2. $ sudo mdadm --detail /dev/md0
    /dev/md0:
            Version : 00.90
      Creation Time : Sat Aug 22 01:42:24 2009
         Raid Level : raid1
         Array Size : 1465135936 (1397.26 GiB 1500.30 GB)
      Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 0
        Persistence : Superblock is persistent
        Update Time : Thu Oct  1 00:09:28 2009
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               UUID : 2576e705:c3da6d6c:7bd08a6c:712e832d (local to host Sokka)
             Events : 0.4
        Number   Major   Minor   RaidDevice State
           0       8       17        0      active sync   /dev/sdb1
           1       8       33        1      active sync   /dev/sdc1
  3. As you can see, two devices make up my raid device /dev/md0: /dev/sdb1 and /dev/sdc1.
  4. I’m going to simulate a failure of /dev/sdc1:
  5. ben@Sokka:~$ sudo mdadm --fail /dev/md0 /dev/sdc1
    mdadm: set /dev/sdc1 faulty in /dev/md0
            
  6. Running sudo mdadm --detail /dev/md0 again shows that the drive is now marked as faulty:
  7. /dev/md0:
            Version : 00.90
      Creation Time : Sat Aug 22 01:42:24 2009
         Raid Level : raid1
         Array Size : 1465135936 (1397.26 GiB 1500.30 GB)
      Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 0
        Persistence : Superblock is persistent
        Update Time : Thu Oct  1 00:16:34 2009
              State : clean, degraded
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 1
      Spare Devices : 0
    
               UUID : 2576e705:c3da6d6c:7bd08a6c:712e832d (local to host Sokka)
             Events : 0.20
    
        Number   Major   Minor   RaidDevice State
           0       8       17        0      active sync   /dev/sdb1
           1       0        0        1      removed
           2       8       33        -      faulty spare   /dev/sdc1
            
  8. I can still access my data on the raid1 array; it is just not being mirrored to a second disk, since the failed disk is no longer active in the array.
  9. Running cat /proc/mdstat now shows that the raid1 array is still active, but sdc1 is marked with (F) for failed:
  10. $ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdc1[2](F) sdb1[0]
          1465135936 blocks [2/1] [U_]
    
    unused devices: <none>
  11. Next I’ll completely remove the failed disk from the raid1 array:
  12. $ sudo mdadm --remove /dev/md0 /dev/sdc1
    mdadm: hot removed /dev/sdc1
            
  13. Now I can see that /dev/sdc1 is completely removed from the raid1 array:
  14. $ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdb1[0]
          1465135936 blocks [2/1] [U_]
    unused devices: <none>
  15. Next we remove any previous raid info from the disk:
  16. $ sudo mdadm --zero-superblock /dev/sdc1
    $
            
  17. The command completed and returned to a prompt without any feedback. This is normal.
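    If you want positive confirmation that the old raid information is gone, examine the partition; it should report that no md superblock was detected:
    $ sudo mdadm --examine /dev/sdc1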
  18. Now I add /dev/sdc1 back into the raid1 array:
  19. $ sudo mdadm --add /dev/md0 /dev/sdc1
    mdadm: added /dev/sdc1
            
  20. Now when I check on the status of the array, I can see that it is rebuilding /dev/sdc1:
  21. $ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdc1[2] sdb1[0]
          1465135936 blocks [2/1] [U_]
          [>....................]  recovery =  0.8% (12351936/1465135936) finish=289.8min speed=83520K/sec
    unused devices: <none>
            
  22. The detail of the raid array also shows that it is recovering:
  23. $ sudo mdadm --detail /dev/md0
    /dev/md0:
            Version : 00.90
      Creation Time : Sat Aug 22 01:42:24 2009
         Raid Level : raid1
         Array Size : 1465135936 (1397.26 GiB 1500.30 GB)
      Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 0
        Persistence : Superblock is persistent
    Update Time : Thu Oct  1 00:32:13 2009
          State : clean, degraded, recovering
 Active Devices : 1
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 1
    
     Rebuild Status : 1% complete
    
               UUID : 2576e705:c3da6d6c:7bd08a6c:712e832d (local to host Sokka)
             Events : 0.28
    
        Number   Major   Minor   RaidDevice State
           0       8       17        0      active sync   /dev/sdb1
           2       8       33        1      spare rebuilding   /dev/sdc1
            
  24. Depending on the size of your disks, it could take quite a while to rebuild the array. During the process, though, your data is still available from the first hard disk in the array.
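    Note that in this simulation I re-added the same partition, which already had the right size and raid flag. With a genuinely dead drive you would swap in a new disk and partition it to match the surviving one before running mdadm --add. One way to copy an MBR partition table, assuming the survivor is /dev/sdb and the blank replacement shows up as /dev/sdc (double-check the device names first, since this overwrites the partition table on /dev/sdc):
    $ sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sdc   # dump sdb's partition table and write it to sdc
    $ sudo mdadm --add /dev/md0 /dev/sdc1              # then add the new partition into the array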

Setting up Monitoring

  1. Run a test command to see if mail is working on your system:
  2. echo "test"|mail -s "test" you@yourdomain.com
            
  3. If you get an error message stating that the command mail does not exist, you need to install the command line mail utilities:
  4. $ sudo apt-get install mailx
            
  5. If the command looks like it worked, but you don’t get the email, you can run the command “mail” and you will probably see the returned mail in your mailbox:
  6. $ mail
    Mail version 8.1.2 01/15/2001.  Type ? for help.
    "/var/mail/ben": 3 messages 3 new
    >N  1 MAILER-DAEMON@Sok  Thu Oct  1 02:25   65/1867  Undelivered Mail Returned to Sender
     N  2 MAILER-DAEMON@Sok  Thu Oct  1 02:27   66/1882  Undelivered Mail Returned to Sender
     N  3 MAILER-DAEMON@Sok  Thu Oct  1 17:22   66/1886  Undelivered Mail Returned to Sender
    
  7. As you can see, I tried 3 times and they all failed. To view an individual message, just type the number next to it and you’ll see the whole message.
  8. I configured my postfix to relay mail to my gmail account using this tutorial, or you can configure postfix to relay through your internet provider’s smtp if they allow it.
  9. Once sending mail from the command line works, update mdadm.conf with your email address.
  10. $ sudo vi /etc/mdadm/mdadm.conf
            

    Change this line to have your email address:

    MAILADDR you@yourdomain.com
            
  11. Reboot your machine and you should be good to go. I’m not sure if the reboot is absolutely necessary. I’m sure there is a way to restart the mdadm monitoring daemon to take the new config, but by this point I was exhausted and just restarted my system. You will notice the mdadm monitor daemon running:
    $ ps -ef|grep mdadm
    root      2998     1  0 Sep30 ?        00:00:00 /sbin/mdadm --monitor --pid-file /var/run/mdadm/monitor.pid --daemonise --scan --syslog
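    If you would rather not reboot, the mdadm package ships an init script that should restart the monitor with the new config, and mdadm can send a one-off test alert for every array so you can confirm the email actually arrives:
    $ sudo /etc/init.d/mdadm restart
    $ sudo mdadm --monitor --scan --test --oneshot   # sends a TestMessage alert for each array and exits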
            

    I tested a failure again and was notified via email this time. If you want to kill two birds with one stone, you can set up monitoring before simulating a failure.

    (Screenshot: RAID array failure notification email)
