{"id":1083,"date":"2012-08-22T16:30:18","date_gmt":"2012-08-22T08:30:18","guid":{"rendered":"http:\/\/rmohan.com\/?p=1083"},"modified":"2012-08-23T08:06:23","modified_gmt":"2012-08-23T00:06:23","slug":"raid-10-with-mdadm","status":"publish","type":"post","link":"https:\/\/mohan.sg\/?p=1083","title":{"rendered":"RAID 10 with mdadm"},"content":{"rendered":"<div>\n<p>If I had to pick one fault of Linux, it would be that for almost everything, the Linux user is inundated with hundreds of possible solutions. This is both a blessing and a curse \u2013 for the veterans, it means that we can pick the tool that most matches how we prefer to operate; for the uninitiated, it means that we\u2019re so overwhelmed with options it\u2019s hard to know where to begin.<\/p>\n<p>One exception is software <acronym title=\"Redundant Array of Inexpensive Disks\">RAID<\/acronym> \u2013 there\u2019s really only one option: <code>mdadm<\/code>. I can already hear the <acronym title=\"Logical Volume Manager\">LVM<\/acronym> advocates screaming at me; no, I don\u2019t have any problem with <acronym title=\"Logical Volume Manager\">LVM<\/acronym>, and in fact I do use it as well \u2013 I just see it as filling a different role than <code>mdadm<\/code>. I won\u2019t go into the nuances here \u2013 just trust me when I say that I use and love both.<\/p>\n<p>There are quite a few how-tos, walkthroughs, and tutorials out there on using <code>mdadm<\/code>. None that I found, however, came quite near enough to what I was trying to do on my newest computer system. 
And even when I did get it figured out, the how-tos I read failed to mention what turned out to be a very critical piece of information, the lack of which almost led to me destroying my newly-created array.<\/p>\n<p>So without further ado, I will walk you through how I created a storage partition on a <acronym title=\"Redundant Array of Inexpensive Disks\">RAID<\/acronym> 10 array using 4 hard drives (my system boots off of a single, smaller hard drive).<\/p>\n<p>The first thing you want to do is make sure you have a plan of attack: What drives\/partitions are you going to use? What <acronym title=\"Redundant Array of Inexpensive Disks\">RAID<\/acronym> level? Where is the finished product going to be mounted?<\/p>\n<p>One method that I\u2019ve seen used frequently is to create a single array that\u2019s used for everything, including the system. There\u2019s nothing wrong with that approach, but here\u2019s why I decided on having a separate physical drive for my system to boot from: simplicity. If you want to use a software <acronym title=\"Redundant Array of Inexpensive Disks\">RAID<\/acronym> array for your boot partition as well, there are plenty of resources telling you how you\u2019ll need to install your system and configure your boot loader.<\/p>\n<p>For my setup, I chose a lone 80 <acronym title=\"Gigabyte\">GB<\/acronym> drive to house my system. For my array, I selected four 750 <acronym title=\"Gigabyte\">GB<\/acronym> drives. All 5 are <acronym title=\"Serial Advanced Technology Attachment\">SATA<\/acronym>. 
After I installed Ubuntu 9.04 on my 80 <acronym title=\"Gigabyte\">GB<\/acronym> drive and booted into it, it was time to plan my <acronym title=\"Redundant Array of Inexpensive Disks\">RAID<\/acronym> array.<\/p>\n<div>\n<div>\n<pre>kromey@vmsys:~$ ls -1 \/dev\/sd*\r\n\/dev\/sda\r\n\/dev\/sdb\r\n\/dev\/sdc\r\n\/dev\/sdd\r\n\/dev\/sde\r\n\/dev\/sde1\r\n\/dev\/sde2\r\n\/dev\/sde5<\/pre>\n<\/div>\n<\/div>\n<p>As you can probably tell, my system is installed on <code>sde<\/code>. While I would have been happier with it being labeled <code>sda<\/code>, it doesn\u2019t really matter. <code>sda<\/code> through <code>sdd<\/code> then are the drives that I want to combine into a <acronym title=\"Redundant Array of Inexpensive Disks\">RAID<\/acronym>.<\/p>\n<p><code>mdadm<\/code> can operate on whole disks, but it\u2019s better practice to give it <em>partitions<\/em>, so the next step is to create partitions on my drives. Since I want to use each entire drive, I\u2019ll just create a single partition on each one. Using <code>fdisk<\/code>, I choose the fd (Linux raid auto) partition type and create partitions using the entire disk on each one. When I\u2019m done, each drive looks like this:<\/p>\n<div>\n<div>\n<pre>kromey@vmsys:~$ sudo fdisk -l \/dev\/sda\r\n\u00a0\r\nDisk \/dev\/sda: 750.1 GB, 750156374016 bytes\r\n255 heads, 63 sectors\/track, 91201 cylinders\r\nUnits = cylinders of 16065 * 512 = 8225280 bytes\r\nDisk identifier: 0x00000000\r\n\u00a0\r\n   Device Boot      Start         End      Blocks   Id  System\r\n\/dev\/sda1               1       91201   732572001   fd  Linux raid autodetect<\/pre>\n<\/div>\n<\/div>\n<p>Now that my partitions are in place, it\u2019s time to pull out <code>mdadm<\/code>. I won\u2019t re-hash everything that\u2019s in the <code>man<\/code> pages here, and instead just demonstrate what I did. 
I\u2019ve already established that I want a <acronym title=\"Redundant Array of Inexpensive Disks\">RAID<\/acronym> 10 array, and setting that up with <code>mdadm<\/code> is quite simple:<\/p>\n<div>\n<div>\n<pre>kromey@vmsys:~$ sudo mdadm -v --create \/dev\/md0 --level=raid10 --raid-devices=4 \/dev\/sda1 \/dev\/sdb1 \/dev\/sdc1 \/dev\/sdd1<\/pre>\n<\/div>\n<\/div>\n<p><strong>A word of caution<\/strong>: <code>mdadm --create<\/code> will return immediately, and for all intents and purposes will look like it\u2019s done and ready. It\u2019s not \u2013 it takes time for the array to be synchronized. The array is usable during the initial sync, but performance suffers and you don\u2019t have full redundancy until it finishes. My array took about 3 hours (give or take \u2013 I was neither watching it closely nor holding a stopwatch!). Wait until your <code>\/proc\/mdstat<\/code> looks something like this:<\/p>\n<div>\n<div>\n<pre>kromey@vmsys:~$ cat \/proc\/mdstat\r\nPersonalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]\r\nmd0 : active raid10 sdb1[1] sda1[0] sdc1[2] sdd1[3]\r\n      1465143808 blocks 64K chunks 2 near-copies [4\/4] [UUUU]<\/pre>\n<\/div>\n<\/div>\n<p><strong>Edit:<\/strong> As Jon points out in the comments, you can <code>watch cat \/proc\/mdstat<\/code> to get near-real-time status and know when your array is ready.<\/p>\n<p>That\u2019s it! 
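Beyond eyeballing <code>watch cat \/proc\/mdstat<\/code>, the check can be scripted: a finished, healthy four-member array shows <code>[4\/4] [UUUU]<\/code> (all four members up), while a failed member shows as an underscore. A small sketch (the helper name and the hard-coded sample are mine; the sample mirrors the finished state shown above):

```shell
# Sketch: decide whether an md status block says the array is healthy.
# "[4/4] [UUUU]" = all 4 members active; an underscore (e.g. "[4/3] [UUU_]")
# marks a missing or failed member, and a "resync = ..." progress line
# means the initial synchronization is still running.
mdstat_clean() {
    grep -q '\[4/4\] \[UUUU\]'
}

sample='md0 : active raid10 sdb1[1] sda1[0] sdc1[2] sdd1[3]
      1465143808 blocks 64K chunks 2 near-copies [4/4] [UUUU]'

if printf '%s\n' "$sample" | mdstat_clean; then
    status='clean'
else
    status='degraded or syncing'
fi
echo "$status"
```

On a live system you would feed it the real file instead: <code>mdstat_clean &lt; \/proc\/mdstat<\/code>.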
All that\u2019s left to do now is create a partition, throw a filesystem on there, and then mount it.<\/p>\n<div>\n<div>\n<pre>kromey@vmsys:~$ sudo fdisk \/dev\/md0\r\nkromey@vmsys:~$ sudo mkfs -t ext4 \/dev\/md0p1\r\nkromey@vmsys:~$ sudo mkdir \/srv\/hoard\r\nkromey@vmsys:~$ sudo mount \/dev\/md0p1 \/srv\/hoard\/<\/pre>\n<\/div>\n<\/div>\n<p>Ah, how sweet it is!<\/p>\n<div>\n<div>\n<pre>kromey@vmsys:~$ df -h\r\nFilesystem            Size  Used Avail Use% Mounted on\r\n\/dev\/sde1              71G  3.6G   64G   6% \/\r\ntmpfs                 3.8G     0  3.8G   0% \/lib\/init\/rw\r\nvarrun                3.8G  116K  3.8G   1% \/var\/run\r\nvarlock               3.8G     0  3.8G   0% \/var\/lock\r\nudev                  3.8G  184K  3.8G   1% \/dev\r\ntmpfs                 3.8G  104K  3.8G   1% \/dev\/shm\r\nlrm                   3.8G  2.5M  3.8G   1% \/lib\/modules\/2.6.28-14-generic\/volatile\r\n\/dev\/md0p1            1.4T   89G  1.2T   7% \/srv\/hoard<\/pre>\n<\/div>\n<\/div>\n<p>Now comes the gotcha that nearly sank me. Well, it wouldn\u2019t have been a total loss: I\u2019d only copied data from an external hard drive to my new array, and could easily have done it again.<\/p>\n<p>Everything I read told me that Debian-based systems (of which Ubuntu is, of course, one) were set up to automatically detect and activate your <code>mdadm<\/code>-created arrays on boot, and that you don\u2019t need to do anything beyond what I\u2019ve already described. Now, maybe I did something wrong (and if so, please leave a comment correcting me!), but this wasn\u2019t the case for me, leaving me without an assembled array (while somehow making <code>sdb<\/code> busy so I couldn\u2019t manually assemble the array except in a degraded state!) after a reboot. 
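Mounting the array at every boot happens via an <code>\/etc\/fstab<\/code> entry; a sketch of what such an entry could look like, with the filesystem and mount point taken from the commands above (the options and fsck pass number are my assumptions, not copied from my actual file):

```
/dev/md0p1  /srv/hoard  ext4  defaults  0  2
```

With the array assembled early in boot this is all that\u2019s needed; when assembly fails, this same entry is what halts the boot.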
So I had to edit my <code>\/etc\/mdadm\/mdadm.conf<\/code> file like so:<\/p>\n<div>\n<div>\n<pre>kromey@vmsys:~$ cat \/etc\/mdadm\/mdadm.conf\r\n# mdadm.conf\r\n#\r\n# Please refer to mdadm.conf(5) for information about this file.\r\n#\r\n\u00a0\r\n# by default, scan all partitions (\/proc\/partitions) for MD superblocks.\r\n# alternatively, specify devices to scan, using wildcards if desired.\r\n#DEVICE partitions\r\n\u00a0\r\n# auto-create devices with Debian standard permissions\r\nCREATE owner=root group=disk mode=0660 auto=yes\r\n\u00a0\r\n# automatically tag new arrays as belonging to the local system\r\nHOMEHOST &lt;system&gt;\r\n\u00a0\r\n# instruct the monitoring daemon where to send mail alerts\r\nMAILADDR root\r\n\u00a0\r\n# definitions of existing MD arrays\r\nDEVICE \/dev\/sd[abcd]1\r\n\u00a0\r\nARRAY \/dev\/md0 super-minor=0\r\n\u00a0\r\n# This file was auto-generated on Mon, 03 Aug 2009 21:30:49 -0800\r\n# by mkconf $Id$<\/pre>\n<\/div>\n<\/div>\n<p>It certainly <em>looks<\/em> like my array should have been detected and started when I rebooted. I commented out the default DEVICE line and added an explicit one, then added an explicit declaration for my array; now it\u2019s properly assembled when my system reboots, which means the <code>fstab<\/code> entry doesn\u2019t provoke a boot-stopping error anymore, and life is all-around happy!<\/p>\n<p><strong>Update 9 April 2011:<\/strong> In preparation for a server rebuild, I\u2019ve been experimenting with <code>mdadm<\/code> quite a bit more, and I\u2019ve found a better solution to adding the necessary entries to the <code>mdadm.conf<\/code> file. Actually, two new solutions:<\/p>\n<ol>\n<li>Configure your <acronym title=\"Redundant Array of Inexpensive Disks\">RAID<\/acronym> array during the Ubuntu installation. 
Your <code>mdadm.conf<\/code> file will be properly updated with no further action necessary on your part, and you can even have those nice handy <code>fstab<\/code> entries to boot!<\/li>\n<li>Run the command <code>mdadm --examine --scan --config=mdadm.conf &gt;&gt; \/etc\/mdadm\/mdadm.conf<\/code> in a root shell (the redirect writes into <code>\/etc<\/code>, so plain <code>sudo<\/code> on the command alone won\u2019t cover it). Then, open up <code>mdadm.conf<\/code> in your favorite editor to put the added line(s) into a more reasonable location.<\/li>\n<\/ol>\n<p>On my new server, I\u2019ll be following solution (1), but on my existing system described in this post, I have taken solution (2); my entire file now looks like this:<\/p>\n<div>\n<div>\n<pre>kromey@vmsys:~$ cat \/etc\/mdadm\/mdadm.conf\r\n# mdadm.conf\r\n#\r\n# Please refer to mdadm.conf(5) for information about this file.\r\n#\r\n\u00a0\r\n# by default, scan all partitions (\/proc\/partitions) for MD superblocks.\r\n# alternatively, specify devices to scan, using wildcards if desired.\r\nDEVICE partitions\r\n\u00a0\r\n# auto-create devices with Debian standard permissions\r\nCREATE owner=root group=disk mode=0660 auto=yes\r\n\u00a0\r\n# automatically tag new arrays as belonging to the local system\r\nHOMEHOST &lt;system&gt;\r\n\u00a0\r\n# instruct the monitoring daemon where to send mail alerts\r\nMAILADDR root\r\n\u00a0\r\n# definitions of existing MD arrays\r\nARRAY \/dev\/md0 level=raid10 num-devices=4 UUID=46c6f1ed:434fd8b4:0eee10cd:168a240d\r\n\u00a0\r\n# This file was auto-generated on Mon, 03 Aug 2009 21:30:49 -0800\r\n# by mkconf $Id$<\/pre>\n<\/div>\n<\/div>\n<p>Notice that I\u2019m again using the default DEVICE line, and notice the new ARRAY line that\u2019s been added. 
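Before rebooting, it\u2019s worth confirming that the UUID in the appended ARRAY line matches the running array (compare it against <code>sudo mdadm --detail \/dev\/md0<\/code>). A small sketch of pulling the UUID out with plain shell, using the line from my file above (the parsing approach is mine):

```shell
# Sketch: extract the UUID field from an mdadm.conf ARRAY line so it can
# be compared with the UUID reported by `mdadm --detail /dev/md0`.
array_line='ARRAY /dev/md0 level=raid10 num-devices=4 UUID=46c6f1ed:434fd8b4:0eee10cd:168a240d'
uuid=${array_line##*UUID=}   # strip everything up to and including "UUID="
echo "$uuid"
```

If the two UUIDs disagree, the conf entry describes some other (or stale) array and should be regenerated rather than trusted at boot.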
This seems to work a lot better \u2014 since making this change, I no longer experience the occasional (and strange) \u201cdevice is busy\u201d errors during boot (always complaining about \/dev\/sdb for some reason), making the boot-up process just that much smoother!<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"\n<p>If I had to pick one fault of Linux, it would be that for almost everything, the Linux user is inundated with hundreds of possible solutions. This is both a blessing and a curse \u2013 for the veterans, it means that we can pick the tool that most matches how we prefer to operate; [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[23],"tags":[],"_links":{"self":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/1083"}],"collection":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1083"}],"version-history":[{"count":7,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/1083\/revisions"}],"predecessor-version":[{"id":1085,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/1083\/revisions\/1085"}],"wp:attachment":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1083"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1083"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1083"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}