Monday, April 6, 2015

Create RAID10 array in CentOS 6.6


(Revised 04/18/2015)

I built my own NAS. It consists of a CentOS 6.6 box (i5 CPU, 4GB DDR3-1600 RAM, a Samsung Pro 120GB SSD for boot and OS, four 3TB WD Red SATA3 hard drives, and gigabit Ethernet). I use NFS and Samba for file sharing and backups. I went with RAID10 so I get the speed of striping and the redundancy of mirrored drives. The hardware is a bit much for storage alone because I will also be installing Plex on it for video streaming. Oh yeah, I'm also putting Asterisk on it with a SIP trunk to a CLEC for some small home-phone use (no big requirements for Asterisk at this scale).

I am still undecided on file systems; I am looking at ext4 and ZFS. More on that in a later post. For now I am using ext4, as you will see later in this post.

I used a November 19, 2014 post by Babin Lonston as a starting point, located at:

http://www.tecmint.com/create-raid-10-in-linux/

but added and removed items as they applied to my setup.

The first thing I did was check, and then adjust, the idle timer my WD drives use to park their heads. In the /root directory I downloaded and extracted idle3-tools. NOTE** /dev/sda is my SSD boot and OS drive, so I won't be touching that one :)


#  tar -xzf idle3-tools-0.9.1.tgz
#  cd /root/idle3-tools-0.9.1
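The tarball builds with a plain make (assuming gcc and make are already on the box), which produces the idle3ctl binary used below.

# make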

# ./idle3ctl -g /dev/sdb
Idle3 timer set to 138 (0x8a)



# ./idle3ctl -g /dev/sdc
Idle3 timer set to 138 (0x8a)


# ./idle3ctl -g /dev/sdd
Idle3 timer set to 138 (0x8a)


# ./idle3ctl -g /dev/sde
Idle3 timer set to 138 (0x8a)
  
As you can see, the timer is set to 138. Not good: at that setting the heads will park hundreds of thousands of times a year, causing premature wear. So I turned the timer off on all four drives.
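(Incidentally, you can keep an eye on how much parking has already happened via SMART attribute 193, Load_Cycle_Count; assuming the smartmontools package is installed, something like this will show it.)

# smartctl -A /dev/sdb | grep Load_Cycle_Count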

# ./idle3ctl -d /dev/sdb
Idle3 timer disabled
Please power cycle your drive off and on for the new setting to be taken into account. A reboot will not be enough!



# ./idle3ctl -d /dev/sdc
Idle3 timer disabled
Please power cycle your drive off and on for the new setting to be taken into account. A reboot will not be enough!


# ./idle3ctl -d /dev/sdd
Idle3 timer disabled
Please power cycle your drive off and on for the new setting to be taken into account. A reboot will not be enough!


# ./idle3ctl -d /dev/sde
Idle3 timer disabled
Please power cycle your drive off and on for the new setting to be taken into account. A reboot will not be enough!
 

 
Now power off the system (as the messages say, a reboot is not enough).

# poweroff

After powering back on, verify the timers are disabled.

# ./idle3ctl -g /dev/sdb
Idle3 timer is disabled
 

# ./idle3ctl -g /dev/sdc
Idle3 timer is disabled
 

# ./idle3ctl -g /dev/sdd
Idle3 timer is disabled
 

# ./idle3ctl -g /dev/sde
Idle3 timer is disabled

 

Before creating a new array, check whether any RAID already exists on the drives. NOTE** /dev/sda is my SSD boot and OS drive, so I won't be touching that one :)

# mdadm -E /dev/sd[b-e]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
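On drives that have never been part of an array, mdadm should simply report that no superblock was found, along the lines of:

mdadm: No md superblock detected on /dev/sdb.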


**If any of the disks have a prior RAID setup or MBR records, use the following to clean them. Do this for each drive.

How to remove an mdadm RAID array

Find out your arrays (md0, md1, etc.) using

# fdisk -l

Query your arrays to find out which disks they contain using

# mdadm --detail /dev/md0

Unmount and stop the array using

# umount -l /dev/md0

# mdadm --stop /dev/md0

zero the superblock FOR EACH drive

# mdadm --zero-superblock /dev/sdb

# mdadm --zero-superblock /dev/sdc

# mdadm --zero-superblock /dev/sdd

# mdadm --zero-superblock /dev/sde
 
If you also have to remove the MBR and partition table, use the following (again, for each drive):

# dd if=/dev/zero of=/dev/sdb bs=1M count=1

You may also need to remove /etc/mdadm.conf if a RAID was configured on these drives before.

# cat /etc/mdadm.conf

# rm /etc/mdadm.conf

Now create a new partition on all 4 disks (/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde) using parted. NOTE** fdisk will not work with partitions over 2TB.

I had to enable my EPEL repo to get gparted (the parted CLI is what actually gets used below). I then installed the package

# yum -y install gparted

# parted /dev/sdb

GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) print
Error: /dev/sdb: unrecognised disk label

(parted) mklabel gpt

(parted) print
Model: Unknown (unknown)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

(parted) mkpart primary 0GB 3001GB

(parted) print
Model: Unknown (unknown)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3001GB  3001GB               primary

(parted)

Now do the same for the other three disks (/dev/sdc, /dev/sdd and /dev/sde).
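If you would rather not repeat the interactive session three times, parted can also be driven non-interactively. This is just a sketch of the equivalent one-liners (adjust the end value to your drive size):

# parted -s /dev/sdc mklabel gpt mkpart primary 0GB 3001GB

# parted -s /dev/sdd mklabel gpt mkpart primary 0GB 3001GB

# parted -s /dev/sde mklabel gpt mkpart primary 0GB 3001GB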

After creating all 4 partitions, I again examined the drives for any existing RAID using the following commands.


# mdadm -E /dev/sd[b-e]
 
# mdadm -E /dev/sd[b-e]1

OR

# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
 
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

Now it's time to create an ‘md’ device (i.e. /dev/md0) using the ‘mdadm’ RAID management tool. Before creating the device, your system must have the ‘mdadm’ tool installed. NOTE** I turn off all EPEL repos before doing this to ensure I get the package from CentOS.

# yum -y install mdadm


Once the ‘mdadm’ tool is installed, you can create the ‘md’ RAID device using the following command.

# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
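If all goes well, mdadm should respond with something along these lines (illustrative output):

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.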


Next verify the newly created raid device using the ‘cat’ command.

# cat /proc/mdstat
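While the initial sync runs, the output should look roughly like this (illustrative and trimmed; your block counts and resync progress will differ):

Personalities : [raid10]
md0 : active raid10 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      ... blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [>....................]  resync = ...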


Next, examine partition 1 on all 4 drives using the command below. The output will be long, as it displays information for all 4 disks.

# mdadm --examine /dev/sd[b-e]1

Next, check the details of the RAID array with the following command.

# mdadm --detail /dev/md0

Create an ext4 file system on md0. I used ext4 here, but you can use any filesystem type you want.

# mkfs.ext4 /dev/md0
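Optionally, ext4 can be told about the RAID geometry so its allocations line up with the stripe. A sketch, assuming the mdadm default 512K chunk and 4KB ext4 blocks: stride = 512/4 = 128, and with two data-bearing drives in a 4-disk RAID10 the stripe width is 2 x 128 = 256.

# mkfs.ext4 -E stride=128,stripe-width=256 /dev/md0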

After creating the filesystem, mount it under ‘/mnt/raid10‘ and list the contents of the mount point using the ‘ls -l’ command.

# mkdir /mnt/raid10
 
# mount /dev/md0 /mnt/raid10/
 
# ls -l /mnt/raid10/

For automounting, open the ‘/etc/fstab‘ file and append the entry below; the mount point may differ according to your environment. Save and quit using :wq

# vi /etc/fstab 
/dev/md0 /mnt/raid10 ext4 defaults 0 0
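If you would rather not depend on the /dev/md0 device name, you can mount by filesystem UUID instead; blkid will show the UUID to put in the fstab entry in place of /dev/md0 (as UUID=<value>):

# blkid /dev/md0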

Next, verify the ‘/etc/fstab‘ entry for errors before restarting the system by running ‘mount -av‘.

# mount -av


By default the RAID array does not have a config file, so after completing all of the above steps we need to save one manually to preserve these settings during system boot.

# mdadm --detail --scan --verbose >> /etc/mdadm.conf

Now reboot and make sure the array is still in place and mounted.

# reboot

After you log back in

# mdadm --detail /dev/md0 
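Then confirm the filesystem came back up on its mount point:

# cat /proc/mdstat

# df -h /mnt/raid10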

I have written articles on installing and setting up NFS and Samba as well. Now I will apply those to this array and share the data on it.

http://glenewhittenberg.blogspot.com/2015/04/setting-up-nfs-server-and-client.html

http://glenewhittenberg.blogspot.com/2015/04/samba-setup-on-centos-66.html

End.



 
