Install ZFS and create a zpool on CentOS 6.6
Assuming this is a clean, up-to-date install of CentOS, you will need to install EPEL and ZFS from RPM; this is the simplest way to get ZFS today:
yum localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
yum install kernel-devel zfs
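If you want to double-check that the packages actually landed before going further, rpm can confirm it (exact package names may vary slightly with the repo version):
rpm -qa | grep -E 'zfs|spl'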
You can now load the ZFS module:
modprobe zfs
After running the above command, check lsmod; you should see the ZFS modules loaded:
# lsmod | grep -i zfs
zfs 2179437 3
zcommon 47120 1 zfs
znvpair 80252 2 zfs,zcommon
spl 89796 3 zfs,zcommon,znvpair
zavl 6784 1 zfs
zunicode 323046 1 zfs
Now make sure the module is loaded persistently on boot. We need to create a new file and add a small script to it:
vi /etc/sysconfig/modules/zfs.modules
Add the following code:
#!/bin/sh
if [ ! -c /dev/zfs ] ; then
    exec /sbin/modprobe zfs >/dev/null 2>&1
fi
Make this file executable:
chmod +x /etc/sysconfig/modules/zfs.modules
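If you want to test the script without rebooting, you can unload the module and run the script by hand (only do this while no pools are imported yet):
modprobe -r zfs
sh /etc/sysconfig/modules/zfs.modules
lsmod | grep -i zfs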
Now reboot and make sure everything loaded
reboot
After the reboot, run lsmod again and make sure the modules are loaded:
# lsmod | grep -i zfs
zfs 2179437 3
zcommon 47120 1 zfs
znvpair 80252 2 zfs,zcommon
spl 89796 3 zfs,zcommon,znvpair
zavl 6784 1 zfs
zunicode 323046 1 zfs
Now we need to set up a zpool. In ZFS terms, I will be striping across two mirrors, better known as RAID 10. First, let's check the disks we will use for any existing MBR or partition info; if we find any, we'll remove it. NOTE: This will erase all data on the disks you do this to.
fdisk -l | grep GB
For me, the drives I will be using are /dev/sdb through /dev/sde. I have leftover metadata on the drives from an old mdadm RAID array, so I will remove it. I will show just /dev/sdb in this example; make sure to do this to all the drives you want to use, if they have old array data on them.
fdisk /dev/sdb
Command (m for help): p
After reviewing the information, if you still want to remove it:
Command (m for help): d
Selected partition 1
Command (m for help): w
Now do this for all the other disks you will use in your array.
Just to be sure, let's zero out the first 1MB of the drive to ensure the MBR and partition data are gone:
dd if=/dev/zero of=/dev/sdb bs=1M count=1
Remember to do this for all the other disks you will use in your array.
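Since my drives are /dev/sdb through /dev/sde, a short loop does the same wipe on all of them (adjust the device list to match your own disks):
for d in sdb sdc sdd sde; do dd if=/dev/zero of=/dev/$d bs=1M count=1; done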
Next I set up the pool (using the name myraid; you can use whatever you like):
zpool create -f myraid mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
Now make sure it was created:
# zpool status
  pool: myraid
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myraid      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors
Check that it is mounted:
mount | grep zfs
df -h | grep myraid
If you don't see it mounted, try:
zfs mount myraid
If you want /myraid to mount automatically at boot, add a mount command to rc.local:
echo "zfs mount myraid" >> /etc/rc.local
That's it. Other things you can do include adding SSD caching, compression, deduplication, and so on. Make sure you know what these do before using them, though:
zpool add myraid cache sde2
zfs set compression=on myraid
zfs set dedup=on myraid
zfs get compressratio myraid
zfs set compression=lz4 myraid
zfs get all myraid
zfs create -V 10G myraid/name_of_volume
zfs destroy myraid/name_of_volume
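The -V examples above create block volumes (zvols). If you just want a regular ZFS filesystem dataset with its own mountpoint, a minimal sketch looks like this (the name data is only an example):
zfs create myraid/data
zfs set mountpoint=/data myraid/data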
Replace failed drive
# zpool offline myraid sdb
Take the server down and replace the physical drive. Use parted to put a partition back on the drive if needed.
# zpool online myraid sdb
# zpool replace myraid sdb
# zpool status myraid
If you move your drives/pool to another system, you can use:
# zpool import -f myraid
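If you can export the pool cleanly on the old system before pulling the drives, the -f force flag should not be needed. On the old system:
# zpool export myraid
Then on the new system:
# zpool import myraid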
To speed things up:
# zfs set sync=disabled myraid
Read the explanation below before disabling sync, though.
sync=standard: This is the default option. Synchronous file system transactions (fsync, O_DSYNC, O_SYNC, etc.) are written out to the intent log, and then all devices written to are flushed to ensure the data is stable (not cached by device controllers).
sync=always: For the ultra-cautious, every file system transaction is written and flushed to stable storage by the time the system call returns. This obviously has a big performance penalty.
sync=disabled: Synchronous requests are disabled. File system transactions only commit to stable storage on the next DMU transaction group commit, which can be many seconds away. This option gives the highest performance. However, it is very dangerous, as ZFS is ignoring the synchronous transaction demands of applications such as databases or NFS. Setting sync=disabled on the currently active root or /var file system may result in out-of-spec behavior, application data loss, and increased vulnerability to replay attacks. This option does *NOT* affect ZFS on-disk consistency. Administrators should only use this when these risks are understood.
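If you do decide to disable sync, it is safer to scope it to a single dataset rather than the whole pool, and to set it back when you are done; a small sketch (myraid/scratch is just a hypothetical dataset):
zfs set sync=disabled myraid/scratch
zfs get sync myraid/scratch
zfs set sync=standard myraid/scratch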