GlusterFS and NFS with High Availability on CentOS 7
Technical requirements
3 x CentOS 7 machines
4 IPs (one per node plus one virtual IP for the cluster)
An additional hard drive of the same size on each machine (e.g. /dev/sdb)
Disable SELinux
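
For example, SELinux can be switched to permissive immediately and kept disabled across reboots like this (a minimal sketch; adapt it to your own policy requirements):

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config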

Installation
Install CentOS 7 Minimal on the 3 machines and assign static IPs:
node1.rmohan.com = 192.168.1.10
node2.rmohan.com = 192.168.1.11
node3.rmohan.com = 192.168.1.12

Add the hostnames to the /etc/hosts file on every node.
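
For example, the entries on each node would look like this (short aliases included so the node1/node2/node3 names used below resolve):

192.168.1.10   node1.rmohan.com   node1
192.168.1.11   node2.rmohan.com   node2
192.168.1.12   node3.rmohan.com   node3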

Install GlusterFS packages on all servers.

# yum install centos-release-gluster -y
# yum install glusterfs-server -y
# systemctl start glusterd
# systemctl enable glusterd
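
To confirm that the daemon is up on each node before continuing, a quick check such as the following can be used:

# systemctl status glusterd
# glusterfs --version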

Create a partition on the additional disk and mount it (on every node):

# fdisk /dev/sdb
# mkfs.ext4 /dev/sdb1
# mkdir /data
# mount /dev/sdb1 /data    (add this mount to /etc/fstab; see the example below)
# mkdir /data/br0
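
To make the mount persistent, an /etc/fstab entry along these lines can be added (assuming /dev/sdb1 formatted as ext4, as above):

/dev/sdb1   /data   ext4   defaults   0 0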

Configure GlusterFS from node1:

node1# gluster peer probe node2
node1# gluster peer probe node3
node1# gluster volume create br0 replica 3 node1:/data/br0 node2:/data/br0 node3:/data/br0
node1# gluster volume start br0
node1# gluster volume info br0
node1# gluster volume set br0 nfs.disable off
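
As a quick check, the volume status should list an NFS server on each node, and the export should be visible locally (showmount comes from nfs-utils, which may need to be installed first):

node1# gluster volume status br0
node1# showmount -e localhost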

Install and Configure HA

Install the packages below on all servers:

# yum -y install corosync pacemaker pcs
# systemctl enable pcsd.service
# systemctl start pcsd.service
# systemctl enable corosync.service
# systemctl enable pacemaker.service

These packages create a "hacluster" user on all machines. We need to assign a password to this user, and it must be the same on every server.

node1# passwd hacluster
node2# passwd hacluster
node3# passwd hacluster
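
If you prefer to set it non-interactively, something like this works on each node (replace the placeholder with your own password):

# echo "StrongClusterPassword" | passwd --stdin hacluster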

Then run the commands below on a single server:

node1# pcs cluster auth node1 node2 node3
node1# pcs cluster setup --name NFS node1 node2 node3
node1# pcs cluster start --all
node1# pcs property set no-quorum-policy=ignore
node1# pcs property set stonith-enabled=false
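
At this point the cluster membership can be verified, for example with:

node1# pcs cluster status
node1# pcs status corosync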

Create a virtual (heartbeat) IP resource:

node1# pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.13 cidr_netmask=24 op monitor interval=20s
node1# pcs status
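
To see which node currently holds the virtual IP, a command like this can be run on each node (192.168.1.13 is the floating address created above):

# ip addr show | grep 192.168.1.13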

Done.

Client Configuration

client# yum install nfs-utils -y
client# mkdir /gluster
client# mount -o mountproto=tcp -t nfs 192.168.1.13:/br0 /gluster
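
To make the client mount persistent as well, an /etc/fstab entry along these lines can be used (a sketch; the built-in Gluster NFS server speaks NFSv3, so vers=3 is set explicitly):

192.168.1.13:/br0   /gluster   nfs   defaults,mountproto=tcp,vers=3   0 0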

Disaster Recovery

If a GlusterFS server goes down:

Assume node2 goes down. After repairing it, start the machine and bring it back into the cluster:

node2# pcs cluster start node2
node2# pcs status

These commands automatically rejoin the lost machine to the existing PCS cluster.

Also check the GlusterFS self-heal status:

node2# gluster volume heal br0 info

Note: If 2 machines go down, the remaining machine serves the volume read-only. At least 2 live machines are required for writes.
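
After the node rejoins, peer and volume state can be double-checked with, for example:

node2# gluster peer status
node2# gluster volume status br0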
