Pacemaker Apache on High Availability CentOS 7

Red Hat introduces new open source software with every release. The Red Hat Enterprise Linux 7 High Availability Add-On is built on a new suite of technologies, based on Pacemaker and Corosync, that completely replaces the CMAN and RGManager technologies from previous releases of the High Availability Add-On.

HA Add-On from RHEL 5 to 7 (figure: RHCS-5-7)

Pacemaker is high availability cluster software for Linux-like operating systems. Pacemaker is known as the 'Cluster Resource Manager'.

It provides maximum availability of cluster resources by failing resources over between the cluster nodes.

Pacemaker uses Corosync for heartbeat and internal communication among cluster components; Corosync also takes care of quorum in the cluster.

 

 


In this article we will demonstrate the installation and configuration of a two-node Apache (httpd) web server cluster using Pacemaker on CentOS 7.

In my setup I will use two virtual machines and shared storage from a CentOS 7 server. Two disks will be shared: one disk will be used as the fencing device and the other disk will be used as shared storage for the web server.

192.168.1.71 apache1.rmohan.com apache1 - CentOS 7.2, 8 GB RAM, 2 CPU
192.168.1.72 apache2.rmohan.com apache2 - CentOS 7.2, 8 GB RAM, 2 CPU

192.168.1.73 storage.rmohan.com storage - iSCSI shared storage


Step:1 Update the /etc/hosts file

Add the following lines to the /etc/hosts file on both nodes.

[root@apache1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.71 apache1.rmohan.com apache1
192.168.1.72 apache2.rmohan.com apache2
192.168.1.73 storage.rmohan.com storage
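
A quick sanity check that name resolution works between the nodes (hostnames as defined above):

[root@apache1 ~]# ping -c 2 apache2
[root@apache1 ~]# ping -c 2 storage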

Step:2 Install the time server

Install chrony on both nodes, then start and enable the time synchronization daemon (chronyd):

[root@apache1 ~]# yum install chrony -y
[root@apache1 ~]# systemctl start chronyd.service ; systemctl enable chronyd.service

[root@apache2 ~]# yum install chrony -y
[root@apache2 ~]# systemctl start chronyd.service ; systemctl enable chronyd.service
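
To verify that time synchronization is actually working, you can query chrony (this assumes the default NTP pool servers in /etc/chrony.conf are reachable from the nodes):

[root@apache1 ~]# chronyc sources
[root@apache1 ~]# timedatectl | grep -i ntp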

Step:3 Install the cluster and other required packages

Use the below yum command on both nodes to install the cluster package (pcs), fence agents and the web server (httpd):

[root@apache1 ~]# yum -y install pcs fence-agents-all iscsi-initiator-utils httpd

[root@apache2 ~]# yum -y install pcs fence-agents-all iscsi-initiator-utils httpd

Step:4 Set the password for the 'hacluster' user

It is recommended to use the same password for the 'hacluster' user on both nodes.

[root@apache1 ~]# echo p@ssw0rd | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
[root@apache1 ~]#

[root@apache2 ~]# echo p@ssw0rd | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
[root@apache2 ~]#

Step:5 Allow High Availability ports in the firewall

Use the firewall-cmd command on both nodes to open the High Availability ports in the OS firewall:

firewall-cmd --permanent --add-service=high-availability
success

firewall-cmd --reload
success
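
Since this cluster will also serve web traffic, it is worth opening HTTP as well; this is not part of the original steps, but Apache will be unreachable from other hosts without it:

firewall-cmd --permanent --add-service=http
firewall-cmd --reload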

Step:6 Start the cluster service and authorize nodes to join the cluster

Let's start the cluster service on both nodes:

[root@apache1 ~]# systemctl start pcsd.service ; systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
[root@apache1 ~]#

[root@apache2 ~]# systemctl start pcsd.service ; systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
[root@apache2 ~]#

Use the below command on either node to authorize the nodes to join the cluster.

[root@apache1 ~]# pcs cluster auth apache1 apache2
Username: hacluster
Password:
apache1: Authorized
apache2: Authorized
[root@apache1 ~]#

Step:7 Create the cluster and enable the cluster service

Use the below pcs command on any of the cluster nodes to create a cluster named 'apachecluster' with apache1 and apache2 as the cluster nodes:

[root@apache1 ~]# pcs cluster setup --start --name apachecluster apache1 apache2
Shutting down pacemaker/corosync services…
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services…
Removing all cluster configuration files…
apache1: Succeeded
apache2: Succeeded
Starting cluster on nodes: apache1, apache2…
apache2: Starting Cluster…
apache1: Starting Cluster…
Synchronizing pcsd certificates on nodes apache1, apache2…
apache1: Success
apache2: Success

Restarting pcsd on the nodes in order to reload the certificates...
apache1: Success
apache2: Success

Enable the cluster service using the below pcs command:

[root@apache1 ~]# pcs cluster enable --all
apache1: Cluster Enabled
apache2: Cluster Enabled
[root@apache1 ~]#

Now verify the cluster service:

[root@apache1 ~]# pcs cluster status
Cluster Status:
Last updated: Sat May 14 12:33:25 2016 Last change: Sat May 14 12:32:36 2016 by hacluster via crmd on apache2
Stack: corosync
Current DC: apache2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
2 nodes and 0 resources configured
Online: [ apache1 apache2 ]

PCSD Status:
apache1: Online
apache2: Online
[root@apache1 ~]#
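
Optionally, check the Corosync ring from each node as well; a healthy node reports its ring as active with no faults:

[root@apache1 ~]# corosync-cfgtool -s
[root@apache1 ~]# pcs status corosync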

Step:8 Set up iSCSI shared storage on the CentOS 7 storage server for both nodes

[root@storage ~]# vi /etc/hosts
[root@storage ~]# fdisk -c /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xf218266b.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@storage ~]# partprobe /dev/sdb
[root@storage ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
[root@storage ~]# vgcreate apachedate_vg /dev/sdb1
Volume group "apachedate_vg" successfully created
[root@storage ~]# fdisk -c /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xba406434.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@storage ~]# partprobe /dev/sdc
[root@storage ~]# pvcreate /dev/sdc1
Physical volume "/dev/sdc1" successfully created
[root@storage ~]# vgcreate apachefence_vg /dev/sdc1
Volume group "apachefence_vg" successfully created
[root@storage ~]#
[root@storage ~]#
[root@storage ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a– 49.51g 44.00m
/dev/sdb1 apachedate_vg lvm2 a– 20.00g 20.00g
/dev/sdc1 apachefence_vg lvm2 a– 20.00g 20.00g

[root@storage ~]# lvcreate -n apachedata_lvs -l 100%FREE apachedate_vg
Logical volume "apachedata_lvs" created.
[root@storage ~]# lvcreate -n apachefence_lvs -l 100%FREE apachefence_vg
Logical volume "apachefence_lvs" created.
[root@storage ~]#

[root@storage ~]# yum -y install targetcli

[root@storage ~]# targetcli
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/backstores/block> ls
o- block ………………………………………………………………………………………… [Storage Objects: 0]
/backstores/block> create apachedata /dev/apachedate_vg/apachedata_lvs
Created block storage object apachedata using /dev/apachedate_vg/apachedata_lvs.
/backstores/block> create apachefence /dev/apachefence_vg/apachefence_lvs
Created block storage object apachefence using /dev/apachefence_vg/apachefence_lvs.
/backstores/block> cd /iscsi
/iscsi> ls
o- iscsi ………………………………………………………………………. [Targets: 0]
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> ls
o- iscsi ……………………………………………………………………………………………….. [Targets: 1]
o- iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b ………………………………………………….. [TPGs: 1]
o- tpg1 ……………………………………………………………………………………. [no-gen-acls, no-auth]
o- acls ……………………………………………………………………………………………… [ACLs: 0]
o- luns ……………………………………………………………………………………………… [LUNs: 0]
o- portals ………………………………………………………………………………………… [Portals: 1]
o- 0.0.0.0:3260 …………………………………………………………………………………………. [OK]
/iscsi> cd iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b/
/iscsi/iqn.20….94eff7fe336b> ls
o- iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b ……………………………………………………. [TPGs: 1]
o- tpg1 ……………………………………………………………………………………… [no-gen-acls, no-auth]
o- acls ……………………………………………………………………………………………….. [ACLs: 0]
o- luns ……………………………………………………………………………………………….. [LUNs: 0]
o- portals ………………………………………………………………………………………….. [Portals: 1]
o- 0.0.0.0:3260 …………………………………………………………………………………………… [OK]
/iscsi/iqn.20….94eff7fe336b> cd tpg1/luns
/iscsi/iqn.20…36b/tpg1/luns> ls
o- luns …………………………………………………………………………………………………… [LUNs: 0]
/iscsi/iqn.20…36b/tpg1/luns> create /backstores/block/apachedata
Created LUN 0.
/iscsi/iqn.20…36b/tpg1/luns> create /backstores/block/apachefence
Created LUN 1.
/iscsi/iqn.20…36b/tpg1/luns> cd ..
/iscsi/iqn.20…f7fe336b/tpg1> ls
o- tpg1 ………………………………………………………………. [no-gen-acls, no-auth]
o- acls ………………………………………………………………………… [ACLs: 0]
o- luns ………………………………………………………………………… [LUNs: 2]
| o- lun0 ………………………………. [block/apachedata (/dev/apachedate_vg/apachedata_lvs)]
| o- lun1 ……………………………. [block/apachefence (/dev/apachefence_vg/apachefence_lvs)]
o- portals …………………………………………………………………… [Portals: 1]
o- 0.0.0.0:3260 ……………………………………………………………………. [OK]
/iscsi/iqn.20…f7fe336b/tpg1> cd acls
/iscsi/iqn.20…36b/tpg1/acls> ls
o- acls ………………………………………………………………………….. [ACLs: 0]
/iscsi/iqn.20…36b/tpg1/acls> create iqn.1994-05.com.redhat:b26f647eddb
Created Node ACL for iqn.1994-05.com.redhat:b26f647eddb
Created mapped LUN 1.
Created mapped LUN 0.
/iscsi/iqn.20…36b/tpg1/acls> create iqn.1994-05.com.redhat:1b9a01e1275
Created Node ACL for iqn.1994-05.com.redhat:1b9a01e1275
Created mapped LUN 1.
Created mapped LUN 0.
/iscsi/iqn.20…36b/tpg1/acls> cd /
/> saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
[root@storage ~]# systemctl start target.service
[root@storage ~]#
[root@storage ~]# systemctl enable target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.

Now scan the iSCSI storage on both nodes. Run the below commands on both nodes:

iscsiadm --mode discovery --type sendtargets --portal 192.168.1.73
iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b -l -p 192.168.1.73:3260
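
Note that the targetcli ACLs created above only admit the initiator IQNs iqn.1994-05.com.redhat:b26f647eddb and iqn.1994-05.com.redhat:1b9a01e1275, so before discovery make sure each node's initiator name matches one of them (output shown as expected for this setup):

[root@apache1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:b26f647eddb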

[root@apache1 ~]# systemctl start iscsi.service
[root@apache1 ~]# systemctl enable iscsi.service
[root@apache1 ~]# systemctl enable iscsid.service
Created symlink from /etc/systemd/system/multi-user.target.wants/iscsid.service to /usr/lib/systemd/system/iscsid.service.
[root@apache1 ~]#

[root@apache1 ~]# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 9 May 14 2016 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-home -> ../../dm-2
lrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-root -> ../../dm-0
lrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-swap -> ../../dm-1
lrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQH2nyITpeRJVbMnYzojU1b9qSDNbJr0eLn -> ../../dm-0
lrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQHhB2KiZ6jcZwYY8OJpwA4l11wnghcdTtJ -> ../../dm-2
lrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQHW6P6EC6fmlWdGYY5o41uhw9vKBmWKV0o -> ../../dm-1
lrwxrwxrwx 1 root root 10 May 14 2016 lvm-pv-uuid-YXOIJV-EPlD-dXwg-ePQX-D7av-jPdr-Grb4rp -> ../../sda2
lrwxrwxrwx 1 root root 9 May 14 15:30 scsi-3600140562a971495dce49a581f20d1ea -> ../../sdc
lrwxrwxrwx 1 root root 9 May 14 15:30 scsi-360014059dec3b96b8944a29a6cbe1d5e -> ../../sdb
lrwxrwxrwx 1 root root 9 May 14 15:30 wwn-0x600140562a971495dce49a581f20d1ea -> ../../sdc
lrwxrwxrwx 1 root root 9 May 14 15:30 wwn-0x60014059dec3b96b8944a29a6cbe1d5e -> ../../sdb
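
If it is unclear which WWN belongs to which LUN, the iSCSI session details map each LUN number to its attached SCSI disk (lun0 is the apachedata LUN and lun1 the apachefence LUN, per the targetcli listing above):

[root@apache1 ~]# iscsiadm -m session -P 3 | grep -E 'Lun:|Attached scsi disk'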

Step:9 Create the cluster resources

Define a stonith (Shoot The Other Node In The Head) fencing device for the cluster. It is the method used to isolate a node from the cluster when that node becomes unresponsive.

I am using the 20 GB iSCSI disk (/dev/sdc) for fencing.

Run the following command on either of the nodes:

[root@apache1 ~]# pcs stonith create scsi_fecing_device fence_scsi pcmk_host_list="apache1 apache2" pcmk_monitor_action="metadata" pcmk_reboot_action="off" devices="/dev/disk/by-id/wwn-0x600140562a971495dce49a581f20d1ea" meta provides="unfencing"
[root@apache1 ~]#

[root@apache1 ~]# fdisk /dev/sdc

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xab2385e3.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (8192-41934847, default 8192):
Using default value 8192
Last sector, +sectors or +size{K,M,G} (8192-41934847, default 41934847):
Using default value 41934847
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@apache1 ~]# fdisk -l

Disk /dev/sda: 64.4 GB, 64424509440 bytes, 125829120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a55d1

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 125829119 62401536 8e Linux LVM

Disk /dev/mapper/centos-root: 40.1 GB, 40093351936 bytes, 78307328 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 4160 MB, 4160749568 bytes, 8126464 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-home: 19.6 GB, 19574816768 bytes, 38232064 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 21.5 GB, 21470642176 bytes, 41934848 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

Disk /dev/sdc: 21.5 GB, 21470642176 bytes, 41934848 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
Disk label type: dos
Disk identifier: 0xab2385e3

Device Boot Start End Blocks Id System
/dev/sdc1 8192 41934847 20963328 83 Linux

[root@apache1 ~]# pcs stonith show
scsi_fecing_device (stonith:fence_scsi): Started apache1
[root@apache1 ~]#
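
To confirm that fencing really works before going further, you can fence the other node from apache1; be warned that this cuts apache2 off from the shared storage, so only do this in a test window:

[root@apache1 ~]# pcs stonith fence apache2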

Mount the new file system temporarily on /var/www and create sub-folders:

[root@apache1 ~]# ls -ltr /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 9 May 14 15:30 wwn-0x60014059dec3b96b8944a29a6cbe1d5e -> ../../sdb
lrwxrwxrwx 1 root root 9 May 14 15:30 scsi-360014059dec3b96b8944a29a6cbe1d5e -> ../../sdb
lrwxrwxrwx 1 root root 9 May 14 19:36 wwn-0x600140562a971495dce49a581f20d1ea -> ../../sdc
lrwxrwxrwx 1 root root 9 May 14 19:36 scsi-3600140562a971495dce49a581f20d1ea -> ../../sdc
lrwxrwxrwx 1 root root 10 May 14 19:36 wwn-0x600140562a971495dce49a581f20d1ea-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May 14 19:36 scsi-3600140562a971495dce49a581f20d1ea-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May 14 2016 lvm-pv-uuid-YXOIJV-EPlD-dXwg-ePQX-D7av-jPdr-Grb4rp -> ../../sda2
lrwxrwxrwx 1 root root 9 May 14 2016 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-root -> ../../dm-0
lrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQH2nyITpeRJVbMnYzojU1b9qSDNbJr0eLn -> ../../dm-0
lrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQHW6P6EC6fmlWdGYY5o41uhw9vKBmWKV0o -> ../../dm-1
lrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-swap -> ../../dm-1
lrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQHhB2KiZ6jcZwYY8OJpwA4l11wnghcdTtJ -> ../../dm-2
lrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-home -> ../../dm-2
[root@apache1 ~]# fdisk /dev/disk/by-id/wwn-0x60014059dec3b96b8944a29a6cbe1d5e
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x26a39fc5.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (8192-41934847, default 8192):
Using default value 8192
Last sector, +sectors or +size{K,M,G} (8192-41934847, default 41934847):
Using default value 41934847
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@apache1 ~]# mkfs.ext4 /dev/disk/by-id/wwn-0x60014059dec3b96b8944a29a6cbe1d5e
mke2fs 1.42.9 (28-Dec-2013)
/dev/disk/by-id/wwn-0x60014059dec3b96b8944a29a6cbe1d5e is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=1024 blocks
1310720 inodes, 5241856 blocks
262092 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

mount /dev/disk/by-id/wwn-0x60014059dec3b96b8944a29a6cbe1d5e /var/www/
df -TH
mkdir /var/www/html
mkdir /var/www/cgi-bin
mkdir /var/www/error
echo "Apache Web Server Pacemaker Cluster" > /var/www/html/index.html
umount /var/www/
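
If SELinux is in enforcing mode, the content created on the new filesystem may carry the wrong context; a suggested (optional) extra step is to remount and restore contexts before handing the filesystem over to the cluster:

mount /dev/disk/by-id/wwn-0x60014059dec3b96b8944a29a6cbe1d5e /var/www/
restorecon -Rv /var/www
umount /var/www/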

[root@apache1 ~]# pcs resource create webserver_fs Filesystem device="/dev/disk/by-id/wwn-0x60014059dec3b96b8944a29a6cbe1d5e" directory="/var/www" fstype="ext4" --group webgroup

Create a Virtual IP (IPaddr2) cluster resource using the below command. Execute it on any of the nodes.

[root@apache1 ~]# pcs resource create vip_res IPaddr2 ip=192.168.1.70 cidr_netmask=24 --group webgroup

Add the following lines to the /etc/httpd/conf/httpd.conf file on both nodes:

<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>

[root@apache1 ~]# pcs resource create apache_res apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group webgroup
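
Once all three resources in the webgroup are started, verify the group and test the site through the virtual IP (a quick check, assuming the VIP 192.168.1.70 and the index page created earlier):

[root@apache1 ~]# pcs status resources
[root@apache1 ~]# curl http://192.168.1.70/
Apache Web Server Pacemaker Cluster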

Pacemaker and pcs on Linux: configuring cluster resources by example

Once the cluster and the nodes' stonith devices are configured, we can start creating resources in the cluster.

In this example there are 3 SAN-based storage LUNs, all accessible by the nodes in the cluster. We create filesystem resources to let the cluster manage them; if a node fails to access a resource, the filesystem will float to another node.
Resource Creation

Create a filesystem based resource, resource id is fs11

#pcs resource create fs11 ocf:heartbeat:Filesystem params device=/dev/mapper/LUN11 directory=/lun11 fstype="xfs"

Normally we want to stop the filesystem gracefully, killing any processes that are accessing it when the resource is stopped, and to have the filesystem monitor enabled.

#pcs resource create fs11 ocf:heartbeat:Filesystem params device=/dev/mapper/LUN11 directory=/lun11 fstype="xfs" fast_stop="no" force_unmount="safe" op stop on-fail=stop timeout=200 op monitor on-fail=stop timeout=200 OCF_CHECK_LEVEL=10

From the left, the options are:
- the name of the filesystem resource, also called the resource id (fs11)
- the resource agent to use (ocf:heartbeat:Filesystem)
- the block device for the filesystem (e.g. /dev/mapper/LUN11)
- the mount point for the filesystem (e.g. /lun11)
- the filesystem type (xfs)
- fast_stop="no" and force_unmount="safe", which help the filesystem stop and unmount cleanly
- op stop on-fail=stop timeout=200: give the stop operation a 200s timeout and stop the resource if it fails
- op monitor on-fail=stop timeout=200 OCF_CHECK_LEVEL=10: similar to stop; check level 10 also does a read test to see if the filesystem is readable during the monitor probe

To list the created resource fs11:

# pcs resource
fs11 (ocf::heartbeat:Filesystem): Started

# pcs resource show fs11
Resource: fs11 (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/mapper/LUN11 directory=/lun11 fstype=xfs fast_stop=no force_unmount=safe
Operations: start interval=0s timeout=60 (fs11-start-timeout-60)
stop on-fail=stop interval=0s timeout=200 (fs11-stop-on-fail-stop-timeout-200)
monitor on-fail=stop interval=60s timeout=200 OCF_CHECK_LEVEL=10 (fs11-monitor-on-fail-stop-timeout-200)

There are 4 properties that identify a resource:

id: the id in the cluster that identifies a particular service -> fs11

The remaining 3 come just after the resource id on the command line, separated by colons (ocf:heartbeat:Filesystem):
ocf: the resource standard
heartbeat: the provider
Filesystem: the type

To check all available resources provided by Pacemaker, by category:

#pcs resource list # list all available resources
#pcs resource standards # list resource standards
#pcs resource providers # list all available resource providers
#pcs resource list <string> # works as a filter; for example, to list the Filesystem resource:
#pcs resource list Filesystem
ocf:heartbeat:Filesystem - Manages filesystem mounts

Delete resource

To delete a resource:

#pcs resource delete fs11

Resource-Specific Parameters

To check the parameters of the Filesystem resource type; all of these parameters can be set and updated:

# pcs resource describe Filesystem
ocf:heartbeat:Filesystem - Manages filesystem mounts

Resource script for Filesystem. It manages a Filesystem on a shared storage medium. The standard monitor operation of depth 0 (also known as probe) checks if the filesystem is mounted. If you want deeper tests, set OCF_CHECK_LEVEL to one of the following values:
10: read first 16 blocks of the device (raw read). This doesn't exercise the filesystem at all, but the device on which the filesystem lives. This is a noop for non-block devices such as NFS, SMBFS, or bind mounts.
20: test if a status file can be written and read. The status file must be writable by root. This is not always the case with an NFS mount, as NFS exports usually have the "root_squash" option set. In such a setup, you must either use read-only monitoring (depth=10), export with "no_root_squash" on your NFS server, or grant world write permissions on the directory where the status file is to be placed.

Resource options:
device (required): The name of block device for the filesystem, or -U, -L options for mount,
or NFS mount specification.
directory (required): The mount point for the filesystem.
fstype (required): The type of filesystem to be mounted.
options: Any extra options to be given as -o options to mount. For bind mounts, add "bind"
here and set fstype to "none". We will do the right thing for options such as "bind,ro".
statusfile_prefix: The prefix to be used for a status file for resource monitoring with depth 20. If you don't specify
this parameter, all status files will be created in a separate directory.
run_fsck: Specify how to decide whether to run fsck or not. "auto" : decide to run fsck depending on the fstype (default)
"force" : always run fsck regardless of the fstype "no" : do not run fsck ever.
fast_stop: Normally, we expect no users of the filesystem and the stop operation to finish quickly. If you cannot
control the filesystem users easily and want to prevent the stop action from failing, then set this parameter
to "no" and add an appropriate timeout for the stop operation.
force_clones: The use of a clone setup for local filesystems is forbidden by default. For special setups like glusterfs,
cloning a mount of a local device with a filesystem like ext4 or xfs independently on several nodes is a
valid use case. Only set this to "true" if you know what you are doing!
force_unmount: This option allows specifying how to handle processes that are currently accessing the mount directory.
"true" : Default value, kill processes accessing mount point "safe" : Kill processes accessing mount
point using methods that avoid functions that could potentially block during process detection "false" :
Do not kill any processes. The 'safe' option uses shell logic to walk the /proc/ directory for pids
using the mount point while the default option uses the fuser cli tool. fuser is known to perform
operations that can potentially block if unresponsive nfs mounts are in use on the system.

Resource Meta Options

The resource meta options can be updated at any time. For example, right now the fs11 resource can be started on any node; if you prefer the resource to have a preferred node, set its stickiness:

#pcs status | grep fs11
fs11 (ocf::heartbeat:Filesystem): Started nodeC
#pcs resource meta fs11 resource-stickiness=500

Set nodeC to standby and the resource fs11 will float to another node:

#pcs cluster standby nodeC
#pcs status | grep fs11
fs11 (ocf::heartbeat:Filesystem): Started nodeA

Take nodeC out of standby and the resource fs11 will float back:

#pcs cluster unstandby nodeC
#pcs status | grep fs11
fs11 (ocf::heartbeat:Filesystem): Started nodeC

Resource Operations

You can add operations to a resource when creating it, or add them later with:

#pcs resource op add <resource id> <operation action> [operation properties]
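
For example, to tighten fs11's monitor interval (reusing the fs11 resource from above; the old monitor operation should be removed first so pcs does not complain about duplicates):

#pcs resource op remove fs11 monitor
#pcs resource op add fs11 monitor interval=30s timeout=200 on-fail=stop OCF_CHECK_LEVEL=10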

Displaying Configured Resources

To list all resources:

# pcs resource show
fs11 (ocf::heartbeat:Filesystem): Started
fs12 (ocf::heartbeat:Filesystem): Started

To list a resource and its full attributes, meta and operations configurations

# pcs resource show fs11
Resource: fs11 (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/mapper/LUN11 directory=/lun11 fstype=xfs fast_stop=no force_unmount=safe
Meta Attrs: resource-stickiness=500
Operations: start interval=0s timeout=60 (fs11-start-timeout-60)
stop on-fail=stop interval=0s timeout=200 (fs11-stop-on-fail-stop-timeout-200)
monitor on-fail=stop interval=60s timeout=200 OCF_CHECK_LEVEL=10 (fs11-monitor-on-fail-stop-timeout-200)

To show all resources with their full configuration:

#pcs resource show --full

Enabling and Disabling Cluster Resources

To disable a resource so that it cannot start on this or any other node:

#pcs resource disable fs11
# pcs resource
fs11 (ocf::heartbeat:Filesystem): Stopped

To enable the resource

#pcs resource enable fs11
# pcs resource
fs11 (ocf::heartbeat:Filesystem): Started

Cluster Resources Cleanup

When a resource gets into a failed state, showing errors on start, stop or elsewhere, clean up its status and fail counts with:

#pcs resource cleanup
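
To clean up just one resource instead of everything, pass its resource id:

#pcs resource cleanup fs11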
