{"id":5935,"date":"2016-05-16T23:02:43","date_gmt":"2016-05-16T15:02:43","guid":{"rendered":"http:\/\/rmohan.com\/?p=5935"},"modified":"2016-05-17T12:48:27","modified_gmt":"2016-05-17T04:48:27","slug":"pacemaker-apache-on-high-availability-centos-7","status":"publish","type":"post","link":"https:\/\/mohan.sg\/?p=5935","title":{"rendered":"Pacemaker Apache on High Availability CENTOS 7"},"content":{"rendered":"<p><strong>Pacemaker Apache on High Availability CENTOS 7 <\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p dir=\"ltr\">Red Hat introduces new open source software with every release. The Red Hat Enterprise Linux 7 High Availability Add-On introduces a new underlying high-availability stack based on Pacemaker and Corosync, which completely replaces the CMAN and RGManager technologies from previous releases of the High Availability Add-On.<\/p>\n<p dir=\"ltr\">HA Add-On from RHEL 5 to 7<\/p>\n<p dir=\"ltr\"><a href=\"http:\/\/rmohan.com\/wp-content\/uploads\/2016\/05\/RHCS-5-7.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-5961\" src=\"http:\/\/rmohan.com\/wp-content\/uploads\/2016\/05\/RHCS-5-7.png\" alt=\"RHCS-5-7\" width=\"1027\" height=\"729\" srcset=\"https:\/\/mohan.sg\/wp-content\/uploads\/2016\/05\/RHCS-5-7.png 1027w, https:\/\/mohan.sg\/wp-content\/uploads\/2016\/05\/RHCS-5-7-300x213.png 300w, https:\/\/mohan.sg\/wp-content\/uploads\/2016\/05\/RHCS-5-7-768x545.png 768w, https:\/\/mohan.sg\/wp-content\/uploads\/2016\/05\/RHCS-5-7-1024x727.png 1024w, https:\/\/mohan.sg\/wp-content\/uploads\/2016\/05\/RHCS-5-7-150x106.png 150w, https:\/\/mohan.sg\/wp-content\/uploads\/2016\/05\/RHCS-5-7-400x284.png 400w\" sizes=\"(max-width: 1027px) 100vw, 1027px\" \/><\/a><\/p>\n<p>Pacemaker is high-availability cluster software for Linux-like operating systems. Pacemaker is known as the \u2018Cluster Resource Manager\u2019.<\/p>\n<p>It provides maximum availability of the cluster resources by failing resources over between the cluster nodes.<\/p>\n<p>Pacemaker uses Corosync for heartbeat and internal communication among cluster components; Corosync also takes care of quorum in the cluster.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/rmohan.com\/wp-content\/uploads\/2016\/05\/haserver_cluster4.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-5963\" src=\"http:\/\/rmohan.com\/wp-content\/uploads\/2016\/05\/haserver_cluster4.png\" alt=\"haserver_cluster4\" width=\"660\" height=\"360\" srcset=\"https:\/\/mohan.sg\/wp-content\/uploads\/2016\/05\/haserver_cluster4.png 660w, https:\/\/mohan.sg\/wp-content\/uploads\/2016\/05\/haserver_cluster4-300x164.png 300w, https:\/\/mohan.sg\/wp-content\/uploads\/2016\/05\/haserver_cluster4-150x82.png 150w, https:\/\/mohan.sg\/wp-content\/uploads\/2016\/05\/haserver_cluster4-400x218.png 400w\" sizes=\"(max-width: 660px) 100vw, 660px\" \/><\/a><\/p>\n<p>In this article we will demonstrate the installation and configuration of a two-node Apache (httpd) web server cluster using Pacemaker on CentOS 7.<\/p>\n<p>In my setup I will use two virtual machines and shared storage from a CentOS 7 server<br \/>\n(two disks will be shared: one disk will be used as the fencing device and the other disk will be used as shared storage for the web server).<\/p>\n<p>192.168.1.71 apache1.rmohan.com apache1 -CENTOS7.2 8 GB 2 CPU<br \/>\n192.168.1.72 apache2.rmohan.com apache2 -CENTOS7.2 8 GB 2 CPU<\/p>\n<p>192.168.1.73 storage.rmohan.com storage<\/p>\n<p>[root@apache1 ~]#<\/p>\n<p>Step:1 Update the \u2018\/etc\/hosts\u2019 file<\/p>\n<p>Add the following lines to the \/etc\/hosts file on both nodes.<\/p>\n<p>[root@apache1 ~]# cat \/etc\/hosts<br \/>\n127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4<br \/>\n::1 localhost localhost.localdomain localhost6 localhost6.localdomain6<br \/>\n192.168.1.71 apache1.rmohan.com apache1<br \/>\n192.168.1.72 apache2.rmohan.com apache2<br 
\/>\n192.168.1.73 storage.rmohan.com storage<\/p>\n<p>Step:2 Install and enable the time service<\/p>\n<p>yum install chrony -y<\/p>\n<p>Nothing to do<br \/>\n[root@apache2 ~]# systemctl start chronyd.service<br \/>\n[root@apache2 ~]# systemctl enable chronyd.service<br \/>\n[root@apache2 ~]#<\/p>\n<p>STEP3 Install the cluster and other required packages.<\/p>\n<p>Use the below yum command on both the nodes to install the cluster package (pcs), fence agents and the web server (httpd).<\/p>\n<p>[root@apache1 ~]# yum -y install pcs fence-agents-all iscsi-initiator-utils httpd<\/p>\n<p>[root@apache2 ~]# yum -y install pcs fence-agents-all iscsi-initiator-utils httpd<\/p>\n<p>STEP4<br \/>\nSet the password for the \u2018hacluster\u2019 user.<br \/>\nIt is recommended to use the same password for the \u2018hacluster\u2019 user on both nodes.<\/p>\n<p>[root@apache1 ~]# echo p@ssw0rd | passwd --stdin hacluster<br \/>\nChanging password for user hacluster.<br \/>\npasswd: all authentication tokens updated successfully.<br \/>\n[root@apache1 ~]#<\/p>\n<p>[root@apache2 ~]# echo p@ssw0rd | passwd --stdin hacluster<br \/>\nChanging 
password for user hacluster.<br \/>\npasswd: all authentication tokens updated successfully.<br \/>\n[root@apache2 ~]#<\/p>\n<p>STEP5<br \/>\nAllow the High Availability ports in the firewall.<br \/>\nUse the \u2018firewall-cmd\u2018 command on both the nodes to open the High Availability ports in the OS firewall.<\/p>\n<p>firewall-cmd --permanent --add-service=high-availability<br \/>\nsuccess<\/p>\n<p>firewall-cmd --reload<br \/>\nsuccess<\/p>\n<p>STEP6<br \/>\nStart the cluster service and authorize the nodes to join the cluster.<\/p>\n<p>Let\u2019s start the cluster service on both the nodes:<\/p>\n<p>[root@apache1 ~]# systemctl start pcsd.service ; systemctl enable pcsd.service<br \/>\nCreated symlink from \/etc\/systemd\/system\/multi-user.target.wants\/pcsd.service to \/usr\/lib\/systemd\/system\/pcsd.service.<br \/>\n[root@apache1 ~]#<\/p>\n<p>[root@apache2 ~]# systemctl start pcsd.service ; systemctl enable pcsd.service<br \/>\nCreated symlink from \/etc\/systemd\/system\/multi-user.target.wants\/pcsd.service to \/usr\/lib\/systemd\/system\/pcsd.service.<br \/>\n[root@apache2 ~]#<\/p>\n<p>Use the below command on either of the nodes to authorize the nodes to join the cluster.<\/p>\n<p>[root@apache1 ~]# pcs cluster auth apache1 apache2<br \/>\nUsername: hacluster<br \/>\nPassword:<br \/>\napache1: Authorized<br \/>\napache2: Authorized<br \/>\n[root@apache1 ~]#<\/p>\n<p>Create the cluster &amp; enable the cluster service<\/p>\n<p>Use the below pcs command on any of the cluster nodes to create a cluster named \u2018apachecluster\u2018 with apache1 &amp; apache2 as the cluster nodes.<br \/>\n[root@apache1 ~]# pcs cluster setup --start --name apachecluster apache1 apache2<br \/>\nShutting down pacemaker\/corosync services&#8230;<br \/>\nRedirecting to \/bin\/systemctl stop pacemaker.service<br \/>\nRedirecting to \/bin\/systemctl stop corosync.service<br \/>\nKilling any remaining services&#8230;<br \/>\nRemoving all cluster configuration files&#8230;<br \/>\napache1: 
Succeeded<br \/>\napache2: Succeeded<br \/>\nStarting cluster on nodes: apache1, apache2&#8230;<br \/>\napache2: Starting Cluster&#8230;<br \/>\napache1: Starting Cluster&#8230;<br \/>\nSynchronizing pcsd certificates on nodes apache1, apache2&#8230;<br \/>\napache1: Success<br \/>\napache2: Success<\/p>\n<p>Restarting pcsd on the nodes in order to reload the certificates&#8230;<br \/>\napache1: Success<br \/>\napache2: Success<\/p>\n<p>Enable the cluster service using the below pcs command:<\/p>\n<p>[root@apache1 ~]# pcs cluster enable --all<br \/>\napache1: Cluster Enabled<br \/>\napache2: Cluster Enabled<br \/>\n[root@apache1 ~]#<\/p>\n<p>Now verify the cluster service<\/p>\n<p>[root@apache1 ~]# pcs cluster status<br \/>\nCluster Status:<br \/>\nLast updated: Sat May 14 12:33:25 2016 Last change: Sat May 14 12:32:36 2016 by hacluster via crmd on apache2<br \/>\nStack: corosync<br \/>\nCurrent DC: apache2 (version 1.1.13-10.el7_2.2-44eb2dd) &#8211; partition with quorum<br \/>\n2 nodes and 0 resources configured<br \/>\nOnline: [ apache1 apache2 ]<\/p>\n<p>PCSD Status:<br \/>\napache1: Online<br \/>\napache2: Online<br \/>\n[root@apache1 ~]#<\/p>\n<p>STEP8<br \/>\nSet up iSCSI shared storage on the CentOS 7 storage server for both the nodes.<\/p>\n<p>[root@storage ~]# vi \/etc\/hosts<br \/>\n[root@storage ~]# fdisk -c \/dev\/sdb<br \/>\nWelcome to fdisk (util-linux 2.23.2).<\/p>\n<p>Changes will remain in memory only, until you decide to write them.<br \/>\nBe careful before using the write command.<\/p>\n<p>Device does not contain a recognized partition table<br \/>\nBuilding a new DOS disklabel with disk identifier 0xf218266b.<\/p>\n<p>Command (m for help): n<br \/>\nPartition type:<br \/>\np primary (0 primary, 0 extended, 4 free)<br \/>\ne extended<br \/>\nSelect (default p): p<br \/>\nPartition number (1-4, default 1): 1<br \/>\nFirst sector (2048-41943039, default 2048):<br \/>\nUsing default value 2048<br \/>\nLast sector, +sectors or +size{K,M,G} (2048-41943039, default 
41943039):<br \/>\nUsing default value 41943039<br \/>\nPartition 1 of type Linux and of size 20 GiB is set<\/p>\n<p>Command (m for help): t<br \/>\nSelected partition 1<br \/>\nHex code (type L to list all codes): 8e<br \/>\nChanged type of partition &#8216;Linux&#8217; to &#8216;Linux LVM&#8217;<\/p>\n<p>Command (m for help): w<br \/>\nThe partition table has been altered!<\/p>\n<p>Calling ioctl() to re-read partition table.<br \/>\nSyncing disks.<br \/>\n[root@storage ~]# partprobe \/dev\/sdb<br \/>\n[root@storage ~]# pvcreate \/dev\/sdb1<br \/>\nPhysical volume &#8220;\/dev\/sdb1&#8221; successfully created<br \/>\n[root@storage ~]# vgcreate apachedate_vg \/dev\/sdb1<br \/>\nVolume group &#8220;apachedate_vg&#8221; successfully created<br \/>\n[root@storage ~]# fdisk -c \/dev\/sdc<br \/>\nWelcome to fdisk (util-linux 2.23.2).<\/p>\n<p>Changes will remain in memory only, until you decide to write them.<br \/>\nBe careful before using the write command.<\/p>\n<p>Device does not contain a recognized partition table<br \/>\nBuilding a new DOS disklabel with disk identifier 0xba406434.<\/p>\n<p>Command (m for help): n<br \/>\nPartition type:<br \/>\np primary (0 primary, 0 extended, 4 free)<br \/>\ne extended<br \/>\nSelect (default p): p<br \/>\nPartition number (1-4, default 1):<br \/>\nFirst sector (2048-41943039, default 2048):<br \/>\nUsing default value 2048<br \/>\nLast sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):<br \/>\nUsing default value 41943039<br \/>\nPartition 1 of type Linux and of size 20 GiB is set<\/p>\n<p>Command (m for help): t<br \/>\nSelected partition 1<br \/>\nHex code (type L to list all codes): 8e<br \/>\nChanged type of partition &#8216;Linux&#8217; to &#8216;Linux LVM&#8217;<\/p>\n<p>Command (m for help): w<br \/>\nThe partition table has been altered!<\/p>\n<p>Calling ioctl() to re-read partition table.<br \/>\nSyncing disks.<br \/>\n[root@storage ~]# partprobe \/dev\/sdc<br \/>\n[root@storage ~]# pvcreate 
\/dev\/sdc1<br \/>\nPhysical volume &#8220;\/dev\/sdc1&#8221; successfully created<br \/>\n[root@storage ~]# vgcreate apachefence_vg \/dev\/sdc1<br \/>\nVolume group &#8220;apachefence_vg&#8221; successfully created<br \/>\n[root@storage ~]#<br \/>\n[root@storage ~]#<br \/>\n[root@storage ~]# pvs<br \/>\nPV VG Fmt Attr PSize PFree<br \/>\n\/dev\/sda2 centos lvm2 a&#8211; 49.51g 44.00m<br \/>\n\/dev\/sdb1 apachedate_vg lvm2 a&#8211; 20.00g 20.00g<br \/>\n\/dev\/sdc1 apachefence_vg lvm2 a&#8211; 20.00g 20.00g<\/p>\n<p>[root@storage ~]# lvcreate -n apachedata_lvs -l 100%FREE apachedate_vg<br \/>\nLogical volume &#8220;apachedata_lvs&#8221; created.<br \/>\n[root@storage ~]# lvcreate -n apachefence_lvs -l 100%FREE apachefence_vg<br \/>\nLogical volume &#8220;apachefence_lvs&#8221; created.<br \/>\n[root@storage ~]#<\/p>\n<p>[root@storage ~]# yum -y install targetcli<\/p>\n<p>[root@storage ~]# targetcli<br \/>\ntargetcli shell version 2.1.fb41<br \/>\nCopyright 2011-2013 by Datera, Inc and others.<br \/>\nFor help on commands, type &#8216;help&#8217;.<\/p>\n<p>\/backstores\/block&gt; ls<br \/>\no- block &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [Storage Objects: 0]<br \/>\n\/backstores\/block&gt; ls<br \/>\no- block &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [Storage Objects: 0]<br \/>\n\/backstores\/block&gt; create apachedata \/dev\/<br \/>\n\/dev\/apachedate_vg\/ \/dev\/apachefence_vg\/ \/dev\/block\/ \/dev\/bsg\/ \/dev\/bus\/ \/dev\/cdrom<br \/>\n\/dev\/centos\/ \/dev\/char\/ \/dev\/cpu\/ \/dev\/disk\/ \/dev\/dm-0 \/dev\/dm-1<br \/>\n\/dev\/dm-2 \/dev\/dm-3 \/dev\/dri\/ 
\/dev\/fd\/ \/dev\/hugepages\/ \/dev\/input\/<br \/>\n\/dev\/mapper\/ \/dev\/mqueue\/ \/dev\/net\/ \/dev\/pts\/ \/dev\/raw\/ \/dev\/sda<br \/>\n\/dev\/sda1 \/dev\/sda2 \/dev\/sdb \/dev\/sdb1 \/dev\/sdc \/dev\/sdc1<br \/>\n\/dev\/shm\/ \/dev\/snd\/ \/dev\/sr0 \/dev\/vfio\/<br \/>\n\/backstores\/block&gt; create apachedata \/dev\/apachedate_vg\/apachedata_lvs<br \/>\nCreated block storage object apachedata using \/dev\/apachedate_vg\/apachedata_lvs.<br \/>\n\/backstores\/block&gt; create apachefence \/dev\/apache<br \/>\n\/dev\/apachedate_vg\/ \/dev\/apachefence_vg\/<br \/>\n\/backstores\/block&gt; create apachefence \/dev\/apachefence_vg\/apachefence_lvs<br \/>\nCreated block storage object apachefence using \/dev\/apachefence_vg\/apachefence_lvs.<br \/>\n\/backstores\/block&gt; cd \/iscsi<br \/>\n\/iscsi&gt; ls<br \/>\no- iscsi &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;. [Targets: 0]<br \/>\n\/iscsi&gt; create<br \/>\nCreated target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b.<br \/>\nCreated TPG 1.<br \/>\nGlobal pref auto_add_default_portal=true<br \/>\nCreated default portal listening on all IPs (0.0.0.0), port 3260.<br \/>\n\/iscsi&gt; ls<br \/>\no- iscsi &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;.. [Targets: 1]<br \/>\no- iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;.. 
[TPGs: 1]<br \/>\no- tpg1 &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;. [no-gen-acls, no-auth]<br \/>\no- acls &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [ACLs: 0]<br \/>\no- luns &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [LUNs: 0]<br \/>\no- portals &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [Portals: 1]<br \/>\no- 0.0.0.0:3260 &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;. [OK]<br \/>\n\/iscsi&gt; cd luns<br \/>\nNo such path \/iscsi\/luns<br \/>\n\/iscsi&gt; ls<br \/>\no- iscsi &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;.. [Targets: 1]<br \/>\no- iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;.. 
[TPGs: 1]<br \/>\no- tpg1 &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;. [no-gen-acls, no-auth]<br \/>\no- acls &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [ACLs: 0]<br \/>\no- luns &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [LUNs: 0]<br \/>\no- portals &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [Portals: 1]<br \/>\no- 0.0.0.0:3260 &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;. [OK]<br \/>\n\/iscsi&gt; cd iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b\/<br \/>\n\/iscsi\/iqn.20&#8230;.94eff7fe336b&gt; ls<br \/>\no- iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;. 
[TPGs: 1]<br \/>\no- tpg1 &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [no-gen-acls, no-auth]<br \/>\no- acls &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;.. [ACLs: 0]<br \/>\no- luns &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;.. [LUNs: 0]<br \/>\no- portals &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;.. 
[Portals: 1]<br \/>\no- 0.0.0.0:3260 &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [OK]<br \/>\n\/iscsi\/iqn.20&#8230;.94eff7fe336b&gt; cd luns<br \/>\nNo such path \/iscsi\/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b\/luns<br \/>\n\/iscsi\/iqn.20&#8230;.94eff7fe336b&gt; cd tpg1\/luns<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/luns&gt; ls<br \/>\no- luns &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [LUNs: 0]<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/luns&gt; cd luns<br \/>\nNo such path \/iscsi\/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b\/tpg1\/luns\/luns<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/luns&gt; ls<br \/>\no- luns &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;.. 
[LUNs: 0]<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/luns&gt; [wd<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/luns&gt; pwd<br \/>\n\/iscsi\/iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b\/tpg1\/luns<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/luns&gt; create iqn.1994-05.com.redhat:b26f647eddb<br \/>\nstorage object or path not valid<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/luns&gt; create iqn.1994-05.com.redhat:b26f647eddb<br \/>\nstorage object or path not valid<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/luns&gt; create \/backstores\/block\/apachedata<br \/>\nCreated LUN 0.<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/luns&gt; create \/backstores\/block\/apachefence<br \/>\nCreated LUN 1.<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/luns&gt; cd ..<br \/>\n\/iscsi\/iqn.20&#8230;f7fe336b\/tpg1&gt; ls<br \/>\no- tpg1 &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;. [no-gen-acls, no-auth]<br \/>\no- acls &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [ACLs: 0]<br \/>\no- luns &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [LUNs: 2]<br \/>\n| o- lun0 &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;. [block\/apachedata (\/dev\/apachedate_vg\/apachedata_lvs)]<br \/>\n| o- lun1 &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;. 
[block\/apachefence (\/dev\/apachefence_vg\/apachefence_lvs)]<br \/>\no- portals &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230; [Portals: 1]<br \/>\no- 0.0.0.0:3260 &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;. [OK]<br \/>\n\/iscsi\/iqn.20&#8230;f7fe336b\/tpg1&gt; cd acls<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/acls&gt; ls<br \/>\no- acls &#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;&#8230;.. [ACLs: 0]<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/acls&gt; create iqn.1994-05.com.redhat:b26f647eddb<br \/>\nCreated Node ACL for iqn.1994-05.com.redhat:b26f647eddb<br \/>\nCreated mapped LUN 1.<br \/>\nCreated mapped LUN 0.<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/acls&gt; create iqn.1994-05.com.redhat:1b9a01e1275<br \/>\nCreated Node ACL for iqn.1994-05.com.redhat:1b9a01e1275<br \/>\nCreated mapped LUN 1.<br \/>\nCreated mapped LUN 0.<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/acls&gt; cd .<br \/>\n\/iscsi\/iqn.20&#8230;36b\/tpg1\/acls&gt; cd \/<br \/>\n\/&gt; saveconfig<br \/>\nLast 10 configs saved in \/etc\/target\/backup.<br \/>\nConfiguration saved to \/etc\/target\/saveconfig.json<br \/>\n\/&gt; exit<br \/>\nGlobal pref auto_save_on_exit=true<br \/>\nLast 10 configs saved in \/etc\/target\/backup.<br \/>\nConfiguration saved to \/etc\/target\/saveconfig.json<br \/>\n[root@storage ~]# systemctl start target.service<br \/>\n[root@storage ~]#<br \/>\n[root@storage ~]# systemctl enable target.service<br \/>\nCreated symlink from \/etc\/systemd\/system\/multi-user.target.wants\/target.service to \/usr\/lib\/systemd\/system\/target.service.<\/p>\n<p>Now Scan the 
iSCSI storage on both the nodes:<\/p>\n<p>Run the below commands on both the nodes.<\/p>\n<p>iscsiadm --mode discovery --type sendtargets --portal 192.168.1.73<br \/>\niscsiadm -m node -T iqn.2003-01.org.linux-iscsi.storage.x8664:sn.94eff7fe336b -l -p 192.168.1.73:3260<\/p>\n<p>[root@apache1 ~]# systemctl start iscsi.service<br \/>\n[root@apache1 ~]# systemctl enable iscsi.service<br \/>\n[root@apache1 ~]# systemctl enable iscsid.service<br \/>\nCreated symlink from \/etc\/systemd\/system\/multi-user.target.wants\/iscsid.service to \/usr\/lib\/systemd\/system\/iscsid.service.<br \/>\n[root@apache1 ~]#<\/p>\n<p>[root@apache1 ~]# ls -l \/dev\/disk\/by-id\/<br \/>\ntotal 0<br \/>\nlrwxrwxrwx 1 root root 9 May 14 2016 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -&gt; ..\/..\/sr0<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-home -&gt; ..\/..\/dm-2<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-root -&gt; ..\/..\/dm-0<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-swap -&gt; ..\/..\/dm-1<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQH2nyITpeRJVbMnYzojU1b9qSDNbJr0eLn -&gt; ..\/..\/dm-0<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQHhB2KiZ6jcZwYY8OJpwA4l11wnghcdTtJ -&gt; ..\/..\/dm-2<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQHW6P6EC6fmlWdGYY5o41uhw9vKBmWKV0o -&gt; ..\/..\/dm-1<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 lvm-pv-uuid-YXOIJV-EPlD-dXwg-ePQX-D7av-jPdr-Grb4rp -&gt; ..\/..\/sda2<br \/>\nlrwxrwxrwx 1 root root 9 May 14 15:30 scsi-3600140562a971495dce49a581f20d1ea -&gt; ..\/..\/sdc<br \/>\nlrwxrwxrwx 1 root root 9 May 14 15:30 scsi-360014059dec3b96b8944a29a6cbe1d5e -&gt; ..\/..\/sdb<br \/>\nlrwxrwxrwx 1 root root 9 May 14 15:30 wwn-0x600140562a971495dce49a581f20d1ea -&gt; ..\/..\/sdc<br \/>\nlrwxrwxrwx 1 root root 9 May 14 15:30 wwn-0x60014059dec3b96b8944a29a6cbe1d5e 
-&gt; ..\/..\/sdb<\/p>\n<p>Step:9 Create the Cluster Resources.<\/p>\n<p>Define a stonith (Shoot The Other Node In The Head) fencing device for the cluster. Stonith is a method to isolate a node from the cluster when the node becomes unresponsive.<\/p>\n<p>I am using the second iSCSI disk (\/dev\/sdc) for fencing.<\/p>\n<p>Run the following command on either of the nodes:<\/p>\n<p>[root@apache1 ~]# pcs stonith create scsi_fecing_device fence_scsi pcmk_host_list=\"apache1 apache2\" pcmk_monitor_action=\"metadata\" pcmk_reboot_action=\"off\" devices=\"\/dev\/disk\/by-id\/wwn-0x600140562a971495dce49a581f20d1ea\" meta provides=\"unfencing\"<br \/>\n[root@apache1 ~]#<\/p>\n<p>[root@apache1 ~]# fdisk \/dev\/sdc<\/p>\n<p>Welcome to fdisk (util-linux 2.23.2).<\/p>\n<p>Changes will remain in memory only, until you decide to write them.<br \/>\nBe careful before using the write command.<\/p>\n<p>Device does not contain a recognized partition table<br \/>\nBuilding a new DOS disklabel with disk identifier 0xab2385e3.<\/p>\n<p>Command (m for help): n<br \/>\nPartition type:<br \/>\np primary (0 primary, 0 extended, 4 free)<br \/>\ne extended<br \/>\nSelect (default p): p<br \/>\nPartition number (1-4, default 1):<br \/>\nFirst sector (8192-41934847, default 8192):<br \/>\nUsing default value 8192<br \/>\nLast sector, +sectors or +size{K,M,G} (8192-41934847, default 41934847):<br \/>\nUsing default value 41934847<br \/>\nPartition 1 of type Linux and of size 20 GiB is set<\/p>\n<p>Command (m for help): w<br \/>\nThe partition table has been altered!<\/p>\n<p>Calling ioctl() to re-read partition table.<br \/>\nSyncing disks.<br \/>\n[root@apache1 ~]# fdisk -l<\/p>\n<p>Disk \/dev\/sda: 64.4 GB, 64424509440 bytes, 125829120 sectors<br \/>\nUnits = sectors of 1 * 512 = 512 bytes<br \/>\nSector size (logical\/physical): 512 bytes \/ 512 bytes<br \/>\nI\/O size (minimum\/optimal): 512 bytes \/ 512 bytes<br \/>\nDisk label type: dos<br \/>\nDisk identifier: 
0x000a55d1<\/p>\n<p>Device Boot Start End Blocks Id System<br \/>\n\/dev\/sda1 * 2048 1026047 512000 83 Linux<br \/>\n\/dev\/sda2 1026048 125829119 62401536 8e Linux LVM<\/p>\n<p>Disk \/dev\/mapper\/centos-root: 40.1 GB, 40093351936 bytes, 78307328 sectors<br \/>\nUnits = sectors of 1 * 512 = 512 bytes<br \/>\nSector size (logical\/physical): 512 bytes \/ 512 bytes<br \/>\nI\/O size (minimum\/optimal): 512 bytes \/ 512 bytes<\/p>\n<p>Disk \/dev\/mapper\/centos-swap: 4160 MB, 4160749568 bytes, 8126464 sectors<br \/>\nUnits = sectors of 1 * 512 = 512 bytes<br \/>\nSector size (logical\/physical): 512 bytes \/ 512 bytes<br \/>\nI\/O size (minimum\/optimal): 512 bytes \/ 512 bytes<\/p>\n<p>Disk \/dev\/mapper\/centos-home: 19.6 GB, 19574816768 bytes, 38232064 sectors<br \/>\nUnits = sectors of 1 * 512 = 512 bytes<br \/>\nSector size (logical\/physical): 512 bytes \/ 512 bytes<br \/>\nI\/O size (minimum\/optimal): 512 bytes \/ 512 bytes<\/p>\n<p>Disk \/dev\/sdb: 21.5 GB, 21470642176 bytes, 41934848 sectors<br \/>\nUnits = sectors of 1 * 512 = 512 bytes<br \/>\nSector size (logical\/physical): 512 bytes \/ 512 bytes<br \/>\nI\/O size (minimum\/optimal): 512 bytes \/ 4194304 bytes<\/p>\n<p>Disk \/dev\/sdc: 21.5 GB, 21470642176 bytes, 41934848 sectors<br \/>\nUnits = sectors of 1 * 512 = 512 bytes<br \/>\nSector size (logical\/physical): 512 bytes \/ 512 bytes<br \/>\nI\/O size (minimum\/optimal): 512 bytes \/ 4194304 bytes<br \/>\nDisk label type: dos<br \/>\nDisk identifier: 0xab2385e3<\/p>\n<p>Device Boot Start End Blocks Id System<br \/>\n\/dev\/sdc1 8192 41934847 20963328 83 Linux<\/p>\n<p>[root@apache1 ~]# pcs stonith show<br \/>\nscsi_fecing_device (stonith:fence_scsi): Started apache1<br \/>\n[root@apache1 ~]#<\/p>\n<p>Mount the new file system temporarily on \/var\/www and create sub-folders<\/p>\n<p>[root@apache1 ~]# ls -ltr \/dev\/disk\/by-id\/<br \/>\ntotal 0<br \/>\nlrwxrwxrwx 1 root root 9 May 14 15:30 wwn-0x60014059dec3b96b8944a29a6cbe1d5e -&gt; ..\/..\/sdb<br 
\/>\nlrwxrwxrwx 1 root root 9 May 14 15:30 scsi-360014059dec3b96b8944a29a6cbe1d5e -&gt; ..\/..\/sdb<br \/>\nlrwxrwxrwx 1 root root 9 May 14 19:36 wwn-0x600140562a971495dce49a581f20d1ea -&gt; ..\/..\/sdc<br \/>\nlrwxrwxrwx 1 root root 9 May 14 19:36 scsi-3600140562a971495dce49a581f20d1ea -&gt; ..\/..\/sdc<br \/>\nlrwxrwxrwx 1 root root 10 May 14 19:36 wwn-0x600140562a971495dce49a581f20d1ea-part1 -&gt; ..\/..\/sdc1<br \/>\nlrwxrwxrwx 1 root root 10 May 14 19:36 scsi-3600140562a971495dce49a581f20d1ea-part1 -&gt; ..\/..\/sdc1<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 lvm-pv-uuid-YXOIJV-EPlD-dXwg-ePQX-D7av-jPdr-Grb4rp -&gt; ..\/..\/sda2<br \/>\nlrwxrwxrwx 1 root root 9 May 14 2016 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -&gt; ..\/..\/sr0<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-root -&gt; ..\/..\/dm-0<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQH2nyITpeRJVbMnYzojU1b9qSDNbJr0eLn -&gt; ..\/..\/dm-0<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQHW6P6EC6fmlWdGYY5o41uhw9vKBmWKV0o -&gt; ..\/..\/dm-1<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-swap -&gt; ..\/..\/dm-1<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-uuid-LVM-YEJwQZi9JXl6MBbZN7XdhQaR09fqbpQHhB2KiZ6jcZwYY8OJpwA4l11wnghcdTtJ -&gt; ..\/..\/dm-2<br \/>\nlrwxrwxrwx 1 root root 10 May 14 2016 dm-name-centos-home -&gt; ..\/..\/dm-2<br \/>\n[root@apache1 ~]# fdisk \/dev\/disk\/by-id\/wwn-0x60014059dec3b96b8944a29a6cbe1d5e<br \/>\nWelcome to fdisk (util-linux 2.23.2).<\/p>\n<p>Changes will remain in memory only, until you decide to write them.<br \/>\nBe careful before using the write command.<\/p>\n<p>Device does not contain a recognized partition table<br \/>\nBuilding a new DOS disklabel with disk identifier 0x26a39fc5.<\/p>\n<p>Command (m for help): n<br \/>\nPartition type:<br \/>\np primary (0 primary, 0 extended, 4 free)<br \/>\ne extended<br \/>\nSelect (default p): p<br 
\/>\nPartition number (1-4, default 1):<br \/>\nFirst sector (8192-41934847, default 8192):<br \/>\nUsing default value 8192<br \/>\nLast sector, +sectors or +size{K,M,G} (8192-41934847, default 41934847):<br \/>\nUsing default value 41934847<br \/>\nPartition 1 of type Linux and of size 20 GiB is set<\/p>\n<p>Command (m for help): w<br \/>\nThe partition table has been altered!<\/p>\n<p>Calling ioctl() to re-read partition table.<br \/>\nSyncing disks.<br \/>\n[root@apache1 ~]# mkfs.ext4 \/dev\/disk\/by-id\/wwn-0x60014059dec3b96b8944a29a6cbe1d5e<br \/>\nmke2fs 1.42.9 (28-Dec-2013)<br \/>\n\/dev\/disk\/by-id\/wwn-0x60014059dec3b96b8944a29a6cbe1d5e is entire device, not just one partition!<br \/>\nProceed anyway? (y,n) y<br \/>\nFilesystem label=<br \/>\nOS type: Linux<br \/>\nBlock size=4096 (log=2)<br \/>\nFragment size=4096 (log=2)<br \/>\nStride=0 blocks, Stripe width=1024 blocks<br \/>\n1310720 inodes, 5241856 blocks<br \/>\n262092 blocks (5.00%) reserved for the super user<br \/>\nFirst data block=0<br \/>\nMaximum filesystem blocks=2153775104<br \/>\n160 block groups<br \/>\n32768 blocks per group, 32768 fragments per group<br \/>\n8192 inodes per group<br \/>\nSuperblock backups stored on blocks:<br \/>\n32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br \/>\n4096000<\/p>\n<p>Allocating group tables: done<br \/>\nWriting inode tables: done<br \/>\nCreating journal (32768 blocks): done<br \/>\nWriting superblocks and filesystem accounting information: done<\/p>\n<p>mount \/dev\/disk\/by-id\/wwn-0x60014059dec3b96b8944a29a6cbe1d5e \/var\/www\/<br \/>\ndf -TH<br \/>\nmkdir \/var\/www\/html<br \/>\nmkdir \/var\/www\/cgi-bin<br \/>\nmkdir \/var\/www\/error<br \/>\necho &#8220;Apache Web Server Pacemaker Cluster&#8221; &gt; \/var\/www\/html\/index.html<br \/>\numount \/var\/www\/<\/p>\n<p>[root@apache1 ~]# pcs resource 
create webserver_fs Filesystem device=\"\/dev\/disk\/by-id\/wwn-0x60014059dec3b96b8944a29a6cbe1d5e\" directory=\"\/var\/www\" fstype=\"ext4\" --group webgroup<\/p>\n<p>Create the Virtual IP (IPaddr2) cluster resource using the command below. Execute it on any one of the nodes.<\/p>\n<p>[root@apache1 ~]# pcs resource create vip_res IPaddr2 ip=192.168.1.70 cidr_netmask=24 --group webgroup<\/p>\n<p>Add the following lines to the \u2018\/etc\/httpd\/conf\/httpd.conf\u2019 file on both nodes.<br \/>\n&lt;Location \/server-status&gt;<br \/>\nSetHandler server-status<br \/>\nOrder deny,allow<br \/>\nDeny from all<br \/>\nAllow from 127.0.0.1<br \/>\n&lt;\/Location&gt;<\/p>\n<p>[root@apache1 ~]# pcs resource create apache_res apache configfile=\"\/etc\/httpd\/conf\/httpd.conf\" statusurl=\"http:\/\/127.0.0.1\/server-status\" --group webgroup<\/p>\n<p>Pacemaker and pcs on Linux example: configuring cluster resources<\/p>\n<p>Once the cluster and the stonith devices for the nodes are configured, we can start creating resources in the cluster.<\/p>\n<p>In this example there are 3 SAN-based storage LUNs, all accessible by every node in the cluster. We create filesystem resources to let the cluster manage them; if a node fails to access a resource, the filesystem will float to another node.<br \/>\nResource Creation<\/p>\n<p>Create a filesystem-based resource with resource id fs11:<\/p>\n<p>#pcs resource create fs11 ocf:heartbeat:Filesystem params device=\/dev\/mapper\/LUN11 directory=\/lun11 fstype=\"xfs\"<\/p>\n<p>Normally we want to stop the filesystem gracefully, killing the processes that are accessing it when the resource is stopped. 
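The same stop and monitor behaviour can also be retrofitted onto an already-created fs11 resource instead of recreating it. A minimal sketch, assuming the fs11 resource from the command above and the standard pcs resource update and op add subcommands:

```shell
# Sketch: add the graceful-stop options to the existing fs11 resource
# (resource name taken from the example above)
pcs resource update fs11 fast_stop=no force_unmount=safe
# Add a monitor operation with a deeper read-test probe
pcs resource op add fs11 monitor on-fail=stop timeout=200 OCF_CHECK_LEVEL=10
```

These commands change the live cluster configuration, so they can only be run against a running cluster; verify the result afterwards with pcs resource show fs11.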
We also want the filesystem monitor enabled.<\/p>\n<p>#pcs resource create fs11 ocf:heartbeat:Filesystem params device=\/dev\/mapper\/LUN11 directory=\/lun11 fstype=\"xfs\" fast_stop=\"no\" force_unmount=\"safe\" op stop on-fail=stop timeout=200 op monitor on-fail=stop timeout=200 OCF_CHECK_LEVEL=10<\/p>\n<p>From the left, the options are:<br \/>\n&#8211; the name of the filesystem resource, also called the resource id (fs11)<br \/>\n&#8211; the resource agent to use (ocf:heartbeat:Filesystem)<br \/>\n&#8211; the block device for the filesystem (e.g. \/dev\/mapper\/LUN11)<br \/>\n&#8211; the mount point for the filesystem (e.g. \/lun11)<br \/>\n&#8211; the filesystem type (xfs)<br \/>\n&#8211; fast_stop=\"no\" force_unmount=\"safe\", which help the resource stop cleanly and unmount the filesystem<br \/>\n&#8211; op monitor on-fail=stop timeout=200 OCF_CHECK_LEVEL=10: similar to stop; the check level additionally does a raw read test to verify the filesystem is readable during the monitor probe<\/p>\n<p>To list the created resource fs11:<\/p>\n<p># pcs resource<br \/>\nfs11 (ocf::heartbeat:Filesystem): Started<\/p>\n<p># pcs resource show fs11<br \/>\nResource: fs11 (class=ocf provider=heartbeat type=Filesystem)<br \/>\nAttributes: device=\/dev\/mapper\/LUN11 directory=\/lun11 fstype=xfs fast_stop=no force_unmount=safe<br \/>\nOperations: start interval=0s timeout=60 (fs11-start-timeout-60)<br \/>\nstop on-fail=stop interval=0s timeout=200 (fs11-stop-on-fail-stop-timeout-200)<br \/>\nmonitor on-fail=stop interval=60s timeout=200 OCF_CHECK_LEVEL=10 (fs11-monitor-on-fail-stop-timeout-200)<\/p>\n<p>There are 4 properties that identify a resource.<\/p>\n<p>id: the id that identifies a particular service in the cluster &#8211;&gt; fs11<\/p>\n<p>The remaining 3 come just after the resource id on the command line, separated by colons: ocf:heartbeat:Filesystem<br \/>\nocf: resource standard<br \/>\nheartbeat: provider<br \/>\nFilesystem: type<\/p>\n<p>To check all available resources 
provided by Pacemaker, by category:<\/p>\n<p>#pcs resource list # list all available resources<br \/>\n#pcs resource standards # list resource standards<br \/>\n#pcs resource providers # list all available resource providers<br \/>\n#pcs resource list string # works as a filter; for example, to list the Filesystem resource:<br \/>\n#pcs resource list Filesystem<br \/>\nocf:heartbeat:Filesystem &#8211; Manages filesystem mounts<\/p>\n<p>Delete resource<\/p>\n<p>Want to delete a resource? Here is how:<\/p>\n<p>#pcs resource delete fs11<\/p>\n<p>Resource-Specific Parameters<\/p>\n<p>To check the parameters of the resource type Filesystem; all of these parameters can be set and updated:<\/p>\n<p># pcs resource describe Filesystem<br \/>\nocf:heartbeat:Filesystem &#8211; Manages filesystem mounts<\/p>\n<p>Resource script for Filesystem. It manages a Filesystem on a shared storage medium. The standard monitor operation of depth 0 (also known as probe) checks if the filesystem is mounted. If you want deeper tests, set OCF_CHECK_LEVEL to one of the following values: 10: read first 16 blocks of the device (raw read) This doesn&#8217;t exercise the filesystem at all, but the device on which the filesystem lives. This is noop for non-block devices such as NFS, SMBFS, or bind mounts. 20: test if a status file can be written and read The status file must be writable by root. This is not always the case with an NFS mount, as NFS exports usually have the &#8220;root_squash&#8221; option set. 
In such a setup, you must either use read-only monitoring (depth=10), export with &#8220;no_root_squash&#8221; on your NFS server, or grant world write permissions on the directory where the status file is to be placed.<\/p>\n<p>Resource options:<br \/>\ndevice (required): The name of block device for the filesystem, or -U, -L options for mount,<br \/>\nor NFS mount specification.<br \/>\ndirectory (required): The mount point for the filesystem.<br \/>\nfstype (required): The type of filesystem to be mounted.<br \/>\noptions: Any extra options to be given as -o options to mount. For bind mounts, add &#8220;bind&#8221;<br \/>\nhere and set fstype to &#8220;none&#8221;. We will do the right thing for options such as &#8220;bind,ro&#8221;.<br \/>\nstatusfile_prefix: The prefix to be used for a status file for resource monitoring with depth 20. If you don&#8217;t specify<br \/>\nthis parameter, all status files will be created in a separate directory.<br \/>\nrun_fsck: Specify how to decide whether to run fsck or not. &#8220;auto&#8221; : decide to run fsck depending on the fstype(default)<br \/>\n&#8220;force&#8221; : always run fsck regardless of the fstype &#8220;no&#8221; : do not run fsck ever.<br \/>\nfast_stop: Normally, we expect no users of the filesystem and the stop operation to finish quickly. If you cannot<br \/>\ncontrol the filesystem users easily and want to prevent the stop action from failing, then set this parameter<br \/>\nto &#8220;no&#8221; and add an appropriate timeout for the stop operation.<br \/>\nforce_clones: The use of a clone setup for local filesystems is forbidden by default. For special setups like glusterfs,<br \/>\ncloning a mount of a local device with a filesystem like ext4 or xfs independently on several nodes is a<br \/>\nvalid use case. 
Only set this to &#8220;true&#8221; if you know what you are doing!<br \/>\nforce_unmount: This option allows specifying how to handle processes that are currently accessing the mount directory.<br \/>\n&#8220;true&#8221; : Default value, kill processes accessing mount point &#8220;safe&#8221; : Kill processes accessing mount<br \/>\npoint using methods that avoid functions that could potentially block during process detection &#8220;false&#8221; :<br \/>\nDo not kill any processes. The &#8216;safe&#8217; option uses shell logic to walk the \/procs\/ directory for pids<br \/>\nusing the mount point while the default option uses the fuser cli tool. fuser is known to perform<br \/>\noperations that can potentially block if unresponsive nfs mounts are in use on the system.<\/p>\n<p>Resource Meta Options<\/p>\n<p>The resource meta options can be updated at any time. For example, right now the fs11 resource can be started on any node; if you prefer the resource to stick to the node it is running on, give it stickiness:<\/p>\n<p>#pcs status | grep fs11<br \/>\nfs11 (ocf::heartbeat:Filesystem): Started nodeC<br \/>\n#pcs resource meta fs11 resource-stickiness=500<\/p>\n<p>Set nodeC to standby; the resource fs11 will float to another node<\/p>\n<p>#pcs cluster standby nodeC<br \/>\n#pcs status | grep fs11<br \/>\nfs11 (ocf::heartbeat:Filesystem): Started nodeA<\/p>\n<p>Set nodeC back to unstandby; the resource fs11 will float back<\/p>\n<p>#pcs cluster unstandby nodeC<br \/>\n#pcs status | grep fs11<br \/>\nfs11 (ocf::heartbeat:Filesystem): Started nodeC<\/p>\n<p>Resource Operations<\/p>\n<p>You can either add operations to a resource at creation time, or add them later:<\/p>\n<p>#pcs resource op add &lt;resource id&gt; &lt;operation action&gt; [operation properties]<\/p>\n<p>Displaying Configured Resources<\/p>\n<p>To list all resources:<\/p>\n<p># pcs resource show<br \/>\nfs11 (ocf::heartbeat:Filesystem): Started<br \/>\nfs12 (ocf::heartbeat:Filesystem): Started<\/p>\n<p>To list a resource and its full attributes, meta and operations configurations:<\/p>\n<p># pcs 
resource show fs11<br \/>\nResource: fs11 (class=ocf provider=heartbeat type=Filesystem)<br \/>\nAttributes: device=\/dev\/mapper\/LUN11 directory=\/lun11 fstype=xfs fast_stop=no force_unmount=safe<br \/>\nMeta Attrs: resource-stickiness=500<br \/>\nOperations: start interval=0s timeout=60 (fs11-start-timeout-60)<br \/>\nstop on-fail=stop interval=0s timeout=200 (fs11-stop-on-fail-stop-timeout-200)<br \/>\nmonitor on-fail=stop interval=60s timeout=200 OCF_CHECK_LEVEL=10 (fs11-monitor-on-fail-stop-timeout-200)<\/p>\n<p>To show all resources in full list mode:<\/p>\n<p>#pcs resource show --full<\/p>\n<p>Enabling and Disabling Cluster Resources<\/p>\n<p>To disable a resource so that it is stopped and will not start on any node:<\/p>\n<p>#pcs resource disable fs11<br \/>\n# pcs resource<br \/>\nfs11 (ocf::heartbeat:Filesystem): Stopped<\/p>\n<p>To enable the resource:<\/p>\n<p>#pcs resource enable fs11<br \/>\n# pcs resource<br \/>\nfs11 (ocf::heartbeat:Filesystem): Started<\/p>\n<p>Cluster Resources Cleanup<\/p>\n<p>When a resource gets into a failed state, showing errors on start, stop, or elsewhere, clean up its status:<\/p>\n<p>#pcs resource cleanup<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Pacemaker Apache on High Availability CENTOS 7 <\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p dir=\"ltr\">Red Hat, Inc. introduces new Open Source Software in their every release. 
Red Hat Enterprise Linux 7 High Availability Add-On introduces a new suite of technologies that underlying high-availability technology based on Pacemaker and Corosync that completely replaces the CMAN and RGManager technologies from [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[73],"tags":[],"_links":{"self":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/5935"}],"collection":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5935"}],"version-history":[{"count":3,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/5935\/revisions"}],"predecessor-version":[{"id":5962,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/5935\/revisions\/5962"}],"wp:attachment":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5935"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5935"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5935"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}