Simple way to configure an Nginx High Availability Web Server with Pacemaker and Corosync on CentOS 7

Pacemaker is open-source cluster manager software that provides high availability for resources and services on CentOS 7 and RHEL 7 Linux. It is a scalable, advanced HA cluster manager distributed by ClusterLabs.

Corosync is the core of the Pacemaker cluster manager: it handles heartbeat communication between the cluster nodes, which is what makes high availability of applications possible. Corosync is derived from the open-source OpenAIS project and is distributed under the new BSD License.

Pcsd is the daemon behind the Pacemaker command-line interface (CLI) and GUI for managing a Pacemaker cluster. Its pcs command is used for creating and configuring a cluster and for adding new nodes to it.

In this tutorial I will use pcs on the command line to configure an active/passive Pacemaker cluster that provides high availability for the Nginx web service on CentOS 7. This article gives a basic idea of how to configure a Pacemaker cluster on CentOS 7 (the same steps apply to RHEL 7, since CentOS is a rebuild of RHEL). For this basic cluster configuration I disable STONITH and ignore quorum, but for a production environment I suggest using the STONITH feature of Pacemaker.

Here is a short definition of STONITH: STONITH, or Shoot The Other Node In The Head, is the fencing implementation in Pacemaker. Fencing is the isolation of a failed node so that it cannot disrupt the rest of the cluster.
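For reference, fencing in a production cluster is configured as a STONITH resource through pcs. Below is a hypothetical sketch using the fence_ipmilan agent for an IPMI-capable server; the resource name, IP address, and credentials are placeholders, and the right fence agent depends on your hardware:

pcs stonith create fence_web01 fence_ipmilan pcmk_host_list="webserver01" ipaddr="192.168.1.201" login="admin" passwd="secret" op monitor interval=60s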

For this demonstration I have built two VMs (virtual machines) on KVM on my Ubuntu 16.04 host machine; both VMs have private IP addresses.

Note: I will refer to my VMs as cluster nodes in the rest of this article.

Prerequisites for configuring a Pacemaker cluster

At least two CentOS 7 servers:
webserver01: 192.168.1.33
webserver02: 192.168.1.34
Floating IP address: 192.168.1.30
Root privileges

Below are the steps I will follow for installing and configuring the two-node Pacemaker cluster:

  1. Mapping of Host File
  2. Installation of Epel Repository and Nginx
  3. Installation and Configuration of Pacemaker, Corosync, and Pcsd
  4. Creation and Configuration of Cluster
  5. Disabling of STONITH and Ignoring Quorum Policy
  6. Adding of Floating-IP and Resources
  7. Testing the Cluster service

Steps for installation and configuration of the Pacemaker cluster

  1. Mapping of Host Files

In my test lab I am not using DNS to resolve the hostnames of the two Pacemaker cluster nodes, so I have configured the /etc/hosts file on both nodes instead. Even if you have DNS in your environment for name resolution, I suggest also configuring /etc/hosts, so that heartbeat communication between the cluster nodes does not depend on name resolution.

Edit the /etc/hosts file with your preferred editor on both cluster nodes. Below is the /etc/hosts file I configured on both nodes:

[root@webserver01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.33 webserver01
192.168.1.34 webserver02

[root@webserver02 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.33 webserver01
192.168.1.34 webserver02
After configuring the /etc/hosts file, test connectivity between the two cluster nodes with the ping command:

ping -c 3 webserver01
ping -c 3 webserver02

If you get replies like the ones below, the two web servers can communicate with each other.

[root@webserver01 ~]# ping -c 3 webserver02
PING webserver02 (192.168.1.34) 56(84) bytes of data.
64 bytes from webserver02 (192.168.1.34): icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from webserver02 (192.168.1.34): icmp_seq=2 ttl=64 time=0.727 ms
64 bytes from webserver02 (192.168.1.34): icmp_seq=3 ttl=64 time=0.698 ms

--- webserver02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.698/0.843/1.106/0.188 ms

[root@webserver02 ~]# ping -c 3 webserver01
PING webserver01 (192.168.1.33) 56(84) bytes of data.
64 bytes from webserver01 (192.168.1.33): icmp_seq=1 ttl=64 time=0.197 ms
64 bytes from webserver01 (192.168.1.33): icmp_seq=2 ttl=64 time=0.123 ms
64 bytes from webserver01 (192.168.1.33): icmp_seq=3 ttl=64 time=0.114 ms

--- webserver01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.114/0.144/0.197/0.039 ms

  2. Installation of EPEL Repository and Nginx

In this step we will install the EPEL (Extra Packages for Enterprise Linux) repository and then Nginx. The EPEL repository must be installed first, because the Nginx package is provided by it.

yum -y install epel-release
Now install Nginx:

yum -y install nginx
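Optionally, verify the installation on both nodes. Note that Pacemaker itself will manage the Nginx service later in this tutorial, so do not enable nginx at boot with systemctl; the commands below are only a quick sanity check:

nginx -v
systemctl is-enabled nginx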

  3. Installation and Configuration of Pacemaker, Corosync, and Pcsd

Now we will install the pacemaker, corosync, and pcs packages with the yum command. These packages do not require a separate repository; they are available from the default CentOS repositories.

yum -y install corosync pacemaker pcs
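If you want to confirm that all three packages were installed, you can query rpm:

rpm -q corosync pacemaker pcs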
Once the cluster packages are installed successfully, enable the cluster services at system startup with the systemctl commands below:

systemctl enable pcsd
systemctl enable corosync
systemctl enable pacemaker
Now start the pcsd service on both cluster nodes and make sure it is enabled at system startup:

systemctl start pcsd.service
systemctl enable pcsd.service
The pcsd daemon works with the pcs command-line interface to keep the corosync configuration synchronized across all nodes in the cluster.

The hacluster user is created automatically, with a disabled password, during package installation. This account is used as the login credential for syncing the corosync configuration and for starting and stopping cluster services on the other cluster nodes.
In the next step we will set a new password for the hacluster user; use the same password on the other cluster node as well.

passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
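If you prefer to set the password non-interactively (for example, from a provisioning script), the --stdin option of passwd is available on CentOS/RHEL; replace the placeholder password with your own:

echo 'YourClusterPassword' | passwd --stdin hacluster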

  4. Creation and Configuration of Cluster

Note: Steps 4 to 7 need to be performed only on the webserver01 node.
This step covers creating a new two-node CentOS Linux cluster that will host the Nginx resource and the floating IP address.

First, to create the cluster, we need to authorize both web servers using the pcs command with the hacluster user and password.

[root@webserver01 ~]# pcs cluster auth webserver01 webserver02
Username: hacluster
Password:
webserver01: Authorized
webserver02: Authorized
Note: if you get the error below after running the auth command above:

[root@webserver01 ~]# pcs cluster auth webserver01 webserver02
Username: hacluster
Password:
webserver01: Authorized
Error: Unable to communicate with webserver02
then you need to add firewalld rules on both nodes to allow the cluster nodes to communicate with each other.

Below is an example of adding the rules for the cluster and for Nginx:

[root@webserver01 ~]# firewall-cmd --permanent --add-service=high-availability
success

[root@webserver01 ~]# firewall-cmd --permanent --add-service=http
success

[root@webserver01 ~]# firewall-cmd --permanent --add-service=https
success

[root@webserver01 ~]# firewall-cmd --reload
success

[root@webserver01 ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: ssh dhcpv6-client high-availability http https
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
Now we will define the cluster name and cluster node members.

pcs cluster setup --name web_cluster webserver01 webserver02
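The setup command generates /etc/corosync/corosync.conf and distributes it to both nodes. If you want to inspect it, the generated file should look roughly like the trimmed sketch below (exact contents vary by pcs version):

totem {
    version: 2
    cluster_name: web_cluster
    secauth: off
    transport: udpu
}

nodelist {
    node {
        ring0_addr: webserver01
        nodeid: 1
    }
    node {
        ring0_addr: webserver02
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}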
Next, start all cluster services and enable them at system startup.

[root@webserver01 ~]# pcs cluster start --all
webserver02: Starting Cluster...
webserver01: Starting Cluster...

[root@webserver01 ~]# pcs cluster enable --all
webserver01: Cluster Enabled
webserver02: Cluster Enabled
Run the command below to check the cluster status:

pcs status cluster

[root@webserver01 ~]# pcs status cluster
Cluster Status:
Stack: corosync
Current DC: webserver02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Sep 4 02:38:20 2018
Last change: Tue Sep 4 02:33:06 2018 by hacluster via crmd on webserver02
2 nodes configured
0 resources configured

PCSD Status:
webserver01: Online
webserver02: Online

  5. Disabling of STONITH and Ignoring Quorum Policy

In this tutorial we will disable STONITH and ignore the quorum policy, because we are not using a fencing device here. If you implement a cluster in a production environment, I suggest using fencing and a proper quorum policy.

Disable STONITH:

pcs property set stonith-enabled=false
Ignore the Quorum Policy:

pcs property set no-quorum-policy=ignore
Now check whether STONITH and the quorum policy are disabled with the command below:

pcs property list

[root@webserver01 ~]# pcs property list
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: web_cluster
dc-version: 1.1.18-11.el7_5.3-2b07d5c5a9
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: false

  6. Adding of Floating IP and Resources

A floating IP address is a virtual cluster IP address that moves automatically from one cluster node to the other when the active node hosting the cluster resources fails or is taken offline.

In this step we will add the floating IP and Nginx resources.

Adding the floating IP:

pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.30 cidr_netmask=32 op monitor interval=30s
Adding the Nginx resource:

pcs resource create webserver ocf:heartbeat:nginx configfile=/etc/nginx/nginx.conf op monitor timeout="5s" interval="5s"
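Both resources must run on the same node for failover to behave as expected. In this setup they happened to start together on webserver01, but you can make the placement explicit with standard pcs colocation and ordering constraints (optional):

pcs constraint colocation add webserver with virtual_ip INFINITY
pcs constraint order virtual_ip then webserver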
Now check the newly added resources with the command below:

pcs status resources

[root@webserver01 ~]# pcs status resources
virtual_ip (ocf::heartbeat:IPaddr2): Started webserver01
webserver (ocf::heartbeat:nginx): Started webserver01

  7. Testing the Cluster Service

Before testing Nginx web service failover in the event that the active cluster node fails, we will check the cluster service status.

To check the running cluster service status, use the command below:

[root@webserver01 ~]# pcs status
Cluster name: web_cluster
Stack: corosync
Current DC: webserver01 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Sep 4 03:55:47 2018
Last change: Tue Sep 4 03:15:29 2018 by root via cibadmin on webserver01

2 nodes configured
2 resources configured

Online: [ webserver01 webserver02 ]

Full list of resources:

virtual_ip (ocf::heartbeat:IPaddr2): Started webserver01
webserver (ocf::heartbeat:nginx): Started webserver01

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
To test Nginx web service failover:

First we will create a test web page on each cluster node with the commands below.

In webserver01:

echo '<h1>webserver01 - Web-Cluster-Testing</h1>' > /usr/share/nginx/html/index.html

In webserver02:

echo '<h1>webserver02 - Web-Cluster-Testing</h1>' > /usr/share/nginx/html/index.html
Now open the web page at the floating IP address (192.168.1.30) that we configured as a cluster resource in the previous steps. You will see that the page is currently served from webserver01:
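From a terminal you can also check which node is serving the page with curl:

curl http://192.168.1.30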

Now stop the cluster service on webserver01 and then open the web page at the same floating IP address again. Below is the command to stop the Pacemaker cluster on webserver01:

pcs cluster stop webserver01
After stopping the Pacemaker cluster on webserver01, the web page should now be served from webserver02:
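To bring webserver01 back into the cluster after the test, start its cluster service again and re-check the status. Whether the resources move back to webserver01 depends on your resource-stickiness setting; with the default of 0 they may fail back automatically:

pcs cluster start webserver01
pcs status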
