How to Set up Nginx High Availability with Pacemaker and Corosync on CentOS 7

In this tutorial, we will create an active-passive (failover) cluster for the Nginx web server using Pacemaker on CentOS 7.
Pacemaker is open source cluster manager software for achieving maximum high availability of your services. It is an advanced and scalable HA cluster manager distributed by ClusterLabs.
The Corosync Cluster Engine is an open source project derived from the OpenAIS project under the new BSD License. It is a group communication system with additional features for implementing high availability within applications.
Several front-ends exist for managing Pacemaker. Pcsd is one of them: it provides both a command line interface (the pcs command) and a GUI for managing the cluster. With pcs we can create and configure the cluster, or add new nodes to it.
Prerequisites

This guide uses three CentOS 7 servers. Each node needs the host mappings below in its /etc/hosts file:

[root@clusterserver1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.20 clusterserver1.rmohan.com clusterserver1
192.168.1.21 clusterserver2.rmohan.com clusterserver2
192.168.1.22 clusterserver3.rmohan.com clusterserver3
[root@clusterserver1 ~]#

We also need a floating IP address, 192.168.1.25, and root privileges on all servers.

Now test the hosts’ mapping configuration.

ping -c 3 clusterserver1
ping -c 3 clusterserver2
ping -c 3 clusterserver3
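
If firewalld is running on the servers (the CentOS 7 default), the cluster and web ports must also be open on every node before we continue. A minimal sketch, assuming the default zone:

firewall-cmd --permanent --add-service=high-availability
firewall-cmd --permanent --add-service=http
firewall-cmd --reload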

Install Epel Repository and Nginx
In this step, we will install the EPEL repository and then install the Nginx web server. The EPEL (Extra Packages for Enterprise Linux) repository is needed to install Nginx packages on CentOS 7.

Install EPEL Repository using the following yum command.

yum -y install epel-release

Now install Nginx web server from the EPEL repository.

yum -y install nginx

Start Nginx, enable it to launch at system boot, and check that the service is running.

systemctl start nginx
systemctl enable nginx
systemctl status nginx

Create a different index page on each node so that we can later see which server is responding.

#Run Command on 'clusterserver1'
echo '<h1>clusterserver1 - TEST SERVER1</h1>' > /usr/share/nginx/html/index.html

#Run Command on 'clusterserver2'
echo '<h1>clusterserver2 - TEST SERVER2</h1>' > /usr/share/nginx/html/index.html

#Run Command on 'clusterserver3'
echo '<h1>clusterserver3 - TEST SERVER3</h1>' > /usr/share/nginx/html/index.html
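
Each node should now serve its own page. A quick check from any of the servers:

curl http://clusterserver1/
curl http://clusterserver2/
curl http://clusterserver3/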

Install and configure Pacemaker, Corosync, and Pcsd
Pacemaker, Corosync, and pcsd are all available in the default CentOS repository, so they can be installed with the following yum command.

yum -y install corosync pacemaker pcs

After the installation has completed, enable all services to launch automatically at system boot using the systemctl commands below.

systemctl enable pcsd
systemctl enable corosync
systemctl enable pacemaker

Now start the pcsd daemon on all servers.

systemctl start pcsd
Next, create a new password for the 'hacluster' user, and use the same password on all servers. This user was created automatically during the software installation.

Set the password for the 'hacluster' user:
passwd hacluster
Enter new password:
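
If you are scripting the setup, the same password can also be set non-interactively on each node. A small sketch, using the example password from the authorization step later in this guide:

echo 'test123' | passwd --stdin hacluster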
The high availability software stack (Pacemaker, Corosync, and pcsd) is now installed on the system.

Create and Configure the Cluster

In this step, we will create a new cluster from the three CentOS servers, then configure the floating IP address and add the Nginx resources.
To create the cluster, we first need to authorize all servers with the pcs command, using the 'hacluster' user and the password set earlier.
pcs cluster auth clusterserver1 clusterserver2 clusterserver3
Username: hacluster
Password: test123

[root@clusterserver1 ~]# pcs cluster auth clusterserver1 clusterserver2 clusterserver3
Username: hacluster
Password:
clusterserver3: Authorized
clusterserver2: Authorized
clusterserver1: Authorized
[root@clusterserver1 ~]#

Now it's time to set up the cluster. Define the cluster name and all servers that will be part of the cluster.

pcs cluster setup --name mohan_cluster clusterserver1 clusterserver2 clusterserver3

[root@clusterserver1 ~]# pcs cluster setup --name mohan_cluster clusterserver1 clusterserver2 clusterserver3
Destroying cluster on nodes: clusterserver1, clusterserver2, clusterserver3…
clusterserver1: Stopping Cluster (pacemaker)…
clusterserver2: Stopping Cluster (pacemaker)…
clusterserver3: Stopping Cluster (pacemaker)…
clusterserver1: Successfully destroyed cluster
clusterserver3: Successfully destroyed cluster
clusterserver2: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'clusterserver1', 'clusterserver2', 'clusterserver3'
clusterserver3: successful distribution of the file 'pacemaker_remote authkey'
clusterserver1: successful distribution of the file 'pacemaker_remote authkey'
clusterserver2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes…
clusterserver1: Succeeded
clusterserver2: Succeeded
clusterserver3: Succeeded

Synchronizing pcsd certificates on nodes clusterserver1, clusterserver2, clusterserver3…
clusterserver3: Success
clusterserver2: Success
clusterserver1: Success
Restarting pcsd on the nodes in order to reload the certificates…
clusterserver3: Success
clusterserver2: Success

Now start all of the cluster services on all nodes.

[root@clusterserver1 ~]# pcs cluster start --all
clusterserver3: Starting Cluster…
clusterserver1: Starting Cluster…
clusterserver2: Starting Cluster…
[root@clusterserver1 ~]#

Check the cluster status.

[root@clusterserver1 ~]# pcs status cluster
Cluster Status:
Stack: unknown
Current DC: NONE
Last updated: Fri Mar 10 03:56:45 2017
Last change: Fri Mar 10 03:56:26 2017 by hacluster via crmd on clusterserver1
3 nodes configured
0 resources configured

PCSD Status:
clusterserver1: Online
clusterserver2: Online
clusterserver3: Online
[root@clusterserver1 ~]#

Disable STONITH and Ignore the Quorum Policy
Since we are not using a fencing device, we will disable STONITH. STONITH, or Shoot The Other Node In The Head, is the fencing implementation in Pacemaker. If you are running in production, it is better to enable STONITH.

Disable STONITH with the following pcs command.

pcs property set stonith-enabled=false

Next, tell the cluster to ignore the quorum policy.

pcs property set no-quorum-policy=ignore

Check the property list and make sure STONITH and the quorum policy are disabled.

pcs property list

[root@clusterserver1 ~]# pcs property list
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: mohan_cluster
dc-version: 1.1.16-12.el7_4.5-94ff4df
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: false
[root@clusterserver1 ~]#

STONITH and the quorum policy are now disabled.

Add the Floating-IP and Resources
A floating IP is an IP address that can be migrated/moved automatically from one server to another in the same data center. We have already defined the floating IP address for the Pacemaker high availability setup as '192.168.1.25'. Now we will add two resources: the floating IP address resource named 'virtual_ip' and a new resource for the Nginx web server named 'webserver'.
Add the new floating IP address resource 'virtual_ip' using the pcs command shown below.

pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.25 cidr_netmask=32 op monitor interval=30s
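
One caveat: Pacemaker will manage Nginx itself through the resource agent we add next, so the systemd unit should not also be starting it. To avoid the two managers conflicting, stop and disable the unit on all nodes first:

systemctl stop nginx
systemctl disable nginx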

Next, add a new resource for the Nginx ‘webserver’.

pcs resource create webserver ocf:heartbeat:nginx configfile=/etc/nginx/nginx.conf op monitor timeout="5s" interval="5s"

Make sure there are no errors, then check the available resources.

pcs status
pcs status resources

You should see the two resources 'virtual_ip' and 'webserver'. The new resources for the floating IP and the Nginx web server have been added.
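
You can also confirm that the floating IP is actually bound on the active node (the interface name varies by environment):

ip addr | grep 192.168.1.25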

Add Constraint Rules to the Cluster

In this step, we will set up high availability rules by adding resource constraints with the pcs command line interface.
Set a colocation constraint for the 'webserver' and 'virtual_ip' resources with the score 'INFINITY', so that the two resources always run on the same node.

pcs constraint colocation add webserver virtual_ip INFINITY

Next, add an ordering constraint so that the 'virtual_ip' resource always starts before the 'webserver' resource.

pcs constraint order virtual_ip then webserver
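
Verify the configured constraints:

pcs constraint list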


Next, stop and then restart the cluster so that the changes take effect.

pcs cluster stop --all
pcs cluster start --all

[root@clusterserver1 ~]# pcs cluster stop --all
clusterserver1: Stopping Cluster (pacemaker)…
clusterserver3: Stopping Cluster (pacemaker)…
clusterserver2: Stopping Cluster (pacemaker)…
clusterserver3: Stopping Cluster (corosync)…
clusterserver1: Stopping Cluster (corosync)…
clusterserver2: Stopping Cluster (corosync)…
[root@clusterserver1 ~]# pcs cluster start --all
clusterserver1: Starting Cluster…
clusterserver2: Starting Cluster…
clusterserver3: Starting Cluster…
[root@clusterserver1 ~]#

Testing
In this step, we will run some tests on the cluster: check the node status ('Online' or 'Offline'), check the corosync membership, and then test the high availability of the Nginx web server by accessing the floating IP address.
Test node status with the following command.

[root@clusterserver1 ~]# pcs status nodes
Pacemaker Nodes:
Online: clusterserver1
Standby:
Maintenance:
Offline: clusterserver2 clusterserver3
Pacemaker Remote Nodes:
Online:
Standby:
Maintenance:
Offline:
[root@clusterserver1 ~]#

Test the corosync membership.

corosync-cmapctl | grep members

You will see the IP addresses of the corosync members.

[root@clusterserver1 ~]# corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.1.20)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined

Check the overall cluster status.

[root@clusterserver1 ~]# pcs status
Cluster name: mohan_cluster
Stack: corosync
Current DC: clusterserver1 (version 1.1.16-12.el7_4.5-94ff4df) - partition WITHOUT quorum
Last updated: Fri Mar 10 04:12:59 2017
Last change: Fri Mar 10 04:11:59 2017 by root via cibadmin on clusterserver1

3 nodes configured
2 resources configured

Online: [ clusterserver1 ]
OFFLINE: [ clusterserver2 clusterserver3 ]

Full list of resources:

virtual_ip (ocf::heartbeat:IPaddr2): Started clusterserver1
webserver (ocf::heartbeat:nginx): Started clusterserver1

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@clusterserver1 ~]#
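
Finally, test the failover itself by requesting the floating IP, stopping the cluster services on the active node, and requesting it again. A quick sketch, assuming clusterserver1 currently holds the resources as in the status output above:

curl http://192.168.1.25/
pcs cluster stop clusterserver1
curl http://192.168.1.25/
pcs cluster start clusterserver1

The first request returns the clusterserver1 test page; once the node is stopped, the floating IP and the Nginx resource move to another node and the second request returns that node's page.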
