Steps to install Oracle 19c in CentOS 7.6 RPM mode

  1. Download the required installation package:

1.1 preinstall

http://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
1.2 Oracle rpm installation package

https://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
It is recommended to download it at home, or check whether the VPN/proxy at the office gives an acceptable download speed.

  2. Installation.

yum localinstall -y oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm

After the preinstall package finishes, install the Oracle database RPM itself (downloaded in step 1.2):

yum localinstall -y oracle-database-ee-19c-1.0-1.x86_64.rpm
Wait for the installation to finish; the time taken varies between servers.

The result of my installation here is:

Total size: 6.9 G
Installed size: 6.9 G
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : oracle-database-ee-19c-1.0-1.x86_64 1/1
[INFO] Executing post installation scripts…
[INFO] Oracle home installed successfully and ready to be configured.
To configure a sample Oracle Database you can execute the following service configuration script as root: /etc/init.d/oracledb_ORCLCDB-19c configure
Verifying : oracle-database-ee-19c-1.0-1.x86_64 1/1

Installed:
oracle-database-ee-19c.x86_64 0:1.0-1

Complete!
Note that the configuration step after the installation completes must be run as the root user.

  3. As with previous blogs, you need to modify the character set and other configurations:
https://www.cnblogs.com/jinanxiaolaohu/p/9826653.html

https://www.cnblogs.com/jinanxiaolaohu/p/10015624.html
The Oracle 19c configuration file to modify is:

vim /etc/init.d/oracledb_ORCLCDB-19c
The revised content is mainly the highlighted part shown in the original screenshot.

Text version:

export ORACLE_VERSION=19c
export ORACLE_SID=ORA19C
export TEMPLATE_NAME=General_Purpose.dbc
export CHARSET=ZHS16GBK
export PDB_NAME=ORA19CPDB
export CREATE_AS_CDB=true
Copy a corresponding parameter file for the new SID:

cd /etc/sysconfig/

cp oracledb_ORCLCDB-19c.conf oracledb_ORA19C-19c.conf

  4. Configure with the root user.

The root user executes the command:
/etc/init.d/oracledb_ORCLCDB-19c configure
Wait for the Oracle database to perform initialization operations.

  5. Post-configuration processing.

Add environment variables:

vim /etc/profile.d/oracle19c.sh

Add content as:
export ORACLE_HOME=/opt/oracle/product/19c/dbhome_1
export PATH=$PATH:/opt/oracle/product/19c/dbhome_1/bin
export ORACLE_SID=ORA19C
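
After creating the file you can verify that new login shells pick the variables up (a small sanity check; sqlplus -V only prints the client version banner):

source /etc/profile.d/oracle19c.sh
echo $ORACLE_HOME
echo $ORACLE_SID
su - oracle -c "sqlplus -V"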
Modify the password of the Oracle user:

passwd oracle
Log in as the oracle user for the follow-up operations:

sqlplus / as sysdba
View pdb information

show pdbs
5.1 Create a trigger to automatically open the PDB on startup. (If the PDB is not opened at startup, many programs cannot connect to it; use show pdbs to check its state, or open it manually. Also note that business data cannot be created in the CDB itself: creating an ordinary user there fails unless the name starts with C##.)

CREATE TRIGGER open_all_pdbs
AFTER STARTUP ON DATABASE
BEGIN
EXECUTE IMMEDIATE 'alter pluggable database all open';
END open_all_pdbs;
/
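
After the instance restarts you can confirm that the trigger opened the PDB, for example from a shell (a minimal check; it relies on the oracle user environment configured above):

su - oracle -c "echo 'show pdbs;' | sqlplus -s / as sysdba"

The PDB should be reported as READ WRITE rather than MOUNTED.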

CentOS 7.6 Nginx reverse proxy configuration

Use three CentOS 7 virtual machines to build a simple Nginx reverse-proxy load-balancing cluster. The three virtual machine addresses and roles are:

192.168.1.76 nginx load balancer

192.168.1.82 web01 server

192.168.1.78 web02 server

Install the nginx software (the following operations must be carried out on all three virtual machines).

Some CentOS 7.6 installations do not have the wget command, so install it first:

yum -y install wget

Install nginx software: (three servers must be installed)

$ wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

$ rpm -ivh epel-release-latest-7.noarch.rpm

$ yum install nginx (direct yum installation)

Installation is so simple and convenient, after the installation is complete, you can use systemctl to control the startup of nginx.

$ systemctl enable nginx (join boot)
$ systemctl start nginx (turn on nginx)
$ systemctl status nginx (view status)

After nginx has been installed on all three servers, test that each one runs normally and serves web pages. If there is an error, it is most likely caused by the firewall; see the firewall steps near the end.

Modify the nginx configuration file on the proxy server to implement load balancing. As the name implies, multiple requests are distributed to different servers to balance the load and reduce the pressure on any single server.

$ vi /etc/nginx/nginx.conf (modify configuration file, global configuration file)

For more information on configuration, see:

* Official English Documentation: http://nginx.org/en/docs/

* Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto; (default is auto; you can set it yourself, generally no more than the number of CPU cores)
error_log /var/log/nginx/error.log; (error log path)
pid /run/nginx.pid; (pid file path)

# Load dynamic modules. See /usr/share/nginx/README.dynamic.

include /usr/share/nginx/modules/*.conf;

events {
accept_mutex on; (serialize accepting of new connections to prevent the thundering-herd problem; default is on)
multi_accept on; (whether a worker process accepts multiple new connections at the same time; default is off)
worker_connections 1024; (maximum number of connections per worker process)

}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log  /var/log/nginx/access.log  main;



sendfile on;
tcp_nopush on; (not commented out here)
tcp_nodelay on; 
keepalive_timeout 65; (connection timeout) 
types_hash_max_size 2048; 
gzip on; (open compression) 
include /etc/nginx/mime.types; 
default_type application/octet-stream;


# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;

Here we set up load balancing. Nginx has multiple strategies built in: round robin, weight, ip-hash, response time, and so on.

The default is round robin: HTTP requests are distributed to the backends in turn.

Weight distributes requests according to the configured weight; a backend with a higher weight receives more of the load.

ip-hash allocates requests by client IP, keeping the same IP on the same server.

Response time distributes requests preferentially to the server that responds to nginx fastest.

These strategies can also be combined, for example:
upstream tomcat { (tomcat is a custom load-balancing rule name)
ip_hash; (ip_hash selects the ip-hash method)

server 192.168.1.78:80 weight=3 fail_timeout=20s;
server 192.168.1.82:80 weight=4 fail_timeout=20s;

(multiple groups of rules can be defined)

}

server {
    listen 80 default_server; (listen on port 80 by default)
    listen localhost; (listening address)
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / { ( / matches all requests; different load rules and services can be set for different domain names)

        proxy_pass http://tomcat; (reverse proxy; fill in your own load-balancing rule name)
        proxy_redirect off; (the following settings can be copied directly; without them you may run into problems such as failing authentication)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 90; (this and the next lines are just timeout settings and can be left at their defaults)
        proxy_send_timeout 90;
        proxy_read_timeout 90;
    }
    # location ~ \.(gif|jpg|png)$ { (for example, written as a regular expression)
    #     root /home/root/images;
    # }

    error_page 404 /404.html;
        location = /40x.html {
    }


    error_page 500 502 503 504 /50x.html;
        location = /50x.html {
    }
}

Settings for a TLS enabled server.

#

server {

listen 443 ssl http2 default_server;

listen [::]:443 ssl http2 default_server;

server_name _;

root /usr/share/nginx/html;

#

ssl_certificate “/etc/pki/nginx/server.crt”;

ssl_certificate_key “/etc/pki/nginx/private/server.key”;

ssl_session_cache shared:SSL:1m;

ssl_session_timeout 10m;

ssl_ciphers HIGH:!aNULL:!MD5;

ssl_prefer_server_ciphers on;

#

# Load configuration files for the default server block.

include /etc/nginx/default.d/*.conf;

#

location / {

}

#

error_page 404 /404.html;

location = /40x.html {

}

#

error_page 500 502 503 504 /50x.html;

location = /50x.html {

}

}

}

After the configuration is updated, reload it; the new configuration takes effect without restarting the service.

nginx -s reload
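
It is also worth testing the configuration before reloading (nginx -t reports syntax errors immediately):

nginx -t && nginx -s reload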

If you cannot access the site, it may be because the firewall is running and the port is not open:

start: systemctl start firewalld
stop: systemctl stop firewalld
view status: systemctl status firewalld
disable at boot: systemctl disable firewalld
enable at boot: systemctl enable firewalld

Open a port:

Add:
firewall-cmd --zone=public --add-port=80/tcp --permanent (--permanent makes the rule persistent; without it the rule is lost after a reload or restart)
Reload:
firewall-cmd --reload
View:
firewall-cmd --zone=public --query-port=80/tcp
Delete:
firewall-cmd --zone=public --remove-port=80/tcp --permanent

selinux nginx

Restart Nginx and bind() to 0.0.0.0:8088 failed (13: Permission denied)

First declare: If you do not use SELinux you can skip this article.

The Nginx service is installed on CentOS 7. For the project, the default port 80 of Nginx needs to be changed to 8088. After modifying the configuration file and restarting the Nginx service, the log shows the following error:

[emerg]

9011#0: bind() to 0.0.0.0:8088 failed (13: Permission denied)

Permission was denied. At first I thought the port was occupied by another program, but checking the active ports showed nothing using it. Posts online said it requires root privileges, but I was already running as root. Very frustrating, but Google eventually gave the answer: by default SELinux only allows a fixed list of ports (80, 81, 443, 488, 8008, 8009, 8443, 9000) to be used as HTTP ports.

To view the HTTP ports allowed by SELinux you need the semanage command, so install the semanage tool first.

Before installing semanage, install bash-completion, which provides tab completion for sub-commands:

yum -y install bash-completion

Installing semanage directly via yum fails because there is no package with that name:

yum install semanage

No package semanage available.

Then find out which package provides the semanage command:

yum provides semanage

Or use the following command:

yum whatprovides /usr/sbin/semanage

We find that the package policycoreutils-python needs to be installed to get the semanage command.

Install the package via yum:

yum install policycoreutils-python.x86_64

Now that we can finally use semanage, first look at the ports HTTP is allowed to use:

semanage port -l | grep http_port_t

http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000

Then we will add the port 8088 to be used in the port list:

semanage port -a -t http_port_t -p tcp 8088

semanage port -l | grep http_port_t

http_port_t tcp 8088, 80, 81, 443, 488, 8008, 8009, 8443, 9000

Ok, now nginx can use port 8088.

The selinux log is in /var/log/audit/audit.log

However, the information recorded in this file is not very readable and is hard to interpret directly. We can use the audit2why and audit2allow tools to analyse it; both tools are also provided by the policycoreutils-python package.

audit2why < /var/log/audit/audit.log

For collecting and analysing SELinux logs there is another tool, setroubleshoot; the corresponding package is setroubleshoot-server.
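
For example, audit2allow can turn the denials recorded for nginx into a loadable local policy module (a sketch; the module name nginx_local is arbitrary):

grep nginx /var/log/audit/audit.log | audit2allow -M nginx_local
semodule -i nginx_local.pp

Review the generated nginx_local.te file before loading it, since it grants exactly what was denied.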

Bash script to check whether a host is alive

#!/bin/bash
#
# TCP-ping in bash (not tested)
HOSTNAME="$1"
PORT="$2"
if [ "X$HOSTNAME" == "X" ]; then
    echo "Specify a hostname"
    exit 1
fi
if [ "X$PORT" == "X" ]; then
    PORT="22"
fi
# Open a TCP connection on file descriptor 3. A failed exec redirection would
# abort the whole script, so run it in a subshell and test the exit status.
# (fd 3 is closed automatically when the subshell exits.)
if (exec 3<>/dev/tcp/"$HOSTNAME"/"$PORT") 2>/dev/null; then
    echo "Alive."
else
    echo "Dead."
fi
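
Save the script (for example as tcping.sh, a name chosen here only for illustration), make it executable and call it with a host and an optional port (defaults to 22):

chmod +x tcping.sh
./tcping.sh 192.168.1.82 80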

Tomcat log rotation script

#!/bin/bash
# Rotate catalina.out every two hours: copy it away with a timestamped name, then truncate it.
time=$(date +%H)
end_time=$(expr $time - 2)
a=$end_time
BF_TIME=$(date +%Y%m%d)_$a:00-$time:00
cp /usr/local/tomcat8/logs/catalina.out /var/log/tomcat/oldlog/catalina.$BF_TIME.out
echo " " > /usr/local/tomcat8/logs/catalina.out

Create the archive directory for the old catalina.out copies and make the script executable:

mkdir  -p  /var/log/tomcat/oldlog/

chmod  +x  /root/tom_log.sh

 crontab -e
0 */2 * * * sh /root/tom_log.sh

ls /var/log/tomcat/oldlog/

catalina.20190102_15:00-17:00.out  catalina.20190102_17:00-19:00.out
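
If the archive directory grows too large, old copies can be pruned with find; for example, to keep seven days of history (a sketch, adjust the retention and path to your needs):

find /var/log/tomcat/oldlog/ -name 'catalina.*.out' -mtime +7 -delete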

docker tomcat + mysql

Build on a clean CentOS image
Centos image preparation
Pull the Centos image on the virtual machine: docker pull centos
Create a container to run the Centos image: docker run -it -d --name mycentos centos /bin/bash
Note: There is an error here [ WARNING: IPv4 forwarding is disabled. Networking will not work. ]

Change the virtual machine file: vim /usr/lib/sysctl.d/00-system.conf
Add the following content
net.ipv4.ip_forward=1
Restart the network: systemctl restart network
Note: there is another problem here: systemctl cannot be used normally inside a Docker container. The following solution was found on the official forums:

Link: https://forums.docker.com/t/systemctl-status-is-not-working-in-my-docker-container/9075/4

Run the image with the following command:
docker run --privileged -v /sys/fs/cgroup:/sys/fs/cgroup -it -d --name usr_sbin_init_centos centos /usr/sbin/init

1. Must have --privileged

2. Must have -v /sys/fs/cgroup:/sys/fs/cgroup

3. Replace /bin/bash with /usr/sbin/init

Install the JAVA environment
Prepare the JDK tarball to upload to the virtual machine
Put the tarball into the docker container using docker cp
docker cp jdk-11.0.2_linux-x64_bin.tar.gz 41dbc0fbdf3c:/

Usage is the same as the Linux cp command, except that you need to add the container identifier (ID or name).

Extract the tar package (create the target directory first with mkdir -p /usr/local/java/jdk):
tar -xf jdk-11.0.2_linux-x64_bin.tar.gz -C /usr/local/java/jdk
Edit the profile file to export the Java environment variables.

/etc/profile

export JAVA_HOME=/usr/local/java/jdk/jdk-11.0.2
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Run source /etc/profile to make the environment variables take effect.
Test whether it succeeded:
java --version

result

java 11.0.2 2019-01-15 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.2+9-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.2+9-LTS, mixed mode)
Install Tomcat
Prepare the tomcat tar package to upload to the virtual machine and cp to the docker container
Extract to
tar -xf apache-tomcat-8.5.38.tar.gz -C /usr/local/tomcat
Set Tomcat to start at boot by using the rc.local file.

Add the following to rc.local:

export JAVA_HOME=/usr/local/java/jdk/jdk-11.0.2
/usr/local/tomcat/apache-tomcat-8.5.38/bin/startup.sh
Start tomcat:

In the /usr/local/tomcat/apache-tomcat-8.5.38/bin/ directory run:
./startup.sh
Test:
curl localhost:8080

(the HTML source of the default page should be returned)

Install mysql
Get the yum source of mysql
wget -i -c http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
Install the above yum source
yum -y install mysql57-community-release-el7-10.noarch.rpm
Install mysql via yum:
yum -y install mysql-community-server
Change the mysql configuration in /etc/my.cnf:
validate_password=OFF # turn off password validation
character-set-server=utf8
collation-server=utf8_general_ci
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
(If initialization reports "Initialize specified but the data directory has files in it", clear the data directory first. The default TIMESTAMP behaviour is deprecated from 5.6 onwards and also produces a warning that needs to be silenced.)

[client]

default-character-set=utf8
Get the mysql initial password

grep "password" /var/log/mysqld.log

[Note] A temporary password is generated for root@localhost: k:nT<dT,t4sF

Use this password to log in to mysql

Go to mysql and proceed

Enter

mysql -u root -p

change the password

ALTER USER 'root'@'localhost' IDENTIFIED BY '111111';

Change mysql to allow remote access (run these statements in the mysql client, then flush privileges):

use mysql;
update user set host = '%' where user = 'root';
flush privileges;
Test: from the physical machine, use Navicat to access the mysql instance running inside Docker.
Packaging the container
Pushing to Docker Hub

Commit the container as an image:

docker commit -a 'kane' -m 'test' container_id images_name:images_tag

Push to Docker Hub:
docker push kane0725/tomcat
Local tarball export/import

Export the container to a local tar package:

docker export -o test.tar a404c6c174a2

Import the tar package as an image:

docker import test.tar test_images
Use Dockerfile
Note: this only builds a tomcat image

Preparation
Create a dedicated folder and put the extracted jdk and tomcat directories (jdk-11.0.2 and apache-tomcat-8.5.38) in it
Create a Dockerfile in this directory
It is based on the Centos base image
Dockerfile content:
FROM centos
MAINTAINER tomcat mysql
ADD jdk-11.0.2 /usr/local/java
ENV JAVA_HOME /usr/local/java/
ADD apache-tomcat-8.5.38 /usr/local/tomcat8
EXPOSE 8080
RUN /usr/local/tomcat8/bin/startup.sh
Output results using docker build

[root@localhost dockerfile]

# docker build -t tomcats:centos .
Sending build context to Docker daemon 505.8 MB
Step 1/7 : FROM centos
 ---> 1e1148e4cc2c
Step 2/7 : MAINTAINER tomcat mysql
 ---> Using cache
 ---> 889454b28f55
Step 3/7 : ADD jdk-11.0.2 /usr/local/java
 ---> Using cache
 ---> 8cad86ae7723
Step 4/7 : ENV JAVA_HOME /usr/local/java/
 ---> Running in 15d89d66adb4
 ---> 767983acfaca
Removing intermediate container 15d89d66adb4
Step 5/7 : ADD apache-tomcat-8.5.38 /usr/local/tomcat8
 ---> 4219d7d611ec
Removing intermediate container 3c2438ecf955
Step 6/7 : EXPOSE 8080
 ---> Running in 56c4e0c3b326
 ---> 7c5bd484168a
Removing intermediate container 56c4e0c3b326
Step 7/7 : RUN /usr/local/tomcat8/bin/startup.sh
 ---> Running in 7a73d0317db3

Tomcat started.
 ---> b53a6d54bf64
Removing intermediate container 7a73d0317db3
Successfully built b53a6d54bf64
Docker build problem
Be sure to include the trailing "." (the build context) at the end of the command, otherwise it reports an error:
“docker build” requires exactly 1 argument(s).
Run a container

docker run -it --name tomcats --restart always -p 1234:8080 tomcats /bin/bash

/usr/local/tomcat8/bin/startup.sh

result

Using CATALINA_BASE: /usr/local/tomcat8
Using CATALINA_HOME: /usr/local/tomcat8
Using CATALINA_TMPDIR: /usr/local/tomcat8/temp
Using JRE_HOME: /usr/local/java/
Using CLASSPATH: /usr/local/tomcat8/bin/bootstrap.jar:/usr/local/tomcat8/bin/tomcat-juli.jar
Tomcat started.
Use docker compose
Install docker compose
Official: https://docs.docker.com/compose/install/

The method I chose is pip installation.

Installation:

pip install docker-compose

docker-compose --version

———————–

docker-compose version 1.23.2, build 1110ad0
Write docker-compose.yml

This yml file builds a mysql container and a tomcat container

version: "3"
services:
  mysql:
    container_name: mysql
    image: mysql:5.7
    restart: always
    volumes:
      - ./mysql/data/:/var/lib/mysql/
      - ./mysql/conf/:/etc/mysql/mysql.conf.d/
    ports:
      - "6033:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=
  tomcat:
    container_name: tomcat
    restart: always
    image: tomcat
    ports:
      - 8080:8080
      - 8009:8009
    links:
      - mysql:m1 # link to the mysql container
Note:

volumes entries must be paths; you cannot specify a single file

Mounting an external conf directory for Tomcat kept failing (I do not know why); the container prints the following:

tomcat | Feb 20, 2019 2:23:29 AM org.apache.catalina.startup.Catalina load
tomcat | WARNING: Unable to load server configuration from [/usr/local/tomcat/conf/server.xml]
tomcat | Feb 20, 2019 2:23:29 AM org.apache.catalina.startup.Catalina start
tomcat | SEVERE: Cannot start server. Server instance is not configured.
tomcat exited with code 1
Run command
Note: the command must be executed in the directory containing the yml file

docker-compose up -d

———-View docker container——-

[root@localhost docker-compose]

# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a8a0165a3a8 tomcat “catalina.sh run” 7 seconds ago Up 6 seconds 0.0.0.0:8009->8009/tcp, 0.0.0.0:8080->8080/tcp tomcat
ddf081e87d67 mysql:5.7 “docker-entrypoint…” 7 seconds ago Up 7 seconds 33060/tcp, 0
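
A few standard docker-compose sub-commands that are useful at this point:

docker-compose ps (list the containers defined in this yml file)
docker-compose logs -f (follow the logs of both containers)
docker-compose down (stop and remove the containers)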

How to recover "rpmdb open failed" error in RHEL or Centos Linux

You are updating the system with the yum command and suddenly the power goes down, or the yum process is accidentally killed. After this, when you try to update the system again with yum, you get the error messages below related to the rpmdb:

error: rpmdb: BDB0113 Thread/process 2196/139984719730496 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 – (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:

Error: rpmdb open failed
You are also not able to perform rpm queries and get the same error messages on screen:

[root@testvm~]

# rpm -qa
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 – (-30973)
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages database in /var/lib/rpm

[root@testvm ~]

# rpm -Va
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 – (-30973)
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 7924/139979327153984 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages database in /var/lib/rpm

[root@testvm~]

#
The reason for this error is that the rpmdb has been corrupted. No worries: it is easy to recover the rpmdb by following the steps below:

  1. Create a backup directory into which to move the rpmdb files.

mkdir /tmp/rpm_db_bak

  2. Move the rpmdb files into the backup directory created in /tmp:

mv /var/lib/rpm/__db* /tmp/rpm_db_bak

  3. Clean the yum cache with the command below:

yum clean all

  4. Now run the yum update command again. It will rebuild the rpmdb and should fetch and apply the updates from your repository or RHSM (or the CentOS CDN in the case of CentOS Linux).

[root@testvm ~]

# yum update
Loaded plugins: fastestmirror
base | 3.6 kB 00:00
epel/x86_64/metalink | 5.0 kB 00:00
epel | 4.3 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
(1/7): base/7/x86_64/group_gz | 155 kB 00:02
(2/7): epel/x86_64/group_gz | 170 kB 00:04
(3/7): extras/7/x86_64/primary_db | 191 kB 00:12
(4/7): epel/x86_64/updateinfo | 809 kB 00:21
(5/7): base/7/x86_64/primary_db | 5.6 MB 00:26
(6/7): epel/x86_64/primary_db | 4.8 MB 00:46
(7/7): updates/7/x86_64/primary_db | 7.8 MB 00:50
Determining fastest mirrors

  • base: mirror.ehost.vn
  • epel: repo.ugm.ac.id
  • extras: mirror.ehost.vn
  • updates: mirror.dhakacom.com
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:1.4.0-19.el7_3 will be updated
---> Package NetworkManager.x86_64 1:1.4.0-20.el7_3 will be an update
---> Package NetworkManager-adsl.x86_64 1:1.4.0-19.el7_3 will be updated
[...]
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================

Package Arch Version Repository Size

Installing:
kernel x86_64 3.10.0-514.26.2.el7 updates 37 M
python2-libcomps x86_64 0.1.8-3.el7 epel 46 k
replacing python-libcomps.x86_64 0.1.6-13.el7
Updating:
NetworkManager x86_64 1:1.4.0-20.el7_3 updates 2.5 M
NetworkManager-adsl x86_64 1:1.4.0-20.el7_3 updates 146 k
NetworkManager-bluetooth x86_64 1:1.4.0-20.el7_3 updates 165 k
NetworkManager-glib x86_64 1:1.4.0-20.el7_3 updates 385 k
NetworkManager-libnm x86_64 1:1.4.0-20.el7_3 updates 443 k
NetworkManager-team x86_64 1:1.4.0-20.el7_3 updates 147 k
python-perf x86_64 3.10.0-514.26.2.el7 updates 4.0 M
sudo x86_64 1.8.6p7-23.el7_3 updates 735 k
systemd x86_64 219-30.el7_3.9 updates 5.2 M
systemd-libs x86_64 219-30.el7_3.9 updates 369 k
systemd-sysv x86_64 219-30.el7_3.9 updates 64 k
tuned noarch 2.7.1-3.el7_3.2 updates 210 k
xfsprogs x86_64 4.5.0-10.el7_3 updates 895 k
Removing:
kernel x86_64 3.10.0-123.el7 @anaconda 127 M

Transaction Summary

Install 2 Packages
Upgrade 46 Packages
Remove 1 Package

Total download size: 84 M
Is this ok [y/d/N]: y
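
If yum still reports rpmdb errors after these steps, the database can also be rebuilt explicitly before retrying the update (a standard rpm option):

rpm --rebuilddb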

Simple way to configure Ngnix High Availability Web Server with Pacemaker and Corosync on CentOS7

Pacemaker is open source cluster manager software which provides high availability of resources or services on CentOS 7 or RHEL 7 Linux. It is a scalable and advanced HA cluster manager distributed by ClusterLabs.

Corosync is the core of the Pacemaker cluster manager: it is responsible for the heartbeat communication between cluster nodes, which is what makes deploying highly available applications possible. Corosync is derived from the open source OpenAIS project, under the new BSD license.

Pcsd is the daemon behind the Pacemaker command line interface (CLI) and GUI for managing a Pacemaker cluster. Its pcs command is used for creating, configuring, and adding new nodes to the cluster.

In this tutorial I will use pcs on the command line to configure an Active/Passive Pacemaker cluster that provides high availability for the Nginx web service on CentOS 7. The article gives a basic idea of how to configure a Pacemaker cluster on CentOS 7 (the same applies to RHEL 7, since CentOS mirrors RHEL). For this basic cluster configuration I have disabled STONITH and ignored quorum, but for a production environment I suggest using Pacemaker's STONITH feature.

Here is a short definition of STONITH: STONITH (Shoot The Other Node In The Head) is the fencing implementation in Pacemaker. Fencing is the isolation of a failed node so that it does not cause disruption to the computer cluster.

For demonstration I built two VMs (virtual machines) on KVM on my Ubuntu 16.04 base machine; the VMs have private IP addresses.

Note: I refer to my VMs as cluster nodes for the rest of the article.

Pre-requisite for configuring pacemaker cluster

Minimum two CentOS 7 Server
webserver01: 192.168.1.33
webserver02: 192.168.1.34
Floating IP Address: 192.168.1.30
Root Privilege

Below are the points which I will follow for Installing and Configuring two node Pacemaker Cluster:

  1. Mapping of Host File
  2. Installation of Epel Repository and Nginx
  3. Installation and Configuration of Pacemaker, Corosync, and Pcsd
  4. Creation and Configuration of Cluster
  5. Disabling of STONITH and Ignoring Quorum Policy
  6. Adding of Floating-IP and Resources
  7. Testing the Cluster service

Steps for Installation and configuration of pacemaker cluster

  1. Mapping of host files:

As I am not using DNS in my test lab to resolve the cluster node hostnames, I configured the /etc/hosts file on both nodes. Even if you do have DNS in your environment, I still suggest configuring /etc/hosts for more reliable heartbeat communication between the cluster nodes.

Edit the /etc/hosts file with your preferred editor on both cluster nodes; below is the /etc/hosts file I configured on both nodes.

[root@webserver01 ~]

# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.33 webserver01
192.168.1.34 webserver02

[root@webserver01 ~]

#

[root@webserver02 ~]

# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.33 webserver01
192.168.1.34 webserver02

[root@webserver02 ~]

#
After configuring the /etc/hosts file, we test the connectivity between the two cluster nodes with the ping command.
Example:

ping -c 3 webserver01
ping -c 3 webserver02
If we get replies like those below, the webservers are communicating with each other.

[root@webserver01 ~]

# ping -c 3 webserver02
PING webserver02 (192.168.1.34) 56(84) bytes of data.
64 bytes from webserver02 (192.168.1.34): icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from webserver02 (192.168.1.34): icmp_seq=2 ttl=64 time=0.727 ms
64 bytes from webserver02 (192.168.1.34): icmp_seq=3 ttl=64 time=0.698 ms

— webserver02 ping statistics —
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.698/0.843/1.106/0.188 ms

[root@webserver01 ~]

#

[root@webserver02 ~]

# ping -c 3 webserver01
PING webserver01 (192.168.1.33) 56(84) bytes of data.
64 bytes from webserver01 (192.168.1.33): icmp_seq=1 ttl=64 time=0.197 ms
64 bytes from webserver01 (192.168.1.33): icmp_seq=2 ttl=64 time=0.123 ms
64 bytes from webserver01 (192.168.1.33): icmp_seq=3 ttl=64 time=0.114 ms

— webserver01 ping statistics —
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.114/0.144/0.197/0.039 ms

[root@webserver02~]

#

  2. Installation of Epel Repository and Nginx

In this step we will install the EPEL (Extra Packages for Enterprise Linux) repository and then Nginx. The EPEL repository package needs to be installed first because Nginx is installed from it.

yum -y install epel-release
Now install Nginx:

yum -y install nginx

  3. Install and Configure Pacemaker, Corosync, and Pcsd

Now we will install the pacemaker, pcs and corosync packages with yum. These packages do not require a separate repository; they come from the default CentOS repositories.

yum -y install corosync pacemaker pcs
Once the cluster packages are installed successfully, enable the cluster services at system startup with the systemctl commands below:

systemctl enable pcsd
systemctl enable corosync
systemctl enable pacemaker
Now start the pcsd service on both cluster nodes and also enable it at system startup.

systemctl start pcsd.service
systemctl enable pcsd.service
The pcsd daemon works with the pcs command-line interface to manage synchronizing the corosync configuration across all nodes in the cluster.

The user hacluster is created automatically (with a disabled password) during package installation. This account is needed as the login credential for syncing the corosync configuration and for starting and stopping the cluster service on the other cluster nodes.
In the next step we will create a new password for the hacluster user; use the same password on the other cluster node as well.

passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

  4. Creation and Configuration of Cluster

Note: steps 4 to 7 only need to be performed on the webserver01 server.
This step covers creating the new two-node CentOS Linux cluster that will host the Nginx resource and the floating IP address.

First of all, to create the cluster we need to authorize all servers using the pcs command and the hacluster user.

Authorize both cluster webservers with the pcs command and the hacluster user and password.

[root@webserver01 ~]

# pcs cluster auth webserver01 webserver02
Username: hacluster
Password:
webserver01: Authorized
webserver02: Authorized

[root@webserver01 ~]

#
Note: if you get the error below after running the auth command above:

[root@webserver01 ~]

# pcs cluster auth webserver01 webserver02
Username: hacluster
Password:
webserver01: Authorized
Error: Unable to communicate with webserver02
Then you need to define firewalld rules on both nodes to enable communication between the cluster nodes.

Below are examples of adding the rules for the cluster and for nginx as well:

[root@webserver01 ~]

# firewall-cmd --permanent --add-service=high-availability
success

[root@webserver01 ~]

# firewall-cmd --permanent --add-service=http
success

[root@webserver01 ~]

# firewall-cmd --permanent --add-service=https
success

[root@webserver01 ~]

# firewall-cmd --reload
success

[root@webserver01 ~]

# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: ssh dhcpv6-client high-availability http https
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

[root@webserver01 ~]

#
Now we will define the cluster name and the cluster node members.

pcs cluster setup --name web_cluster webserver01 webserver02
Next, start all cluster services and enable them at system startup.

[root@webserver01 ~]

# pcs cluster start --all
webserver02: Starting Cluster…
webserver01: Starting Cluster…

[root@webserver01 ~]

# pcs cluster enable --all
webserver01: Cluster Enabled
webserver02: Cluster Enabled

[root@webserver01 ~]

#
Run the below command to check the Cluster status

pcs status cluster

[root@webserver01 ~]

# pcs status cluster
Cluster Status:
Stack: corosync
Current DC: webserver02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) – partition with quorum
Last updated: Tue Sep 4 02:38:20 2018
Last change: Tue Sep 4 02:33:06 2018 by hacluster via crmd on webserver02
2 nodes configured
0 resources configured

PCSD Status:
webserver01: Online
webserver02: Online

[root@webserver01 ~]

#

  5. Disabling of STONITH and Ignoring Quorum Policy
    In this tutorial we will disable STONITH and ignore the quorum policy because we are not using a fencing device here. If you implement a cluster in a production environment, I suggest using fencing and a proper quorum policy.

Disable STONITH:

pcs property set stonith-enabled=false
Ignore the Quorum Policy:

pcs property set no-quorum-policy=ignore
Now check whether STONITH and the quorum policy are disabled with the command below:

pcs property list

[root@webserver01 ~]

# pcs property list
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: web_cluster
dc-version: 1.1.18-11.el7_5.3-2b07d5c5a9
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: false

[root@webserver01 ~]

#

  6. Adding of Floating-IP and Resources

A floating IP address is a cluster virtual IP address that moves automatically from one cluster node to another when the active node hosting the cluster resources fails or is taken down.

In this step we will add Floating IP and Nginx resources:

Adding Floating IP

pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=192.168.1.30 cidr_netmask=32 op monitor interval=30s
Adding nginx resources

pcs resource create webserver ocf:heartbeat:nginx configfile=/etc/nginx/nginx.conf op monitor timeout="5s" interval="5s"
Now check the newly added resources with the command below:

pcs status resources

[root@webserver01 ~]

# pcs status resources
virtual_ip (ocf::heartbeat:IPaddr2): Started webserver01
webserver (ocf::heartbeat:nginx): Started webserver01

[root@webserver01 ~]

#
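
Optionally, you can also tell Pacemaker to keep the Nginx resource on the same node as the floating IP and to start the IP first (a sketch using standard pcs constraint commands; the resource names are the ones created above):

pcs constraint colocation add webserver with virtual_ip INFINITY
pcs constraint order virtual_ip then webserver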

  7. Testing the Cluster service

To check the cluster service running status:
We will check the cluster service status before testing nginx webservice failover when the active cluster node fails.

To check the running cluster service status, below is the command with an example:

[root@webserver01 ~]

# pcs status
Cluster name: web_cluster
Stack: corosync
Current DC: webserver01 (version 1.1.18-11.el7_5.3-2b07d5c5a9) – partition with quorum
Last updated: Tue Sep 4 03:55:47 2018
Last change: Tue Sep 4 03:15:29 2018 by root via cibadmin on webserver01

2 nodes configured
2 resources configured

Online: [ webserver01 webserver02 ]

Full list of resources:

virtual_ip (ocf::heartbeat:IPaddr2): Started webserver01
webserver (ocf::heartbeat:nginx): Started webserver01

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

[root@webserver01 ~]

#
To test nginx webservice failover:

First we will create a webpage on both cluster nodes with the commands below.
In webserver01:

echo 'webserver01 - Web-Cluster-Testing' > /usr/share/nginx/html/index.html

In webserver02:

echo 'webserver02 - Web-Cluster-Testing' > /usr/share/nginx/html/index.html

Now open the web page using the floating IP address (192.168.1.30) that we configured with the cluster resources in the previous steps; you will see that the webpage is currently served from webserver01.

Now stop the cluster service on webserver01 and then open the webpage again using the same floating IP address. Below is the command for stopping the pacemaker cluster on webserver01:

pcs cluster stop webserver01
After stopping the pacemaker cluster on webserver01, the webpage should now be served from webserver02.
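
Once the failover test is done, bring webserver01 back into the cluster and confirm that both nodes are online again:

pcs cluster start webserver01
pcs status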

check Redhat version

The objective of this guide is to provide you with some hints on how to check the system version of your Redhat Enterprise Linux (RHEL). There are multiple ways to check the system version; however, depending on your system configuration, not all of the examples described below may be suitable. For a CentOS-specific guide visit the How to check CentOS version guide.

Requirements

Privileged access to your RHEL system may be required.

Difficulty

EASY

Conventions

  • # - requires given linux commands to be executed with root privileges either directly as a root user or by use of sudo command
  • $ - requires given linux commands to be executed as a regular non-privileged user

Instructions

Using hostnamectl

hostnamectl is most likely the first and last command you need to execute to reveal your RHEL system version:

$ hostnamectl 
   Static hostname: localhost.localdomain
Transient hostname: status
         Icon name: computer-vm
           Chassis: vm
        Machine ID: d731df2da5f644b3b4806f9531d02c11
           Boot ID: 384b6cf4bcfc4df9b7b48efcad4b6280
    Virtualization: xen
  Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:7.3:GA:server
            Kernel: Linux 3.10.0-514.el7.x86_64
      Architecture: x86-64

Query Release Package

Use rpm command to query Redhat’s release package:

RHEL 7
$ rpm --query redhat-release-server
redhat-release-server-7.3-7.el7.x86_64
RHEL 8
$ rpm --query redhat-release
redhat-release-8.0-0.34.el8.x86_64

Common Platform Enumeration

Check Common Platform Enumeration source file:

$ cat /etc/system-release-cpe 
cpe:/o:redhat:enterprise_linux:7.3:ga:server

LSB Release

Depending on whether a redhat-lsb package is installed on your system you may also use lsb_release -d command to check Redhat’s system version:

$ lsb_release -d
Description:	Red Hat Enterprise Linux Server release 7.3 (Maipo)

Alternatively install redhat-lsb package with:

# yum install redhat-lsb


Check Release Files

There are a number of release files located in the /etc/ directory, namely os-release, redhat-release and system-release:

$ ls /etc/*release
os-release  redhat-release  system-release

Use cat to check the content of each file to reveal your Redhat OS version. Alternatively, use the below for loop for an instant check:

$ for i in $(ls /etc/*release); do echo ===$i===; cat $i; done

Depending on your RHEL version, the output of the above shell for loop may look different:

===os-release===
NAME="Red Hat Enterprise Linux Server"
VERSION="7.3 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.3 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.3"
===redhat-release===
Red Hat Enterprise Linux Server release 7.3 (Maipo)
===system-release===
Red Hat Enterprise Linux Server release 7.3 (Maipo)

Grub Config

The least reliable way on how to check Redhat’s OS version is by looking at Grub configuration. Grub configuration may not produce a definitive answer, but it will provide some hints on how the system booted. 

The default locations of grub config files are /boot/grub2/grub.cfg and /etc/grub2.cfg. Use grep command to check for menuentry keyword:

# grep -w menuentry /boot/grub2/grub.cfg /etc/grub2.cfg

Another alternative is to check the value of the "GRUB Environment Block":

# grep saved_entry /boot/grub2/grubenv 
saved_entry=Red Hat Enterprise Linux Server (3.10.0-514.el7.x86_64) 7.3 (Maipo)

Nginx server configuration

yum -y install make gcc gcc-c++ openssl openssl-devel pcre-devel zlib-devel

wget -c http://nginx.org/download/nginx-1.14.2.tar.gz

tar zxvf nginx-1.14.2.tar.gz

cd nginx-1.14.2

./configure --prefix=/usr/local/nginx

make && make install

cd /usr/local/nginx

./sbin/nginx

ps aux|grep nginx
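
A source-built nginx has no systemd unit, so it is normally stopped and reloaded with ./sbin/nginx -s stop and -s reload. If you prefer systemctl management, a minimal unit file can be added; the sketch below assumes the default paths that follow from --prefix=/usr/local/nginx used above (binary in sbin/, pid file in logs/nginx.pid):

cat > /etc/systemd/system/nginx.service <<'EOF'
[Unit]
Description=nginx built from source
After=network.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now nginx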

Nginx load balancing configuration example

Load balancing is mainly achieved through specialized hardware devices or through software algorithms. The load balancing effect achieved by the hardware device is good, the efficiency is high, and the performance is stable, but the cost is relatively high. The load balancing implemented by software mainly depends on the selection of the equalization algorithm and the robustness of the program. Equalization algorithms are also diverse, and there are two common types: static load balancing algorithms and dynamic load balancing algorithms. The static algorithm is relatively simple to implement, and it can achieve better results in the general network environment, mainly including general polling algorithm, ratio-based weighted rounding algorithm and priority-based weighted rounding algorithm. The dynamic load balancing algorithm is more adaptable and effective in more complex network environments. It mainly has a minimum connection priority algorithm based on task volume, a performance-based fastest response priority algorithm, a prediction algorithm and a dynamic performance allocation algorithm.

The general principle of network load balancing technology is to use a certain allocation strategy to distribute the network load to each operating unit of the network cluster in a balanced manner, so that a single heavy load task can be distributed to multiple units for parallel processing, or a large number of concurrent access or data. Traffic sharing is handled separately on multiple units, thereby reducing the user’s waiting response time.

Nginx server load balancing configuration
The Nginx server implements a static, priority-based weighted round-robin algorithm. The main configuration involves the proxy_pass and upstream directives. The directives themselves are easy to understand; the key point is that Nginx configuration is flexible and diverse, so the real question is how to configure load balancing well while integrating other functions into a configuration that meets actual needs.

The following are some basic example fragments. Of course, they cannot cover every configuration situation; I hope they can serve as a starting point, and that you will summarize and accumulate more experience in actual use. Points to note in the configuration are added as comments.

Configuration Example 1: General round-robin load balancing for all requests
In the following example fragment, all servers in the backend server group use the default weight=1, so they receive requests in turn according to the general round-robin policy. This is the simplest configuration for Nginx load balancing: all requests for www.rmohan.com are load balanced across the backend server group. The example code is as follows:

upstream backend #Configure the backend server group
{
server 192.168.1.2:80;
server 192.168.1.3:80;
server 192.168.1.4:80; #default weight=1
}
server
{
listen 80;
server_name www.rmohan.com;
index index.html index.htm;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
}

}

Configuration Example 2: Weighted round-robin load balancing for all requests
Compared with Configuration Example 1, in this fragment the servers in the backend server group are given different priority levels; the value of the weight parameter is the "weight" used in the round-robin policy. 192.168.1.2:80 has the highest weight and preferentially receives and processes client requests; 192.168.1.4:80 has the lowest weight and receives and processes the fewest client requests; 192.168.1.3:80 falls between the two. All requests for www.rmohan.com are load balanced across the backend server group according to these weights. The example code is as follows:

upstream backend #Configure the backend server group
{
server 192.168.1.2:80 weight=5;
server 192.168.1.3:80 weight=2;
server 192.168.1.4:80; #default weight=1
}
server
{
listen 80;
server_name www.rmohan.com;
index index.html index.htm;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
}

}

Configuration Example 3: Load balancing for specific resources
In this example fragment we set up two proxied server groups: one named "videobackend" that load balances client requests for video resources, and another named "filebackend" that load balances client requests for file resources. All requests for "http://www.mywebname/video/" are balanced across the videobackend server group, and all requests for "http://www.mywebname/file/" are balanced across the filebackend server group. The configuration shown here implements general (unweighted) load balancing; for weighted load balancing refer to Configuration Example 2.

In the location /file/ {...} block, we copy the client's real information into the "Host", "X-Real-IP", and "X-Forwarded-For" header fields of the request, so the requests received by the backend server group carry the real information of the client rather than that of the Nginx server. The example code is as follows:

upstream videobackend #Configure backend server group 1
{
server 192.168.1.2:80;
server 192.168.1.3:80;
server 192.168.1.4:80;
}
upstream filebackend #Configure backend server group 2
{
server 192.168.1.5:80;
server 192.168.1.6:80;
server 192.168.1.7:80;
}
server
{
listen 80;
server_name www.rmohan.com;
index index.html index.htm;
location /video/ {
proxy_pass http://videobackend; #use backend server group 1
proxy_set_header Host $host;

}
location /file/ {
proxy_pass http://filebackend; #use backend server group 2
#retain the real information of the client
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}
}

Configuration Example 4: Load balancing different domain names
In this example fragment we set up two virtual servers and two backend proxied server groups to receive requests for different domain names and load balance them. If the client requests the domain name "home.rmohan.com", server1 receives the request and forwards it to the homebackend server group for load balancing; if the client requests "bbs.rmohan.com", server2 receives it and forwards it to the bbsbackend server group. This achieves load balancing per domain name.

Note that the server 192.168.1.4:80 appears in both backend server groups. All resources of both domain names need to be deployed on this server to ensure that client requests are handled without problems. The example code is as follows:


upstream bbsbackend #Configure backend server group 1
{
server 192.168.1.2:80 weight=2;
server 192.168.1.3:80 weight=2;
server 192.168.1.4:80;
}
upstream homebackend #Configure backend server group 2
{
server 192.168.1.4:80;
server 192.168.1.5:80;
server 192.168.1.6:80;
}
#Configure server 1
server
{
listen 80;
server_name home.rmohan.com;
index index.html index.htm;
location / {
proxy_pass http://homebackend;
proxy_set_header Host $host;

}

}
#Configure server 2
server
{
listen 80;
server_name bbs.rmohan.com;
index index.html index.htm;
location / {
proxy_pass http://bbsbackend;
proxy_set_header Host $host;

}

}

Configuration Example 5: Implementing load balancing with URL rewriting
First, let’s look at the specific source code. This is a modification made on the basis of instance one:


upstream backend #Configure the backend server group
{
server 192.168.1.2:80;
server 192.168.1.3:80;
server 192.168.1.4:80; #default weight=1
}
server
{
listen 80;
server_name www.rmohan.com;
index index.html index.htm;

location /file/ {
rewrite ^(/file/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;
}

location / {
proxy_pass http://backend;
proxy_set_header Host $host;
}

}
Compared with Configuration Example 1, this fragment adds URL rewriting for URIs containing "/file/". For example, when the client requests "http://www.rmohan.com/file/download/media/1.mp3", the virtual server first uses the location /file/ {…} block to rewrite the URI and then forwards the request to the backend server group, where load balancing takes place. In this way, load balancing combined with URL rewriting is easily implemented. In this configuration scheme, it is necessary to understand the difference between the last and break flags of the rewrite directive in order to achieve the expected effect.

The above five configuration examples show the basic method of Nginx server to implement load balancing configuration under different conditions. Since the functions of the Nginx server are incremental in structure, we can continue to add more functions based on these configurations, such as Web caching and other functions, as well as Gzip compression technology, identity authentication, and rights management. At the same time, when configuring the server group using the upstream command, you can fully utilize the functions of each command and configure the Nginx server that meets the requirements, is efficient, stable, and feature-rich.

Location configuration
Syntax: location [ = | ~ | ~* | ^~ ] uri { ... }
The uri is the request string to match; it can be a plain string without regular expressions, or a string containing a regular expression.
The optional modifier in [ ] changes how the request string is matched against the uri:

    = in front of a standard uri requires the request string to match the uri exactly; if the match succeeds, matching stops and the request is processed immediately

    ~ indicates that the uri contains a regular expression and is case-sensitive

    ~* indicates that the uri contains a regular expression and is case-insensitive

    ^~ requires nginx to use the location whose uri has the best (longest) prefix match with the request string, without evaluating regular expressions afterwards (see the sketch below)
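
A small self-contained sketch illustrating the four modifiers (the listen port 8081, the URIs and the file name are made up for this demonstration; the return directive just answers with a line of text so each match can be tested with curl; it relies on conf.d/*.conf being included, as in the default configuration shown earlier):

cat > /etc/nginx/conf.d/location-demo.conf <<'EOF'
server {
    listen 8081;
    location = /exact            { return 200 "exact match\n"; }
    location ^~ /static/         { return 200 "longest prefix match, regex skipped\n"; }
    location ~  \.php$           { return 200 "case-sensitive regex match\n"; }
    location ~* \.(gif|jpg|png)$ { return 200 "case-insensitive regex match\n"; }
}
EOF
nginx -t && nginx -s reload
curl http://127.0.0.1:8081/exact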

website error page
1xx: Informational - the request has been received and processing continues
2xx: Success - the request has been successfully received, understood, and accepted
3xx: Redirection - further action must be taken to complete the request
4xx: Client Error - the request has a syntax error or cannot be fulfilled
5xx: Server Error - the server failed to fulfil a legitimate request
HTTP status code meanings:
301 Moved Permanently - the requested data has a new location and the change is permanent
302 Found - the requested data has temporarily moved to another location
400 Bad Request - the server can be reached, but the page cannot be found because of a problem with the request
404 Not Found - the website can be reached, but the requested page cannot be found
405 Method Not Allowed - the website can be reached, but the page content cannot be retrieved because the request method is not allowed
500 Internal Server Error - the page cannot be displayed because of a problem on the server
501 Not Implemented - the server does not support the functionality required by the browser's request
505 HTTP Version Not Supported - the server does not support the HTTP protocol version used in the request
Examples:
200 OK // the client request succeeded
400 Bad Request // the client request has a syntax error and cannot be understood by the server
401 Unauthorized // the request is unauthorized; this status code must be used together with the WWW-Authenticate header field
403 Forbidden // the server received the request but refuses to provide the service
404 Not Found // the requested resource does not exist, e.g. a wrong URL was entered
500 Internal Server Error // an unexpected error occurred on the server
503 Service Unavailable // the server currently cannot process the client's request and may recover after a while
e.g.: HTTP/1.1 200 OK (CRLF)

Common configuration directives
1. error_log file | stderr [debug | info | notice | warn | error | crit | alert | emerg]
    debug - debug level, the most complete log output
    info - informational level, outputs prompt information
    notice - notice level, outputs information worth noting
    warn - warning level, outputs insignificant errors
    error - error level, an error has occurred
    crit - critical level, a serious error affecting normal operation of the service
    alert - alert level, a very serious error
    emerg - emergency level, an extremely serious error
The nginx server log can be written to a file or sent to the standard error output (stderr).
The log level option follows the file name; from low to high the levels are debug ... emerg. Once a level is set, messages of that level and of all higher levels are recorded.

    2. user user [group]

    Configures the user (and group) that the worker processes run as; omit it if you want any user to be able to start nginx.

    3. worker_processes number | auto

    Specifies how many worker processes nginx spawns;
    auto lets nginx detect the number automatically.

    4. pid file

    Specifies the file in which the pid of the master process is stored,
    e.g. pid logs/nginx.pid; pay attention to the file name set here, otherwise it cannot be found.

    5. include file

    Includes another configuration file, so other configuration can be merged in.

    6. accept_mutex on | off

    Sets serialization of new network connections.

    7. multi_accept on | off

    Sets whether a worker may accept multiple new network connections at the same time.

    8. use method

    Selects the event-driven model.

    9. worker_connections number

    Configures the maximum number of connections allowed per worker process; the default is 1024.

    10. mime-type

    Configures resource (media) types; a MIME type is the media type of a network resource.
    Format: default_type MIME-type

    11. access_log path [format [buffer=size]]

    Defines the server (access) log.
    path: path and name of the file in which the server log is stored
    format: optional, the named format string to use for the server log
    size: size of the memory buffer used to temporarily hold log data

    12. log_format name string ...;

    Used in combination with access_log; it defines the format of the server log
    and gives the format a name, so access_log can refer to it easily.

    name: the name of the format string; the default is combined
    string: the format string of the server log

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';


    $remote_addr (e.g. 10.0.0.1) - source address of the visitor

    $remote_user - the authentication information shown when the nginx server requires authentication for the access

    [$time_local] - the access time

    "$request" (e.g. "GET / HTTP/1.1") - the request line

    $status (e.g. 200) - the status information (304 is shown when the response was served from the cache)

    $body_bytes_sent (e.g. 256) - size of the response data

    "$http_referer" - the referring link

    "$http_user_agent" - the browser information of the accessing client

    "$http_x_forwarded_for" - known as the XFF header; it represents the client, i.e. the real IP of the HTTP requester. It is only added when passing through an HTTP proxy or load-balancing server. It is not a standard request header defined in the RFCs; it can be found in the Squid cache proxy development documentation.

    13. sendfile on | off

    Configures whether files are transferred in sendfile mode.

    14. sendfile_max_chunk size

    Configures the maximum amount of data each worker process may transfer in a single sendfile() call.

    15. keepalive_timeout timeout [header_timeout];

    Configures the connection timeout.
    timeout: how long the server keeps the connection open
    header_timeout: the timeout set in the Keep-Alive field of the response header

    16. keepalive_requests number

    The number of requests allowed over a single keep-alive connection.

    17. listen

    There are three ways to configure network listening:
        Listening on an IP address:
        listen address[:port] [default_server] [setfib=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [deferred]

        Listening on a port:
        listen port [default_server] [setfib=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred]

        Listening on a UNIX socket:
        listen unix:path [default_server] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred]

    address: IP address; port: port number; path: socket file path
    default_server: identifier that makes this virtual host the default host for address:port
    setfib=number: only useful on FreeBSD; associates the listening socket with a routing table (available since 0.8.44)
    backlog=number: sets the maximum queue length for pending connections in listen(); defaults to -1 on FreeBSD and 511 elsewhere
    rcvbuf=size: receive buffer size of the listening socket
    sndbuf=size: send buffer size of the listening socket
    deferred: identifier that sets accept() to deferred mode
    accept_filter=filter: sets a filter on the listening port to filter requests; only works on FreeBSD and NetBSD 5.0+
    bind: identifier that uses a separate bind() call for this address:port
    ssl: identifier that sets connections on this port to SSL mode

    18. server_name name ...

    Configures name-based virtual hosts. When a request matches several server_name entries, the priority is:
    an exact server_name match, then a wildcard at the beginning of server_name, then a wildcard at the end, then a regular-expression match.
    If the request matches server_name the same way more than once, the first match handles the request.

    19. root path

    After receiving a request, the server looks for the requested resource in the directory specified by root;
    this path specifies the document root directory.

    20. alias path (used inside a location block)

    Changes the request path of the URI received by the location; it can be followed by variables.

    21. index file ...;

    Sets the default home page of the website.

    22. error_page code ... [=[response]] uri

    Sets the error page.
        code: the HTTP error code to handle
        response: optional; converts the given error code into a new code
        uri: the path or address of the error page

    23. allow address | CIDR | all

    Configures IP-based access permission.

    address: a client IP allowed to access; multiple values cannot be given in one directive
    CIDR: a CIDR range of clients allowed to access, e.g. 185.199.110.153/24
    all: all clients may access

    24. deny address | CIDR | all

    Configures IP-based access denial.

    address: a client IP denied access; multiple values cannot be given in one directive
    CIDR: a CIDR range of clients denied access, e.g. 185.199.110.153/24
    all: all clients are denied

    25. auth_basic string | off

    Configures password-based access to nginx.

    string: enables authentication and sets the prompt shown in the authentication dialog

    off: disables authentication

    26. auth_basic_user_file file

    Configures the password file used for password-based access to nginx.

    file: the password file; an absolute path is required.
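
Putting 25 and 26 together, a minimal password-protected location might look like the sketch below (htpasswd comes from the httpd-tools package; the port, user name and paths are only examples):

yum -y install httpd-tools
htpasswd -c /etc/nginx/.htpasswd admin (creates the password file and prompts for a password)

cat > /etc/nginx/conf.d/protected.conf <<'EOF'
server {
    listen 8082;
    location /admin/ {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
EOF
nginx -t && nginx -s reload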