
RHEL / CentOS 7 Network Teaming

Below is an example of how to configure network teaming on RHEL/CentOS 7. It assumes you have at least two network interfaces.

Show Current Network Interfaces
[root@rhce-server ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 00:0c:29:69:bf:87 brd ff:ff:ff:ff:ff:ff
3: eno33554984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
link/ether 00:0c:29:69:bf:91 brd ff:ff:ff:ff:ff:ff

The two devices I will be teaming are eno33554984 and eno16777736.

Create the Team Interface
[root@rhce-server ~]# nmcli connection add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'

This will configure the interface for activebackup. Other runners include broadcast, roundrobin, loadbalance, and lacp.
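If you later need to switch runners, the team.config property of the existing connection can be updated instead of recreating it. A minimal sketch (the property name is standard nmcli on RHEL 7, but the JSON shown is a bare-bones example; real LACP setups usually need more runner options, see teamd.conf(5)):

```shell
# Switch the existing team0 connection to the lacp runner
nmcli connection modify team0 team.config '{"runner": {"name": "lacp"}}'

# Re-activate the connection so the new runner takes effect
nmcli connection down team0 && nmcli connection up team0
```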

Configure team0’s IP Address
[root@rhce-server ~]# nmcli connection modify team0 ipv4.addresses 192.168.1.22/24
[root@rhce-server ~]# nmcli connection modify team0 ipv4.method manual

You can also configure an IPv6 address by setting the ipv6.addresses field.
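For example (the ULA address below is purely illustrative; substitute your own prefix):

```shell
# Assign a static IPv6 address to the team interface
nmcli connection modify team0 ipv6.addresses fd00:192:168:1::22/64
nmcli connection modify team0 ipv6.method manual
```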

Configure the Team Slaves
[root@rhce-server ~]# nmcli connection add type team-slave con-name team0-slave1 ifname eno33554984 master team0
Connection 'team0-slave1' (4167ea50-7d3a-4024-98e1-3058a4dcf0fa) successfully added.
[root@rhce-server ~]# nmcli connection add type team-slave con-name team0-slave2 ifname eno16777736 master team0
Connection 'team0-slave2' (d5ed65d1-16a7-4bc7-8c4d-78e17a1ed8b3) successfully added.

Check the Connection
[root@rhce-server ~]# teamdctl team0 state
setup:
runner: activebackup
ports:
eno16777736
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
eno33554984
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
runner:
active port: eno16777736

[root@rhce-server ~]# ping -I team0 192.168.1.1
PING 192.168.1.1 (192.168.1.1) from 192.168.1.24 team0: 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.38 ms

Test Failover
[root@rhce-server ~]# nmcli device disconnect eno16777736
[root@rhce-server ~]# teamdctl team0 state
setup:
runner: activebackup
ports:
eno33554984
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
runner:
active port: eno33554984

Oracle RMAN Backup

How to sync a standby database that is lagging behind the primary database
Primary Database cluster: cluster1.rmohan.com
Standby Database cluster: cluster2.rmohan.com

Primary Database: prim
Standby database: stand

Database version:11.2.0.1.0

Reason:
1. A network outage between the primary and the standby database can lead to archive
gaps. Data Guard can detect archive gaps automatically and fetch the missing logs as
soon as the connection is re-established.

2. Archive logs may also have been deleted on the primary database, or the archives may be
corrupted with no valid backups available.

In such cases, where the standby lags far behind the primary database, an incremental backup
can be used to roll the physical standby forward and bring it back in sync with the primary database.

At primary database:-
SQL> select status,instance_name,database_role from v$database,v$instance;

STATUS INSTANCE_NAME DATABASE_ROLE
------------ ---------------- ----------------
OPEN prim PRIMARY

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
---------- --------------
1 214

At standby database:-
SQL> select status,instance_name,database_role from v$database,v$instance;

STATUS INSTANCE_NAME DATABASE_ROLE
------------ ---------------- ----------------
OPEN stand PHYSICAL STANDBY

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
---------- --------------
1 42
So the standby database has an archive gap of 172 logs (214 - 42).

Step 1: Take a note of the Current SCN of the Physical Standby Database.
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1022779

Step 2: Cancel the managed recovery process on the standby database.
SQL> alter database recover managed standby database cancel;

Database altered.

Step 3: On the Primary database, take the incremental SCN backup from the SCN that is currently recorded on the standby database (1022779)
At primary database:-

RMAN> backup incremental from scn 1022779 database format '/tmp/rman_bkp/stnd_backp_%U.bak';

Starting backup at 28-DEC-14

using channel ORA_DISK_1
backup will be obsolete on date 04-JAN-15
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/prim/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/prim/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/prim/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/prim/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/prim/users01.dbf
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak tag=TAG20141228T025048 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25

using channel ORA_DISK_1
backup will be obsolete on date 04-JAN-15
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak tag=TAG20141228T025048 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-DEC-14

We took the backup in the /tmp/rman_bkp directory; make sure it contains nothing besides the incremental SCN backups.

Step 4: Take a standby controlfile backup on the primary database.

At primary database:

RMAN> backup current controlfile for standby format '/tmp/rman_bkp/stnd_%U.ctl';

Starting backup at 28-DEC-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including standby control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-DEC-14
channel ORA_DISK_1: finished piece 1 at 28-DEC-14
piece handle=/tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl tag=TAG20141228T025301 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 28-DEC-14

Starting Control File and SPFILE Autobackup at 28-DEC-14
piece handle=/u01/app/oracle/flash_recovery_area/PRIM/autobackup/2014_12_28/o1_mf_s_867466384_b9y8sr8k_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 28-DEC-14

Step 5: Transfer the backups from the Primary cluster to the Standby cluster.
[oracle@cluster1 ~]$ cd /tmp/rman_bkp/
[oracle@cluster1 rman_bkp]$ ls -ltrh
total 24M
-rw-r-----. 1 oracle oinstall 4.2M Dec 28 02:51 stnd_backp_0cpr8v08_1_1.bak
-rw-r-----. 1 oracle oinstall 9.7M Dec 28 02:51 stnd_backp_0dpr8v12_1_1.bak
-rw-r-----. 1 oracle oinstall 9.7M Dec 28 02:53 stnd_0epr8v4e_1_1.ctl

[oracle@cluster1 rman_bkp]$ scp *.* oracle@cluster2:/tmp/rman_bkp/
oracle@cluster2's password:
stnd_0epr8v4e_1_1.ctl 100% 9856KB 9.6MB/s 00:00
stnd_backp_0cpr8v08_1_1.bak 100% 4296KB 4.2MB/s 00:00
stnd_backp_0dpr8v12_1_1.bak 100% 9856KB 9.6MB/s 00:00

Step 6: On the standby cluster, connect the Standby Database through RMAN and catalog the copied
incremental backups so that the Controlfile of the Standby Database would be aware of these
incremental backups.

At standby database:-

[oracle@cluster2 ~]$ rman target /
RMAN> catalog start with '/tmp/rman_bkp';

using target database control file instead of recovery catalog
searching for all files that match the pattern /tmp/rman_bkp

List of Files Unknown to the Database
=====================================
File Name: /tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl
File Name: /tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak
File Name: /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files…
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl
File Name: /tmp/rman_bkp/stnd_backp_0dpr8v12_1_1.bak
File Name: /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak

Step 7: Shut down the standby database and start it in mount state for recovery.
SQL> shut immediate;
SQL> startup mount;

Step 8: Now recover the database:
[oracle@cluster2 ~]$ rman target /
RMAN> recover database noredo;

Starting recover at 28-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/app/oracle/oradata/stand/system01.dbf
destination for restore of datafile 00002: /u01/app/oracle/oradata/stand/sysaux01.dbf
destination for restore of datafile 00003: /u01/app/oracle/oradata/stand/undotbs01.dbf
destination for restore of datafile 00004: /u01/app/oracle/oradata/stand/users01.dbf
destination for restore of datafile 00005: /u01/app/oracle/oradata/stand/example01.dbf
channel ORA_DISK_1: reading from backup piece /tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak
channel ORA_DISK_1: piece handle=/tmp/rman_bkp/stnd_backp_0cpr8v08_1_1.bak tag=TAG20141228T025048
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03

Finished recover at 28-DEC-14
RMAN> exit

Step 9: Shut down the physical standby database, start it in nomount state, and restore the standby
controlfile backup that we took from the primary database.

SQL> shut immediate;
SQL> startup nomount;

[oracle@cluster2 rman_bkp]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Sun Dec 28 03:08:45 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: PRIM (not mounted)

RMAN> restore standby controlfile from '/tmp/rman_bkp/stnd_0epr8v4e_1_1.ctl';

Starting restore at 28-DEC-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/stand/stand.ctl
output file name=/u01/app/oracle/flash_recovery_area/stand/stand.ctl
Finished restore at 28-DEC-14

Step 10: Shut down and mount the standby database, so that it comes up with the new controlfile
restored in the previous step.

SQL> shut immediate;
SQL> startup mount;

At standby database:-
SQL> alter database recover managed standby database disconnect from session;

At primary database:-
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
---------- --------------
1 215

At standby database:-
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
---------- --------------
1 215

Step 11: Cancel the recovery and open the database.
SQL> alter database recover managed standby database cancel;

SQL> alter database open;
Database altered.

SQL> alter database recover managed standby database using current logfile disconnect from session;
Database altered.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

The standby database is now in sync with the primary database.
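To double-check that log apply keeps up going forward, you can compare the highest received and highest applied log sequence on the standby. This is standard v$archived_log usage (the APPLIED column); run it as sysdba on the standby:

```shell
sqlplus -s / as sysdba <<'EOF'
-- Highest archived vs. highest applied sequence per thread on the standby
select thread#,
       max(sequence#) as last_received,
       max(case when applied = 'YES' then sequence# end) as last_applied
from   v$archived_log
group  by thread#;
EOF
```

If last_applied trails last_received by more than a log or two, the apply process is falling behind again.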

CentOS 7 Cluster

[root@clusterserver1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.20 clusterserver1.rmohan.com clusterserver1
192.168.1.21 clusterserver2.rmohan.com clusterserver2
192.168.1.22 clusterserver3.rmohan.com clusterserver3

perl -pi.orig -e 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config

setenforce 0

timedatectl status

yum install -y ntp
systemctl enable ntpd ; systemctl start ntpd

run ssh-keygen

[root@clusterserver1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
e4:57:e7:7c:2e:dd:82:9f:d5:c7:57:f9:ef:ce:d5:e0 root@clusterserver1.rmohan.com
The key's randomart image is:
+–[ RSA 2048]—-+
| |
| |
| . . . |
| o . + .|
| S . +.o|
| . o **|
| . E &|
| . *=|
| oo=|
+—————–+
[root@clusterserver1 ~]#

for i in clusterserver1 clusterserver2 clusterserver3 ; do ssh-copy-id $i; done

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@clusterserver1's password:
Permission denied, please try again.
root@clusterserver1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'clusterserver1'"
and check to make sure that only the key(s) you wanted were added.

The authenticity of host 'clusterserver2 (192.168.1.21)' can't be established.
ECDSA key fingerprint is 43:25:9c:32:53:18:33:a9:25:f7:cd:bb:b0:64:80:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@clusterserver2's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'clusterserver2'"
and check to make sure that only the key(s) you wanted were added.

The authenticity of host 'clusterserver3 (192.168.1.22)' can't be established.
ECDSA key fingerprint is 62:79:b1:c7:9b:de:a3:5e:a4:3d:e0:15:2b:f8:c2:f7.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@clusterserver3's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'clusterserver3'"
and check to make sure that only the key(s) you wanted were added.

yum install iscsi-initiator-utils -y

systemctl enable iscsi
systemctl start iscsi

iscsiadm -m discovery -t sendtargets -p 192.168.1.90:3260

iscsiadm --mode node --targetname iqn.2006-01.com.openfiler:tsn.b01850dab96a --portal 192.168.1.90 --login

iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.b01850dab96a -p 192.168.1.90:3260 -l

Install corosync and pacemaker on the nodes

yum -y install lvm2-cluster corosync pacemaker pcs fence-agents-all

systemctl enable pcsd.service

systemctl start pcsd.service

echo test123 | passwd --stdin hacluster

pcs cluster auth clusterserver1 clusterserver2 clusterserver3

[root@clusterserver1 ~]# pcs cluster auth clusterserver1 clusterserver2 clusterserver3
Username: hacluster
Password:
clusterserver3: Authorized
clusterserver2: Authorized
clusterserver1: Authorized
[root@clusterserver1 ~]#

[root@clusterserver1 ~]# ls -lt /var/lib/pcsd/
total 20
-rw------- 1 root root 250 Jan 4 03:33 tokens
-rw-r--r-- 1 root root 1542 Jan 4 03:33 pcs_users.conf
-rwx------ 1 root root 60 Jan 4 03:28 pcsd.cookiesecret
-rwx------ 1 root root 1233 Jan 4 03:28 pcsd.crt
-rwx------ 1 root root 1679 Jan 4 03:28 pcsd.key
[root@clusterserver1 ~]#

pcs cluster setup --name webcluster clusterserver1 clusterserver2 clusterserver3

[root@clusterserver1 ~]# pcs cluster setup --name webcluster clusterserver1 clusterserver2 clusterserver3
Shutting down pacemaker/corosync services…
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services…
Removing all cluster configuration files…
clusterserver1: Succeeded
clusterserver2: Succeeded
clusterserver3: Succeeded
Synchronizing pcsd certificates on nodes clusterserver1, clusterserver2, clusterserver3…
clusterserver3: Success
clusterserver2: Success
clusterserver1: Success

Restaring pcsd on the nodes in order to reload the certificates…
clusterserver3: Success
clusterserver2: Success
clusterserver1: Success
[root@clusterserver1 ~]#

[root@clusterserver1 ~]# ls /etc/corosync/
corosync.conf corosync.conf.example corosync.conf.example.udpu corosync.xml.example uidgid.d/
[root@clusterserver1 ~]#
[root@clusterserver1 corosync]# cat corosync.conf
totem {
version: 2
secauth: off
cluster_name: webcluster
transport: udpu
}

nodelist {
node {
ring0_addr: clusterserver1
nodeid: 1
}

node {
ring0_addr: clusterserver2
nodeid: 2
}

node {
ring0_addr: clusterserver3
nodeid: 3
}
}

quorum {
provider: corosync_votequorum
}

logging {
to_logfile: yes
logfile: /var/log/cluster/corosync.log
to_syslog: yes
}
[root@clusterserver1 corosync]#

[root@clusterserver2 ~]# pcs status
Error: cluster is not currently running on this node
[root@clusterserver2 ~]#

[root@clusterserver3 ~]# pcs status
Error: cluster is not currently running on this node
[root@clusterserver3 ~]#

pcs cluster enable --all

[root@clusterserver1 corosync]# pcs cluster enable --all
clusterserver1: Cluster Enabled
clusterserver2: Cluster Enabled
clusterserver3: Cluster Enabled
[root@clusterserver1 corosync]#

Start the cluster
• From any node: pcs cluster start --all

[root@clusterserver1 corosync]# pcs status
Cluster name: webcluster
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Mon Jan 4 03:39:26 2016 Last change: Mon Jan 4 03:39:24 2016 by hacluster via crmd on clusterserver1
Stack: corosync
Current DC: clusterserver1 (version 1.1.13-10.el7-44eb2dd) – partition with quorum
3 nodes and 0 resources configured

Online: [ clusterserver1 clusterserver2 clusterserver3 ]

Full list of resources:

PCSD Status:
clusterserver1: Online
clusterserver2: Online
clusterserver3: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@clusterserver1 corosync]#

Verify Corosync Installation
• corosync-cfgtool -s

[root@clusterserver1 corosync]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
id = 192.168.1.20
status = ring 0 active with no faults
[root@clusterserver1 corosync]#

[root@clusterserver2 ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
id = 192.168.1.21
status = ring 0 active with no faults
[root@clusterserver2 ~]#

Verify Corosync Installation
• corosync-cmapctl | grep members

[root@clusterserver2 ~]# corosync-cmapctl | grep members
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.1.20)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.1.21)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
runtime.totem.pg.mrp.srp.members.3.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.3.ip (str) = r(0) ip(192.168.1.22)
runtime.totem.pg.mrp.srp.members.3.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.3.status (str) = joined
[root@clusterserver2 ~]#

Verify Corosync Installation
• crm_verify -L -V

[root@clusterserver2 ~]# crm_verify -L -V
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
[root@clusterserver2 ~]#
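The crm_verify errors above are expected on a fresh cluster with no fencing devices defined yet. For a lab setup, STONITH can be disabled so the configuration validates (do not do this in production: clusters with shared data need fencing to protect data integrity):

```shell
# Disable fencing cluster-wide (lab/testing only)
pcs property set stonith-enabled=false

# Re-run the check; it should now pass cleanly
crm_verify -L -V
```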

nginx is a high performance

nginx is high-performance web server software. It is a much more flexible and lightweight program than Apache.

yum install epel-release

yum install nginx

ifconfig eth0 | grep inet | awk '{ print $2 }'

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-x64.tar.gz"
wget http://mirror.nus.edu.sg/apache/tomcat/tomcat-8/v8.0.30/bin/apache-tomcat-8.0.30.tar.gz
tar xzf jdk-8u60-linux-x64.tar.gz
mkdir /usr/java/
mv jdk1.8.0_60 /usr/java/

cd /usr/java/jdk1.8.0_60/
[root@cluster1 java]# ln -s /usr/java/jdk1.8.0_60/bin/java /usr/bin/java
[root@cluster1 java]# alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_60/bin/java 2

alternatives --config java

vi /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/jdk1.8.0_60
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Three, Tomcat load balancing configuration

When Nginx starts it loads the default configuration file /etc/nginx/nginx.conf, and nginx.conf in turn includes every .conf file in the /etc/nginx/conf.d directory.

Custom configuration can therefore be kept in separate .conf files; any file placed in /etc/nginx/conf.d will be picked up, which makes maintenance easy.

Create tomcats.conf: vi /etc/nginx/conf.d/tomcats.conf, with the following contents:

/usr/tomcat/apache-tomcat-8.0.30/bin/startup.sh

vi /etc/nginx/conf.d/tomcats.conf

upstream tomcats {
ip_hash;
server 192.168.1.60:8080;
server 192.168.1.62:8080;
server 192.168.0.63:8080;
}

Modify default.conf: vi /etc/nginx/conf.d/default.conf, amend as follows:
vi /etc/nginx/conf.d/default.conf
Comment out the existing location block:
#location / {
# root /usr/share/nginx/html;
# index index.html index.htm;
#}

# New configuration: by default, forward requests to the upstream tomcats defined in tomcats.conf
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header REMOTE-HOST $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://tomcats;
}

After saving reload the configuration: nginx -s reload
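Before reloading it is good practice to validate the configuration first, so a typo cannot take the running server down:

```shell
# nginx -t parses the configuration and exits non-zero on errors;
# the reload only runs if the test passes.
nginx -t && nginx -s reload
```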

Four, separate static resource configuration

Modify default.conf: vi /etc/nginx/conf.d/default.conf, add the following configuration:
vi /etc/nginx/conf.d/default.conf

All js and css static resource requests are handled directly by Nginx:

location ~.*\.(js|css)$ {
root /opt/static-resources;
expires 12h;
}

All images and other multimedia static resource requests are likewise handled by Nginx:

location ~.*\.(html|jpg|jpeg|png|bmp|gif|ico|mp3|mid|wma|mp4|swf|flv|rar|zip|txt|doc|ppt|xls|pdf)$ {
root /opt/static-resources;
expires 7d;
}

Create a Directory for the Certificate
mkdir /etc/nginx/ssl
cd /etc/nginx/ssl
openssl genrsa -des3 -out server.key 2048
openssl req -new -key server.key -out server.csr
cp server.key server.key.org
openssl rsa -in server.key.org -out server.key
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
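The same self-signed certificate can also be produced in a single step with no passphrase (the -subj value below is illustrative; replace it with your own hostname):

```shell
# One command: new 2048-bit key plus self-signed cert, valid 365 days
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout server.key -out server.crt \
  -subj "/CN=cluster1.rmohan.com"
```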

server {
listen 80;
listen 443 default ssl;
server_name cluster1.rmohan.com;
keepalive_timeout 70;
# ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
}

Nginx server security configuration

First, turn off SELinux
Security-Enhanced Linux (SELinux) is a Linux kernel feature that provides a mechanism for enforcing access control security policies.
However, for a dedicated web server the additional security SELinux brings is not proportionate to the complexity of using it, so here it is disabled.

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

/usr/sbin/sestatus -v # Check status

Second, mount the nginx partition with least privilege

Put the nginx directory on a separate partition.

For example, create a new partition /dev/sda5 (first logical partition) and mount it at /nginx.
Make sure /nginx is mounted with the noexec, nodev and nosuid options.

Here is the /etc/fstab entry for mounting /nginx: LABEL=/nginx /nginx ext3 defaults,nosuid,noexec,nodev 1 2

Note: you need to create the new partition using the fdisk and mkfs.ext3 commands.
Third, harden the Linux kernel via /etc/sysctl.conf

You can control and configure the Linux kernel's network settings by editing /etc/sysctl.conf:

# Avoid a smurf attack

net.ipv4.icmp_echo_ignore_broadcasts = 1

# Turn on protection for bad icmp error messages

net.ipv4.icmp_ignore_bogus_error_responses = 1

# Turn on syncookies for SYN flood attack protection

net.ipv4.tcp_syncookies = 1

# Turn on and log spoofed, source routed, and redirect packets

net.ipv4.conf.all.log_martians = 1

net.ipv4.conf.default.log_martians = 1

# No source routed packets here

net.ipv4.conf.all.accept_source_route = 0

net.ipv4.conf.default.accept_source_route = 0

# Turn on reverse path filtering

net.ipv4.conf.all.rp_filter = 1

net.ipv4.conf.default.rp_filter = 1

# Make sure no one can alter the routing tables

net.ipv4.conf.all.accept_redirects = 0

net.ipv4.conf.default.accept_redirects = 0

net.ipv4.conf.all.secure_redirects = 0

net.ipv4.conf.default.secure_redirects = 0

# Don’t act as a router

net.ipv4.ip_forward = 0

net.ipv4.conf.all.send_redirects = 0

net.ipv4.conf.default.send_redirects = 0

# Turn on exec-shield

kernel.exec-shield = 1

kernel.randomize_va_space = 1

# Tune IPv6

net.ipv6.conf.default.router_solicitations = 0

net.ipv6.conf.default.accept_ra_rtr_pref = 0

net.ipv6.conf.default.accept_ra_pinfo = 0

net.ipv6.conf.default.accept_ra_defrtr = 0

net.ipv6.conf.default.autoconf = 0

net.ipv6.conf.default.dad_transmits = 0

net.ipv6.conf.default.max_addresses = 1

# Optimization for port use for LBs

# Increase system file descriptor limit

fs.file-max = 65535

# Allow for more PIDs (to reduce rollover problems); may break some programs; default is 32768

kernel.pid_max = 65536

# Increase system IP port limits

net.ipv4.ip_local_port_range = 2000 65000

# Increase TCP max buffer size settable using setsockopt()

net.ipv4.tcp_rmem = 4096 87380 8388608

net.ipv4.tcp_wmem = 4096 87380 8388608

# Increase Linux auto tuning TCP buffer limits

# min, default, and max number of bytes to use

# set max to at least 4MB, or higher if you use very high BDP paths

# Tcp Windows etc

net.core.rmem_max = 8388608

net.core.wmem_max = 8388608

net.core.netdev_max_backlog = 5000

net.ipv4.tcp_window_scaling = 1
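These keys can be applied without a reboot. A short sketch (the drop-in file name under /etc/sysctl.d is illustrative):

```shell
# Apply settings from /etc/sysctl.conf immediately
sysctl -p

# Or keep the hardening keys in a drop-in file, e.g.
# /etc/sysctl.d/99-hardening.conf, and reload every sysctl file
# (the layout CentOS 7 uses):
sysctl --system

# Spot-check a single key
sysctl net.ipv4.tcp_syncookies
```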

Fourth, remove all unnecessary Nginx modules

Minimize the number of modules compiled into Nginx from source. Limiting the web server to only the modules it actually needs reduces the attack surface.
For example, to disable the autoindex and SSI modules, run:

./configure --without-http_autoindex_module --without-http_ssi_module
make && make install

To change the server name string nginx reports, edit src/http/ngx_http_header_filter_module.c:

vim src/http/ngx_http_header_filter_module.c

static char ngx_http_server_string[] = "Server: nginx" CRLF;

static char ngx_http_server_full_string[] = "Server: " NGINX_VER CRLF;

//change to

static char ngx_http_server_string[] = "Server: Mohan Web Server" CRLF;

static char ngx_http_server_full_string[] = "Server: Mohan Web Server" CRLF;

Alternatively, hide the nginx version number in the configuration:

server_tokens off;

Fifth, restrict access with an iptables firewall

The following firewall policy blocks everything except:

incoming HTTP (TCP port 80) requests
incoming ICMP ping requests
outgoing ntp (port 123) requests
outgoing smtp (TCP port 25) requests
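A hedged sketch of iptables rules implementing this policy (the default-drop approach, interface handling and state rules are assumptions; adapt before using, or you may lock yourself out of SSH):

```shell
#!/bin/bash
# Default-deny firewall matching the policy above
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP

# Allow loopback and already-established traffic
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Incoming HTTP and ICMP ping
iptables -A INPUT -p tcp  --dport 80 -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

# Outgoing ntp and smtp
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 25  -j ACCEPT
```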

Six, control buffer overflow attacks

Edit nginx.conf and set buffer size limits for all clients as follows:

client_body_buffer_size 1K;

client_header_buffer_size 1k;

client_max_body_size 1k;

large_client_header_buffers 2 1k;

client_body_buffer_size 1k (default 8k or 16k) specifies the buffer size for the request body.
If the body exceeds the buffer, all or part of it is written to a temporary file.
client_header_buffer_size 1k specifies the buffer size for the client request header.
In most cases a request header is no larger than 1k, but if a client (for example a WAP browser) sends a large cookie it may exceed 1k;
nginx will then allocate a larger buffer, whose size is controlled by large_client_header_buffers.
client_max_body_size 1k specifies the maximum allowed size of the client request body, as it appears in the Content-Length request header.

If the request is larger than this value, the client receives a "Request Entity Too Large" (413) error. Remember that browsers may not know how to display this error.
large_client_header_buffers specifies the number and size of larger buffers used for request headers.
The request line cannot be larger than one buffer: if the client sends a large one, nginx returns "Request URI too large" (414).
Likewise, the longest header field of the request must fit in one buffer, otherwise the server returns "Bad request" (400). Larger buffers are allocated only on demand.
The default buffer size equals the operating system page size, usually 4k or 8k; if the connection ends up in keep-alive state, the buffers it occupied are freed.

You can also improve server performance by controlling timeouts and disconnecting slow clients. Edit as follows:

client_body_timeout 10;
client_header_timeout 10;
keepalive_timeout 5 5;
send_timeout 10;

• client_body_timeout 10; specifies the read timeout for the request body. The timeout applies while the body has not yet been read; if the client sends nothing within this time, nginx returns a "Request time out" (408) error.
• client_header_timeout 10; specifies the read timeout for the client request header. The timeout applies while the header has not yet been read; if the client sends nothing within this time, nginx returns a "Request time out" (408) error.
• keepalive_timeout 5 5; the first parameter sets how long a keep-alive connection to the client stays open; after this time the server closes the connection. The second (optional) parameter sets the time value in the Keep-Alive: timeout=time response header, which lets some browsers know when to close the connection themselves so the server does not have to close it twice. If this parameter is omitted, nginx does not send a Keep-Alive header in the response. (This is not what keeps a connection alive.) The two values may differ.
• send_timeout 10; specifies the timeout for transmitting the response to the client. It applies between two successive write operations, not to the whole response; if the client accepts nothing within this time, nginx closes the connection.

Seven, control concurrent connections

You can use the NginxHttpLimitZone module to restrict the number of concurrent connections for a given session or, more usefully, per IP address. Edit nginx.conf:

### Define the zone, named slimits, in which the session states are stored ###

### 1m can handle 32000 sessions at 32 bytes/session; 5m can hold roughly 160000 sessions ###

limit_zone slimits $binary_remote_addr 5m;

### Control maximum number of simultaneous connections for one session i.e. ###

### restricts the amount of connections from a single ip address ###

limit_conn slimits 5;

This limits each client, keyed by remote IP address, to no more than five simultaneously open connections.
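Note that the limit_zone directive shown above has been superseded in newer nginx releases by limit_conn_zone (which is used later in this article). A minimal sketch of the modern equivalent, keeping the zone name from the example:

```nginx
http {
    # one state entry per client IP, stored in a 5 MB shared zone
    limit_conn_zone $binary_remote_addr zone=slimits:5m;

    server {
        location / {
            # at most 5 simultaneous connections per client IP
            limit_conn slimits 5;
        }
    }
}
```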

Eight, allow access to our domain only

If a bot is just randomly scanning servers for any domain name, reject those requests. You should only permit requests for your configured virtual hosts or reverse-proxied domains, and reject requests that arrive by bare IP address:

if ($host !~ ^(test\.in|www\.test\.in|images\.test\.in)$ ) {
return 444;
}

Nine, limit the available request methods

GET and POST are the most commonly used methods on the Internet. Web server methods are defined in RFC 2616. If your web server does not need all of the available methods, the unused ones should be disabled. The following snippet allows only the GET, HEAD and POST methods:

## Only allow these request methods ##

if ($request_method !~ ^(GET|HEAD|POST)$ ) {

return 444;

}

## Do not accept DELETE, SEARCH and other methods ##

More about HTTP method introduced

• The GET method is used to request a document or resource.

• The HEAD method is identical to GET, except that the server must not return a message body.

• The POST method submits data to be processed, for example storing or updating data, ordering a product, or sending e-mail via a form. It is usually handled by server-side scripts such as PHP, Perl or Python. Use this method if you want to upload files or have the server process submitted data.
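The method filter above is just a regex allow-list. As a sanity check, here is the same logic expressed as a small shell function (a hypothetical helper for illustration, not part of nginx):

```shell
#!/bin/sh
# Mirror of the nginx allow-list: only GET, HEAD and POST pass;
# anything else would receive the 444 response shown above.
allowed_method() {
  case "$1" in
    GET|HEAD|POST) return 0 ;;
    *) return 1 ;;
  esac
}

allowed_method GET && echo "GET allowed"
allowed_method DELETE || echo "DELETE blocked"
```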

Ten, how to refuse certain User-Agents?

You can easily block User-Agents such as scanners, bots and spammers that abuse your server.

## Block download agents ##

if ($http_user_agent ~* "LWP::Simple|BBBike|wget") {

return 403;

}

Block the Soso and Youdao spiders in the same way:

## Block some robots ##

if ($http_user_agent ~* "Sosospider|YodaoBot") {

return 403;

}
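Before blocking, you can estimate how much traffic such agents actually generate by grepping an access log. The sample log lines below are made up for illustration; run the same grep against your real log:

```shell
#!/bin/sh
# Write a few illustrative access-log lines, then count which ones a
# bot/crawler pattern would catch (case-insensitive, like nginx ~*).
cat > /tmp/sample_access.log <<'EOF'
1.2.3.4 - - [01/Jan/2024] "GET / HTTP/1.1" 200 "Mozilla/5.0"
5.6.7.8 - - [01/Jan/2024] "GET / HTTP/1.1" 200 "Sosospider+(+http://help.soso.com)"
9.9.9.9 - - [01/Jan/2024] "GET / HTTP/1.1" 200 "Wget/1.21"
EOF

grep -Ei 'sosospider|yodaobot|wget' /tmp/sample_access.log
```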

Eleven, prevent image hotlinking

Image or HTML hotlinking means someone displays images hosted on your site directly on their own site, so you end up paying for the extra bandwidth. This is common on forums and blogs. I strongly recommend that you block hotlinking:

# Stop deep linking or hot linking

location /images/ {

valid_referers none blocked www.example.com example.com;

if ($invalid_referer) {

return 403;

}

}

For example, to redirect hotlinked images to a banner image instead:

valid_referers blocked www.example.com example.com;

if ($invalid_referer) {

rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ http://www.example.com/banned.jpg last;

}

Twelve, directory restrictions

You can set access permissions on specific directories. Every web directory should be configured explicitly, allowing access only where it is needed.
Restrict access by IP address
You can restrict access to a directory, e.g. /docs/, by IP address:

location /docs/ {

## block one workstation

deny 192.168.1.1;

## allow anyone in 192.168.1.0/24

allow 192.168.1.0/24;

## drop rest of the world

deny all;

}

To protect a directory with a password, first create the password file and add a user named "user":

mkdir /usr/local/nginx/conf/.htpasswd/

htpasswd -c /usr/local/nginx/conf/.htpasswd/passwd user

Edit nginx.conf, added need protected directories

### Password Protect /personal-images/ and /delta/ directories ###

location ~ /(personal-images/.*|delta/.*) {

auth_basic "Restricted";

auth_basic_user_file /usr/local/nginx/conf/.htpasswd/passwd;

}

Once the password file has been generated, you can add further users with the following command:

htpasswd -s /usr/local/nginx/conf/.htpasswd/passwd userName

Thirteen, Nginx SSL configuration

HTTP is a plain-text protocol, open to passive eavesdropping. You should use SSL/TLS to encrypt your users' traffic.
To create a self-signed SSL certificate, execute the following commands:

cd /usr/local/nginx/conf

openssl genrsa -des3 -out server.key 1024

openssl req -new -key server.key -out server.csr

cp server.key server.key.org

openssl rsa -in server.key.org -out server.key

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
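The five commands above can also be collapsed into a single step. This sketch assumes a 2048-bit key (stronger than the 1024-bit example above), skips the passphrase entirely with -nodes so there is nothing to strip afterwards, and writes to /tmp; the subject fields are illustrative:

```shell
#!/bin/sh
# One-shot self-signed certificate: key and cert in a single command.
cd /tmp
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout server.key -out server.crt \
  -subj "/C=US/CN=example.com"

ls -l server.key server.crt
```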

Edit nginx.conf with the following updates:

server {

server_name example.com;

listen 443;

ssl on;

ssl_certificate /usr/local/nginx/conf/server.crt;

ssl_certificate_key /usr/local/nginx/conf/server.key;

access_log /usr/local/nginx/logs/ssl.access.log;

error_log /usr/local/nginx/logs/ssl.error.log;

}

Fourteen, Nginx and PHP Security Recommendations

PHP is a popular scripting language on the server side. Edit /etc/php.ini file as follows:

# Disallow dangerous functions

disable_functions = phpinfo, system, mail, exec

## Try to limit resources ##

# Maximum execution time of each script, in seconds

max_execution_time = 30

# Maximum amount of time each script may spend parsing request data

max_input_time = 60

# Maximum amount of memory a script may consume (8MB)

memory_limit = 8M

# Maximum size of POST data that PHP will accept.

post_max_size = 8M

# Whether to allow HTTP file uploads.

file_uploads = Off

# Maximum allowed size for uploaded files.

upload_max_filesize = 2M

# Do not expose PHP error messages to external users

display_errors = Off

# Turn on safe mode

safe_mode = On

# Only allow access to executables in isolated directory

safe_mode_exec_dir = php-required-executables-path

# Limit external access to PHP environment

safe_mode_allowed_env_vars = PHP_

# Restrict PHP information leakage

expose_php = Off

# Log all errors

log_errors = On

# Do not register globals for input data

register_globals = Off

# Ensure PHP redirects appropriately (1 is the secure setting)

cgi.force_redirect = 1

# Enable SQL safe mode

sql.safe_mode = On

# Avoid Opening remote files

allow_url_fopen = Off
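A quick way to double-check that the hardening flags actually ended up disabled is to grep the ini file. The sample file written to /tmp here is only for illustration; point the grep at your real /etc/php.ini:

```shell
#!/bin/sh
# Write a small illustrative php.ini, then print any risky option
# that is still set to On (here: expose_php and allow_url_fopen).
cat > /tmp/php.ini.sample <<'EOF'
expose_php = On
display_errors = Off
allow_url_fopen = On
log_errors = On
EOF

grep -E '^(expose_php|display_errors|allow_url_fopen)[[:space:]]*=[[:space:]]*On' /tmp/php.ini.sample
```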

Fifteen, if possible, run Nginx in a chroot jail

Placing nginx in a chroot jail reduces the damage a break-in can do, since an intruder cannot reach directories outside the jail. You can chroot a traditional nginx installation by hand, or better, use FreeBSD jails, or Xen / OpenVZ virtualization containers.

Sixteen, limit the number of connections per IP at the firewall level

Your web server has to monitor connections and limit connections per second. Both PF and iptables can block abusive end users before they ever reach your nginx server.
Linux iptables: limit the number of connections to Nginx
The following example blocks an IP that opens more than 15 connections to port 80 within 60 seconds:

/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set

/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 15 -j DROP

service iptables save

Adjust the connection limit to your specific situation.

Seventeen, configure the operating system to protect the web server

Enable SELinux as described above, and set correct permissions on the nginx document root.
Nginx runs as the nginx user, but the document root (/nginx or /usr/local/nginx/html/) should not be owned by, or writable by, the nginx user.
To find files with the wrong ownership, use:

find /nginx -user nginx

find /usr/local/nginx/html -user nginx

Make sure ownership is root or another non-nginx user. A typical permission setting for /usr/local/nginx/html/:

ls -l /usr/local/nginx/html/

Sample output:

-rw-r--r-- 1 root root 925 Jan 3 00:50 error4xx.html

-rw-r--r-- 1 root root 52 Jan 3 10:00 error5xx.html

-rw-r--r-- 1 root root 134 Jan 3 00:52 index.html

You should delete backup files created by vi or other text editors:

find /nginx -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'

find /usr/local/nginx/html/ -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'

Pass the -delete option to find to remove these files.
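A safe way to see what that pattern actually matches is to try it in a scratch directory first. The file names below are made up; note how editor backups and hidden leftovers are caught while .htaccess-style files are spared:

```shell
#!/bin/sh
# Build a scratch directory with typical leftovers, then run the same
# find pattern as above (without -delete) to preview the matches.
rm -rf /tmp/findtest && mkdir -p /tmp/findtest && cd /tmp/findtest
touch index.html index.html~ config.php.bak .hidden .htaccess

# Matches: .hidden, index.html~, config.php.bak — but not .htaccess
find . -name '.?*' -not -name '.ht*' -or -name '*~' -or -name '*.bak*'
```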

Eighteen, limit Nginx's outgoing connections

A cracker may use tools such as wget to pull your local files off the server. Use iptables to block outgoing connections from the nginx user. The ipt_owner module matches the creator of locally generated packets. The following example only allows the vivek user to make outgoing connections on port 80:

/sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner vivek -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

With the above configuration your nginx server is already quite safe and you can publish your web pages. Still, look up additional security settings for the applications your site runs, for example WordPress or other third-party programs.

nginx is a good web server that provides a full set of rate-limiting features. The main modules are ngx_http_core_module,
ngx_http_limit_conn_module and ngx_http_limit_req_module. The first provides limit_rate (bandwidth limiting);

the latter two, as their names say, limit connections (limit_conn) and limit requests (limit_req). All of these modules are compiled into the nginx core by default.

All the limits are keyed per IP, so they offer some defense against CC and DDoS attacks.

Bandwidth limiting is easy to understand, so straight to an example:

location /mp3 {
limit_rate 200k;
}

You can make the speed limit friendlier by starting to throttle only after a certain amount of traffic has been sent.

For example, transfer the first 1M at full speed, then throttle:

location /photo {
limit_rate_after 1m;
limit_rate 100k;
}

Now for limiting the number of concurrent connections and requests.

Why two separate modules? A page usually contains several sub-resources, for example five images; the browser fetches the page over one connection,
but that same connection then carries the requests for the five images too, which means a single connection can issue multiple requests. To preserve the user experience,
choose whether to limit connections or requests according to your actual needs.

Limit the number of connections

To restrict connections, you first need a shared-memory zone for the connection counters. Add the following inside the http block:

limit_conn_zone $binary_remote_addr zone=addr:5m;

This creates a 5M shared-memory zone named addr (each connection state occupies 32 or 64 bytes, so 5m can hold tens of thousands of entries, which is usually plenty; if the 5M zone is exhausted, nginx returns 503).

Next, apply the limit in the relevant server or location block. For example, to limit each IP to 2 concurrent connections:

limit_conn addr 2;

Limit the number of requests

To limit requests, likewise create a shared-memory zone in the http block. The rate is set directly in the zone definition; for example, 20 requests per second per IP (rate=20r/s):

limit_req_zone $binary_remote_addr zone=perip:5m rate=20r/s;

Then reference the zone in the location blocks you want to throttle; the burst parameter lets short bursts of requests queue up:

limit_req zone=perip burst=50;

If you do not want queued requests to be delayed, add the nodelay parameter:

limit_req zone=perip burst=50 nodelay;
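Putting the request-limit pieces together, a minimal sketch of a complete configuration might look like this. The zone name passed to limit_req must match the one defined by limit_req_zone; the location path here is illustrative:

```nginx
http {
    # 20 requests/second per client IP, counters kept in a 5 MB zone
    limit_req_zone $binary_remote_addr zone=perip:5m rate=20r/s;

    server {
        location /search/ {
            # allow bursts of up to 50 requests, served without delay
            limit_req zone=perip burst=50 nodelay;
        }
    }
}
```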

That is a short introduction to rate limiting in nginx; corrections are welcome. Think carefully about which limit to apply so you do not hurt the user experience.

nginx: log web crawlers separately

When analyzing nginx logs, the many spider and crawler entries are a headache.

Since most crawlers identify themselves as xx-bot or xx-spider, you can write crawler traffic to a separate log like this:

location / {
if ($http_user_agent ~* "bot|spider") {
access_log /var/log/nginx/spider.access.log;
}
}
Or simply skip logging them entirely:

location / {
if ($http_user_agent ~* "bot|spider") {
access_log off;
}
}

Tomcat multi-instance with systemd on CentOS 7 / RHEL 7

rpm -ivh jdk-8u60-linux-x64.rpm

getent group tomcat || groupadd -r tomcat
getent passwd tomcat || useradd -r -d /opt -s /bin/nologin tomcat

cd /opt
wget http://mirror.nus.edu.sg/apache/tomcat/tomcat-8/v8.0.30/bin/apache-tomcat-8.0.30.tar.gz
tar xzf apache-tomcat-8.0.30.tar.gz

mv apache-tomcat-8.0.30 tomcat01
chown -R tomcat:tomcat tomcat01

tar zxvf apache-tomcat-8.0.30.tar.gz
mv apache-tomcat-8.0.30 tomcat02
chown -R tomcat:tomcat tomcat02

sed -i 's/8080/8081/g' /opt/tomcat01/conf/server.xml
sed -i 's/8005/8001/g' /opt/tomcat01/conf/server.xml
sed -i 's/8080/8082/g' /opt/tomcat02/conf/server.xml
sed -i 's/8005/8002/g' /opt/tomcat02/conf/server.xml

sed -i '/8009/d' /opt/tomcat01/conf/server.xml
sed -i '/8009/d' /opt/tomcat02/conf/server.xml

cd /usr/lib/systemd/system
cat >tomcat01.service <<EOF
[Unit]
Description=Apache Tomcat 8
After=network.target
[Service]
Type=oneshot
ExecStart=/opt/tomcat01/bin/startup.sh
ExecStop=/opt/tomcat01/bin/shutdown.sh
RemainAfterExit=yes
User=tomcat
Group=tomcat
[Install]
WantedBy=multi-user.target
EOF

sed 's/tomcat01/tomcat02/g' tomcat01.service > tomcat02.service

systemctl enable tomcat01
systemctl enable tomcat02
systemctl start tomcat01
systemctl start tomcat02
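The sed-from-template trick above scales to any number of instances. Here is the same idea for a hypothetical third instance, with the template cut down to just the lines that change (paths are illustrative, written to /tmp for the demo):

```shell
#!/bin/sh
# Write an abbreviated unit-file template, then stamp out a tomcat03
# variant the same way tomcat02.service was generated above.
cat > /tmp/tomcat01.service <<'EOF'
[Service]
ExecStart=/opt/tomcat01/bin/startup.sh
ExecStop=/opt/tomcat01/bin/shutdown.sh
EOF

sed 's/tomcat01/tomcat03/g' /tmp/tomcat01.service > /tmp/tomcat03.service
cat /tmp/tomcat03.service
```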

proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2 keys_zone=static:10m inactive=30d max_size=1g;

upstream tomcat {
ip_hash ;
#hash $remote_addr consistent;
server 127.0.0.1:8081 max_fails=1 fail_timeout=2s ;
server 127.0.0.1:8082 max_fails=1 fail_timeout=2s ;
keepalive 16;
}

server {
listen 80;
server_name tomcat.example.com;

charset utf-8;
access_log /var/log/nginx/tomcat.access.log main;
root /usr/share/nginx/html;
index index.html index.htm index.jsp;

location / {
proxy_pass http://tomcat;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
proxy_http_version 1.1;
proxy_set_header Connection "";

add_header X-Backend “$upstream_addr”;
}

location ~* ^.+\.(js|css|ico|gif|jpg|jpeg|png)$ {
proxy_pass http://tomcat ;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
proxy_http_version 1.1;
proxy_set_header Connection "";

proxy_cache static;
proxy_cache_key $host$uri$is_args$args;
proxy_cache_valid 200 302 7d;
proxy_cache_valid 404 1m;
proxy_cache_valid any 1h;
add_header X-Cache $upstream_cache_status;

#log_not_found off;
#access_log off;
expires max;
}

location ~ /\.ht {
deny all;
}

}

RHEL 7 Notes

Study notes 2

– Command-line file management

1, create and delete files

touch xxxx creates an empty file

touch -t 201512250000 xxxx creates a file with a specific timestamp ([[CC]YY]MMDDhhmm)

rm xxx deletes the file

rm -f xxx force-deletes the file without prompting

2, create and delete directories

mkdir -p xxx/yyy recursively creates directories;

rmdir xxx deletes an empty directory;

rm -rf xxx forcibly removes a non-empty directory;

3, copy files and directories

cp /path1/xxx /path2/

cp -p /path1/xxx /path2/yyy copies the file, preserving the original file attributes;

cp -Rf (or -rf) /path1/ /path2/ copies directory path1 into directory path2

cp -a is equivalent to cp -dR --preserve=all

4, move (cut) files

mv /path1/xx /path2/yy

5, view files

cat xx
more xx
less xx

3 – redirection and pipes

Redirecting standard output:

cat xx.file > yy.file is equivalent to

cat xx.file 1> yy.file, which redirects the output into yy.file, overwriting its existing contents;

cat /etc/passwd &>> /tmp/xx appends both stdout and stderr to the file

cat /etc/passwd > /tmp/xx 2>&1 sends stdout to the file, then points stderr at stdout

tail -f /var/log/messages >/tmp/xx 2>/tmp/yy splits stdout and stderr into separate files

ps aux | grep tail | grep -v 'grep'

ls -al /dev/

Redirecting a file's contents into a command's standard input, for example:

tr a-z A-Z < /tmp/xx

cat > /tmp/xxx <<EOF
>test1
>test2
>test3
>EOF

cat <<EOF> /tmp/xxx
>test2
>test3
>test4
>EOF
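The stdout/stderr redirections above are easiest to see with a command that writes to both streams. In this sketch, ls reports the existing file on stdout and the missing one on stderr, so each log ends up with one line (file names are illustrative, written to /tmp):

```shell
#!/bin/sh
cd /tmp
touch exists.txt
rm -f missing.txt out.log err.log both.log

# Split the streams: the hit goes to out.log, the error to err.log.
# (|| true: ls exits non-zero because missing.txt does not exist.)
ls exists.txt missing.txt > out.log 2> err.log || true

# Merge both streams into one file; redirect stdout first, then point
# stderr at it with 2>&1 — the order matters.
ls exists.txt missing.txt > both.log 2>&1 || true
```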

grep options: -n shows line numbers; -i ignores case; -A 3 prints 3 lines after each match; -B 3 prints 3 lines before each match; -v inverts the match (excludes the keyword); -q suppresses output;

grep -n -A1 -B1 root /etc/passwd

ifconfig | grep 'inet' | grep -v 'inet6' | awk 'BEGIN{print "IP\t\tnetmask"}{print $2,"\t",$4}END{}'

ifconfig | grep 'inet' | grep -v 'inet6' | tee -a /tmp/yy | awk 'BEGIN{print "IP\t\tnetmask"}{print $2,"\t",$4}END{}'
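To see what the awk pass does without a live interface, you can run it over captured output. The sample lines below are made up; field $2 is the address and $4 the netmask in this layout:

```shell
#!/bin/sh
# Capture a couple of illustrative ifconfig-style lines, then run the
# same grep/awk pipeline as above to pull out IP and netmask.
cat > /tmp/ifconfig.sample <<'EOF'
        inet 192.168.1.10  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::1  prefixlen 64
EOF

grep 'inet' /tmp/ifconfig.sample | grep -v 'inet6' \
  | awk 'BEGIN{print "IP\t\tnetmask"}{print $2,"\t",$4}'
```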

4 – Using the Vim editor

1, gedit edits files graphically.

2, Vim opens a file if it exists, and creates it if it does not.

3, When Vim opens a file, it starts in command mode by default.

4, To edit the file, enter insert mode from command mode by pressing one of the following keys:

i, insert at the current cursor position;

a, insert after the current character;

o, open a new line below the current line;

I, jump to the beginning of the current line and insert;

A, jump to the end of the current line and insert;

O, open a new line above the current line;

r, replace the current character;

R, replace characters continuously, advancing to the next character after each;

number + G: jump to a specific line, e.g. 10G jumps to line 10; G jumps to the last line, gg to the first line;

number + yy: copy that many lines starting at the current line; paste anywhere with p;

number + dd: cut that many lines starting at the current line; paste anywhere with p;

u: undo the last operation;

ctrl + r: redo the last undone operation;

ctrl + v: enter visual block mode; move the cursor to select content, press y to copy the selection, and paste at any position with p;

To quickly comment out lines with #: enter visual block mode, move the cursor to select the lines, press I to jump to the start position,

type #, then press ESC to exit

#abrt:x:173:173::/etc/abrt:/sbin/nologin
#pulse:x:171:171:PulseAudio System Daemon:/var/run/pulse:/sbin/nologin
#gdm:x:42:42::/var/lib/gdm:/sbin/nologin
#gnome-initial-setup:x:993:991::/run/gnome-initial-setup/:/sbin/nologin
:split splits the window; switch between splits with ctrl+w

To work through a detailed Vim tutorial, run vimtutor.

5, last-line mode: saving, searching, settings, and replacement

Press ESC to return from insert mode to command mode, then type : to enter last-line mode (/ is generally used to search; n repeats the search downward, N upward)

Save: wq saves and exits, as does x;

Force quit: q! exits without saving the file contents;

Show line numbers: set nu. To show line numbers by default, edit the .vimrc file in your home directory or /etc/vimrc (create the file if it does not exist) and insert a line reading set nu;

Jump to a specific line: type the line number directly;

Replace: 1,$s/old/new/g replaces all matches globally

m,ns/old/new/g replaces all matches from line m to line n; . stands for the current line, $ for the last line,
$-1 for the second-to-last line. 1,$ can also be written %;
both mean the whole file. If the pattern contains special characters such as /, *, etc., prefix them with the escape character \

You can also write s#old#new#, using # as the separator, so the special characters need no escaping;

To ignore case in a search, append \c to the pattern, for example: /servername\c

Study notes 5 – managing users and user groups

[root@RHEL7HARDEN /]# passwd --help
Usage: passwd [OPTION...] <accountName>
-k, --keep-tokens keep non-expired authentication tokens
-d, --delete delete the password for the named account (root only)
-l, --lock lock the password for the named account (root only)
-u, --unlock unlock the password for the named account (root only)
-e, --expire expire the password for the named account (root only)
-f, --force force operation
-x, --maximum=DAYS maximum password lifetime (root only)
-n, --minimum=DAYS minimum password lifetime (root only)
-w, --warning=DAYS number of days warning users receives before password expiration (root only)
-i, --inactive=DAYS number of days after password expiration when an account becomes disabled (root only)
-S, --status report password status on the named account (root only)
--stdin read new tokens from stdin (root only)

Help options:
-?, --help Show this help message
--usage Display brief usage message
[root@RHEL7HARDEN /]#

[root@RHEL7HARDEN /]# chage --help
Usage: chage [options] LOGIN

Options:
-d, --lastday LAST_DAY set date of last password change to LAST_DAY
-E, --expiredate EXPIRE_DATE set account expiration date to EXPIRE_DATE
-h, --help display this help message and exit
-I, --inactive INACTIVE set password inactive after expiration
to INACTIVE
-l, --list show account aging information
-m, --mindays MIN_DAYS set minimum number of days before password
change to MIN_DAYS
-M, --maxdays MAX_DAYS set maximum number of days before password
change to MAX_DAYS
-R, --root CHROOT_DIR directory to chroot into
-W, --warndays WARN_DAYS set expiration warning days to WARN_DAYS

[root@RHEL7HARDEN /]# useradd --help
Usage: useradd [options] LOGIN
useradd -D
useradd -D [options]

Options:
-b, --base-dir BASE_DIR base directory for the home directory of the
new account
-c, --comment COMMENT GECOS field of the new account
-d, --home-dir HOME_DIR home directory of the new account
-D, --defaults print or change default useradd configuration
-e, --expiredate EXPIRE_DATE expiration date of the new account
-f, --inactive INACTIVE password inactivity period of the new account
-g, --gid GROUP name or ID of the primary group of the new
account
-G, --groups GROUPS list of supplementary groups of the new
account
-h, --help display this help message and exit
-k, --skel SKEL_DIR use this alternative skeleton directory
-K, --key KEY=VALUE override /etc/login.defs defaults
-l, --no-log-init do not add the user to the lastlog and
faillog databases
-m, --create-home create the user's home directory
-M, --no-create-home do not create the user's home directory
-N, --no-user-group do not create a group with the same name as
the user
-o, --non-unique allow to create users with duplicate
(non-unique) UID
-p, --password PASSWORD encrypted password of the new account
-r, --system create a system account
-R, --root CHROOT_DIR directory to chroot into
-s, --shell SHELL login shell of the new account
-u, --uid UID user ID of the new account
-U, --user-group create a group with the same name as the user
-Z, --selinux-user SEUSER use a specific SEUSER for the SELinux user mapping

[root@RHEL7HARDEN /]# usermod --help
Usage: usermod [options] LOGIN

Options:
-c, --comment COMMENT new value of the GECOS field
-d, --home HOME_DIR new home directory for the user account
-e, --expiredate EXPIRE_DATE set account expiration date to EXPIRE_DATE
-f, --inactive INACTIVE set password inactive after expiration
to INACTIVE
-g, --gid GROUP force use GROUP as new primary group
-G, --groups GROUPS new list of supplementary GROUPS
-a, --append append the user to the supplemental GROUPS
mentioned by the -G option without removing
him/her from other groups
-h, --help display this help message and exit
-l, --login NEW_LOGIN new value of the login name
-L, --lock lock the user account
-m, --move-home move contents of the home directory to the
new location (use only with -d)
-o, --non-unique allow using duplicate (non-unique) UID
-p, --password PASSWORD use encrypted password for the new password
-R, --root CHROOT_DIR directory to chroot into
-s, --shell SHELL new login shell for the user account
-u, --uid UID new UID for the user account
-U, --unlock unlock the user account
-Z, --selinux-user SEUSER new SELinux user mapping for the user account

[root@RHEL7HARDEN /]# useradd test1
[root@RHEL7HARDEN /]# mkdir /home/test
[root@RHEL7HARDEN /]# usermod -d /home/test1 test
usermod: user ‘test’ does not exist
[root@RHEL7HARDEN /]# cp -a /etc/skel/.[^.]* /home/test/
[root@RHEL7HARDEN /]# groups test1
test1 : test1
usermod -a -G mohan test1

usermod -g sets the user's primary group

[root@RHEL7HARDEN /]# gpasswd --help
Usage: gpasswd [option] GROUP

Options:
-a, --add USER add USER to GROUP
-d, --delete USER remove USER from GROUP
-h, --help display this help message and exit
-Q, --root CHROOT_DIR directory to chroot into
-r, --delete-password remove the GROUP's password
-R, --restrict restrict access to GROUP to its members
-M, --members USER,... set the list of members of GROUP
-A, --administrators ADMIN,...
set the list of administrators for GROUP
Except for the -A and -M options, the options cannot be combined.

504 Gateway apache tomcat issue

The servlet was taking a long time to compress the log files, and Apache's Timeout was set to 2 minutes.

The error was fixed by increasing the Timeout directive in the httpd.conf file:

#
# Timeout: The number of seconds before receives and sends time out.
#
##Timeout 120
Timeout 600

MARIADB MASTER SLAVE

MARIADB MASTER SLAVE

Install on both master and slave

yum install mariadb-server mariadb -y

systemctl enable mariadb

systemctl start mariadb.service

mysql_secure_installation

Master

Add the lines below to the MariaDB configuration

vi /etc/my.cnf

[server]
# add the following in the [server] section: enable binary logs
log-bin=mysql-bin
# define a unique server ID
server-id=101

Restart the mariadb service

systemctl restart mariadb.service

mysql -u root -p

Enter password:

Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 10.2.8-MariaDB-log MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.

# create a replication user (choose your own password in place of 'P@assword')

MariaDB [(none)]> grant replication slave on *.* to replication@'%' identified by 'P@assword';

Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;

Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit

Bye

Slave

Add the lines below to the MariaDB configuration on the slave server

vi /etc/my.cnf

[server]
# add the following in the [server] section: enable binary logs
log-bin=mysql-bin
# define server ID (different from the master host)
server-id=102
# read only
read_only=1
# define own hostname
report-host=slaveserver

Restart the mariadb service
systemctl restart mariadb.service

Dump the data on the master host

mysql -u root -p

Enter password:

Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.2.8-MariaDB-log MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.

# lock all tables

MariaDB [(none)]> flush tables with read lock;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      541 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

mysqldump -u root -p --all-databases --lock-all-tables --events > mysql_dump.sql

Unlock the tables on the master server:

# go back to the window that still holds the lock and unlock

MariaDB [(none)]> unlock tables;

Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit

Bye

scp mysql_dump.sql dev01@ec2-13-127-218-218.ap-south-1.compute.amazonaws.com:/tmp/

Go to the Slave Host.

import dump from Master Host

[root@ip-172-31-25-39 ~]# mysql -u root -p < /tmp/mysql_dump.sql
Enter password:

Configure the replication settings on the slave host. Afterwards, verify that the setup works by creating a database on the master host and checking that it appears on the slave.
# configure replication using the file and position noted above

mysql -u root -p

CHANGE MASTER TO MASTER_HOST='13.127.203.148', MASTER_USER='replication', MASTER_PASSWORD='P@assword', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=541;

MariaDB [(none)]> start slave;

Query OK, 0 rows affected (0.00 sec)
# show status

MariaDB [(none)]> show slave status\G

The slave has the following error in show slave status:

Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

Solution:

Slave: stop slave;

Master: flush logs;

Master: show master status; (take note of the master log file and master log position)

Slave: CHANGE MASTER TO MASTER_LOG_FILE='log-bin.00000X', MASTER_LOG_POS=106;

Slave: start slave;

MariaDB [(none)]> stop slave;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> reset slave;
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='13.127.203.148', MASTER_USER='replication', MASTER_PASSWORD='P@assword', MASTER_LOG_FILE='mysql-bin.000005', MASTER_LOG_POS=245;
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> start slave;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> show slave status\G

 

show binary logs ;

# verify that every binlog listed in the index actually exists on disk
for f in $(cat mysqld-bin.index); do test -f $f || echo "Not found $f" ; done

Docker Swarm on CentOS 7

Install Docker

Let's remove all the old Docker packages from the machine

yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine

Now install the dependency packages Docker needs

yum install -y yum-utils device-mapper-persistent-data lvm2

Let's add the Docker repo

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Enable the yum docker repo on the centos 7.4 server

yum-config-manager --enable docker-ce-edge

List the available Docker versions

yum list docker-ce --showduplicates | sort -r

Install latest docker community edition on the server

yum install docker-ce -y

After installation, enable and start the Docker service

systemctl start docker

systemctl enable docker

systemctl status docker

download https://github.com/MohanRamadoss/docker/tree/master/centos7-nginx

docker build -t="nginx/centos-nginx" .

Initialize a swarm. The docker engine targeted by this command becomes a manager in the newly created single-node swarm.

[root@clusterserver1 centos7-nginx]# docker swarm init
Swarm initialized: current node (d58oju29e07c4u59tnf5e1ysn) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-0uppeopy2mg0a8cblugyv6gje9gssnhxtl5qk0sa5nqubojap8-b1rzpxnxxzu0ubt7q9mhtn0dr 192.168.1.20:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The output of the initialization command provides the swarm join token; use it to join new worker nodes to the cluster.

If you accidentally close your terminal and can't remember the token, not to worry: run docker swarm join-token worker (or docker swarm join-token manager) to print the full join command again.

OK, now we have a swarm cluster with only one manager node:

[root@clusterserver3 centos7-nginx]# docker swarm join --token SWMTKN-1-0uppeopy2mg0a8cblugyv6gje9gssnhxtl5qk0sa5nqubojap8-b1rzpxnxxzu0ubt7q9mhtn0dr 192.168.1.20:2377

[root@clusterserver4 centos7-nginx]# docker swarm join --token SWMTKN-1-0uppeopy2mg0a8cblugyv6gje9gssnhxtl5qk0sa5nqubojap8-b1rzpxnxxzu0ubt7q9mhtn0dr 192.168.1.20:2377

[root@clusterserver1 centos7-nginx]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
d58oju29e07c4u59tnf5e1ysn * clusterserver1.rmohan.com Ready Active Leader
s0vxqeav20pzsnuegh6re5oc7 clusterserver3.rmohan.com Ready Active
k9eaqn6279iozux3edmpu92jx clusterserver4.rmohan.com Ready Active
[root@clusterserver1 centos7-nginx]#

Get the Service Up

Hope you're rubbing your hands together like I am at the moment. Yes, finally, we get to the stage of deploying the customized Nginx image onto the Docker Swarm cluster.

The following steps need to be executed on the Swarm manager node, so let's jump on clusterserver1.

docker service create --name swarm_cluster --replicas=2 -p 80:80 nginx/centos-nginx:latest

[root@clusterserver1 centos7-nginx]# docker service update --publish-add 80:80 swarm_cluster
swarm_cluster
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
[root@clusterserver1 centos7-nginx]#

[root@clusterserver1 centos7-nginx]# docker service inspect swarm_cluster

[root@clusterserver1 centos7-nginx]# docker service ps swarm_cluster

[root@clusterserver1 centos7-nginx]# docker ps --format 'table {{.ID}} {{.Image}} {{.Ports}}'

[root@clusterserver1 centos7-nginx]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
t6vd2agk6wdg swarm_cluster replicated 2/2 nginx/centos-nginxv1:latest *:80->80/tcp

Let's kill the container on one of the nodes and watch the swarm spin up a replacement:

[root@clusterserver1 centos7-nginx]# docker service ps swarm_cluster
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
pjmpcau7uylu swarm_cluster.1 nginx/centos-nginx:latest clusterserver1.rmohan.com Ready Ready less than a second ago
s7s802sc0dpu \_ swarm_cluster.1 nginx/centos-nginx:latest clusterserver3.rmohan.com Shutdown Failed 4 seconds ago “task: non-zero exit (137)”
47542368ylsw swarm_cluster.2 nginx/centos-nginx:latest clusterserver4.rmohan.com Running Running 15 minutes ago
[root@clusterserver1 centos7-nginx]#

Horizontal Scaling

One of the coolest things about cluster orchestration is effortless horizontal scaling, which is also a great feature of Docker Swarm.

At the moment, two replicas are hosting the "swarm_cluster" service. Scaling up to six replicas is done with a single command on the swarm manager:

[root@clusterserver1 centos7-nginx]# docker service scale swarm_cluster=6
swarm_cluster scaled to 6
overall progress: 6 out of 6 tasks
1/6: running [==================================================>]
2/6: running [==================================================>]
3/6: running [==================================================>]
4/6: running [==================================================>]
5/6: running [==================================================>]
6/6: running [==================================================>]
verify: Service converged
[root@clusterserver1 centos7-nginx]#

[root@clusterserver1 centos7-nginx]# docker service ps swarm_cluster
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
s7s802sc0dpu swarm_cluster.1 nginx/centos-nginx:latest clusterserver3.rmohan.com Running Running 13 minutes ago
47542368ylsw swarm_cluster.2 nginx/centos-nginx:latest clusterserver4.rmohan.com Running Running 13 minutes ago
z6c82pw2nd6o swarm_cluster.3 nginx/centos-nginx:latest clusterserver1.rmohan.com Running Running 13 minutes ago
wu7i6bdbqfxs swarm_cluster.4 nginx/centos-nginx:latest clusterserver1.rmohan.com Running Running 8 minutes ago
vax5efeo3ay1 swarm_cluster.5 nginx/centos-nginx:latest clusterserver3.rmohan.com Running Running 8 minutes ago
fzoy80qhv219 swarm_cluster.6 nginx/centos-nginx:latest clusterserver4.rmohan.com Running Running 8 minutes ago

Rolling Update

The last thing I want to demonstrate is a rolling update of the service, triggered by a minor change to the index.html file.

Then I need to rebuild my Docker image and push it to my public DockerHub repository; please refer to the previous sections for the details.

Once I've got my new image available, I run the following commands on the swarm manager node, clusterserver1:

docker build -t "nginx/centos-nginxv1" .

docker service update --image nginx/centos-nginxv1:latest swarm_cluster

[root@clusterserver1 centos7-nginx]# docker service update --image nginx/centos-nginxv1:latest swarm_cluster
image nginx/centos-nginxv1:latest could not be accessed on a registry to record
its digest. Each node will access nginx/centos-nginxv1:latest independently,
possibly leading to different nodes running different
versions of the image.

swarm_cluster
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged
[root@clusterserver1 centos7-nginx]#
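That warning appears because the new image exists only locally. To avoid it, the image would be tagged with a registry-qualified name and pushed somewhere the worker nodes can reach; with a hypothetical Docker Hub account `myuser`, that might look like:

```shell
# myuser is a placeholder account name, not from the original article
docker tag nginx/centos-nginxv1:latest myuser/centos-nginxv1:latest
docker push myuser/centos-nginxv1:latest

# Update the service against the pushed image
docker service update --image myuser/centos-nginxv1:latest swarm_cluster
```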

Cleaning up

[root@clusterserver1 centos7-nginx]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
t6vd2agk6wdg swarm_cluster replicated 2/2 nginx/centos-nginxv1:latest *:80->80/tcp

$ docker service rm swarm_cluster
swarm_cluster

$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS

$ docker swarm leave –force

Node left the swarm.

Install Mod Security on Nginx for CentOS 6 and 7


Introduction

ModSecurity is a toolkit for real-time web application monitoring, logging, and access control. You can consider it an enabler: there are no hard rules telling you what to do; instead, it is up to you to choose your own path through the available features. The freedom to choose what to do is an essential part of ModSecurity's identity and goes very well with its open-source nature. With full access to the source code, that freedom extends to the ability to customize and extend the tool itself to fit your needs.

We assume that you have root permissions; otherwise, prefix the commands with "sudo".

Attention

Building ModSecurity into Nginx takes some effort because you have to download and compile both of them yourself; installing this combination through a package installer is not possible for now. Note that you also have to use an older release of the Nginx web server that the ModSecurity module supports.

Download Nginx and ModSecurity

You can download compatible versions of Nginx and ModSecurity easily with wget:

wget http://nginx.org/download/nginx-1.8.0.tar.gz
wget https://www.modsecurity.org/tarball/2.9.1/modsecurity-2.9.1.tar.gz

Extract both archives:

tar xvzf nginx-1.8.0.tar.gz
tar xvzf modsecurity-2.9.1.tar.gz

Also install the dependencies needed to compile them:

yum install gcc make automake autoconf libtool pcre pcre-devel libxml2 libxml2-devel curl curl-devel httpd-devel

Compiling ModSecurity with Nginx

Enter the ModSecurity directory, configure it as a standalone module, and build it:

cd modsecurity-2.9.1
./configure --enable-standalone-module
make

Then we are going to build Nginx with the ModSecurity module:

cd nginx-1.8.0
./configure \
> --user=nginx \
> --group=nginx \
> --sbin-path=/usr/sbin/nginx \
> --conf-path=/etc/nginx/nginx.conf \
> --pid-path=/var/run/nginx.pid \
> --lock-path=/var/run/nginx.lock \
> --error-log-path=/var/log/nginx/error.log \
> --http-log-path=/var/log/nginx/access.log \
> --add-module=../modsecurity-2.9.1/nginx/modsecurity

Now we can compile and install Nginx:

make
make install

Configure Nginx and ModSecurity

We have to copy the ModSecurity config files into the Nginx config directory; execute the commands below:

cp modsecurity-2.9.1/modsecurity.conf-recommended /etc/nginx/
cp modsecurity-2.9.1/unicode.mapping /etc/nginx/

Now rename the ModSecurity config file:

cd /etc/nginx/
mv modsecurity.conf-recommended modsecurity.conf

Open "nginx.conf" and add the following lines inside the "location /" block (around line 47):

nano nginx.conf

ModSecurityEnabled on;
ModSecurityConfig modsecurity.conf;
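In context, the edited block might end up looking something like this sketch, based on the default nginx.conf layout:

```nginx
location / {
    # Enable ModSecurity and point it at the config copied earlier
    ModSecurityEnabled on;
    ModSecurityConfig modsecurity.conf;

    root   html;
    index  index.html index.htm;
}
```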

Save and exit.

Create the nginx user with the command below:

useradd -r nginx

We can test our Nginx config file to check if everything is ok:

cd /usr/sbin/
./nginx -t

You should get something like below:


nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Creating the Nginx Service

It’s time to create the Nginx Service so you can start, stop and see your service status:

Create the init.d script file with your text editor in the following path:

nano /etc/init.d/nginx

Paste the following script in your file then save and exit:


#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description:  NGINX is an HTTP(S) server, HTTP(S) reverse \
#               proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
   # make required directories
   user=`$nginx -V 2>&1 | grep "configure arguments:.*--user=" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
   if [ -n "$user" ]; then
      if [ -z "`grep $user /etc/passwd`" ]; then
         useradd -M -s /bin/nologin $user
      fi
      options=`$nginx -V 2>&1 | grep 'configure arguments:'`
      for opt in $options; do
          if [ `echo $opt | grep '.*-temp-path'` ]; then
              value=`echo $opt | cut -d "=" -f 2`
              if [ ! -d "$value" ]; then
                  # echo "creating" $value
                  mkdir -p $value && chown -R $user $value
              fi
          fi
       done
    fi
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
  $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
            ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

Create the “nginx.service” file in the following path:

nano /lib/systemd/system/nginx.service

Paste the following script then save and exit:

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Now you can easily use the following commands to control your Nginx service:

systemctl enable nginx
systemctl start nginx
systemctl restart nginx
systemctl status nginx

Verify That ModSecurity Is Working with Nginx

cd /usr/sbin/
./nginx -V

If you get something like the output below, your Nginx was compiled with ModSecurity successfully:


built by gcc 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)
configure arguments: --user=nginx --group=nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --add-module=../modsecurity-2.9.1/nginx/modsecurity

To confirm that the ModSecurity module has actually been loaded into Nginx, check the last lines of Nginx's error log:

cd /var/log/nginx/
tail error.log

You have to search for something like below:

[notice] 13285#0: ModSecurity: PCRE compiled version="7.8 "; loaded version="7.8 2008-09-05"
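A quick scripted check is to grep the compiled-in configure arguments for the module path. The pattern is shown here against a sample string; in practice you would pipe `./nginx -V 2>&1` into grep:

```shell
# Sample of the `nginx -V` output line (a real check would use the binary)
sample='configure arguments: --user=nginx --add-module=../modsecurity-2.9.1/nginx/modsecurity'
if echo "$sample" | grep -q 'modsecurity'; then
    echo "ModSecurity module compiled in"
fi
```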

Rule-Set Recommendation

Start a Docker container on CentOS at boot time as a Linux service

Note: if the Docker daemon does not start at boot, you might want to enable the docker service:

systemctl enable docker.service

Here are the steps.

Create the file /etc/systemd/system/docker_demo_container.service with the following content:

[Unit]
Wants=docker.service
After=docker.service

[Service]
RemainAfterExit=yes
ExecStart=/usr/bin/docker start my_container_name
ExecStop=/usr/bin/docker stop my_container_name

[Install]
WantedBy=multi-user.target
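After creating or changing a unit file, systemd must re-read its configuration before the new service can be used:

```shell
systemctl daemon-reload
```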

Now I can start the service:

systemctl start docker_demo_container

And I can enable the service so it is executed at boot:

systemctl enable docker_demo_container

That’s it, my container is started at boot.

Here is a fuller example unit file for a MariaDB container, with automatic restarts:

[Unit]
Description=MariaDB container
Requires=docker.service
After=docker.service
[Service]
User=php
Restart=always
RestartSec=10
# ExecStartPre=-/usr/bin/docker kill database
# ExecStartPre=-/usr/bin/docker rm database
ExecStart=/usr/bin/docker start -a database
ExecStop=/usr/bin/docker stop -t 2 database
[Install]
WantedBy=multi-user.target

In my images I also use a minimal entrypoint.sh script:

#!/bin/bash
exec "$@"

and I have the following in my Dockerfile:

ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["date"]
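With that entrypoint, any command passed to `docker run` replaces the default `date` via `exec "$@"`. Assuming the image is tagged `demo` (a hypothetical name):

```shell
docker build -t demo .
docker run demo            # no arguments: CMD supplies `date`
docker run demo uname -s   # the given command replaces CMD
```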