NFSv4 mounts show “nobody” as owner and group on a RHEL 6 client

Issue

  • On Red Hat Enterprise Linux, an NFS-mounted share shows “nobody” as the owner and group owner of all files and directories.

Resolution

  1. Create the same users (with matching UID/GID) on both the server and the client, or
  2. Use a centralized user database such as an LDAP domain, NIS, or Active Directory

Root Cause

The observed behavior is expected and intended; it is not specific to RHEL 5 or RHEL 6 but is a difference between NFSv3 and NFSv4.

In NFSv3 the username and group name are derived from the numeric UID/GID. The UID/GID of the user creating a file is stored on the server; when a client accesses the file, the client checks its own /etc/passwd and /etc/group to see whether those IDs exist locally and which user they map to. If a local user has the same UID/GID, the file is shown as owned by that user; otherwise the numeric values are displayed.

In NFSv4 ownership is expressed as user@domainname. If there is no centralized user mapping and the name cannot be resolved, the user is mapped to the default user nobody, or to whatever user has been configured in /etc/idmapd.conf.
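For illustration, a minimal /etc/idmapd.conf (the domain value here is an assumption; set it to your environment's NFSv4 domain, identically on client and server):

[General]
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody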

Check for misconfiguration of the /etc/idmapd.conf file. If you make changes to idmapd.conf, on RHEL 6.5 and newer the command to clear out the old mappings is:

# nfsidmap -c

NFSv4 mount incorrectly shows all files with ownership as nobody:nobody

From the client, the mounted NFSv4 share lists ownership of all files and directories as nobody:nobody instead of the actual user that owns them on the NFSv4 server or that created them.
The NFS client shows nobody:nobody ownership on NFSv4 shares, and the following error appears in /var/log/messages:

nss_getpwnam: name 'root@example.com' does not map into domain 'localdomain'
Resolution
Modify /etc/idmapd.conf with the proper domain (FQDN) on both the client and the server. In this example the proper domain is “example.com”, so the “Domain =” directive within /etc/idmapd.conf should be modified to read:

Domain = example.com
Note:
If using a NetApp Filer, the NFS.V4.ID.DOMAIN parameter must be set to match the “Domain =” parameter on the client.
If using a Solaris machine as the NFS server, the NFSMAPID_DOMAIN value in /etc/default/nfs must match the RHEL clients Domain.
To put the changes into effect restart the rpcidmapd service and remount the NFSv4 filesystem:

# service rpcidmapd restart
# mount -o remount /nfs/mnt/point
Note: It is only necessary to restart rpc.idmapd service on systems where rpc.idmapd is actually performing the id mapping. On RHEL 6.3 and newer NFS CLIENTS, the maps are stored in the kernel keyring and the id mapping itself is performed by the /sbin/nfsidmap program. On older NFS CLIENTS (RHEL 6.2 and older) as well as on all NFS SERVERS running RHEL, the id mapping is performed by rpc.idmapd.
Ensure the client and server have matching UIDs and GIDs. It is a common misconception that UIDs and GIDs can differ when using NFSv4. The sole purpose of id mapping is to map an id to a name and vice versa; it is not a replacement for managing IDs.
On Red Hat Enterprise Linux 6, if the above settings have been applied, UIDs/GIDs match on server and client, and users are still being mapped to nobody:nobody, then clearing the idmapd cache may be required:

# nfsidmap -c
Note: The above command is only necessary on systems that use the keyring-based id mapper, i.e. NFS CLIENTS running RHEL 6.3 and higher. On RHEL 6.2 and older NFS CLIENTS as well as all NFS SERVERS running RHEL, the cache should be cleared out when rpc.idmapd is restarted.
As another check, verify that the passwd:, shadow:, and group: settings are correct in the /etc/nsswitch.conf file on both the server and the client.
Disabling idmapping
By default, RHEL6.3 and newer NFS clients and servers disable idmapping when utilizing the AUTH_SYS/UNIX authentication flavor by enabling the following booleans:

NFS client
# echo 'Y' > /sys/module/nfs/parameters/nfs4_disable_idmapping

NFS server
# echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping
If using a NetApp filer, idmapping can be disabled with the command options nfs.v4.id.allow_numerics on.
With this boolean enabled, NFS clients will instead send numeric UID/GID numbers in outgoing attribute calls and NFS servers will send numeric UID/GID numbers in outgoing attribute replies.
If an NFS client sending numeric UID/GID values in a SETATTR call receives an NFS4ERR_BADOWNER reply from the NFS server, the client re-enables idmapping and sends user@domain strings for that specific mount from that point forward.
Note: This option can only be used with the AUTH_SYS/UNIX authentication flavors; if you wish to use something like Kerberos, idmapping must be used.
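To make the setting persist across reboots, the same module parameters can be set via modprobe configuration (a sketch; the file names under /etc/modprobe.d are arbitrary, and the parameter names are taken from the sysfs paths above):

# NFS client
echo "options nfs nfs4_disable_idmapping=Y" > /etc/modprobe.d/nfs-idmap.conf
# NFS server
echo "options nfsd nfs4_disable_idmapping=Y" > /etc/modprobe.d/nfsd-idmap.conf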
Root Cause
NFSv4 utilizes ID mapping to ensure permissions are set properly on exported shares. If the domains of the client and server do not match, ownership is mapped to nobody:nobody.
Diagnostic Steps
Debugging/verbosity can be enabled by editing /etc/sysconfig/nfs:

RPCIDMAPDARGS="-vvv"
The following output is shown in /var/log/messages when the mount has been completed and the system shows nobody:nobody as user and group permissions on directories and files:

Jun 3 20:22:08 node1 rpc.idmapd[1874]: nss_getpwnam: name ‘root@example.com’ does not map into domain ‘localdomain’
Jun 3 20:25:44 node1 rpc.idmapd[1874]: nss_getpwnam: name ‘root@example.com’ does not map into domain ‘localdomain’
Collect a tcpdump of the mount attempt:

# tcpdump -s0 -i {INTERFACE} host {NFS.SERVER.IP} -w /tmp/{casenumber}-$(hostname)-$(date +"%Y-%m-%d-%H-%M-%S").pcap &
If a TCP packet capture has been obtained, check for an nfs.nfsstat4 field with a non-zero status equal to 10039 (NFS4ERR_BADOWNER).
From the NFSv4 RFC:

NFS4ERR_BADOWNER = 10039,/* owner translation bad */

NFS4ERR_BADOWNER An owner, owner_group, or ACL attribute value
can not be translated to local representation.
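To search a saved capture for that status code, a display filter can be applied with tshark (a sketch; assumes a reasonably recent tshark where -Y takes a display filter, and the capture path is only an example):

# tshark -r /tmp/capture.pcap -Y 'nfs.nfsstat4 == 10039'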

The commands below are what I used on CentOS Linux release 7.2.1511 (Core).

Install nfs-utils

yum install -y nfs-utils

Append text to /etc/fstab

192.168.1.100:/mnt/nfs-server /mnt/nfs-client nfs defaults,nofail,x-systemd.automount 0 0

Some articles said noauto,x-systemd.automount is better, but it worked without noauto for me.
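To pick up the new fstab entry without rebooting, something like the following can be used (a sketch; the mount point is the one from the fstab line above, and the first access is what triggers the systemd automount):

systemctl daemon-reload
ls /mnt/nfs-client
mount | grep nfs-client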

Check whether mount works

systemctl start rpcbind
systemctl enable rpcbind
mount -a

Fix the problem that CentOS 7 won't auto-mount NFS on boot

Append text to the end of /usr/lib/systemd/system/nfs-idmap.service

[Install]
WantedBy=multi-user.target

Append text to the end of /usr/lib/systemd/system/nfs-lock.service

[Install]
WantedBy=nfs.target

Enable related services

systemctl enable nfs-idmapd.service 
systemctl enable rpc-statd.service 

systemctl enable rpcbind.socket

systemctl status nfs-idmapd.service -l
systemctl status rpc-statd.service -l

Then I restarted the OS and it worked.

shutdown -r now

XFS Filesystem has duplicate UUID problem Administration

If you cannot mount your XFS partition, get the classic "wrong fs type, bad superblock" style error, and see a message like this in the kernel log (dmesg):

XFS: Filesystem sdb7 has duplicate UUID - can't mount

you can still mount the filesystem with the nouuid option as below:

mount -o nouuid /dev/sdb7 disk-7

But you have to pass the nouuid option on every mount. For a permanent solution, generate a new UUID for this partition with the xfs_admin utility:
xfs_admin -U generate /dev/sdb7

Clearing log and setting UUID
writing all SBs
new UUID = 01fbb5f2-1ee0-4cce-94fc-024efb3cd3a4

After that, you can mount this XFS partition normally.
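To double-check the result, blkid can confirm the regenerated UUID and that no two partitions still share one (device name as in the example above):

blkid /dev/sdb7
blkid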

Lsyncd on CentOS 7 & RHEL 7

Lsyncd stands for "Live Syncing Daemon". As the name suggests, lsyncd is used to sync or replicate files and directories locally or remotely shortly after they change. It uses rsync and SSH in the backend.

Lsyncd works on a master/slave architecture: it monitors a directory on the master server, and when changes or modifications occur, lsyncd replicates them to the slave servers after the configured interval.

In this article we will discuss how to install and use lsyncd on CentOS 7 & RHEL 7.

Scenario: suppose we want to sync the folder /var/www/html from the master server to the slave server.

Master clusterserver1 IP = 192.168.1.20
Slave clusterserver2 IP = 192.168.1.21
Directory to be Sync = /var/www/html
First, enable key-based authentication between the master and slave servers.

Log in to the master server and generate the public and private key pair using the ssh-keygen command.
[root@clusterserver1 html]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory ‘/root/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a6:6e:8f:e8:2b:62:0c:a7:25:1f:c3:2b:74:eb:5a:33 root@clusterserver1.rmohan.com
The key’s randomart image is:
+–[ RSA 2048]—-+
| |
| |
| |
| |
| . S |
|o.*. o |
|+*.E. . |
|+o=.oo.. |
|.+o++oo.. |
+—————–+
[root@clusterserver1 html]#
[root@clusterserver1 html]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.1.21
The authenticity of host ‘192.168.1.21 (192.168.1.21)’ can’t be established.
ECDSA key fingerprint is 43:25:9c:32:53:18:33:a9:25:f7:cd:bb:b0:64:80:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed — if you are prompted now it is to install the new keys
root@192.168.1.21’s password:

Number of key(s) added: 1

Now try logging into the machine, with: “ssh ‘root@192.168.1.21′”
and check to make sure that only the key(s) you wanted were added.

[root@clusterserver1 html]# ssh 192.168.1.21
Last login: Sun Jul 10 16:26:31 2016 from 192.168.1.1
[root@clusterserver2 ~]# logout
Connection to 192.168.1.21 closed.
[root@clusterserver1 html]#
[root@clusterserver1 html]# rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
Retrieving http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
Preparing… ################################# [100%]
package epel-release-7-8.noarch is already installed
[root@clusterserver1 html]#

[root@clusterserver1 html]# yum install lsyncd

[root@clusterserver1 html]# cp /usr/share/doc/lsyncd-2.1.5/examples/lrsync.lua /etc/lsyncd.conf
cp: overwrite ‘/etc/lsyncd.conf’? y
[root@clusterserver1 html]#
[root@clusterserver1 html]# cat /etc/lsyncd.conf
----
-- User configuration file for lsyncd.
--
-- Simple example for default rsync.
settings = {
    logfile = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd.stat",
    statusInterval = 2,
}
sync {
    default.rsync,
    source = "/var/www/html",
    target = "192.168.1.21:/var/www/html",
    rsync = { rsh = "/usr/bin/ssh -l root -i /root/.ssh/id_rsa", },
}
[root@clusterserver1 html]#
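With the configuration in place, start the daemon and watch its log to confirm the initial sync (a sketch; assumes the EPEL lsyncd package ships a systemd unit, and the log path matches the settings above):

systemctl start lsyncd
systemctl enable lsyncd
tail -f /var/log/lsyncd.log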

CentOS 7 – Create CentOS 7 Mirror


The tutorial below will show you how to configure a CentOS 7 server with Nginx to act as a mirror for other CentOS 7 servers.
Steps
First we need to update and install all the necessary packages.
yum update
yum install -y createrepo rsync nginx
Now we need to setup our directories and permissions:

mkdir -p /var/www/html/repos/centos/7.2/os/x86_64/
mkdir -p /var/www/html/repos/centos/7.2/updates/x86_64/
chmod 770 -R /var/www
chown $USER:nginx -R /var/www
Now we are going to configure Nginx to use the location we just created. Replace the contents of /etc/nginx/nginx.conf with:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
worker_connections 1024;
}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;

include /etc/nginx/mime.types;
default_type application/octet-stream;

include /etc/nginx/conf.d/*.conf;
}
Create a file at /etc/nginx/conf.d/repo.conf with the following contents
server {
listen 80;
server_name localhost;
root /var/www/html/repos;

location / {
autoindex on;
}

}
Run the following commands to create the initial repository metadata:

createrepo /var/www/html/repos/centos/7.2/os/x86_64/
createrepo /var/www/html/repos/centos/7.2/updates/x86_64/

Now we need to fetch the data for the mirror. Go to the CentOS mirrors list and pick the mirror closest to you that has an rsync address (6th column in the table). For me, I am going to use Bytemark since I am in the United Kingdom.
rsync://mirror.nus.edu.sg/centos
Take the given url, and add the following to the end of it:
/7/os/x86_64/
/7/updates/x86_64/

Now use those URLs in the commands below:
rsync -avz --delete --exclude='repo*' \
rsync://mirror.bytemark.co.uk/centos/7.2.1511/os/x86_64/ \
/var/www/html/repos/centos/7.2/os/x86_64/

rsync -avz --delete --exclude='repo*' \
rsync://mirror.bytemark.co.uk/centos/7.2.1511/updates/x86_64/ \
/var/www/html/repos/centos/7.2/updates/x86_64/

Next we need to update the repo metadata by running:

createrepo --update /var/www/html/repos/centos/7.2/os/x86_64/
createrepo --update /var/www/html/repos/centos/7.2/updates/x86_64/

Configure Cron For Automatic Updating

Create a script with the following contents:
#!/bin/bash
rsync -avz --delete --exclude='repo*' \
rsync://mirror.bytemark.co.uk/centos/7.2.1511/os/x86_64/ \
/var/www/html/repos/centos/7.2/os/x86_64/

rsync -avz --delete --exclude='repo*' \
rsync://mirror.bytemark.co.uk/centos/7.2.1511/updates/x86_64/ \
/var/www/html/repos/centos/7.2/updates/x86_64/

/usr/bin/createrepo --update \
/var/www/html/repos/centos/7.2/os/x86_64/

/usr/bin/createrepo --update \
/var/www/html/repos/centos/7.2/updates/x86_64/

Now configure cron to call that script at midnight every day by executing crontab -e and adding the following line:

@daily /bin/bash /path/to/script.sh
Configure Automatic Startup
Run the commands below to ensure nginx starts up on boot.
systemctl enable nginx.service && sudo systemctl enable firewalld.service
systemctl start firewalld.service
firewall-cmd --permanent --add-service=http
firewall-cmd --reload
systemctl reboot

Configure SELinux
If you are running SELinux and don't want to disable it, run the following command so SELinux allows nginx to serve content from /var/www:

chcon -Rt httpd_sys_content_t /var/www

Configure Client To Use Own Mirror

There's no point having a mirror unless you configure your other servers to use it for updates. Edit the file at /etc/yum.repos.d/CentOS-Base.repo and comment out any lines starting with mirrorlist or baseurl underneath [base] or [updates]. Then add your own baseurl to those sections, pointing at your mirror's URL. For example, my mirror is located internally at http://centos-mirror.rmohan.com, so my baseurls were:
[base]
baseurl=http://centos-mirror.rmohan.com/centos/$releasever/os/$basearch/

[updates]
baseurl=http://centos-mirror.rmohan.com/centos/$releasever/updates/$basearch/
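To confirm a client is actually pulling from the mirror, clearing the yum cache and listing the repositories verbosely should show your baseurl being used:

yum clean all
yum repolist -v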

MySQL Enterprise Backup (MEB)

MySQL Enterprise Backup (MEB) can make true incremental (differential and cumulative) backups. The current releases are quite capable and you should really look at them…

Unfortunately the original MySQL documentation is much too complicated for my simple mind. So I did some testing and simplified it a bit for our customers…

If you want to dive into the original documentation please look here: Making an Incremental Backup .

If you want to use MySQL Enterprise Backup, please let us know and we will send you a quote.

First you have to do a full backup. We did it as follows:
mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full backup
mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full apply-log
Then you can do the incremental backup:
mysqlbackup --defaults-file=/etc/my.cnf --user=root --incremental --incremental-base=dir:/backup/full --incremental-backup-dir=/backup/incremental1 backup
mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full --incremental-backup-dir=/backup/incremental1 apply-incremental-backup
This incremental backup can be repeated several times…
FULL MYSQL BACKUP
mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full backup
mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full apply-log
FIRST MYSQL INCREMENTAL BACKUP
mysqlbackup --defaults-file=/etc/my.cnf --user=root --incremental --incremental-base=dir:/backup/full --incremental-backup-dir=/backup/incremental1 backup
mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full --incremental-backup-dir=/backup/incremental1 apply-incremental-backup
SECOND MYSQL INCREMENTAL BACKUP
mysqlbackup --defaults-file=/etc/my.cnf --user=root --incremental --incremental-base=dir:/backup/full --incremental-backup-dir=/backup/incremental2 backup
mysqlbackup --defaults-file=/etc/my.cnf --user=root --backup-dir=/backup/full --incremental-backup-dir=/backup/incremental2 apply-incremental-backup
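For completeness, a hedged sketch of a restore: after apply-log and apply-incremental-backup have been run against /backup/full as above, the updated full backup can be copied back with mysqlbackup's copy-back, assuming the MySQL server is stopped and the data directory from /etc/my.cnf is empty.

mysqlbackup --defaults-file=/etc/my.cnf --backup-dir=/backup/full copy-back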

Compiling and Installing Linux kernel from Source


As you may know, kernel 4.x has been released with tons of changes and enhancements. New kernels always bring the latest drivers, support for new devices, new features, and filesystem improvements. The feature all Linux admins have been waiting for is live kernel patching, which lets a user update and patch the kernel without a reboot. In this article we will compile and install the Linux kernel from source.

Installing Linux kernel from Source

1. Install dependencies: update your system and install the dependencies required to download, compile, and install the Linux kernel.

# yum update
# yum groupinstall "Development Tools"
# yum install wget gcc perl ncurses ncurses-devel bc openssl-devel zlib-devel binutils-devel
2. Download the kernel: download the latest kernel from kernel.org and extract it to /usr/src.

# wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.4.1.tar.xz
# tar -xf linux-4.4.1.tar.xz -C /usr/src/
# cd /usr/src/linux-4.4.1
3. Configure the kernel: now that the kernel is downloaded and extracted, it is time to configure it. All required drivers and features can be selected this way. Running "make menuconfig" displays all available kernel configuration options; once we have selected everything our system needs, saving generates a .config file with the chosen options.

# make menuconfig
If you are not sure about all this, you can run "make oldconfig" to reuse the existing kernel configuration as the basis for the new one.

# make oldconfig
If you have a GUI installed, you can also run "make xconfig", a Qt-based graphical equivalent of menuconfig.

# make xconfig
4. Compile the kernel: now it is time to build the kernel and its modules according to the options in the .config file. While building you might hit errors about missing dependencies; just install them and retry.

If you previously built a kernel in this tree, clean up the old build first:

# make clean
The make command can take hours, depending on your system. We will create a bzip2-compressed kernel image for the i386 and AMD64 architectures.

# make bzImage
# make modules
OR
# make
If you want to distribute your kernel, you can build an RPM from the source. After a successful build you can find the packages in the /usr/src/redhat/SRPMS/ directory.

# make rpm
There are several other make targets available; you can get help with:

# make help
5. Install the kernel: once compilation is done, install the new kernel and modules on your system. The command below installs them and also updates GRUB to use the new kernel.

# make modules_install install
You may need an initrd/initramfs image to properly boot the new kernel. On CentOS/RHEL you can create one for the new kernel with the mkinitrd (dracut wrapper) command:

# mkinitrd /boot/initramfs-4.4.1.img 4.4.1
If you have built an RPM of your kernel, you can install it using the rpm command:

# rpm -ivh --nodeps kernel-4.x.x.xdefault.x64.rpm
After installing the new kernel and rebooting into it, run the uname command to confirm you are running it.

# uname -r
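To make sure the machine boots the new kernel by default, the GRUB2 default entry can be checked and set (a sketch for CentOS/RHEL 7; entry 0 is normally the newest installed kernel):

# grep ^menuentry /boot/grub2/grub.cfg | cut -d "'" -f2
# grub2-set-default 0
# grub2-editenv list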
At the end

It's done! You now have the latest kernel version. That said, you probably don't need to compile the latest kernel yourself unless you have a specific requirement; sometimes it can break system functionality. If your current kernel has a vulnerability, just patch it and wait for your distribution's official repository to ship the newer kernel.

AWS Services

Amazon Web Services is a widely used IaaS platform. AWS offers roughly 55 services, and mastering every one of them is difficult. Some services are aimed at network engineers, some at developers, and some at both. Below is a brief look at each service and what it is for. Let's have an AWS services overview.

Compute

  • EC2 (Elastic Compute Cloud): The basic building block of AWS. It's a virtual machine inside the AWS cloud; it can run Linux, Windows, or another OS.
  • EC2 Container Service: AWS's Docker offering. You can start, stop, and manage your Docker containers from this service.
  • Elastic Beanstalk: Elastic Beanstalk automatically creates the AWS infrastructure for your application.
  • Lambda: Runs your code in response to AWS events, e.g. when someone uploads a file to S3. It's a powerful feature that lets us build and deploy applications without managing an EC2 instance.

Storage & Content Delivery

  • S3: Amazon Simple Storage Service (S3) is cloud-based object storage.
  • CloudFront: AWS's content delivery network.
  • Elastic File System: A managed file system for EC2 instances, like NAS storage that can be shared by multiple EC2 instances.
  • Glacier: Archive storage in the cloud; back up your data at low cost.
  • Import/Export Snowball: Amazon provides a storage appliance of around 50 TB, encrypted with AES-256. Load your bulk data onto it and ship it to AWS, which uploads the data to S3.
  • Storage Gateway: Connects your on-premises software appliance (provided by AWS) to the AWS cloud for seamless data integration.

Database

  • RDS: Amazon Relational Database Service (RDS) lets us create scalable relational database engines such as MySQL, MariaDB, PostgreSQL, Oracle, and MS SQL Server.
  • DynamoDB: AWS's NoSQL offering; provides performance with seamless scalability.
  • ElastiCache: AWS's in-memory cache implementation, supporting the two most popular caching engines, Memcached and Redis.
  • Redshift: AWS's data warehousing service lets users create Redshift clusters; a cluster can hold petabytes of data for analysis and business intelligence.
  • DMS: The managed Database Migration Service allows migrating databases from one engine to another, e.g. MySQL to Oracle or MS SQL Server.

Networking

  • VPC: Amazon Virtual Private Cloud is an isolated network on AWS in which you launch your AWS resources; it's like a private network inside the AWS cloud.
  • Direct Connect: A dedicated private connection between your network or data center and AWS, letting us reach our private and public resources in AWS.
  • Route 53: AWS's DNS service.

Developer Tools

  • CodeCommit: AWS's private Git implementation, fast and secure.
  • CodeDeploy: Automate Code Deployments from Git or S3.
  • CodePipeline: CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define.

Management Tools

  • CloudWatch: Monitor the health of your AWS resources and applications.
  • CloudFormation: Create templates for the AWS resources your application needs and launch them all at once; a single template can launch EC2, RDS, and a VPC together.
  • CloudTrail: Provides logs of AWS API usage with the caller's IP, time, request parameters, etc. The API caller can be the Management Console, an SDK, or the command-line tools.
  • Config: Tracks AWS resource configuration changes and sends notifications, e.g. when a resource is created or deleted.
  • OpsWorks: OpsWorks is AWS's implementation of Chef automation.
  • Service Catalog: Create and use standardized products.
  • Trusted Advisor: A best-practice adviser from AWS covering security, performance, fault tolerance, and cost optimization.

Security & Identity

  • Identity & Access Management: Manage User accounts, access, roles and security keys. Provides granular control over your AWS resources.
  • Directory Service: Microsoft Active Directory implementation by AWS. Can create or add to existing Active Directory domains.
  • Inspector: Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS
  • WAF: AWS Web application firewall protect web server from exploits, SQL injection, cross site scripting and many other web attacks.
  • Certificate Manager: Provision, Manage, and Deploy SSL/TLS Certificates

Analytics :

  • Elastic MapReduce: Elastic MapReduce (EMR) is a managed Hadoop framework for quickly processing vast amounts of data.
  • Data Pipeline: Create data-processing workflows; process data from AWS resources such as S3 and RDS or from on-premises sources, and store result sets in S3, RDS, or EMR.
  • Elasticsearch Service: AWS's Elasticsearch offering; create, manage, and scale your Elasticsearch nodes and clusters on AWS.
  • Kinesis: Work with real-time streaming data; process it and store it in S3 or Redshift.
  • Machine Learning: Easily build applications with visualization tools.

Internet of Things

  • AWS IoT: Connect any internet-enabled device to AWS, store and manage its data, and build applications that manage those devices over the internet; communication between AWS and your devices is bidirectional.

Game Development

  • GameLift: Deploy and Scale Session-based Multiplayer Games

Mobile Services

  • Mobile Hub: Build, test, and monitor mobile apps; store their data in AWS and use CDN, authentication, push notifications, and analytics.
  • Cognito: Store mobile user data such as preferences and stats in AWS without writing back-end code.
  • Device Farm: Test Android, FireOS, and iOS Apps on Real Devices in the Cloud
  • Mobile Analytics: Collect, View and Export App Analytics
  • SNS: Push Notification Service for mobile devices.

Application Services

  • API Gateway: Build, Deploy and Manage APIs
  • AppStream: Low Latency Application Streaming
  • CloudSearch: Managed Search Service
  • Elastic Transcoder: Easy-to-Use Scalable Media Transcoding
  • SES: Email Sending and Receiving Service
  • SQS: Message Queue Service
  • SWF: Workflow Service for Coordinating Application Components

Enterprise Applications

  • WorkSpaces: Desktops in the Cloud
  • WorkDocs: Secure Enterprise Storage and Sharing Service
  • WorkMail: Secure Email and Calendaring Service

 

 

The Amazon Elastic Compute Cloud (EC2) service lets us create resizable server instances in the cloud. EC2 can create, remove, and scale your instance capacity within minutes as your requirements change.

AWS EC2 pricing models

On Demand: Pay a fixed hourly rate with no commitment. You can create, scale, and delete your instances at any time.
Reserved: You commit to capacity for a fixed term, such as 1 or 3 years, and cannot drop the commitment during that period. Compared to On Demand it provides a significant discount on the hourly charge.
Spot Instances: AWS data centers always have unused capacity that you can bid on. You bid for instance capacity, and if your bid is equal to or greater than the current spot price you get the instance. If the spot price later rises above your bid, you get a notice from AWS and the instance is reclaimed. AWS publishes spot price statistics per region and availability zone.

How to choose a pricing model

On Demand: Applications with no commitment or unpredictable workloads, applications being developed or tested on AWS, or extra capacity for a limited time.
Reserved: You know the application's behavior and capacity; long-term applications where you want to reduce cost.
Spot: Large computing capacity for a period of time at a low price, for applications with flexible start and end times.

Note: If you terminate a Spot instance you pay for the partial hour; if AWS terminates it, the partial hour is not charged.
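As a small illustration of the On Demand model, an instance can be launched from the AWS CLI in one call (a sketch; the AMI ID and key pair name are placeholders you must replace):

aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --count 1 --key-name my-key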

EC2 instance family based on capacity

Type      Category                Used for
T2        General Purpose         Low-cost computing for static websites and small applications
M3 & M4   General Purpose         General-purpose applications
C3 & C4   Compute Optimized       Compute-intensive applications
R3        Memory Optimized        Memory-intensive applications, web servers
G2        GPU Optimized           Video processing and encoding, 3D applications, machine learning
I2        High-Speed Storage      High-speed I/O-intensive applications, NoSQL
D2        Dense Storage           File servers

Elastic Block Storage (EBS)

Amazon Elastic Block Storage provides block devices for EC2 instances, like an HDD attached to your system. You can install the OS on an EBS volume and attach additional EBS volumes to your EC2 instances.

EBS volume types

General Purpose SSD: 99.99% availability and up to 10,000 IOPS.
Provisioned IOPS SSD: more than 10,000 IOPS, for I/O-intensive applications.
Magnetic: lowest storage cost and low IOPS; best for file storage.

Install Tomcat 9 on Centos 7


cd /opt/
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u101-b13/jdk-8u101-linux-x64.tar.gz"
tar -zxvf jdk-8u101-linux-x64.tar.gz
alternatives --install /usr/bin/java java /opt/jdk1.8.0_101/bin/java 2
alternatives --config java
java -version
vi /etc/profile.d/java.sh
export JAVA_HOME=/opt/jdk1.8.0_101
export JRE_HOME=/opt/jdk1.8.0_101/jre
export PATH=$PATH:/opt/jdk1.8.0_101/bin:/opt/jdk1.8.0_101/jre/bin
chmod +x /etc/profile.d/java.sh
source /etc/profile.d/java.sh
export

cd /usr/local
wget http://www.us.apache.org/dist/tomcat/tomcat-9/v9.0.0.M1/bin/apache-tomcat-9.0.0.M1.tar.gz
wget http://download.nus.edu.sg/mirror/apache/tomcat/tomcat-9/v9.0.0.M11/bin/apache-tomcat-9.0.0.M11.tar.gz
tar -zxvf apache-tomcat-9.0.0.M11.tar.gz
cd apache-tomcat-9.0.0.M11
cd bin/
./startup.sh
ps -ef | grep java
./shutdown.sh
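Instead of calling startup.sh by hand, Tomcat can be wrapped in a small systemd unit (a sketch; the JDK and Tomcat paths match the ones used above, and CATALINA_PID is a standard variable honoured by catalina.sh):

# /etc/systemd/system/tomcat.service
[Unit]
Description=Apache Tomcat 9
After=network.target

[Service]
Type=forking
Environment=JAVA_HOME=/opt/jdk1.8.0_101
Environment=CATALINA_HOME=/usr/local/apache-tomcat-9.0.0.M11
Environment=CATALINA_PID=/usr/local/apache-tomcat-9.0.0.M11/temp/tomcat.pid
PIDFile=/usr/local/apache-tomcat-9.0.0.M11/temp/tomcat.pid
ExecStart=/usr/local/apache-tomcat-9.0.0.M11/bin/startup.sh
ExecStop=/usr/local/apache-tomcat-9.0.0.M11/bin/shutdown.sh

[Install]
WantedBy=multi-user.target

Then: systemctl daemon-reload && systemctl enable --now tomcat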

 

 

 

[root@clusterserver1 conf]# cat server.xml
<?xml version=”1.0″ encoding=”UTF-8″?>
<!–
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements.  See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the “License”); you may not use this file except in compliance with
the License.  You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an “AS IS” BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
–>
<!– Note:  A “Server” is not itself a “Container”, so you may not
define subcomponents such as “Valves” at this level.
Documentation at /docs/config/server.html
–>
<Server port=”8005″ shutdown=”SHUTDOWN”>
<Listener className=”org.apache.catalina.startup.VersionLoggerListener” />
<!– Security listener. Documentation at /docs/config/listeners.html
<Listener className=”org.apache.catalina.security.SecurityListener” />
–>
<!–APR library loader. Documentation at /docs/apr.html –>
<Listener className=”org.apache.catalina.core.AprLifecycleListener” SSLEngine=”on” />
<!– Prevent memory leaks due to use of particular java/javax APIs–>
<Listener className=”org.apache.catalina.core.JreMemoryLeakPreventionListener” />
<Listener className=”org.apache.catalina.mbeans.GlobalResourcesLifecycleListener” />
<Listener className=”org.apache.catalina.core.ThreadLocalLeakPreventionListener” />

<!– Global JNDI resources
Documentation at /docs/jndi-resources-howto.html
–>
<GlobalNamingResources>
<!– Editable user database that can also be used by
UserDatabaseRealm to authenticate users
–>
<Resource name=”UserDatabase” auth=”Container”
type=”org.apache.catalina.UserDatabase”
description=”User database that can be updated and saved”
factory=”org.apache.catalina.users.MemoryUserDatabaseFactory”
pathname=”conf/tomcat-users.xml” />
</GlobalNamingResources>

<Service name="Catalina_2">
  <Connector port="80" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="443" />
  <Connector port="8010" protocol="AJP/1.3" redirectPort="443" />
  <Engine name="Catalina_2" defaultHost="localhost">
    <Realm className="org.apache.catalina.realm.LockOutRealm">
      <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
    </Realm>
    <Host name="localhost" appBase="webapps_2" unpackWARs="true" autoDeploy="true">
      <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
             prefix="localhost_access_log." suffix=".txt"
             pattern="%h %l %u %t &quot;%r&quot; %s %b" />
    </Host>
  </Engine>
</Service>

<!– A “Service” is a collection of one or more “Connectors” that share
a single “Container” Note:  A “Service” is not itself a “Container”,
so you may not define subcomponents such as “Valves” at this level.
Documentation at /docs/config/service.html
–>
<Service name=”Catalina”>

<!–The connectors can use a shared executor, you can define one or more named thread pools–>
<!–
<Executor name=”tomcatThreadPool” namePrefix=”catalina-exec-”
maxThreads=”150″ minSpareThreads=”4″/>
–>

<!– A “Connector” represents an endpoint by which requests are received
and responses are returned. Documentation at :
Java HTTP Connector: /docs/config/http.html
Java AJP  Connector: /docs/config/ajp.html
APR (HTTP/AJP) Connector: /docs/apr.html
Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
–>
<Connector port=”8080″ protocol=”HTTP/1.1″
connectionTimeout=”20000″
redirectPort=”8443″ />
<!– A “Connector” using the shared thread pool–>
<!–
<Connector executor=”tomcatThreadPool”
port=”8080″ protocol=”HTTP/1.1″
connectionTimeout=”20000″
redirectPort=”8443″ />
–>
<!– Define a SSL/TLS HTTP/1.1 Connector on port 8443
This connector uses the NIO implementation with the JSSE engine. When
using the JSSE engine, the JSSE configuration attributes must be used.
–>
<!–
<Connector port=”8443″ protocol=”org.apache.coyote.http11.Http11NioProtocol”
maxThreads=”150″ SSLEnabled=”true”>
<SSLHostConfig>
<Certificate certificateKeystoreFile=”conf/localhost-rsa.jks”
type=”RSA” />
</SSLHostConfig>
</Connector>
–>
<!– Define a SSL/TLS HTTP/1.1 Connector on port 8443 with HTTP/2
This connector uses the APR/native implementation. When using the
APR/native implementation or the OpenSSL engine with NIO or NIO2 then
the OpenSSL configuration attributes must be used.
–>
<!–
<Connector port=”8443″ protocol=”org.apache.coyote.http11.Http11AprProtocol”
maxThreads=”150″ SSLEnabled=”true” >
<UpgradeProtocol className=”org.apache.coyote.http2.Http2Protocol” />
<SSLHostConfig>
<Certificate certificateKeyFile=”conf/localhost-rsa-key.pem”
certificateFile=”conf/localhost-rsa-cert.pem”
certificateChainFile=”conf/localhost-rsa-chain.pem”
type=”RSA” />
</SSLHostConfig>
</Connector>
–>

<!– Define an AJP 1.3 Connector on port 8009 –>
<Connector port=”8009″ protocol=”AJP/1.3″ redirectPort=”8443″ />

<!– An Engine represents the entry point (within Catalina) that processes
every request.  The Engine implementation for Tomcat stand alone
analyzes the HTTP headers included with the request, and passes them
on to the appropriate Host (virtual host).
Documentation at /docs/config/engine.html –>

<!– You should set jvmRoute to support load-balancing via AJP ie :
<Engine name=”Catalina” defaultHost=”localhost” jvmRoute=”jvm1″>
–>
<Engine name=”Catalina” defaultHost=”localhost”>

<!–For clustering, please take a look at documentation at:
/docs/cluster-howto.html  (simple how to)
/docs/config/cluster.html (reference documentation) –>
<!–
<Cluster className=”org.apache.catalina.ha.tcp.SimpleTcpCluster”/>
–>

<!– Use the LockOutRealm to prevent attempts to guess user passwords
via a brute-force attack –>
<Realm className=”org.apache.catalina.realm.LockOutRealm”>
<!– This Realm uses the UserDatabase configured in the global JNDI
resources under the key “UserDatabase”.  Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm.  –>
<Realm className=”org.apache.catalina.realm.UserDatabaseRealm”
resourceName=”UserDatabase”/>
</Realm>

<Host name=”localhost”  appBase=”webapps”
unpackWARs=”true” autoDeploy=”true”>

<!– SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html –>
<!–
<Valve className=”org.apache.catalina.authenticator.SingleSignOn” />
–>

<!– Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern=”common” –>
<Valve className=”org.apache.catalina.valves.AccessLogValve” directory=”logs”
prefix=”localhost_access_log” suffix=”.txt”
pattern=”%h %l %u %t &quot;%r&quot; %s %b” />

</Host>
</Engine>
</Service>
</Server>

Linux server security configuration of Nginx

1. Some basics

Under Linux, to read a file you first need execute permission on the directory that contains it, and then read permission on the file itself.

PHP files do not need the execute bit; the accounts that nginx and php-fpm run as only need read permission on them.

After a web shell is uploaded, whether it can list a directory's contents depends on the read permission the php-fpm account has on that directory, and the commands it runs carry the privileges of the php-fpm account.

For a web shell to execute commands at all, the php-fpm account needs execute permission on a shell (sh).

To read a file inside a directory, read permission on the directory itself is not required; execute permission on the directory is enough.

1. Top-level configuration

# Define the user and group nginx runs as
user nginx;

# PID file
pid /var/run/nginx.pid;

# Error log location and level: debug, info, notice, warn, error, crit
error_log /var/log/nginx/error.log warn;

# Number of nginx worker processes; can be set to the number of available CPU cores.
worker_processes 8;

# Maximum number of open file descriptors per worker. In theory this should be the system limit (ulimit -n) divided by the number of workers, but nginx does not distribute requests evenly across workers, so setting it equal to ulimit -n is recommended.
worker_rlimit_nofile 65535;

2. Events module

events {
# Maximum number of simultaneous connections each worker process may open
worker_connections 2048;

# Tell nginx to accept as many connections as possible after being notified of a new one
multi_accept on;

# Event-polling method used to multiplex client connections. On Linux 2.6+ use epoll; on *BSD use kqueue.
use epoll;
}

3. HTTP
http {
# Hide the nginx version number, to improve security.
server_tokens off;

# Efficient file transfer. The sendfile directive controls whether nginx uses the sendfile() call to output files. Set it on for normal web serving; for download-heavy, disk-I/O-bound workloads it can be set off to balance disk and network I/O and reduce load.
sendfile on;

# Whether to enable directory listings; off by default.
autoindex off;

# Send the response header and the start of the file in one packet rather than piece by piece.
tcp_nopush on;

# Tell nginx not to buffer small writes but to send data as soon as it is available, which matters when small chunks must reach the client promptly. tcp_nopush is normally in effect, but with sendfile on the last packet is flushed automatically, so enabling tcp_nodelay removes the extra ~200 ms delay. In short: with sendfile on, tcp_nopush and tcp_nodelay can both be on.
tcp_nodelay on;

# Log format
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Access log; set to off to disable logging and improve performance
access_log /var/log/nginx/access.log main;

# Keep-alive connection timeout, in seconds
keepalive_timeout 120;

# Timeout for reading the HTTP request header; default 60. After the connection is established, if no bytes arrive from the client within this interval the request is considered timed out and 408 ("Request timed out") is returned.
client_header_timeout 60;

# Default 60. Like client_header_timeout, but applies while reading the request body.
client_body_timeout 10;

# Timeout for sending the response; default 60. If the client stops accepting data for longer than send_timeout, nginx closes the connection.
send_timeout 60;

# Reset timed-out connections by sending an RST packet instead of the normal four-way close. This immediately frees all socket resources (such as the TCP sliding window) on the nginx side and avoids connections lingering in FIN_WAIT_1, FIN_WAIT_2, or TIME_WAIT.
# Note that closing connections with RST can cause problems of its own, so it is off by default.
reset_timedout_connection off;

# To limit connections you need a shared-memory zone that counts them; "zone=" names it so limit_conn below can refer to it. $binary_remote_addr stores the client address in binary form; 1m can hold roughly 32,000 sessions.
limit_conn_zone $binary_remote_addr zone=addr:5m;

# Maximum number of connections for the given key. The key here is addr and the value 100, so each IP address may open at most 100 simultaneous connections.
limit_conn addr 100;

# Limit each connection to 100 KB/s. If one IP is allowed two concurrent connections, that IP is effectively limited to 200 KB/s.
limit_rate 100k;

# The include directive pulls another file into this one. Here it loads the file-extension-to-MIME-type mapping table that nginx uses to set the Content-Type response header; when an extension is not in the table, the default_type below is used.
include /etc/nginx/mime.types;

# Default MIME type
default_type text/html;

# Default encoding
charset UTF-8;

# gzip_static lets nginx serve pre-compressed .gz files, saving the CPU cost of gzipping on every request. When enabled, nginx first checks whether a .gz version of the requested static file exists and, if so, returns its contents directly.
gzip_static off;

# Enable gzip compression.
gzip on;

# Disable gzip for IE6 clients.
gzip_disable "msie6";

# Compression behaviour when nginx acts as a reverse proxy. Possible values: off | expired | no-cache | no-store | private | no_last_modified | no_etag | auth | any
gzip_proxied any;

# Minimum response size (taken from the Content-Length header) to compress. A value above 1k is recommended; compressing responses smaller than that can add more overhead than it saves.
gzip_min_length 1024;

# Compression level, any number from 1 to 9; 9 is the slowest but gives the best compression ratio.
gzip_comp_level 5;

# Number and size of the buffers that hold the gzip output stream. "4 16k" means four 16 KB buffers. If unset, nginx allocates a buffer the same size as the original response.
gzip_buffers 4 16k;

# MIME types to compress. By default nginx compresses only text/html.
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

# Open-file cache; disabled by default. max sets the number of cached entries (recommended to match the open-file limit); inactive is how long an entry may go unrequested before it is dropped.
open_file_cache max=65535 inactive=30s;

# How often to re-validate cached open-file information
open_file_cache_valid 30s;

# Minimum number of uses within the open_file_cache 'inactive' window for a file descriptor to stay cached; as long as a file keeps being accessed within the 30 s window, its cache entry survives.
open_file_cache_min_uses 2;

# Whether to cache file-lookup errors
open_file_cache_errors on;

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}

4. Server module

server {
# Listening port. nginx uses the request's Host header to decide which server block handles the request; if no server_name matches, the first block in the configuration is used. Adding default_server marks this block as the explicit default.
#listen 80;
listen 80 default_server;

# Multiple domain names can be listed, separated by spaces
server_name www.test.com test.com;
root /user/share/nginx/html/test;

# 404 page
error_page 404 /404.html;

# SSL configuration; enable when needed.
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;

location / {
index index.html index.php;
}

# Image cache time
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
expires 10d;
}

# JS and CSS cache time
location ~ .*\.(js|css)?$ {
expires 1h;
}

location ~ [^/]\.php(/|$) {
fastcgi_index index.php;
# Enable PATH_INFO support: the regex below splits the URI into $fastcgi_script_name and $fastcgi_path_info.
# For example, for a request to index.php/id/1: without this line, fastcgi_script_name is /index.php/id/1 and fastcgi_path_info is empty;
# with it, fastcgi_script_name is index.php and fastcgi_path_info is /id/1.
fastcgi_split_path_info ^(.+\.php)(.*)$;

# This becomes $_SERVER['SCRIPT_FILENAME'] in PHP
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;

# Address/port (or socket) the FastCGI server listens on; must match the PHP-FPM configuration.
#fastcgi_pass 127.0.0.1:9000;
fastcgi_pass unix:/var/run/php5-fpm.sock;
include fastcgi_params;
}
}


2. Common hardening measures
1. Prevent an uploaded web shell from being executed: configure the upload directories in nginx so that PHP is not interpreted there.
2. Prevent a web shell from reading files outside the site directory: remove the php-fpm account's read permission on other directories.
3. Prevent a web shell from running commands: remove the php-fpm account's execute permission on sh.
4. Keep privileges low if commands do run: do not run php-fpm as root or add it to the root group.

3. Specific configuration
1. Deny access to and execution of PHP files in upload directories:
location ~ /(attachments|upload)/.*\.(php|php5)?$ {
deny all;
}

2. Restrict access by IP

# deny a network
deny 10.0.0.0/24;

# allow-list style
allow 10.0.0.0/24;
deny all;

3. Connection limits keyed on the user's real IP

## Derive the original client IP address
map $http_x_forwarded_for $clientRealIp {
"" $remote_addr;
~^(?P<firstAddr>[0-9\.]+),?.*$ $firstAddr;
}

## Connection limit keyed on the original client IP
limit_conn_zone $clientRealIp zone=TotalConnLimitZone:20m ;
limit_conn TotalConnLimitZone 50;
limit_conn_log_level notice;

## Request-rate limit keyed on the original client IP
limit_req_zone $clientRealIp zone=ConnLimitZone:20m rate=10r/s;
#limit_req zone=ConnLimitZone burst=10 nodelay;
limit_req_log_level notice;

## Server configuration
server {
listen 80;
location ~ \.php$ {
## Queue up to 5 extra requests: 10 requests per second are processed plus a burst of 5; anything beyond roughly 15 in one second gets a 503 straight away.
limit_req zone=ConnLimitZone burst=5 nodelay;

fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
}

}

4. Obtaining the original client IP behind multiple layers of CDN

map $http_x_forwarded_for $clientRealIp {
## No proxy involved: use REMOTE_ADDR directly
"" $remote_addr;
## Otherwise take the first address from X-Forwarded-For with a regex,
## e.g. X-Forwarded-For: 202.123.123.11, 208.22.22.234, 192.168.2.100, ...
## where 202.123.123.11 is the user's real IP and the rest are CDN servers
~^(?P<firstAddr>[0-9\.]+),?.*$ $firstAddr;
}

## The map directive creates the nginx variable $clientRealIp, the original user's real IP address,
## whether the user connects directly or through a chain of CDNs.

5. Hide version information

server_tokens off;
proxy_hide_header X-Powered-By;
# The version string can also be changed at compile time by patching the source.

6. Disable non-essential HTTP methods

if ($request_method !~ ^(GET|HEAD|POST)$ ) {
return 444;
}

7. Deny access to sensitive file extensions

location ~* \.(txt|doc|sql|gz|svn|git)$ {
deny all;
}

8. Set sensible security response headers
add_header Strict-Transport-Security "max-age=31536000";
add_header X-Frame-Options deny;
add_header X-Content-Type-Options nosniff;
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://a.disquscdn.com; img-src 'self' data: https://www.google-analytics.com; style-src 'self' 'unsafe-inline'; frame-src https://disqus.com";

Strict-Transport-Security (HSTS) tells the browser to always use HTTPS for the specified max-age.

X-Frame-Options specifies whether the page may be embedded in an iframe; deny forbids any framing.

9. Reject certain User-Agents

if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
return 403;
}

10. Prevent image hotlinking

valid_referers blocked www.example.com example.com;
if ($invalid_referer) {
rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ http://www.examples.com/banned.jpg last;
}

11. Control buffer sizes to mitigate overflow-style attacks

client_body_buffer_size 1K;
client_header_buffer_size 1k;
client_max_body_size 1k;
large_client_header_buffers 2 1k;

Explanation:

1. client_body_buffer_size 1k (default 8k or 16k): buffer size for the request body of a connection. If the body is larger than the buffer, all or part of it is written to a temporary file.
2. client_header_buffer_size 1k: buffer size for the client request header. In most cases 1k is enough, but clients sending large cookies (e.g. WAP clients) may need more; in that case large_client_header_buffers takes over.
3. client_max_body_size 1k: maximum allowed size of the request body, as reported by the Content-Length header. If the request is larger, the client receives a 413 "Request Entity Too Large" error. Remember that browsers do not know how to display this error.
4. large_client_header_buffers: number and size of the larger buffers used for big request headers. A single header field cannot exceed one buffer; if the client sends a bigger one, nginx returns 414 "Request URI too large".
client_body_timeout 10;
client_header_timeout 10;
keepalive_timeout 5 5;
send_timeout 10;

1. client_body_timeout 10: timeout for reading the request body. If the client sends nothing within this time while the body is being read, nginx returns a 408 "Request time out" error.
2. client_header_timeout 10: timeout for reading the request header, with the same 408 behaviour.
3. keepalive_timeout 5 5: the first value is how long an idle keep-alive connection stays open before the server closes it; the second (optional) value is sent in the Keep-Alive: timeout= response header so browsers know when to close the connection themselves. The two values may differ; if the second is omitted, nginx sends no Keep-Alive header.
4. send_timeout 10: timeout for sending the response once the connection is established; if the client accepts nothing for this long, nginx closes the connection.

12. Control concurrent connections

limit_zone slimits $binary_remote_addr 5m;
limit_conn slimits 5;

13. sysctl.conf hardening

# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Turn on syncookies for SYN flood attack protection
net.ipv4.tcp_syncookies = 1

# Turn on and log spoofed, source routed, and redirect packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1

# No source routed packets here
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0

# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Make sure no one can alter the routing tables
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0

# Don’t act as a router
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Turn on Exec Shield
kernel.exec-shield = 1
kernel.randomize_va_space = 1

# Tune IPv6
net.ipv6.conf.default.router_solicitations = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 0
net.ipv6.conf.default.accept_ra_pinfo = 0
net.ipv6.conf.default.accept_ra_defrtr = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.default.max_addresses = 1

# Optimization for port use for LBs
# Increase system file descriptor limit
fs.file-max = 65535

# Allow for more PIDs (to reduce rollover problems); may break some programs (default is 32768)
kernel.pid_max = 65536

# Increase system IP port limits
net.ipv4.ip_local_port_range = 2000 65000

# Increase TCP max buffer size settable using setsockopt()
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608

# Increase Linux auto tuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
# Tcp Windows etc
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1
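
To apply the settings without a reboot after editing /etc/sysctl.conf:

/sbin/sysctl -p /etc/sysctl.conf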

14, Firewall Rules
/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 15 -j DROP
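
These two rules drop any source address that opens more than 15 new connections to port 80 within 60 seconds. The rules and their packet counters can be checked with:

/sbin/iptables -L INPUT -n -v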

15, Nginx outbound rule (iptables owner match)

/sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner vivek -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
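
The ACCEPT rule above only has a restrictive effect if other outbound HTTP traffic is blocked; a possible companion rule, assuming the default OUTPUT policy is ACCEPT and the example user "vivek" from above, is:

/sbin/iptables -A OUTPUT -o eth0 -p tcp --dport 80 -m state --state NEW -j DROP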

 

 

nginx

nginx.conf

www.rmohan.com  ->  http://192.168.1.18:8080
www.rmohan.net  ->  http://192.168.1.18:8181

server1:

server {
listen      80;
server_name  www.rmohan.com;

location / {

proxy_set_header  Host rmohan.com;
proxy_redirect off;
proxy_set_header  X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://192.168.95.180:8080;

}
}

server2:

server {
listen      80;
server_name  www.rmohan.net;

location / {

proxy_set_header  Host rmohan.net;
proxy_redirect off;
proxy_set_header  X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://192.168.95.181:8181;

}
}
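
After defining both server blocks, the configuration can be syntax-checked and reloaded; a typical sequence (the service name may differ by distribution) is:

nginx -t
service nginx reload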

postfix

  • Biglobe enforces OP25B (Outbound Port 25 Blocking), so outgoing mail is relayed through Biglobe's relay server, with SASL authentication used for submission.
  • SPF is used for sender domain authentication.
  • S25R, greylisting and tarpitting are used to block access from suspicious servers.
  • To avoid patching Postfix itself, everything is done through Debian packages, plug-ins and configuration changes.

Biglobe relay

  • Install the SASL modules:
     #aptitude install libsasl2-modules sasl2-bin
    
  • Relay through the Biglobe relay server and configure the Postfix SMTP client to authenticate to it with SASL.
    main.cf

     # BIGLOBE relay server
     relayhost = [#####.biglobe.ne.jp]
    
     # Enable SASL authentication in the Postfix SMTP client.
     smtp_sasl_auth_enable = yes
    
     # SMTP client SASL password lookup table
     smtp_sasl_password_maps = hash:/etc/postfix/isp_passwd
    
     # Authentication can fail if the ISP's server-side SASL mechanisms and the local
     # ones do not match, so pin the mechanisms to be used.
     smtp_sasl_mechanism_filter = cram-md5, login, plain
    
  • Put the password used for sending into the lookup table.
    isp_passwd

     [#####.biglobe.ne.jp] *****@bma.biglobe.ne.jp:*******
    
     #postmap isp_passwd
    
  • Restart Postfix to complete the relay configuration.
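  • A possible restart sequence, assuming the Debian sysvinit service name (the password file should also be protected, since it contains credentials):
     #chmod 600 /etc/postfix/isp_passwd
     #service postfix restart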

Setting up SMTP AUTH

  • Install the necessary modules:
     #aptitude install libsasl2-modules sasl2-bin
    
  • Add an authentication user:
     #saslpasswd2 -u [domain name] -c [user name]
    
  • Make the SASL database readable by Postfix (including its chroot):
     #chgrp postfix /etc/sasldb2
     #chmod 640 /etc/sasldb2
     #ln /etc/sasldb2 /var/spool/postfix/etc
    
  • Postfix configuration:
    main.cf

     smtpd_sasl_auth_enable = yes
     smtpd_sasl_local_domain = example.com
     smtpd_sasl_security_options = noanonymous, noplaintext
    
     smtpd_recipient_restrictions =
       permit_mynetworks,
       permit_sasl_authenticated,
       reject_unauth_destination
    
  • Restart Postfix to complete the SMTP AUTH configuration.
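  • Before restarting, the users registered in /etc/sasldb2 can be listed with the sasldblistusers2 tool from the sasl2-bin package:
     #sasldblistusers2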

Access restrictions for suspicious servers

  • Limit the number of allowed errors to slow down brute-force account probing: delay responses by 70 seconds once a client exceeds five errors, and disconnect it after eight errors.
    main.cf

     smtpd_soft_error_limit = 5
     smtpd_hard_error_limit = 8
     smtpd_error_sleep_time = 70
    
     smtpd_delay_reject = yes
    
  • Install the policy server used for greylisting (see the listener-port note after this list):
     #apt-get install postgrey
    

    main.cf

     smtpd_restriction_classes = check_greylist
     check_greylist = check_policy_service inet:127.0.0.1:60000
    
  • At RCPT time, clients whose hostnames match the S25R patterns are greylisted (asked to retry). Clients that do retry are additionally given a tarpitting response delay.
    In other words, this screens senders with tarpitting on top of the Rgrey (S25R + greylisting) approach.
    main.cf

     # RCPT check (Greylisting) & Tarpit
     smtpd_recipient_restrictions =
                     permit_mynetworks
                     reject_unauth_destination
                     check_client_access regexp:/etc/postfix/check_client_fqdn_greylist
                     check_client_access regexp:/etc/postfix/check_client_fqdn_tarpit
                     check_recipient_access hash:/etc/postfix/recipient_restrictions
    

    check_client_fqdn_greylist

     /^unknown$/                                    check_greylist
     /^[^\.]*[0-9][^0-9\.]+[0-9]/                   check_greylist
     /^[^\.]*[0-9]{5}/                              check_greylist
     /^([^\.]+\.)?[0-9][^\.]*\.[^\.]+\..+\.[a-z]/   check_greylist
     /^[^\.]*[0-9]\.[^\.]*[0-9]-[0-9]/              check_greylist
     /^[^\.]*[0-9]\.[^\.]*[0-9]\.[^\.]+\..+\./      check_greylist
     /^(dhcp|dialup|ppp|adsl)[^\.]*[0-9]/           check_greylist
    

    check_client_fqdn_tarpit

     /^unknown$/                                    sleep 70
     /^[^\.]*[0-9][^0-9\.]+[0-9]/                   sleep 70
     /^[^\.]*[0-9]{5}/                              sleep 70
     /^([^\.]+\.)?[0-9][^\.]*\.[^\.]+\..+\.[a-z]/   sleep 70
     /^[^\.]*[0-9]\.[^\.]*[0-9]-[0-9]/              sleep 70
     /^[^\.]*[0-9]\.[^\.]*[0-9]\.[^\.]+\..+\./      sleep 70
     /^(dhcp|dialup|ppp|adsl)[^\.]*[0-9]/           sleep 70
    
  • After creating check_client_fqdn_greylist and check_client_fqdn_tarpit, restart Postfix to complete the setup (regexp: tables are read directly by Postfix, so postmap is not needed for them).
  • If a ready-made taRgrey implementation for Postfix turns up, the plan is to migrate to it; at the moment it only seems to exist as a patch.
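  • Note that the check_greylist class above expects a policy server on local port 60000, while the Debian postgrey package listens on port 10023 by default. A possible adjustment, assuming Debian's /etc/default/postgrey mechanism, is:
     # /etc/default/postgrey
     POSTGREY_OPTS="--inet=127.0.0.1:60000"
    
     #service postgrey restart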

Sender domain authentication with SPF

  • Get a script that performs the SPF check: download postfix-policyd-spf from the SPF Project page.
  • The script uses the Perl Mail::SPF::Query library, so install it:
     apt-get install libmail-spf-perl libmail-spf-query-perl
    
  • Register the script as a service that Postfix can call.
    master.cf

     policy  unix  -       n       n       -       -       spawn
       user=nobody argv=/usr/bin/perl [location of the script that was placed above]
    
  • Configure the SPF check (and RBL checks) to run at connection time.
    main.cf

     # Connection check
     smtpd_client_restrictions =
                     permit_mynetworks
                     reject_rbl_client bl.spamcop.net
                     reject_rbl_client all.rbl.jp
                     check_policy_service unix:private/policy
                     check_client_access hash:/etc/postfix/client_restrictions
    
  • The setup is complete once the server adds a Received-SPF: header to newly received mail.
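  • For reference, a passing check appears in the message headers roughly as follows; these are illustrative values only, and the exact wording depends on the policy script:
     Received-SPF: pass (example.com: 192.0.2.1 is authorized to use 'sender@example.com' in 'mfrom' identity)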

Premise

  • Create a self-signed CA, then create a server certificate signed by that CA.
  • Import the CA certificate into the client's trusted store (carried over on a USB stick or the like) so that the certificate chain can be verified.
  • That said, the whole setup is fairly ad hoc.

X.509 v3 extension configuration file

  • Create a file that defines the X.509 version 3 extension properties.
    ext.cnf

     # Defaults used when creating a certificate request (CSR)
     [req]
       default_bits = 2048
       distinguished_name = req_distinguished_name
       attributes = req_attributes
       default_md = sha1
       string_mask = nombstr
    
     # Default values for the fields written into the certificate request (CSR)
     [req_distinguished_name]
       countryName = Country Name (2 letter code)
       countryName_default = JP
       stateOrProvinceName = State or Province Name (full name)
       stateOrProvinceName_default = 
       localityName = Locality Name (eg, city)
       localityName_default = 
       0.organizationName = Organization Name (eg, company)
       0.organizationName_default = 
       organizationalUnitName = Organizational Unit Name (eg, section)
       commonName = Common Name (*** IMPORTANT ***)
       commonName_default =
       emailAddress = Email Address
       emailAddress_default =
    
     # Attributes used for the certificate request (CSR); purpose not entirely clear
     [req_attributes]
       challengePassword = A challenge password
       challengePassword_min = 4
       challengePassword_max = 20
       unstructuredName = An optional company name
    
     # V3 extensions for the CA certificate
     [v3_ca]
       basicConstraints = CA:true
       subjectKeyIdentifier = hash
       authorityKeyIdentifier = keyid:always,issuer:always
       keyUsage = cRLSign, keyCertSign
       nsCertType = sslCA, emailCA
       # questionable: extendedKeyUsage = 1.3.6.1.5.5.7.3.2, 1.3.6.1.5.5.7.3.1, 1.3.6.1.5.5.7.3.4
    
     # V3 extensions for the server certificate
     [cert_server]
       basicConstraints = CA:FALSE
       subjectKeyIdentifier = hash
       authorityKeyIdentifier = keyid:always,issuer:always
       keyUsage = digitalSignature, keyEncipherment
       nsCertType = server
       # questionable: extendedKeyUsage = 1.3.6.1.5.5.7.3.1
    

Task Command

  • Create a private key for the CA: 2048-bit RSA, with the key itself encrypted using 192-bit AES.
     $ openssl genrsa -aes192 -out ca.key 2048
    
  • Create the CA certificate as an X.509 v3 certificate. The validity period is roughly 10 years. Besides the PEM, also output an easy-to-read text dump.
     $ openssl req -new -x509 -days 3652 \
         -key ca.key -out ca.crt \
         -config ext.cnf -extensions v3_ca -text
    
  • Create the file holding the serial numbers of the certificates managed by the CA; the signing step below fails if it is missing.
     $ echo "00" > ca.srl
    
  • Create a private key for the server: 2048-bit RSA, with no encryption of the key itself (otherwise Apache asks for the passphrase at startup).
     $ openssl genrsa -out server.key 2048
    
  • Create a certificate request for the server, again with a text dump in addition to the PEM.
     $ openssl req -new \
         -key server.key -out server.csr \
         -config ext.cnf -text
    
  • Create an X.509 v3 server certificate signed with the CA key. The validity period is roughly 700 days.
     $ openssl x509 -req -days 700 \
         -in server.csr -out server.crt \
         -CA ca.crt -CAkey ca.key \
         -extfile ext.cnf -extensions cert_server -CAserial ca.srl
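    
  • (Optional check) Verify that the resulting server certificate chains correctly to the CA:
     $ openssl verify -CAfile ca.crt server.crt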
    

Incorporating into Apache

  • Finally, incorporate the certificate into Apache.
    Add the following to the appropriate VirtualHost configuration:

     Listen 443
    
     ## SSL Virtual Host Context
     <VirtualHost *:443>
       SSLEngine on
    
       SSLCertificateFile server.crt
       SSLCertificateKeyFile server.key
       SSLCACertificateFile ca.crt
      
       DocumentRoot ************
      
       # ... remaining settings as appropriate
     </VirtualHost>
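    
  • On a Debian-style system the SSL module must also be enabled and Apache restarted; a possible sequence, assuming the apache2 packaging, is:
     # a2enmod ssl
     # apache2ctl configtest
     # service apache2 restart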