Fixing slow Tomcat startup
Tomcat took several minutes to start today, which is worth investigating, so I checked the logs and found the time was spent on random-number generation. Tomcat computes session IDs with the SHA1 algorithm, which needs a secret key; to improve security, Tomcat generates that key from random numbers at startup.
First, the environment
System version: CentOS 7.2
Software version: Tomcat 8
Second, log analysis; the relevant entries were:
04-May-2017 08:07:49.623 INFO [localhost-startStop-1] org.apache.catalina.util.SessionIdGeneratorBase.createSecureRandom Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [55,507] milliseconds.
04-May-2017 08:07:49.653 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /application/apache-tomcat-8.0.27/webapps/ROOT has finished in 165,935 ms
The main reason: key generation blocked while reading random numbers from /dev/random, which makes Tomcat start slowly or fail.
# The strength of the generated random numbers depends on the entropy available; look up the details yourself, they are not elaborated here.
To check whether there is enough entropy available to generate random numbers:
[root@qiuyuetao tools]# cat /proc/sys/kernel/random/entropy_avail
7
To speed up the rate at which /dev/random produces random numbers, you can make the peripherals generate lots of interrupts (network traffic, key presses, mouse movement, typing a few commands at the shell), which feeds the entropy pool.
cat /dev/random ## this consumes entropy
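As a quick illustration of the difference between the two devices (standard Linux paths; the entropy number will differ per machine), /dev/urandom never blocks, while /dev/random may block until enough entropy accumulates. A minimal sketch:

```shell
# Show the currently available entropy (Linux-specific pseudo-file)
cat /proc/sys/kernel/random/entropy_avail

# Reading from /dev/urandom never blocks: grab 16 bytes and print them as hex
head -c 16 /dev/urandom | od -An -tx1
```

Running the same `head -c 16` against /dev/random on an idle, entropy-starved server is exactly what hangs Tomcat's SecureRandom initialization.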
Third, there are three ways to deal with this and optimize startup:
Method 1: use the rngd daemon to feed the entropy pool (recommended)
grep rdrand /proc/cpuinfo # requires CPU support for RDRAND
yum install rng-tools # install the rngd service (feeds the entropy pool)
systemctl start rngd # start the service
Method 2: modify the JRE configuration file
vim $JAVA_HOME/jre/lib/security/java.security
change
securerandom.source=file:/dev/random
to
securerandom.source=file:/dev/urandom
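The same change can be scripted with sed. This sketch edits a throwaway copy rather than the real $JAVA_HOME file; the temp path is only for illustration:

```shell
# Work on a temporary copy instead of the real $JAVA_HOME/jre/lib/security/java.security
tmp=$(mktemp)
echo 'securerandom.source=file:/dev/random' > "$tmp"

# Point the JRE at the non-blocking device
sed -i 's|^securerandom.source=file:/dev/random$|securerandom.source=file:/dev/urandom|' "$tmp"

cat "$tmp"   # securerandom.source=file:/dev/urandom
rm -f "$tmp"
```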
Method 3: configure the JRE to use a non-blocking entropy source
vim $TOMCAT_HOME/bin/catalina.sh
if [[ "$JAVA_OPTS" != *-Djava.security.egd=* ]]; then
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/urandom"
fi
## The system property egd stands for entropy gathering daemon
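The catalina.sh guard above simply avoids appending the flag twice. A POSIX-shell equivalent of that check (using case instead of bash's [[ ]]), runnable on its own:

```shell
JAVA_OPTS=""   # assume nothing set yet

# Append -Djava.security.egd only if it is not already present
case "$JAVA_OPTS" in
  *-Djava.security.egd=*) ;;  # already configured, leave as is
  *) JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/urandom" ;;
esac

echo "$JAVA_OPTS"
```

Running the case block a second time leaves JAVA_OPTS unchanged, which is the whole point of the guard.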
CentOS 6.8: compiling and installing LNMP in brief
Idea: based on the company web site's requirements and the Linux system information, select the appropriate packages and install them.
First, view system information
# uname -a # view kernel / operating system / CPU information
# head -n 1 /etc/issue # view the operating system version
# grep MemTotal /proc/meminfo # view the amount of memory
# fdisk -l # view all partitions
Second, the installation itself
Install the usual build dependencies:
yum -y install gcc gcc-c++ autoconf libjpeg libjpeg-devel libpng libpng-devel freetype freetype-devel libxml2 libxml2-devel glibc glibc-devel glib2 glib2-devel bzip2 bzip2-devel ncurses ncurses-devel curl curl-devel e2fsprogs e2fsprogs-devel krb5 krb5-devel libidn libidn-devel openldap openldap-devel openldap-clients openldap-servers make zlib-devel pcre-devel openssl-devel libtool* git tree bison perl gd gd-devel
Install libiconv library
tar zxvf libiconv-1.14.tar.gz
cd libiconv-1.14
./configure --prefix=/usr/local/libiconv
make && make install
cd ..
Install the libmcrypt, mhash, and mcrypt libraries
tar zxvf libmcrypt-2.5.8.tar.gz
cd libmcrypt-2.5.8
./configure
make && make install
cd ..
tar jxvf mhash-0.9.9.9.tar.bz2
cd mhash-0.9.9.9
./configure
make && make install
cd ..
tar zxvf mcrypt-2.6.8.tar.gz
cd mcrypt-2.6.8
./configure
make && make install
cd ..
Install the CMake build tool
tar zxvf cmake-3.7.2.tar.gz
cd cmake-3.7.2
./bootstrap && make && make install
cd ..
Installing MySQL
# remove any preinstalled MySQL
rpm -e mysql --nodeps
groupadd mysql && useradd -g mysql -M mysql
tar zxvf mysql-5.6.24.tar.gz
cd mysql-5.6.24
cmake \
-DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DMYSQL_DATADIR=/usr/local/mysql/data \
-DSYSCONFDIR=/etc \
-DMYSQL_USER=mysql \
-DWITH_MYISAM_STORAGE_ENGINE=1 \
-DWITH_INNOBASE_STORAGE_ENGINE=1 \
-DWITH_ARCHIVE_STORAGE_ENGINE=1 \
-DWITH_MEMORY_STORAGE_ENGINE=1 \
-DWITH_READLINE=1 \
-DMYSQL_UNIX_ADDR=/var/lib/mysql/mysql.sock \
-DMYSQL_TCP_PORT=3306 \
-DENABLED_LOCAL_INFILE=1 \
-DENABLE_DOWNLOADS=1 \
-DWITH_PARTITION_STORAGE_ENGINE=1 \
-DEXTRA_CHARSETS=all \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_DEBUG=0 \
-DMYSQL_MAINTAINER_MODE=0 \
-DWITH_SSL:STRING=bundled \
-DWITH_ZLIB:STRING=bundled
make && make install
chown -R mysql:mysql /usr/local/mysql
# provide the my.cnf configuration file
cp support-files/my-default.cnf /etc/my.cnf
vi /etc/my.cnf
[mysqld]
datadir = /usr/local/mysql/data # MySQL data directory
# initialize the database
/usr/local/mysql/scripts/mysql_install_db --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data --user=mysql
cp support-files/mysql.server /etc/init.d/mysqld
chmod +x /etc/init.d/mysqld
# start MySQL and enable it at boot
service mysqld start
chkconfig mysqld on
echo 'PATH=/usr/local/mysql/bin:$PATH' >> /etc/profile
export PATH
source /etc/profile
# set the root password
/usr/local/mysql/bin/mysqladmin -uroot -p password
cd ..
Installing PHP
tar zxvf php-5.6.30.tar.gz
cd php-5.6.30
./configure \
--prefix=/usr/local/php \
--with-fpm-user=www --with-fpm-group=www \
--with-config-file-path=/usr/local/php/etc \
--with-mhash --with-mcrypt --enable-bcmath \
--enable-mysqlnd --with-mysql --with-mysqli --with-pdo-mysql \
--with-gd --enable-gd-native-ttf --with-jpeg-dir --with-png-dir --with-freetype-dir \
--enable-fpm \
--enable-mbstring \
--enable-pcntl \
--enable-sockets \
--enable-opcache \
--with-openssl \
--with-zlib \
--with-curl \
--with-libxml-dir \
--with-iconv-dir
make && make install
# provide the php-fpm configuration file
mv /usr/local/php/etc/php-fpm.conf.default /usr/local/php/etc/php-fpm.conf
# provide the php.ini configuration file
cp php.ini-production /usr/local/php/etc/php.ini
# install the php-fpm init script
cp sapi/fpm/init.d.php-fpm /etc/init.d/php-fpm
chmod +x /etc/init.d/php-fpm
chkconfig php-fpm on
groupadd www && useradd -d /home/www -g www www
# start php-fpm
service php-fpm start
cd ..
vim /etc/profile
PATH=/usr/local/php/bin:/usr/local/mysql/bin:$PATH
export PATH
source /etc/profile
Install Nginx
tar zxvf nginx-1.10.2.tar.gz
cd nginx-1.10.2
./configure \
--user=www \
--group=www \
--prefix=/usr/local/nginx \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--pid-path=/var/run/nginx.pid \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-http_ssl_module \
--with-pcre
make && make install
Add the Nginx boot management script /etc/init.d/nginx
#!/bin/sh
#
# nginx – this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: nginx is an HTTP(S) server, HTTP(S) reverse \
# proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /etc/sysconfig/nginx
# pidfile: /var/run/nginx.pid
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
nginx="/usr/sbin/nginx"
prog=$(basename $nginx)
NGINX_CONF_FILE="/etc/nginx/nginx.conf"
[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx
lockfile=/var/lock/subsys/nginx
make_dirs() {
# make required directories
user=`$nginx -V 2>&1 | grep "configure arguments:.*--user=" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
if [ -n "$user" ]; then
if [ -z "`grep $user /etc/passwd`" ]; then
useradd -M -s /bin/nologin $user
fi
options=`$nginx -V 2>&1 | grep 'configure arguments:'`
for opt in $options; do
if [ `echo $opt | grep '.*-temp-path'` ]; then
value=`echo $opt | cut -d "=" -f 2`
if [ ! -d "$value" ]; then
# echo "creating" $value
mkdir -p $value && chown -R $user $value
fi
fi
done
fi
}
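The sed call in make_dirs pulls the --user value out of `nginx -V` output. Fed a sample configure line (hypothetical values, for illustration), the extraction works like this:

```shell
# Sample of what `nginx -V 2>&1` prints (hypothetical configure arguments)
sample='configure arguments: --prefix=/usr/local/nginx --user=www --group=www'

# Same extraction the init script performs: keep only the --user= value
user=$(echo "$sample" | grep "configure arguments:.*--user=" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g')
echo "$user"   # www
```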
start() {
[ -x $nginx ] || exit 5
[ -f $NGINX_CONF_FILE ] || exit 6
make_dirs
echo -n $"Starting $prog: "
daemon $nginx -c $NGINX_CONF_FILE
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
}
stop() {
echo -n $"Stopping $prog: "
killproc $prog -QUIT
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
restart() {
configtest || return $?
stop
sleep 1
start
}
reload() {
configtest || return $?
echo -n $"Reloading $prog: "
killproc $nginx -HUP
RETVAL=$?
echo
}
force_reload() {
restart
}
configtest() {
$nginx -t -c $NGINX_CONF_FILE
}
rh_status() {
status $prog
}
rh_status_q() {
rh_status >/dev/null 2>&1
}
case "$1" in
start)
rh_status_q && exit 0
$1
;;
stop)
rh_status_q || exit 0
$1
;;
restart|configtest)
$1
;;
reload)
rh_status_q || exit 7
$1
;;
force-reload)
force_reload
;;
status)
rh_status
;;
condrestart|try-restart)
rh_status_q || exit 0
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
exit 2
esac
Usage Guidelines
chmod +x /etc/init.d/nginx
service nginx start # start the nginx service
chkconfig nginx on # enable at boot
cd ..
At this point, the LNMP environment setup is complete.
Installing Nginx on Red Hat Enterprise Linux 7.3
I. Introduction
Nginx ("engine x") is a high-performance web server and reverse proxy developed by the Russian programmer Igor Sysoev; it is also an IMAP/POP3/SMTP proxy server. Under high concurrent connection loads, Nginx is a good alternative to the Apache server.
Second, prepare
1, environment
Platform: Red Hat Enterprise Linux Server Release 7.3 (Maipo)
Kernel: 3.10.0-514.el7.x86_64
2, install build tools and libraries
yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel
3, install pcre
PCRE provides the regular-expression support that Nginx's rewrite module depends on.
Check whether pcre is installed:
# pcre-config --version
If a version number is printed, it is already installed.
If not, install it with the following steps:
1) Download
https://sourceforge.net/projects/pcre/files/pcre/
2) extracting installation package:
# tar zxvf pcre-8.35.tar.gz
3) compile and install
# cd pcre-8.35
# ./configure
# make && make install
Third, the installation
1, download the installation package nginx
http://nginx.org/download/
2, extract
# tar zxvf nginx-1.10.2.tar.gz
3. Compile
# ./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-pcre
4, installation
# make
# make install
5, test
View nginx version
# /usr/local/nginx/sbin/nginx -v
If version information is displayed, the installation succeeded.
Fourth, the configuration
1. Create a user
Create the user ruready that Nginx will run as:
# /usr/sbin/groupadd ruready
# /usr/sbin/useradd -g ruready ruready
2. Edit nginx.conf
# vi /usr/local/nginx/conf/nginx.conf
user ruready ruready;
worker_processes 2;
error_log /usr/local/nginx/logs/error.log crit; # log critical errors only
pid /usr/local/nginx/logs/nginx.pid;
#Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 65535;
events {
use epoll;
worker_connections 65535;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /usr/local/nginx/logs/access.log main;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
# server
server {
listen 80;
server_name localhost;
charset utf-8;
access_log /usr/local/nginx/logs/host.access.log main;
location / {
root html;
index index.html index.htm;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
root html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
include fastcgi_params;
}
# deny access to .htaccess files, if Apache’s document root
# concurs with nginx’s one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
}
3. Check the correctness of the configuration file nginx.conf
# /usr/local/nginx/sbin/nginx -t
Fifth, start
1, the start command
# /usr/local/nginx/sbin/nginx
2. Test with the links command
links 127.0.0.1
Sixth, commonly used commands
/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf # start with the specified configuration file
/usr/local/nginx/sbin/nginx -s reload # reload the configuration file
/usr/local/nginx/sbin/nginx -s reopen # reopen the log files
/usr/local/nginx/sbin/nginx -s stop # stop Nginx
Seventh, other
1. Start at boot
echo "/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf" >> /etc/rc.local
2. Add nginx as a service
touch /etc/init.d/nginx
chmod 755 /etc/init.d/nginx // make the nginx script executable
chkconfig --add nginx // add the script to chkconfig
chkconfig --level 35 nginx on // start nginx at runlevels 3 and 5
The nginx script reads as follows:
#!/bin/sh
#
# nginx – this script starts and stops the nginx daemon
#
# chkconfig: – 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
# proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /etc/sysconfig/nginx
# pidfile: /var/run/nginx.pid
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)
NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"
[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx
lockfile=/var/lock/subsys/nginx
start() {
[ -x $nginx ] || exit 5
[ -f $NGINX_CONF_FILE ] || exit 6
echo -n $"Starting $prog: "
daemon $nginx -c $NGINX_CONF_FILE
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
}
stop() {
echo -n $"Stopping $prog: "
killproc $prog -QUIT
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
restart() {
configtest || return $?
stop
sleep 1
start
}
reload() {
configtest || return $?
echo -n $"Reloading $prog: "
killproc $nginx -HUP
RETVAL=$?
echo
}
force_reload() {
restart
}
configtest() {
$nginx -t -c $NGINX_CONF_FILE
}
rh_status() {
status $prog
}
rh_status_q() {
rh_status >/dev/null 2>&1
}
case "$1" in
start)
rh_status_q && exit 0
$1
;;
stop)
rh_status_q || exit 0
$1
;;
restart|configtest)
$1
;;
reload)
rh_status_q || exit 7
$1
;;
force-reload)
force_reload
;;
status)
rh_status
;;
condrestart|try-restart)
rh_status_q || exit 0
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
exit 2
esac
Keepalived Nginx Dual Primary High Availability Load Balancing Cluster
Purpose of the experiment: use keepalived to build a dual-master, highly available load-balancing cluster with Nginx.
Experimental environment: two Nginx proxies (both masters, each with two NICs: eth0 on the internal network, eth1 on the external network), two web servers (the load-balanced backends), and one client used to verify the results.
Note: so the results are not affected, stop iptables and disable SELinux before starting the experiment.
Steps:
First, configure the IPs
1. Configure the IP of host A
# ip addr add 192.168.1.2/24 dev eth0
2. Configure the IP of host B
# ip addr add 192.168.1.23/24 dev eth0
3. Configure the IP of host C
# ip addr add 192.168.1.3/24 dev eth0
4. Configure the IP of host D
# ip addr add 192.168.1.33/24 dev eth0
Second, configure the web service (hosts C and D get the same configuration; only the IP address in the default home page changes, to each machine's own IP, so responses can be told apart)
1. Install apache
# yum -y install httpd
2. Create a default home page
# vim /var/www/html/index.html
<h1>192.168.1.3</h1>
3. Start apache
# service httpd start
Third, configure the sorry_server (this service runs on the Nginx proxy hosts; both proxies get the same configuration, again with only the IP in the default home page changed)
1. Install apache
# yum -y install httpd
2. Create a default home page
# vim /var/www/html/index.html
<h1>sorry_server: 192.168.1.2</h1>
3. Change the listening port to 8080 to avoid conflicting with the port nginx listens on
# vim /etc/httpd/conf/httpd.conf
Listen 8080
4. Start the apache service
# service httpd start
Fourth, configure the proxies (both Nginx proxies get the same configuration)
1. Install nginx
# yum -y install nginx
2. Define the upstream cluster group in the http {} section:
# vim /etc/nginx/nginx.conf
http {
    upstream websrvs {
        server 192.168.1.3:80;
        server 192.168.1.33:80;
        server 127.0.0.1:8080 backup;
    }
}
3. Reference the cluster group from the location {} block of the server {} section:
# vim /etc/nginx/conf.d/default.conf
server {
    location / {
        proxy_pass http://websrvs;
        index index.html;
    }
}
4. Start the service
# service nginx start
Fifth, configure keepalived
A on the host operation
1. Install keepalived
# Yum -y install keepalived
2. Edit the A host’s configuration file /etc/keepalived/keepalived.conf, as follows:
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id CentOS6
    vrrp_mcast_group4 224.0.100.39
}
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -5
}
vrrp_script chk_nginx {
    script "killall -0 nginx && exit 0 || exit 1"
    interval 1
    weight -5
    fall 2
    rise 2
}
vrrp_instance ngx {
    state MASTER
    interface eth1
    virtual_router_id 14
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass MDQ41fTp
    }
    virtual_ipaddress {
        192.168.20.100/24 dev eth1
    }
    track_script {
        chk_down
        chk_nginx
    }
}
vrrp_instance ngx2 {
    state BACKUP
    interface eth1
    virtual_router_id 15
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass XYZ41fTp
    }
    virtual_ipaddress {
        192.168.20.200/24 dev eth1
    }
    track_script {
        chk_down
        chk_nginx
    }
}
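The chk_nginx health check relies on `killall -0` semantics: signal 0 probes whether a process exists without actually signaling it. The same liveness test can be sketched with `kill -0` against a PID we know exists; here the current shell's PID ($$) stands in for nginx's real PID, which keepalived effectively checks by process name:

```shell
# Probe a process with signal 0: succeeds if the process exists, fails otherwise.
# $$ (this shell) is a stand-in for nginx's PID in this sketch.
if kill -0 "$$" 2>/dev/null; then
  echo "process alive"
else
  echo "process gone"
fi
```

When the probe fails, vrrp_script subtracts `weight` from the instance priority, letting the other master take over the VIP.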
Host B gets the same configuration; only the following places change:
vrrp_instance ngx {
    state BACKUP
    priority 98
}
vrrp_instance ngx2 {
    state MASTER
    priority 100
}
Sixth, simulate a failure and verify the results
1. Start the keepalived service on both Nginx proxies
# service keepalived start
2. Visit 192.168.20.100; the back-end web servers should answer the requests in round-robin fashion
MongoDB replica set setup (Replication Sets)
Master-Slave replication: the master only needs one extra startup parameter (-master), while the other instance starts with -slave and -source; that alone achieves synchronization. The latest versions of MongoDB no longer recommend this scheme.
Replica Sets: a feature introduced for developers in MongoDB 1.6. Replica sets are more capable than the old replication: they add automatic failover and automatic recovery of member nodes, keep the data identical across the member databases, and greatly reduce maintenance cost. Auto-sharding has explicitly stated that it does not support replication pairs and recommends replica sets instead; replica set failover is fully automatic.
A replica set is structurally very similar to a cluster: if one node fails, the other nodes immediately take over the work, with no need to stop operations.
Install MongoDB:
[root@node1 ~]# vim /etc/yum.repos.d/Mongodb.repo
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/RedHat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
[root@node1 ~]# yum install -y mongodb-org
[root@node1 ~]# service mongod start
Starting mongod: [ OK ]
[root@node1 ~]# ps aux|grep mong
mongod 1361 5.7 14.8 351180 35104 ? Sl 01:26 0:01 /usr/bin/mongod -f /etc/mongod.conf
[root@node1 ~]# mkdir -p /mongodb/data
[root@node1 ~]# chown -R mongod:mongod /mongodb/
[root@node1 ~]# ll /mongodb/
total 4
drwxr-xr-x 2 mongod mongod 4096 May 18 02:04 data
[root@node1 ~]# grep -v "^#" /etc/mongod.conf | grep -v "^$"
systemLog:
destination: file
logAppend: true
path: /var/log/mongodb/mongod.log
storage:
dbPath: /mongodb/data
journal:
enabled: true
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile
net:
port: 27017
bindIp: 0.0.0.0 # listen on all interfaces
[root@node1 ~]# service mongod start
Starting mongod: [ OK ]
Do the same installation on node2 and node3.
Replication Sets related mongod startup options:
--oplogSize
--dbpath
--logpath
--port
--replSet
--replSet test/
--maxConns
--fork
--logappend
--keyFile
With v3.4.4 these are set in the configuration file instead:
[root@node1 ~]# vim /etc/mongod.conf
replication:
oplogSizeMB: 1024
replSetName: rs0
Keyfile
openssl rand -base64 756 > <path-to-keyfile>
chmod 400 <path-to-keyfile>
Configuration File
If using a configuration file, set the security.keyFile option to the keyfile's path, and the replication.replSetName option to the replica set name:
security:
keyFile: <path-to-keyfile>
replication:
replSetName: <replicaSetName>
Command Line
If using command line options, start the mongod with the --keyFile and --replSet parameters:
mongod --keyFile <path-to-keyfile> --replSet <replicaSetName>
Configure the replica set:
[root@node1 ~]# openssl rand -base64 756 > /mongodb/mongokey
[root@node1 ~]# cat /mongodb/mongokey
gxpcgjyFj2qE8b9TB/0XbdRVYH9VDb55NY03AHwxCFU58MUjJMeez844i1gaUo/t
…..
…..
[root@node1 ~]# chmod 400 /mongodb/mongokey
[root@node1 ~]# chown mongod:mongod /mongodb/mongokey
[root@node1 ~]# ll /mongodb/
total 8
drwxr-xr-x 4 mongod mongod 4096 May 19 18:39 data
-r-------- 1 mongod mongod 1024 May 19 18:29 mongokey
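The keyfile steps above can be rehearsed and verified with a throwaway path (a temp file here, standing in for /mongodb/mongokey):

```shell
# Generate a 756-byte random key, base64-encoded, into a temp file
keyfile=$(mktemp)
openssl rand -base64 756 > "$keyfile"

# MongoDB requires the keyfile to be readable only by its owner
chmod 400 "$keyfile"

ls -l "$keyfile"
rm -f "$keyfile"
```

756 raw bytes encode to 1008 base64 characters; with a newline every 64 characters the file comes out at 1024 bytes, matching the `ll` output above.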
[root@node1 ~]# vim /etc/mongod.conf
#security:
security:
keyFile: /mongodb/mongokey
#operationProfiling:
#replication:
replication:
oplogSizeMB: 1024
replSetName: rs0
[root@node1 ~]# service mongod restart
Stopping mongod: [ OK ]
Starting mongod: [ OK ]
[root@node1 ~]# iptables -I INPUT 4 -m state --state NEW -p tcp --dport 27017 -j ACCEPT
Distribute the hosts file, the keyfile, and the config to the other nodes (rsync requires the openssh-clients package):
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/hosts root@node2.rmohan.com:/etc/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/hosts root@node3.rmohan.com:/etc/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /mongodb/mongokey root@node2.rmohan.com:/mongodb/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /mongodb/mongokey root@node3.rmohan.com:/mongodb/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/mongod.conf root@node2.rmohan.com:/etc/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/mongod.conf root@node3.rmohan.com:/etc/
[root@node1 ~]# mongo
> help
db.help() help on db methods
db.mycoll.help() help on collection methods
sh.help() sharding helpers
rs.help() replica set helpers
…..
> rs.help()
rs.status() { replSetGetStatus : 1 } checks repl set status
rs.initiate() { replSetInitiate : null } initiates set with default settings
rs.initiate(cfg) { replSetInitiate : cfg } initiates set with configuration cfg
rs.conf() get the current configuration object from local.system.replset
…..
> rs.status()
{
"info" : "run rs.initiate(...) if not yet done for the set",
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94,
"codeName" : "NotYetInitialized"
}
> rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "node1.rmohan.com:27017",
"ok" : 1
}
rs0:OTHER>
rs0:PRIMARY> rs.status()
{
“set” : “rs0”,
“date” : ISODate(“2017-05-18T17:00:49.868Z”),
“myState” : 1,
“term” : NumberLong(1),
“heartbeatIntervalMillis” : NumberLong(2000),
“optimes” : {
“lastCommittedOpTime” : {
“ts” : Timestamp(1495126845, 1),
“t” : NumberLong(1)
},
“appliedOpTime” : {
“ts” : Timestamp(1495126845, 1),
“t” : NumberLong(1)
},
“durableOpTime” : {
“ts” : Timestamp(1495126845, 1),
“t” : NumberLong(1)
}
},
“members” : [
{
“_id” : 0,
“name” : “node1.rmohan.com:27017”,
“health” : 1,
“state” : 1,
“stateStr” : “PRIMARY”,
“uptime” : 1239,
“optime” : {
“ts” : Timestamp(1495126845, 1),
“t” : NumberLong(1)
},
“optimeDate” : ISODate(“2017-05-18T17:00:45Z”),
“infoMessage” : “could not find member to sync from”,
“electionTime” : Timestamp(1495126824, 2),
“electionDate” : ISODate(“2017-05-18T17:00:24Z”),
“configVersion” : 1,
“self” : true
}
],
“ok” : 1
}
rs0:PRIMARY> rs.add(“node2.rmohan.com”)
{ “ok” : 1 }
rs0:PRIMARY> rs.add(“node3.rmohan.com”)
{ “ok” : 1 }
rs0:PRIMARY> rs.status()
{
“set” : “rs0”,
“date” : ISODate(“2017-05-18T17:08:47.724Z”),
“myState” : 1,
“term” : NumberLong(1),
“heartbeatIntervalMillis” : NumberLong(2000),
“optimes” : {
“lastCommittedOpTime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“appliedOpTime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“durableOpTime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
}
},
“members” : [
{
“_id” : 0,
“name” : “node1.rmohan.com:27017”,
"health" : 1, // 1 = healthy
"state" : 1, // 1 = PRIMARY, 2 = SECONDARY
"stateStr" : "PRIMARY", // state as a string
“uptime” : 1717,
“optime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“optimeDate” : ISODate(“2017-05-18T17:08:45Z”),
“electionTime” : Timestamp(1495126824, 2),
“electionDate” : ISODate(“2017-05-18T17:00:24Z”),
“configVersion” : 3,
“self” : true
},
{
“_id” : 1,
“name” : “node2.rmohan.com:27017”,
“health” : 1,
“state” : 2,
“stateStr” : “SECONDARY”,
“uptime” : 64,
“optime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“optimeDurable” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“optimeDate” : ISODate(“2017-05-18T17:08:45Z”),
“optimeDurableDate” : ISODate(“2017-05-18T17:08:45Z”),
“lastHeartbeat” : ISODate(“2017-05-18T17:08:46.106Z”),
“lastHeartbeatRecv” : ISODate(“2017-05-18T17:08:47.141Z”),
“pingMs” : NumberLong(0),
“syncingTo” : “node1.rmohan.com:27017”,
“configVersion” : 3
},
{
“_id” : 2,
“name” : “node3.rmohan.com:27017”,
“health” : 1,
“state” : 2,
“stateStr” : “SECONDARY”,
“uptime” : 55,
“optime” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“optimeDurable” : {
“ts” : Timestamp(1495127325, 1),
“t” : NumberLong(1)
},
“optimeDate” : ISODate(“2017-05-18T17:08:45Z”),
“optimeDurableDate” : ISODate(“2017-05-18T17:08:45Z”),
“lastHeartbeat” : ISODate(“2017-05-18T17:08:46.195Z”),
“lastHeartbeatRecv” : ISODate(“2017-05-18T17:08:46.924Z”),
“pingMs” : NumberLong(0),
“syncingTo” : “node2.rmohan.com:27017”,
“configVersion” : 3
}
],
“ok” : 1
}
rs0:PRIMARY> db.isMaster()
{
“hosts” : [
“node1.rmohan.com:27017”,
“node2.rmohan.com:27017”,
“node3.rmohan.com:27017”
],
“setName” : “rs0”,
“setVersion” : 3,
“ismaster” : true,
“secondary” : false,
“primary” : “node1.rmohan.com:27017”,
“me” : “node1.rmohan.com:27017”,
“electionId” : ObjectId(“7fffffff0000000000000001”),
“lastWrite” : {
“opTime” : {
“ts” : Timestamp(1495127705, 1),
“t” : NumberLong(1)
},
“lastWriteDate” : ISODate(“2017-05-18T17:15:05Z”)
},
“maxBsonObjectSize” : 16777216,
“maxMessageSizeBytes” : 48000000,
“maxWriteBatchSize” : 1000,
“localTime” : ISODate(“2017-05-18T17:15:11.146Z”),
“maxWireVersion” : 5,
“minWireVersion” : 0,
“readOnly” : false,
“ok” : 1
}
rs0:PRIMARY> use testdb
rs0:PRIMARY> show collections
testcoll
rs0:PRIMARY> db.testcoll.find()
{ “_id” : ObjectId(“591dd9f965cc255a5373aefa”), “name” : “tom”, “age” : 25 }
node2:
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin 0.000GB
local 0.000GB
testdb 0.000GB
rs0:SECONDARY> use testdb
switched to db testdb
rs0:SECONDARY> show collections
testcoll
rs0:SECONDARY> db.testcoll.find()
{ “_id” : ObjectId(“591dd9f965cc255a5373aefa”), “name” : “tom”, “age” : 25 }
rs0:SECONDARY>
node3:
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin 0.000GB
local 0.000GB
testdb 0.000GB
rs0:SECONDARY> use testdb
switched to db testdb
rs0:SECONDARY> show collections
testcoll
rs0:SECONDARY> db.testcoll.find()
{ “_id” : ObjectId(“591dd9f965cc255a5373aefa”), “name” : “tom”, “age” : 25 }
rs0:SECONDARY>
rs0:PRIMARY> use local
switched to db local
rs0:PRIMARY> show collections
me
oplog.rs
replset.election
replset.minvalid
startup_log
system.replset
rs0:PRIMARY> db.oplog.rs.find()
{ “ts” : Timestamp(1495126824, 1), “h” : NumberLong(“3056083863196084673”), “v” : 2, “op” : “n”, “ns” : “”, “o” : { “msg” : “initiating set” } }
{ “ts” : Timestamp(1495126825, 1), “t” : NumberLong(1), “h” : NumberLong(“7195178065440751511”), “v” : 2, “op” : “n”, “ns” : “”, “o” : { “msg” : “new primary” } }
{ “ts” : Timestamp(1495126835, 1), “t” : NumberLong(1), “h” : NumberLong(“5723995478292318850”), “v” : 2, “op” : “n”, “ns” : “”, “o” : { “msg” : “periodic noop” } }
{ “ts” : Timestamp(1495126845, 1), “t” : NumberLong(1), “h” : NumberLong(“-3772304067699003381”), “v” : 2, “op” : “n”, “ns” : “”, “o”
rs0:PRIMARY> db.printReplicationInfo()
configured oplog size: 1024MB
log length start to end: 2541secs (0.71hrs)
oplog first event time: Fri May 19 2017 01:00:24 GMT+0800 (CST)
oplog last event time: Fri May 19 2017 01:42:45 GMT+0800 (CST)
now: Fri May 19 2017 01:42:48 GMT+0800 (CST)
rs0:PRIMARY>
db.oplog.rs.find(): view the contents of the oplog
db.printReplicationInfo(): view the oplog size and time range
db.printSlaveReplicationInfo(): view how far behind the slaves are
rs0:PRIMARY> db.printSlaveReplicationInfo()
source: node2.rmohan.com:27017
syncedTo: Fri May 19 2017 01:47:15 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: node3.rmohan.com:27017
syncedTo: Fri May 19 2017 01:47:15 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
db.system.replset.find(): view the replica set configuration
rs0:PRIMARY> db.system.replset.find()
{ “_id” : “rs0”, “version” : 3, “protocolVersion” : NumberLong(1), “members” : [ { “_id” : 0, “host” : “node1.rmohan.com:27017”, “arbiterOnly” : false, “buildIndexes” : true, “hidden” : false, “priority” : 1, “tags” : { }, “slaveDelay” : NumberLong(0), “votes” : 1 }, { “_id” : 1, “host” : “node2.rmohan.com:27017”, “arbiterOnly” : false, “buildIndexes” : true, “hidden” : false, “priority” : 1, “tags” : { }, “slaveDelay” : NumberLong(0), “votes” : 1 }, { “_id” : 2, “host” : “node3.rmohan.com:27017”, “arbiterOnly” : false, “buildIndexes” : true, “hidden” : false, “priority” : 1, “tags” : { }, “slaveDelay” : NumberLong(0), “votes” : 1 } ], “settings” : { “chainingAllowed” : true, “heartbeatIntervalMillis” : 2000, “heartbeatTimeoutSecs” : 10, “electionTimeoutMillis” : 10000, “catchUpTimeoutMillis” : 2000, “getLastErrorModes” : { }, “getLastErrorDefaults” : { “w” : 1, “wtimeout” : 0 }, “replicaSetId” : ObjectId(“591dd3284fc6957e660dc933”) } }
rs0:PRIMARY> db.system.replset.find().forEach(printjson)
1. On node3, freeze the node so it will not try to become primary for 30 seconds:
rs0:SECONDARY> rs.freeze(30)
{ "ok" : 1 }
2. On node1, make the PRIMARY step down:
rs0:PRIMARY> rs.stepDown(30)
2017-05-19T02:09:27.945+0800 E QUERY [thread1] Error: error doing query: failed: network error while attempting to run command ‘replSetStepDown’ on host ‘127.0.0.1:27017’ :
DB.prototype.runCommand@src/mongo/shell/db.js:132:1
DB.prototype.adminCommand@src/mongo/shell/db.js:150:16
rs.stepDown@src/mongo/shell/utils.js:1261:12
@(shell):1:1
2017-05-19T02:09:27.947+0800 I NETWORK [thread1] trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2017-05-19T02:09:27.949+0800 I NETWORK [thread1] reconnect 127.0.0.1:27017 (127.0.0.1) ok
The network error above is expected: replSetStepDown closes all client connections, so the shell reconnects on its own; the node then declines to seek re-election for 30 seconds.
rs0:SECONDARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2017-05-18T18:12:09.732Z"),
"myState" : 2,
"term" : NumberLong(2),
"syncingTo" : "node2.rmohan.com:27017",
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1495131128, 1),
"t" : NumberLong(2)
},
"appliedOpTime" : {
"ts" : Timestamp(1495131128, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1495131128, 1),
"t" : NumberLong(2)
}
},
"members" : [
{
"_id" : 0,
"name" : "node1.rmohan.com:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 5519,
"optime" : {
"ts" : Timestamp(1495131128, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2017-05-18T18:12:08Z"),
"syncingTo" : "node2.rmohan.com:27017",
"configVersion" : 3,
"self" : true
},
{
"_id" : 1,
"name" : "node2.rmohan.com:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3866,
"optime" : {
"ts" : Timestamp(1495131118, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1495131118, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2017-05-18T18:11:58Z"),
"optimeDurableDate" : ISODate("2017-05-18T18:11:58Z"),
"lastHeartbeat" : ISODate("2017-05-18T18:12:08.333Z"),
"lastHeartbeatRecv" : ISODate("2017-05-18T18:12:08.196Z"),
"pingMs" : NumberLong(0),
"electionTime" : Timestamp(1495130977, 1),
"electionDate" : ISODate("2017-05-18T18:09:37Z"),
"configVersion" : 3
},
{
"_id" : 2,
"name" : "node3.rmohan.com:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 3857,
"optime" : {
"ts" : Timestamp(1495131118, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1495131118, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2017-05-18T18:11:58Z"),
"optimeDurableDate" : ISODate("2017-05-18T18:11:58Z"),
"lastHeartbeat" : ISODate("2017-05-18T18:12:08.486Z"),
"lastHeartbeatRecv" : ISODate("2017-05-18T18:12:08.116Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "node2.rmohan.com:27017",
"configVersion" : 3
}
],
"ok" : 1
}
rs0:SECONDARY>
Add a new node, node4, to the replica set: sync the hosts file, key file, and config over, open the firewall port, then rs.add() on the primary.
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/hosts root@node2.rmohan.com:/etc/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /mongodb/mongokey root@node4.rmohan.com:/mongodb/
[root@node1 ~]# rsync -avH --progress -e 'ssh -p 22' /etc/mongod.conf root@node4.rmohan.com:/etc/
[root@node4 ~]# iptables -I INPUT 4 -m state --state NEW -p tcp --dport 27017 -j ACCEPT
rs0:PRIMARY> rs.add("node4.rmohan.com")
{ "ok" : 1 }
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2017-05-19T12:12:57.697Z"),
"myState" : 1,
"term" : NumberLong(8),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1495195971, 1),
"t" : NumberLong(8)
},
"appliedOpTime" : {
"ts" : Timestamp(1495195971, 1),
"t" : NumberLong(8)
},
"durableOpTime" : {
"ts" : Timestamp(1495195971, 1),
"t" : NumberLong(8)
}
},
"members" : [
{
"_id" : 0,
"name" : "node1.rmohan.com:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 159,
"optime" : {
"ts" : Timestamp(1495195971, 1),
"t" : NumberLong(8)
},
"optimeDurable" : {
"ts" : Timestamp(1495195971, 1),
"t" : NumberLong(8)
},
"optimeDate" : ISODate("2017-05-19T12:12:51Z"),
"optimeDurableDate" : ISODate("2017-05-19T12:12:51Z"),
"lastHeartbeat" : ISODate("2017-05-19T12:12:56.111Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T12:12:57.101Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "node3.rmohan.com:27017",
"configVersion" : 4
},
{
"_id" : 1,
"name" : "node2.rmohan.com:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 189,
"optime" : {
"ts" : Timestamp(1495195971, 1),
"t" : NumberLong(8)
},
"optimeDurable" : {
"ts" : Timestamp(1495195971, 1),
"t" : NumberLong(8)
},
"optimeDate" : ISODate("2017-05-19T12:12:51Z"),
"optimeDurableDate" : ISODate("2017-05-19T12:12:51Z"),
"lastHeartbeat" : ISODate("2017-05-19T12:12:56.111Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T12:12:57.103Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "node3.rmohan.com:27017",
"configVersion" : 4
},
{
"_id" : 2,
"name" : "node3.rmohan.com:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 191,
"optime" : {
"ts" : Timestamp(1495195971, 1),
"t" : NumberLong(8)
},
"optimeDate" : ISODate("2017-05-19T12:12:51Z"),
"electionTime" : Timestamp(1495195800, 1),
"electionDate" : ISODate("2017-05-19T12:10:00Z"),
"configVersion" : 4,
"self" : true
},
{
"_id" : 3,
"name" : "node4.rmohan.com:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 71,
"optime" : {
"ts" : Timestamp(1495195971, 1),
"t" : NumberLong(8)
},
"optimeDurable" : {
"ts" : Timestamp(1495195971, 1),
"t" : NumberLong(8)
},
"optimeDate" : ISODate("2017-05-19T12:12:51Z"),
"optimeDurableDate" : ISODate("2017-05-19T12:12:51Z"),
"lastHeartbeat" : ISODate("2017-05-19T12:12:56.122Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T12:12:56.821Z"),
"pingMs" : NumberLong(1),
"syncingTo" : "node3.rmohan.com:27017",
"configVersion" : 4
}
],
"ok" : 1
}
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin 0.000GB
local 0.000GB
testdb 0.000GB
rs0:SECONDARY> use testdb
switched to db testdb
rs0:SECONDARY> show collections
testcoll
rs0:SECONDARY> db.testcoll.find()
{ "_id" : ObjectId("591dd9f965cc255a5373aefa"), "name" : "tom", "age" : 25 }
rs0:SECONDARY>
rs0:SECONDARY> db.isMaster()
{
"hosts" : [
"node1.rmohan.com:27017",
"node2.rmohan.com:27017",
"node3.rmohan.com:27017",
"node4.rmohan.com:27017"
],
"setName" : "rs0",
"setVersion" : 4,
"ismaster" : false,
"secondary" : true,
"primary" : "node3.rmohan.com:27017",
"me" : "node4.rmohan.com:27017",
"lastWrite" : {
"opTime" : {
"ts" : Timestamp(1495196261, 1),
"t" : NumberLong(8)
},
"lastWriteDate" : ISODate("2017-05-19T12:17:41Z")
},
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("2017-05-19T12:17:44.104Z"),
"maxWireVersion" : 5,
"minWireVersion" : 0,
"readOnly" : false,
"ok" : 1
}
rs0:SECONDARY>
2. Remove node4 from the replica set:
rs0:PRIMARY> rs.remove("node4.rmohan.com:27017")
{ "ok" : 1 }
rs0:PRIMARY> db.isMaster()
{
"hosts" : [
"node1.rmohan.com:27017",
"node2.rmohan.com:27017",
"node3.rmohan.com:27017"
],
"setName" : "rs0",
"setVersion" : 5,
"ismaster" : true,
"secondary" : false,
"primary" : "node3.rmohan.com:27017",
"me" : "node3.rmohan.com:27017",
"electionId" : ObjectId("7fffffff0000000000000008"),
"lastWrite" : {
"opTime" : {
"ts" : Timestamp(1495196531, 1),
"t" : NumberLong(8)
},
"lastWriteDate" : ISODate("2017-05-19T12:22:11Z")
},
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("2017-05-19T12:22:19.874Z"),
"maxWireVersion" : 5,
"minWireVersion" : 0,
"readOnly" : false,
"ok" : 1
}
rs0:PRIMARY>
Linux: password-free SSH remote access using keys
Description
Logging in to a remote server over SSH normally requires typing a password. The goal here is to log in with a key pair instead, without entering a password,
which also prepares the ground for automated batch deployment of hosts later.
The environment is as follows:
Role     IP address         Operating system
Server   192.168.1.10/24    CentOS 6.5 x86
Client   192.168.1.129/24   Ubuntu 16.04 x86
1. The client generates a key pair
Generate key pair:
rmohan@rmohan:~$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/rmohan/.ssh/id_rsa):
Created directory '/home/rmohan/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/rmohan/.ssh/id_rsa.
Your public key has been saved in /home/rmohan/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:eLssyXJLzUCfSN5mu6nqNH9dB/gOyXSvWBwQdNssIYE rmohan@rmohan
The key's randomart image is:
+---[RSA 2048]----+
| o=oo |
| E .o = |
| o oo o |
| + = .o +. |
| = So = + |
| B o+ = o |
| o...=. * o |
| ..+=..+o o |
| .o++== |
+----[SHA256]-----+
View the generated key pair:
rmohan@rmohan:~$ ls .ssh
id_rsa id_rsa.pub
# id_rsa is the private key and must generally be kept secret; id_rsa.pub is the public key and may be shared.
2. Upload the public key to the server
Use the scp command to:
rmohan@rmohan:~$ scp .ssh/id_rsa.pub root@192.168.1.129:/root
The authenticity of host '192.168.1.129 (192.168.1.129)' can't be established.
RSA key fingerprint is SHA256:0Tpm11wruaQXyvOfEB1maIkEwxmjT2AklWb198Vrln0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.129' (RSA) to the list of known hosts.
root@192.168.1.129's password:
id_rsa.pub 100% 393 0.4KB/s 00:00
3. Server-side operation
Append the public key uploaded from the client to .ssh/authorized_keys:
[root@rmohan ~]# cat id_rsa.pub >> .ssh/authorized_keys
[root@rmohan ~]# chmod 600 .ssh/authorized_keys
# restrict authorized_keys to mode 600 so sshd (with StrictModes) accepts it
Modify the ssh configuration file /etc/ssh/sshd_config, find the following line:
PubkeyAuthentication no
change into:
PubkeyAuthentication yes
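This kind of edit is easy to rehearse with sed before touching the live daemon. A minimal sketch (a scratch file stands in for /etc/ssh/sshd_config, and on the real server sshd must be restarted after the change):

```shell
# Rehearse the sshd_config edit on a scratch copy; the real file is
# /etc/ssh/sshd_config and sshd needs a restart afterwards.
cfg=$(mktemp)
echo 'PubkeyAuthentication no' > "$cfg"
sed -i 's/^PubkeyAuthentication no/PubkeyAuthentication yes/' "$cfg"
result=$(grep '^PubkeyAuthentication' "$cfg")
echo "$result"
rm -f "$cfg"
```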
4. Test
Log on to the server using the key on the client:
rmohan@rmohan:~$ ssh -i .ssh/id_rsa root@192.168.1.129
Last login: Tue May 9 15:14:01 2017 from 192.168.1.129
[root@rmohan ~]#
5. Precautions
On the server side, SELinux needs to be turned off (or properly configured), or key-based remote login will fail at the last step;
using scp from the client requires OpenSSH to be installed on the server as well, or the public key cannot be uploaded;
alternatively, ssh-copy-id root@192.168.1.129 replaces the scp step and the manual server-side commands entirely: it uploads the public key and sets up the .ssh directory and authorized_keys with the correct permissions in one go.
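The client-side part of the setup can be rehearsed end to end in a throwaway directory. The paths below are illustrative; on a real server the target is ~/.ssh/authorized_keys on the remote host:

```shell
# Rehearse the key setup in a sandbox directory (illustrative paths only).
sandbox=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$sandbox/id_rsa"   # passphrase-less key pair
cat "$sandbox/id_rsa.pub" >> "$sandbox/authorized_keys"   # what the server-side cat does
chmod 600 "$sandbox/authorized_keys"                      # sshd rejects looser permissions
perms=$(stat -c %a "$sandbox/authorized_keys")
keys=$(wc -l < "$sandbox/authorized_keys")
echo "authorized_keys mode=$perms entries=$keys"
rm -rf "$sandbox"
```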
MariaDB Galera Cluster deployment (how to quickly deploy a MariaDB cluster)
MariaDB is a fork of MySQL that is widely used in open-source projects, for example OpenStack. To guarantee high availability of services while increasing the system's load capacity, a clustered deployment is essential.
MariaDB Galera Cluster Introduction
MariaDB Galera Cluster is a synchronous multi-master cluster. It only supports the XtraDB/InnoDB storage engines (with experimental support for MyISAM; see the wsrep_replicate_myisam system variable).
The main function:
Replication
True multi-master: all nodes can read and write the database at the same time
Automatic membership control: failed nodes are removed from the cluster automatically
Data is copied automatically to newly joined nodes
True parallel replication, at row level
Clients connect directly to the cluster, with exactly the same experience as MySQL
Advantages:
Because it is multi-master, there is no slave lag
No transactions are lost
Both read and write capacity scale out
Smaller client latencies
Data between nodes is synchronous, whereas master/slave replication is asynchronous and the binlogs on different slaves may diverge
Technology:
Galera Cluster replication is implemented by the Galera library; the wsrep API was developed specifically to let MySQL communicate with it.
Galera Cluster keeps data synchronized and consistent through certification-based replication, which works as follows:
When a client issues a commit, all changes the transaction made to the database are collected into a write-set before the commit completes, and the write-set is sent to the other nodes.
The write-set then undergoes a certification test on every node, and the outcome of that test decides whether the node applies the write-set's changes.
If the certification test fails, the node discards the write-set; if it succeeds, the transaction commits.
1. Installation Environment Preparation
Installing a MariaDB cluster requires at least 3 servers (a 2-node setup needs special configuration; refer to the official documentation).
The configuration of my test machines:
Operating system version: CentOS 7
node4: 192.168.1.16 node5: 192.168.1.17 node6: 192.168.1.18
Taking the first entry as an example, node4 is the hostname and 192.168.1.16 the IP. Modify /etc/hosts on all three machines; my file looks like this:
192.168.1.16 node4
192.168.1.17 node5
192.168.1.18 node6
To ensure the nodes can communicate with each other, disable the firewall (if you do need a firewall, refer to the official site for the required rules).
Run on all three nodes:
systemctl stop firewalld
Then set SELINUX=disabled in /etc/sysconfig/selinux. The environment initialization is now complete.
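For deployment scripts it helps to make the /etc/hosts additions idempotent, so a re-run never duplicates entries. A minimal sketch (a scratch file stands in for /etc/hosts, so this is safe to rehearse anywhere):

```shell
# Append each host entry only if an identical line is not already present.
hosts=$(mktemp)
for entry in "192.168.1.16 node4" "192.168.1.17 node5" "192.168.1.18 node6"; do
  grep -qxF "$entry" "$hosts" || echo "$entry" >> "$hosts"
done
# Running the same loop a second time adds nothing:
for entry in "192.168.1.16 node4" "192.168.1.17 node5" "192.168.1.18 node6"; do
  grep -qxF "$entry" "$hosts" || echo "$entry" >> "$hosts"
done
lines=$(wc -l < "$hosts")
echo "host entries: $lines"
rm -f "$hosts"
```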
2. Install MariaDB Galera Cluster
[root@node4 ~]# yum install -y mariadb mariadb-galera-server mariadb-galera-common galera rsync
[root@node5 ~]# yum install -y mariadb mariadb-galera-server mariadb-galera-common galera rsync
[root@node6 ~]# yum install -y mariadb mariadb-galera-server mariadb-galera-common galera rsync
3. Initialize MariaDB Galera Cluster
Initialize the database service on one node only:
[root@node4 mariadb]# systemctl start mariadb
[root@node4 mariadb]# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on…
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n]
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
… Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] n
… skipping.
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y
… Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] n
… skipping.
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
… Success!
Cleaning up…
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
Next, stop the service and configure the cluster in /etc/my.cnf.d/galera.cnf:
[root@node4 mariadb]# systemctl stop mariadb
[root@node4 ~]# vim /etc/my.cnf.d/galera.cnf
[mysqld]
……
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address = "gcomm://node4,node5,node6"
wsrep_node_name = node4
wsrep_node_address=192.168.1.16
#wsrep_provider_options="socket.ssl_key=/etc/pki/galera/galera.key; socket.ssl_cert=/etc/pki/galera/galera.crt;"
Tip: if you are not using SSL, leave wsrep_provider_options commented out.
Copy this file to node5 and node6, changing wsrep_node_name and wsrep_node_address to each node's own hostname and IP.
4. Start MariaDB Galera Cluster Service
[root@node4 ~]# /usr/libexec/mysqld --wsrep-new-cluster --user=root &
Check the startup log:
[root@node4 ~]# tail -f /var/log/mariadb/mariadb.log
150701 19:54:17 [Note] WSREP: wsrep_load(): loading provider library 'none'
150701 19:54:17 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.5.40-MariaDB-wsrep' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server, wsrep_25.11.r4026
"ready for connections" in the log shows the first node started successfully; now start the remaining nodes:
[root@node5 ~]# systemctl start mariadb
[root@node6 ~]# systemctl start mariadb
In /var/log/mariadb/mariadb.log you can watch each node join the cluster.
Warning: --wsrep-new-cluster may only be used when bootstrapping the cluster, and only on a single node.
5. Check the cluster status
The key parameters to watch:
wsrep_connected = ON: the node is connected to the cluster
wsrep_local_index = 1: this node's index within the cluster
wsrep_cluster_size = 3: the number of nodes in the cluster
wsrep_incoming_addresses = 192.168.1.17:3306,192.168.1.16:3306,192.168.1.18:3306: the addresses of the nodes in the cluster
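These variables lend themselves to a scripted health check. The sketch below parses sample status output; the heredoc stands in for a real query such as mysql -uroot -p -N -e "SHOW STATUS LIKE 'wsrep_%'", which would need a running node:

```shell
expected_size=3
# Heredoc stands in for live output of: SHOW STATUS LIKE 'wsrep_%'
status=$(cat <<'EOF'
wsrep_connected ON
wsrep_local_index 1
wsrep_cluster_size 3
EOF
)
size=$(printf '%s\n' "$status" | awk '$1 == "wsrep_cluster_size" {print $2}')
connected=$(printf '%s\n' "$status" | awk '$1 == "wsrep_connected" {print $2}')
if [ "$connected" = "ON" ] && [ "$size" -eq "$expected_size" ]; then
  verdict="cluster healthy: $size nodes"
else
  verdict="cluster degraded"
fi
echo "$verdict"
```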
6. Verify data synchronization
Create a database named galera_test on node4, then query node5 and node6; if the galera_test database shows up there, data synchronization works and the cluster is operating normally.
[root@node4 ~]# mysql -uroot -p -e "create database galera_test"
[root@node5 ~]# mysql -uroot -p -e "show databases"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| galera_test        |
| mysql              |
| performance_schema |
+--------------------+
[root@node6 ~]# mysql -uroot -p -e "show databases"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| galera_test        |
| mysql              |
| performance_schema |
+--------------------+
At this point, our MariaDB Galera Cluster has been successfully deployed.
CentOS 6.9: compile and install Nginx 1.4.7
1. Install the OpenSSL dependency and stop the firewall
[root@rmohan.com ~]# yum install -y openssl
[root@rmohan.com ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules:
2. Download nginx source package to a local
[root@rmohan.com ~]# ll nginx-1.4.7.tar.gz
-rw-r--r--. 1 root root 769153 Jun 1 2017 nginx-1.4.7.tar.gz
3. Extract nginx source package
[root@rmohan.com ~]# tar -xf nginx-1.4.7.tar.gz
4. Go to the extracted directory
[root@rmohan.com ~]# cd nginx-1.4.7
5. Start compilation to generate the makefile
[root@rmohan.com nginx-1.4.7]# ./configure --prefix=/usr \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --user=nginx --group=nginx \
  --with-http_flv_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --http-client-body-temp-path=/var/tmp/nginx/client \
  --http-proxy-temp-path=/var/tmp/nginx/proxy \
  --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
  --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
  --http-scgi-temp-path=/var/tmp/nginx/scgi \
  --with-pcre --with-http_ssl_module
…
checking for socklen_t … found
checking for in_addr_t … found
checking for in_port_t … found
checking for rlim_t … found
checking for uintptr_t … uintptr_t found
checking for system byte ordering … little endian
checking for size_t size … 8 bytes
checking for off_t size … 8 bytes
checking for time_t size … 8 bytes
checking for setproctitle() … not found
checking for pread() … found
checking for pwrite() … found
checking for sys_nerr … found
checking for localtime_r() … found
checking for posix_memalign() … found
checking for memalign() … found
checking for mmap(MAP_ANON|MAP_SHARED) … found
checking for mmap("/dev/zero", MAP_SHARED) … found
checking for System V shared memory … found
checking for POSIX semaphores … not found
checking for POSIX semaphores in libpthread … found
checking for struct msghdr.msg_control … found
checking for ioctl(FIONBIO) … found
checking for struct tm.tm_gmtoff … found
checking for struct dirent.d_namlen … not found
checking for struct dirent.d_type … found
checking for sysconf(_SC_NPROCESSORS_ONLN) … found
checking for openat(), fstatat() … found
checking for getaddrinfo() … found
checking for PCRE library … found
checking for PCRE JIT support … not found
checking for OpenSSL library … found
checking for zlib library … found
creating objs/Makefile
Configuration summary
+ using system PCRE library
+ using system OpenSSL library
+ md5: using OpenSSL library
+ sha1: using OpenSSL library
+ using system zlib library
nginx path prefix: "/usr"
nginx binary file: "/usr/sbin/nginx"
nginx configuration prefix: "/etc/nginx"
nginx configuration file: "/etc/nginx/nginx.conf"
nginx pid file: "/var/run/nginx/nginx.pid"
nginx error log file: "/var/log/nginx/error.log"
nginx http access log file: "/var/log/nginx/access.log"
nginx http client request body temporary files: "/var/tmp/nginx/client"
nginx http proxy temporary files: "/var/tmp/nginx/proxy"
nginx http fastcgi temporary files: "/var/tmp/nginx/fcgi/"
nginx http uwsgi temporary files: "/var/tmp/nginx/uwsgi"
nginx http scgi temporary files: "/var/tmp/nginx/scgi"
Installing MySQL 5.7.18 on CentOS 7
Description
This article uses MySQL 5.7.18 on 64-bit CentOS Linux release 7.2.1511 (Core), installed as a desktop.
Uninstall MariaDB
CentOS 7 installs MariaDB by default instead of MySQL, and the yum repositories have dropped the MySQL packages. Because MariaDB and MySQL may conflict, uninstall MariaDB first.
View the installed MariaDB-related rpm packages:
rpm -qa | grep mariadb
View the installed MariaDB-related yum packages; the package names follow from the rpm output above:
yum list mariadb-libs
Remove the installed MariaDB-related yum packages, judging the package names from the yum list output. This step requires root privileges.
yum remove mariadb-libs
Download the MySQL rpm bundle
Since the package is large, you may want to download it by other means (for example a download manager) first; rpm can then install it even without network access, which yum cannot do. To install another version of MySQL, search the official site for the corresponding rpm download link.
wget https://cdn.mysql.com//Downloads/MySQL-5.7/mysql-5.7.18-1.el7.x86_64.rpm-bundle.tar
Use the rpm package to install MySQL
The following steps require root privileges, and because of the dependencies between the packages, the rpm commands must be executed in this order.
mkdir mysql-5.7.18
tar -xv -f mysql-5.7.18-1.el7.x86_64.rpm-bundle.tar -C mysql-5.7.18
cd mysql-5.7.18/
rpm -ivh mysql-community-common-5.7.18-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-5.7.18-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-5.7.18-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.18-1.el7.x86_64.rpm
After the installation is successful, you can also delete the installation files and temporary files.
cd ..
rm -rf mysql-5.7.18
rm mysql-5.7.18-1.el7.x86_64.rpm-bundle.tar
Modify the MySQL initial password
The following steps require root privileges.
Since the initial password is unknown, first edit the configuration file /etc/my.cnf so that MySQL skips the privilege check at login. Add the line
skip-grant-tables
Restart MySQL.
service mysqld restart
Log in to MySQL without a password.
mysql
In the mysql client, execute the following statements to change the root password.
use mysql;
UPDATE user SET authentication_string = password('your-password') WHERE host = 'localhost' AND user = 'root';
quit;
Edit /etc/my.cnf again to delete the skip-grant-tables line added earlier, then restart MySQL. This step is very important; skipping it leads to a serious security hole.
Log in using the password you just set.
mysql -u root -p
MySQL will then force you to change the password once more, and the new password may not be trivially simple.
ALTER USER root@localhost IDENTIFIED BY 'your-new-password';
The procedure is a bit cumbersome; until a better way turns up, this will do.
Occasionally, rsync is too slow when transferring a very large number of files. This article discusses a workaround.
Recently I had to copy about 10 TByte of data from one server to another. Normally I would use rsync and just let it run for whatever time it takes, but for this particular system I could get only a transfer speed of 10 – 20 MByte per second. As both systems were connected with a 1 GBit network, I was expecting a performance of about 100 MByte per second.
It turned out that the bottleneck was not the network speed, but the fact that the source system contained a very large number of smaller files. Rsync doesn’t seem to be the optimal solution in this case. Also, the destination system was empty, so there was no benefit in choosing rsync over scp or tar.
After some experiments, I found a command that improved the overall file copy performance significantly. It looks like this:
root@s2200:/home/backup# tar cf - * | mbuffer -m 1024M | ssh 10.1.1.207 '(cd /home/backup; tar xf -)'
in @ 121.3 MB/s, out @ 85.4 MB/s, 841.3 GB total, buffer 78% full
Using this method, it is possible to transfer data with a speed near the network bandwidth. The trick is the mbuffer command. It allocates a very large buffer of 1024 MByte which sits between the tar command and the ssh command.
When there are a few large files available to transfer, the tar command would copy the data faster than it can be transferred over the network. So, the buffer fills up to 100% even though data is transmitted with the full network speed.
However, when a directory contains a large number of smaller files, reading them from storage is relatively slow, so the buffer drains faster than the tar command refills it. But as long as it is not completely empty, data is still transferred at the maximum network speed.
With a bit of luck there are enough large files to keep the buffer filled. If the buffer is always near 100% full, this means that the bottleneck is the network (or the destination system). In this case it is worth trying the -z option to both tar commands. This would compress the data before transmission. However if the buffer is mostly near 0% full, this means that the source system is the bottleneck. Data can’t be read from the local storage fast enough, and spending more CPU to compress it would probably not help.
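The shape of the pipeline is easy to try out locally. A minimal rehearsal with ssh and mbuffer omitted so it runs on one machine (in the real command, mbuffer sits between the two tar processes):

```shell
# Pack a source tree, stream it through a pipe, unpack at the destination.
# Paths are temporary and illustrative.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sub"
echo "hello" > "$src/file1"
echo "world" > "$src/sub/file2"
( cd "$src" && tar cf - . ) | ( cd "$dst" && tar xf - )
if diff -r "$src" "$dst" > /dev/null; then
  copy_status="copy OK"
else
  copy_status="copy FAILED"
fi
echo "$copy_status"
rm -rf "$src" "$dst"
```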
Of course, the command above makes only sense if the destination server is empty. If some of the files already exist in the destination location, rsync would simply skip over them (if they are actually identical). There are two rsync options that can be used to speed up rsync somewhat: (todo)