
Installing MongoDB on CentOS 6.5

1. Add the MongoDB installation source

vim /etc/yum.repos.d/mongodb-enterprise.repo

Write the following configuration to the file:

[mongodb-enterprise]
name = MongoDB Enterprise Repository
baseurl = https://repo.mongodb.com/yum/redhat/$releasever/mongodb-enterprise/stable/$basearch/
gpgcheck = 0
enabled = 1

2. Install MongoDB with yum

yum install mongodb-enterprise

3. Set the mongod service to start at boot
chkconfig mongod on

Details:
1. Installing mongodb-enterprise pulls in the following packages:
mongodb-enterprise-server, containing the mongod process, configuration file and init script;
mongodb-enterprise-mongos, containing the mongos process;
mongodb-enterprise-shell, containing the mongo shell;
mongodb-enterprise-tools, containing the bsondump, mongodump, mongoexport, mongofiles, mongoimport, mongooplog, mongoperf, mongorestore, mongostat, and mongotop tools.
2. After the yum install, the MongoDB configuration file is /etc/mongod.conf and the init script is /etc/rc.d/init.d/mongod.

3. To install another version of MongoDB, such as version 2.6.1:
yum install mongodb-enterprise-2.6.1

MongoDB: backing up data from before a month ago

Question: is there a way to script backing up the data from before a month ago, using mongodump or some other method? A search online turns up roughly the following:

#!/bin/bash
daybak=`date -d '-35 day' +%F`
Day=`date -d '-35 day' +%s`
mkdir -p /data/mongodb_backup/mongodb_backup_$daybak
SBtring="'{\"createTime\":{\$gte:Date($Day)}}'"
/usr/local/mongodb/bin/mongodump --port 27017 -d test_mongodb -c testCarPositionRecord -q $SBtring -o /data/mongodb_backup/mongodb_backup_$daybak
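As found, the script has two subtleties worth noting: `Date()` in a mongo query expects milliseconds since the epoch, while `date +%s` produces seconds; and `$gte` keeps documents *newer* than the cutoff, whereas backing up data from before a month ago calls for `$lt`. A sketch in Python of building the equivalent command (the database, collection and `createTime` field names are taken from the script above; the exact `-q` syntax accepted varies across mongodump versions):

```python
from datetime import datetime, timedelta

# Cutoff 35 days back, matching the script's `date -d '-35 day'`
cutoff = datetime.now() - timedelta(days=35)
# Date() in a mongo query expects MILLISECONDS since the epoch,
# while `date +%s` yields seconds -- a likely bug in the original
cutoff_ms = int(cutoff.timestamp() * 1000)

# $lt selects documents with createTime BEFORE the cutoff, i.e. the
# "data from before a month ago" being asked about ($gte selects newer ones)
query = '{"createTime":{"$lt":{"$date":%d}}}' % cutoff_ms

outdir = "/data/mongodb_backup/mongodb_backup_" + cutoff.strftime("%Y-%m-%d")
cmd = ["mongodump", "--port", "27017",
       "-d", "test_mongodb", "-c", "testCarPositionRecord",
       "-q", query, "-o", outdir]
print(" ".join(cmd))
```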

tcpdump to see Oracle errors

Not all exceptions are created equal, and most you can ignore (the one below you can, in general). However, if you have to troubleshoot on JBoss (or anywhere a Linux application connects to an Oracle database), the following is a good "quicky" command the root user can run to quickly dump the Oracle exceptions being thrown back over the wire.

tcpdump -i eth1 tcp port 1521 -A -s1500 | awk '$1 ~ "ORA-" {i=1;split($1,t,"ORA-");while (i <= NF) {if (i == 1) {printf("%s","ORA-"t[2])}else {printf("%s ",$i)};i++}printf("\n")}'

…with the output below…

[root@rmohan~]# tcpdump -i eth1 tcp port 1521 -A -s1500 | awk '$1 ~ "ORA-" {i=1;split($1,t,"ORA-");while (i <= NF) {if (i == 1) {printf("%s","ORA-"t[2])}else {printf("%s ",$i)};i++}printf("\n")}'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 1500 bytes
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found
ORA-01403:no data found

Just something to put in your toolkit.

CentOS 7.0 Redis

1. Introduction to Redis

A Redis cluster automatically shards data across its nodes at startup, and provides availability across those shards: when some Redis nodes fail or the network is interrupted, the cluster can keep working. However, when a large portion of the nodes fail or the network partitions (for example, when most of the master nodes are unavailable), the cluster cannot be used.
From a practical point of view, then, a Redis cluster provides the following functions:
- it automatically splits data across multiple Redis nodes;
- when some nodes are down or unreachable, the cluster can continue to work.

2. Redis cluster data sharding

A Redis cluster does not use consistent hashing; it uses hash slots. The whole cluster has 16384 hash slots, and the slot a key is assigned to is determined by computing CRC16(key) mod 16384.
Each node in the cluster is responsible for a portion of the hash slots. With three nodes, for example:
- node A holds slots 0 - 5500
- node B holds slots 5501 - 11000
- node C holds slots 11001 - 16383
This distribution makes it convenient to add and remove nodes. To add a node D, you only need to move part of the hash slots from A, B and C over to D. Similarly, to remove node A from the cluster, you only need to move the data in A's hash slots to B and C; once A's data has been moved off, A can be removed from the cluster entirely.
Because moving hash slots from one node to another does not require stopping the cluster, nodes can be added or removed, and slot assignments changed, without downtime.
If multiple keys belong to the same hash slot, the cluster can operate on them in a single command (or transaction, or Lua script). Through the "hash tag" concept, a user can force multiple keys into the same hash slot. Hash tags are described in the cluster's detailed documentation; in brief: if a key contains curly braces "{}", only the string inside the braces participates in the hash, so "this{foo}" and "another{foo}" are assigned to the same hash slot and can be manipulated together in one command.
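The slot arithmetic above can be reproduced with a short Python sketch. The CRC16 variant (XModem) and the hash-tag rule follow the Redis Cluster specification, but this is an independent, illustrative re-implementation, not Redis source code:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Map a key to one of the 16384 hash slots, honoring {hash tags}."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end != -1 and end != start + 1:   # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384

# "this{foo}" and "another{foo}" hash only "foo", so they share a slot
print(key_slot(b"this{foo}") == key_slot(b"another{foo}"))  # True
```

The spec's own check value holds here too: CRC16 of "123456789" is 0x31C3, so that key lands in slot 12739.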

3. Redis master-slave model

To keep the cluster working when some nodes fail or the network partitions, the cluster uses a master-slave model: each set of hash slots has one master and N-1 slave replicas. In our earlier example with three nodes A, B and C, if node B fails the cluster cannot work properly, because the data in B's hash slots can no longer be served. But if we add a slave to each node, so that A, B and C are masters and A1, B1, C1 are their slaves, the cluster survives B going down: B1 is a replica of B, and on B's failure the cluster promotes B1 to master and keeps working. If B and B1 fail at the same time, however, the cluster cannot continue to work.

Redis cluster consistency guarantees

A Redis cluster cannot guarantee strong consistency: some writes that have been acknowledged to the client as successful can be lost under certain conditions.
The first cause of lost writes is that master and slave nodes synchronize data asynchronously. A write proceeds as follows:
1) the client sends a write to master B; 2) master B acknowledges the write to the client; 3) master B propagates the write to its slaves B1, B2, B3.
As this sequence shows, master B does not wait for B1, B2, B3 to complete the write before replying to the client. So if master B fails after acknowledging the write to the client but before the write reaches the slaves, one of the slaves that never received it is promoted to master, and the write is lost forever.
This is similar to a traditional, non-distributed database that flushes to disk every second rather than on every commit. To improve consistency, the reply to the client could be delayed until replication completes, but that costs performance; it would amount to the cluster using synchronous replication.
Basically, there is a trade-off between performance and consistency.
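The write-loss window described above can be sketched with a toy model (plain Python, purely illustrative; none of this is Redis code): the master acknowledges before replicating, so a write acknowledged just before a crash never reaches the replica that gets promoted.

```python
class ToyMaster:
    """Toy model of a Redis master that replicates asynchronously."""
    def __init__(self):
        self.data = {}
        self.pending = []            # writes acked but not yet shipped to replicas

    def write(self, key, value):
        self.data[key] = value
        self.pending.append((key, value))
        return "OK"                  # step 2: ack BEFORE replication happens

    def replicate_one(self, replica):
        if self.pending:             # step 3: ship one queued write to a replica
            k, v = self.pending.pop(0)
            replica[k] = v

replica = {}
master = ToyMaster()
master.write("a", 1)
master.replicate_one(replica)        # "a" reaches the replica in time
master.write("b", 2)                 # acked to the client...
# ...master crashes here, before replicate_one() runs again
promoted = dict(replica)             # the replica is promoted to master
print(sorted(promoted))  # ['a']  -- the acknowledged write of "b" is lost
```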

4. Create and use redis clusters

4.1. Download the redis file

[root@test dtadmin]# wget http://download.redis.io/releases/redis-3.2.9.tar.gz
4.2. Unzip the redis to /opt/ directory

[root@test dtadmin]# tar -zxvf redis-3.2.9.tar.gz -C /opt/
4.3. Compile redis

[root@test dtadmin]# cd /opt/redis-3.2.9/
[root@app1 redis-3.2.9]# make && make install
[root@app1 redis-cluster]# yum -y install ruby ruby-devel rubygems rpm-build gcc

# hostname         ip             software  ports
1 test.rmohan.com  192.168.1.181  redis     7000, 7001, 7002
2 app1.rmohan.com  192.168.1.182  redis     7003, 7004, 7005
3 app2.rmohan.com  192.168.1.183  redis     7006, 7007, 7008

4.4.2 Create per-instance directories under /opt/redis-3.2.9/redis-cluster

[root@test redis-3.2.9]# mkdir redis-cluster
[root@test redis-cluster]# mkdir -p 7000 7001 7002

Copy /opt/redis-3.2.9/redis.conf into each of the 7000, 7001 and 7002 directories:
[root@test redis-cluster]# cp /opt/redis-3.2.9/redis.conf /opt/redis-3.2.9/redis-cluster/7000
[root@test redis-cluster]# cp /opt/redis-3.2.9/redis.conf /opt/redis-3.2.9/redis-cluster/7001
[root@test redis-cluster]# cp /opt/redis-3.2.9/redis.conf /opt/redis-3.2.9/redis-cluster/7002
4.4.3. Edit redis.conf in each of /opt/redis-3.2.9/redis-cluster/7000, 7001 and 7002

[root@test redis-cluster]# vim 7000/redis.conf
[root@test redis-cluster]# vim 7001/redis.conf
[root@test redis-cluster]# vim 7002/redis.conf
#######################################
bind 192.168.1.181                  # bind this node's own IP
port 7000                           # port of this instance
daemonize yes
pidfile /var/run/redis_7000.pid     # per-instance pid file
logfile "/var/log/redis-7000.log"   # per-instance log file
appendonly yes
cluster-enabled yes
cluster-config-file nodes-7000.conf # per-instance cluster state file
cluster-node-timeout 15000
4.4.4. Repeat the same steps on app1 and app2 for instances 7003, 7004, 7005 and 7006, 7007, 7008

5. Start the redis instances

5.1. On test (192.168.1.181)

[root@test redis-cluster]# redis-server 7000/redis.conf
[root@test redis-cluster]# redis-server 7001/redis.conf
[root@test redis-cluster]# redis-server 7002/redis.conf
5.2. On app1 (192.168.1.182)

[root@app1 redis-cluster]# redis-server 7003/redis.conf
[root@app1 redis-cluster]# redis-server 7004/redis.conf
[root@app1 redis-cluster]# redis-server 7005/redis.conf
5.3. On app2 (192.168.1.183)

[root@app2 redis-cluster]# redis-server 7006/redis.conf
[root@app2 redis-cluster]# redis-server 7007/redis.conf
[root@app2 redis-cluster]# redis-server 7008/redis.conf
6. Verify the redis processes

6.1 On test

[root@test redis-cluster]# ps -ef | grep redis
root 18313 1 0 16:44 ? 00:00:00 redis-server 192.168.1.181:7001 [cluster]
root 18325 1 0 16:44 ? 00:00:00 redis-server 192.168.1.181:7002 [cluster]
root 18371 1 0 16:45 ? 00:00:00 redis-server 192.168.1.181:7000 [cluster]
root 18449 2564 0 16:46 pts/0 00:00:00 grep --color=auto redis

[root@test redis-cluster]# netstat -tnlp | grep redis
tcp 0 0 192.168.1.181:7001 0.0.0.0:* LISTEN 18313/redis-server
tcp 0 0 192.168.1.181:7002 0.0.0.0:* LISTEN 18325/redis-server
tcp 0 0 192.168.1.181:17000 0.0.0.0:* LISTEN 18371/redis-server
tcp 0 0 192.168.1.181:17001 0.0.0.0:* LISTEN 18313/redis-server
tcp 0 0 192.168.1.181:17002 0.0.0.0:* LISTEN 18325/redis-server
tcp 0 0 192.168.1.181:7000 0.0.0.0:* LISTEN 18371/redis-server
6.2 On app1

[root@app1 redis-cluster]# ps -ef | grep redis
root 5351 1 0 16:45 ? 00:00:00 redis-server 192.168.1.182:7003 [cluster]
root 5355 1 0 16:45 ? 00:00:00 redis-server 192.168.1.182:7004 [cluster]
root 5359 1 0 16:46 ? 00:00:00 redis-server 192.168.1.182:7005 [cluster]

[root@app1 redis-cluster]# netstat -tnlp | grep redis
tcp 0 0 192.168.1.182:7003 0.0.0.0:* LISTEN 5351/redis-server 1
tcp 0 0 192.168.1.182:7004 0.0.0.0:* LISTEN 5355/redis-server 1
tcp 0 0 192.168.1.182:7005 0.0.0.0:* LISTEN 5359/redis-server 1
tcp 0 0 192.168.1.182:17003 0.0.0.0:* LISTEN 5351/redis-server 1
tcp 0 0 192.168.1.182:17004 0.0.0.0:* LISTEN 5355/redis-server 1
tcp 0 0 192.168.1.182:17005 0.0.0.0:* LISTEN 5359/redis-server 1
6.3 On app2

[root@app2 redis-cluster]# ps -ef | grep redis
root 21138 1 0 16:46 ? 00:00:00 redis-server 192.168.1.183:7006 [cluster]
root 21156 1 0 16:46 ? 00:00:00 redis-server 192.168.1.183:7008 [cluster]
root 21387 1 0 16:50 ? 00:00:00 redis-server 192.168.1.183:7007 [cluster]
root 21394 9287 0 16:50 pts/0 00:00:00 grep --color=auto redis

[root@app2 redis-cluster]# netstat -tnlp | grep redis
tcp 0 0 192.168.1.183:7006 0.0.0.0:* LISTEN 2959/redis-server 1
tcp 0 0 192.168.1.183:7007 0.0.0.0:* LISTEN 2971/redis-server 1
tcp 0 0 192.168.1.183:7008 0.0.0.0:* LISTEN 2982/redis-server 1
tcp 0 0 192.168.1.183:17006 0.0.0.0:* LISTEN 2959/redis-server 1
tcp 0 0 192.168.1.183:17007 0.0.0.0:* LISTEN 2971/redis-server 1
tcp 0 0 192.168.1.183:17008 0.0.0.0:* LISTEN 2982/redis-server 1

7. Create the cluster with redis-trib.rb

[root@test src]# ./redis-trib.rb create --replicas 1 192.168.1.181:7000 192.168.1.181:7001 192.168.1.181:7002 192.168.1.182:7003 192.168.1.182:7004 192.168.1.182:7005 192.168.1.183:7006 192.168.1.183:7007 192.168.1.183:7008

>>> Creating cluster
>>> Performing hash slots allocation on 9 nodes…
Using 4 masters:
192.168.1.181:7000
192.168.1.182:7003
192.168.1.183:7006
192.168.1.181:7001
Adding replica 192.168.1.182:7004 to 192.168.1.181:7000
Adding replica 192.168.1.183:7007 to 192.168.1.182:7003
Adding replica 192.168.1.181:7002 to 192.168.1.183:7006
Adding replica 192.168.1.182:7005 to 192.168.1.181:7001
Adding replica 192.168.1.183:7008 to 192.168.1.181:7000
M: 4d007a1e8efdc43ca4ec3db77029709b4e8413d0 192.168.1.181:7000
slots:0-4095 (4096 slots) master
M: 0d0b4528f32db0111db2a78b8451567086b66d97 192.168.1.181:7001
slots:12288-16383 (4096 slots) master
S: e7b8ba7a800683ba017401bde9a72bb34ad252d8 192.168.1.181:7002
replicates 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa
M: 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce 192.168.1.182:7003
slots:4096-8191 (4096 slots) master
S: 13863d63aa323fd58e7ceeba1ccc91b6304d0539 192.168.1.182:7004
replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
S: da3556753efe388a64fafc259338ea420a795163 192.168.1.182:7005
replicates 0d0b4528f32db0111db2a78b8451567086b66d97
M: 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa 192.168.1.183:7006
slots:8192-12287 (4096 slots) master
S: ab90ee3ff9834a88416da311011e9bdfaa9a831f 192.168.1.183:7007
replicates 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce
S: b0dda91a2527f71fe555cdd28fa8be4b571a4bed 192.168.1.183:7008
replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
Can I set the above configuration? (type ‘yes’ to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join……..
>>> Performing Cluster Check (using node 192.168.1.181:7000)
M: 4d007a1e8efdc43ca4ec3db77029709b4e8413d0 192.168.1.181:7000
slots:0-4095 (4096 slots) master
2 additional replica(s)
S: e7b8ba7a800683ba017401bde9a72bb34ad252d8 192.168.1.181:7002
slots: (0 slots) slave
replicates 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa
S: ab90ee3ff9834a88416da311011e9bdfaa9a831f 192.168.1.183:7007
slots: (0 slots) slave
replicates 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce
M: 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce 192.168.1.182:7003
slots:4096-8191 (4096 slots) master
1 additional replica(s)
M: 0d0b4528f32db0111db2a78b8451567086b66d97 192.168.1.181:7001
slots:12288-16383 (4096 slots) master
1 additional replica(s)
M: 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa 192.168.1.183:7006
slots:8192-12287 (4096 slots) master
1 additional replica(s)
S: b0dda91a2527f71fe555cdd28fa8be4b571a4bed 192.168.1.183:7008
slots: (0 slots) slave
replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
S: 13863d63aa323fd58e7ceeba1ccc91b6304d0539 192.168.1.182:7004
slots: (0 slots) slave
replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
S: da3556753efe388a64fafc259338ea420a795163 192.168.1.182:7005
slots: (0 slots) slave
replicates 0d0b4528f32db0111db2a78b8451567086b66d97
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.

Wildcard SSL certificates

Generating and Installing Wildcard and Multi-Domain SSL Certificates

Generate a CSR (Cert Signing Request) For a Wildcard Domain

Normally, to generate a certificate for a wildcard domain such as *.example.com, all you have to do (when generating the CSR) is specify in the “Common Name” field:
*.example.com

The problems with that:

This will only wildcard 1 sub-domain level (i.e., it will not work for www.subdomain.example.com, https://www.subdomain.example.com).
And it will not cover the root domain (i.e., “example.com”, https://example.com).
To cover additional domains and wildcards, you have to use openssl’s SAN (subjectAltName) extension…
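The single-level rule can be seen in a simplified matcher (an illustrative sketch of the leftmost-label wildcard rule browsers roughly follow per RFC 6125, not actual browser code):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Simplified wildcard check: '*' matches exactly one DNS label."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):   # '*' never spans multiple labels
        return False
    for p, h in zip(p_labels, h_labels):
        if p != "*" and p != h:
            return False
    return True

print(wildcard_matches("*.example.com", "www.example.com"))            # True
print(wildcard_matches("*.example.com", "www.subdomain.example.com"))  # False
print(wildcard_matches("*.example.com", "example.com"))                # False
# a "*.*.example.com" entry would match two levels under this naive rule,
# but browsers and CAs may not honor it (see the note further down)
print(wildcard_matches("*.*.example.com", "a.b.example.com"))          # True
```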

1. Edit file openssl.cnf (open via Notepad) –
File C:\WampDeveloper\Config\Apache\openssl.cnf

2. Uncomment (remove starting ‘#’) line:
# req_extensions = v3_req # The extensions to add to a certificate request

req_extensions = v3_req # The extensions to add to a certificate request
3. Update the “[ v3_req ]” section with line:
subjectAltName = @alt_names

[ v3_req ]

# Extensions to add to a certificate request

basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
4. Create file named “alt-names.txt” and place the entire list of all domains and wildcards into it (including the previously entered “Common Name”):

[ alt_names ]
DNS.1 = www.example.com
DNS.2 = example.com
DNS.3 = *.example.com
DNS.4 = *.*.example.com
Note that entry “*.*.example.com” wildcards on multiple level sub-domains. This entry might, or might not work, depending on how different Browsers decide to handle this and if the CA (Certificate Authority) allows this.

5. Follow the exact instruction on generating a CSR, except make sure to add the “alt-names.txt” file into the CSR generation command…

openssl genrsa -out example_com.key 2048
openssl req -new -sha256 -key example_com.key -out example_com.csr -config C:\WampDeveloper\Config\Apache\openssl.cnf
The first line generates your private key. The next line generates the CSR, using the additional entries from the alt-names.txt file. At this point you can either input the contents of CSR file into the CA’s certificate purchasing process, or self-sign the cert…

Self-Signing a CSR (Certificate Signing Request) For a Wildcard Domain

If you are going to self-sign this certificate, you will need to tell the CA configuration to allow and use the SAN extension, by uncommenting in file openssl.cnf, line:
# copy_extensions = copy

[ CA_default ]
# Extension copying option: use with caution.
copy_extensions = copy
Then create the self-signed wildcard certificate the exact same way as in all other cases:

openssl x509 -req -sha256 -days 365 -in example_com.csr -signkey example_com.key -out example_com.crt -extfile C:\WampDeveloper\Config\Apache\alt-names.txt
Installing Wildcard and Multi-Domain Certificates

There is no difference in how Apache (or any other web server, such as IIS, Nginx, or Tomcat) treats normal and wildcard certs.

You install the certificates the regular way: update each website's SSL VirtualHost file with the location/path of the cert, the bundle file (if one exists), and the private key (all of which can point to the same locations for each website, or can be duplicated into each website's certs\ folder)…

For example see Installing Comodo PositiveSSL Certificate Bundled with Root and Intermediate CA Certificates on Apache.

Note that if you self-signed the certificate:

There will be no bundle file (don’t use “SSLCertificateChainFile” directive).
And if you want your local OS and Browser to actually accept and pass this certificate (without blocking website access as “untrusted”), you are going to have to install it into Windows Trusted Root Certification Authorities store. *Some Browsers do not use this store and have their own “trust exception” process.

Tomcat + Nginx + Memcached

Tested and passing on Ubuntu 16.04 64-bit.

Features: static/dynamic request separation, load balancing, clustering, Javolution session serialization, high performance, high availability.

Environment (all the latest stable versions at the time of writing):
- jdk-linux-x64-8u131
- apache-tomcat-8.5.14
- nginx-1.12.0
- memcached-1.4.36

Notes up front:

- The original plan was to configure the kryo serialization framework, but it could not be made to work, so Javolution is used instead.
- If something ultimately fails, the problem will generally turn up in Tomcat; check the logs to resolve it.
- Tomcat can deploy multiple projects, with static and dynamic requests still separated.
- nginx is configured to pass the jsp, servlet and do extensions to Tomcat; add more extensions as your situation requires.
- If you do not run nginx as root, the user configured for nginx has no effect.
- Tomcat and nginx both use optimized configurations; no changes to your original project are needed, just drop it into Tomcat/webapps.

Process:

A long procedure; proceed carefully.

sudo passwd
# Use the administrator account to configure
sudo su
# Update the package list
apt-get update
# Install the required dependencies
apt-get install gcc zlib1g zlib1g-dev openssl libssl-dev libpcre3 libpcre3-dev libevent-dev
# Reboot (recommended)
reboot

sudo su
# Download the JDK from http://www.Oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
tar -zxvf jdk-8u131-linux-x64.tar.gz
mv jdk1.8.0_131 /usr/local/jdk

# Configure JDK environment variables
sed -i '$a ulimit -n 65535' /etc/profile
sed -i '$a export JAVA_HOME=/usr/local/jdk' /etc/profile
sed -i '$a export JRE_HOME=$JAVA_HOME/jre' /etc/profile
sed -i '$a export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH' /etc/profile
sed -i '$a export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH' /etc/profile
source /etc/profile

rm jdk-8u131-linux-x64.tar.gz

# Installation configuration memcached
wget http://www.memcached.org/files/memcached-1.4.36.tar.gz
tar -zxvf memcached-1.4.36.tar.gz
cd memcached-1.4.36
./configure --prefix=/usr/local/memcached
make && make install
cd .. && rm -rf memcached-1.4.36 && rm memcached-1.4.36.tar.gz

# Configure Tomcat installation
wget http://apache.fayea.com/tomcat/tomcat-8/v8.5.14/bin/apache-tomcat-8.5.14.tar.gz
tar -zxvf apache-tomcat-8.5.14.tar.gz

# Download the added lib file to support the sharing session
cd apache-tomcat-8.5.14/lib
wget http://central.maven.org/maven2/de/javakaffee/msm/memcached-session-manager/2.1.1/memcached-session-manager-2.1.1.jar
wget http://central.maven.org/maven2/de/javakaffee/msm/memcached-session-manager-tc8/2.1.1/memcached-session-manager-tc8-2.1.1.jar
wget http://central.maven.org/maven2/net/spy/spymemcached/2.11.1/spymemcached-2.11.1.jar
wget http://central.maven.org/maven2/de/javakaffee/msm/msm-javolution-serializer/2.1.1/msm-javolution-serializer-2.1.1.jar
wget http://central.maven.org/maven2/javolution/javolution/5.4.5/javolution-5.4.5.jar
cd .. && cd ..

# Disable TLD scanning for the newly added jars
sed -i '134c xom-*.jar,javolution-5.4.5.jar,memcached-session-manager-2.1.1.jar,memcached-session-manager-tc8-2.1.1.jar,msm-javolution-serializer-2.1.1.jar,spymemcached-2.11.1.jar' apache-tomcat-8.5.14/conf/catalina.properties
# Tomcat optimization: replace line 102 with the following JAVA_OPTS
sed -i '102c export JAVA_OPTS="-server -Xms1000M -Xmx1000M -Xss512k -XX:+AggressiveOpts -XX:+UseBiasedLocking -XX:+DisableExplicitGC -XX:MaxTenuringThreshold=15 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true"' apache-tomcat-8.5.14/bin/catalina.sh

rm -rf apache-tomcat-8.5.14/webapps
mkdir -vp apache-tomcat-8.5.14/webapps/ROOT
cp -r apache-tomcat-8.5.14 /usr/local/tomcat
mv apache-tomcat-8.5.14 /usr/local/tomcat2
chown ubuntu.ubuntu -R /usr/local/tomcat
chown ubuntu.ubuntu -R /usr/local/tomcat2
rm apache-tomcat-8.5.14.tar.gz

# Create a test page
touch /usr/local/tomcat/webapps/ROOT/index.jsp
echo '<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>Tomcat1<%=session.getId()%>' >/usr/local/tomcat/webapps/ROOT/index.jsp
touch /usr/local/tomcat2/webapps/ROOT/index.jsp
echo '<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>Tomcat2<%=session.getId()%>' >/usr/local/tomcat2/webapps/ROOT/index.jsp

# Configure session sharing
# Since nginx serves the static files, requestUriIgnorePattern is not configured
# Javolution is configured as the serialization framework
vim /usr/local/tomcat/conf/context.xml
# Add the following inside the <Context> element; a minimal sketch of the
# memcached-session-manager configuration, assuming the two memcached nodes
# n1/n2 on 127.0.0.1:11211 and 127.0.0.1:11311 started later in this guide
##################################################
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
         memcachedNodes="n1:127.0.0.1:11211,n2:127.0.0.1:11311"
         failoverNodes="n1"
         transcoderFactoryClass="de.javakaffee.web.msm.serializer.javolution.JavolutionTranscoderFactory"/>
##################################################

# Tomcat2 needs the same content; the only difference is failoverNodes="n2"
vim /usr/local/tomcat2/conf/context.xml
##################################################
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
         memcachedNodes="n1:127.0.0.1:11211,n2:127.0.0.1:11311"
         failoverNodes="n2"
         transcoderFactoryClass="de.javakaffee.web.msm.serializer.javolution.JavolutionTranscoderFactory"/>
##################################################

# Modify the port configuration as follows
vim /usr/local/tomcat/conf/server.xml
##################################################
# Since both tomcat instances run on a single server, add jvmRoute="tomcat" and
# jvmRoute="tomcat2" respectively to the Engine element of each instance
# Still an optimized configuration; gzip is not turned on here because nginx already does it

……..

##################################################

# Make the same port modifications for tomcat2
vim /usr/local/tomcat2/conf/server.xml
##################################################

……..

##################################################
# In the tomcat2 configuration, change port 8005 to 8105, 8009 to 8109, and 8080 to 8180; SSL is not configured on Tomcat because nginx handles it

# The following components further optimize tomcat
# Install and configure APR
wget http://mirror.bit.edu.cn/apache//apr/apr-1.5.2.tar.gz
tar -zxvf apr-1.5.2.tar.gz
cd apr-1.5.2 && ./configure --prefix=/usr/local/apr
make && make install
cd .. && rm -rf apr-1.5.2 && rm apr-1.5.2.tar.gz

# Install apr-util
wget http://mirror.bit.edu.cn/apache//apr/apr-util-1.5.4.tar.gz
tar -zxvf apr-util-1.5.4.tar.gz
cd apr-util-1.5.4 && ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr
make && make install
cd .. && rm -rf apr-util-1.5.4 && rm apr-util-1.5.4.tar.gz

# Install tomcat-native
wget https://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-connectors/native/1.2.12/source/tomcat-native-1.2.12-src.tar.gz
tar -zxvf tomcat-native-1.2.12-src.tar.gz
cd tomcat-native-1.2.12-src/native && ./configure --with-apr=/usr/local/apr
make && make install
cd .. && cd .. && rm -rf tomcat-native-1.2.12-src && rm tomcat-native-1.2.12-src.tar.gz

# Make the apr native library visible to tomcat-native
sed -i '$a export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/apr/lib' /etc/profile
source /etc/profile

# Finally, install and configure nginx
wget http://nginx.org/download/nginx-1.12.0.tar.gz
tar -zxvf nginx-1.12.0.tar.gz
cd nginx-1.12.0 && ./configure --user=ubuntu --group=ubuntu --prefix=/usr/local/nginx --with-http_ssl_module --with-http_gzip_static_module
make && make install
chown ubuntu.ubuntu -R /usr/local/nginx
cd .. && rm -rf nginx-1.12.0 && rm nginx-1.12.0.tar.gz

# First place the SSL certificate and key in the /usr/local/nginx/conf/ directory as cert.crt and cert.key; skip this if you are not configuring SSL
vim /usr/local/nginx/conf/nginx.conf
# Set up nginx.conf as follows (optimizations already applied); if you do not want SSL, change the corresponding configuration
##################################################
user ubuntu ubuntu;
worker_processes auto;
worker_rlimit_nofile 65535;
error_log logs/error.log warn;
pid logs/nginx.pid;

events {
use epoll;
worker_connections 65500;
}

http {
server_tokens off;
include mime.types;
default_type application/octet-stream;
charset utf-8;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;

sendfile on;
tcp_nopush on;
reset_timedout_connection on;
keepalive_timeout 30;

open_file_cache max=65535 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 30s;

gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied any;
gzip_vary on;
gzip_http_version 1.0;
gzip_buffers 4 16k;
gzip_types
text/plain text/css text/xml application/xml text/x-json application/json
image/svg+xml image/png image/jpeg image/x-icon image/gif
text/javascript application/javascript application/x-javascript
application/x-font-truetype application/x-font-woff application/vnd.ms-fontobject;
gzip_disable "MSIE [1-6]\.";

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 32k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

upstream tomcat_server {
server localhost:8080 weight=1;
server localhost:8180 weight=1;
}

server {
listen 80;
server_name localhost;
return 301 https://$host$request_uri;
}

server {
listen 443 ssl;
server_name localhost;
ssl_certificate cert.crt;
ssl_certificate_key cert.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
root /usr/local/tomcat/webapps/ROOT;
index index.html index.jsp index.htm;
expires 30d;
}
location ~ \.(jsp|servlet|do)$ {
index index.html index.jsp index.htm;
proxy_pass http://tomcat_server;
}
error_page 400 404 414 500 502 503 504 /error.html;
}
}
##################################################
# If a content type you need is missing from gzip_types above, add it there
# If your tomcat deploys multiple projects, you only need to add a location block like the following to nginx.conf
location /yourprojectname {
root /usr/local/tomcat/webapps;
index index.html index.jsp index.htm;
expires 30d;
}
##################################################
# After saving, check the configuration with
/usr/local/nginx/sbin/nginx -t
# To reload nginx: sudo /usr/local/nginx/sbin/nginx -s reload

# Switch back to the normal user
su ubuntu

# Start two memcached instances (verify afterwards with: ps -ef | grep memcached)
/usr/local/memcached/bin/memcached -d -m 64M -u ubuntu -l 127.0.0.1 -p 11211 -c 32750 -P /tmp/memcached-n1.pid
/usr/local/memcached/bin/memcached -d -m 64M -u ubuntu -l 127.0.0.1 -p 11311 -c 32750 -P /tmp/memcached-n2.pid

# Start Tomcat, be sure to use ordinary users to run Tomcat
/usr/local/tomcat/bin/startup.sh && /usr/local/tomcat2/bin/startup.sh

# Start nginx, be sure to use administrator privileges to run nginx
sudo /usr/local/nginx/sbin/nginx

qmail deferral: CNAME_lookup_failed_temporarily._(#4.4.3)

I tried to send e-mail from a system that implements fax transmission by e-mail. When sending to eFax's e-mail address (domain part efaxsend.com), the qmail mail log recorded

Deferral: CNAME_lookup_failed_temporarily._(#4.4.3)

and the message was deferred and could not be sent.

The obvious suspected cause, that the destination host cannot be resolved in DNS, was ruled out: resolution was confirmed to work normally with the dig command. The actual cause appears to be that qmail errors out on DNS responses of 512 bytes or more. When I looked at efaxsend.com's MX records, there were 16 of them, which pushes the response past that limit.
A patch has been released:

http://www.ckdhr.com/ckd/qmail-103.patch

You can apply this patch, or instead specify the destination SMTP server for the given domain directly in qmail's smtproutes configuration file, as shown below.

# vi /var/qmail/control/smtproutes
rmohan.com:mail.yahoo.com

However, since the remote side's SMTP server may change, it is better to apply the patch.

A brief introductory Redis tutorial


1. Introduction to Redis

Redis is an open-source, in-memory key-value database written in ANSI C.

It supports a rich set of value types, including string, list, set, zset (sorted set), and hash.

Redis supports master-slave synchronization: data can be replicated from a master server to any number of slaves, and a slave can in turn act as master to further slaves, so replication trees are possible. Because publish/subscribe is fully implemented, a slave anywhere in the tree can subscribe to a channel and receive the master's complete record of message publications.

Compared to memcached, Redis has the following advantages:

1. Redis natively supports more data types.
2. Redis has a persistence mechanism and can periodically store in-memory data to disk.
3. Redis supports master-slave data backup.

4. Performance. The Redis author's claim is that, comparing average performance per core, Redis is better when individual values are not large.

The reason is that Redis is single-threaded. Because it is single-threaded, its overall throughput is lower than multi-threaded Memcached. Because operations are single-threaded, IO is serialized (both network IO and memory IO), so when a single value is too large, subsequent commands must wait for all the IO of the current command to complete, and performance suffers.

2. Install Redis

2.1 Installing Redis is very simple; it can be installed directly with yum or apt-get

# yum install epel-release
# yum install redis

2.2 Start / stop Redis

Start with a configuration file:

# redis-server /etc/redis.conf

Or manage the service with systemd:

# systemctl start redis
# systemctl stop redis

3. Use Redis

3.1 Operating on keys and values with redis-cli

Connect to Redis

# redis-cli -p port
# redis-cli
Ping

127.0.0.1:6379> ping
PONG
Set key

127.0.0.1:6379> set testkey "hello"
OK
Query key

127.0.0.1:6379> get testkey
"hello"
Delete key

127.0.0.1:6379> del testkey
(integer) 1
Set the validity period

127.0.0.1:6379> setex test 10 111
OK
Use EXPIRE key seconds to set an expiration time in seconds; use PEXPIRE for milliseconds

127.0.0.1:6379> EXPIRE test11 300
(integer) 1
Use TTL key to view the remaining time to live in seconds; use PTTL for milliseconds

127.0.0.1:6379> TTL test11
(integer) 288
Cancel expiration time with PERSIST key

127.0.0.1:6379> PERSIST test11
(integer) 1
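The SETEX/TTL/PERSIST behaviour shown above can be illustrated with a small in-process sketch (plain Python, no Redis server required; the `ExpiringStore` class and its API are illustrative, not part of Redis):

```python
import time

class ExpiringStore:
    """A toy key-value store mimicking SET/SETEX/TTL/PERSIST semantics."""
    def __init__(self):
        self.data = {}       # key -> value
        self.expire_at = {}  # key -> absolute expiry timestamp

    def set(self, key, value, ex=None):
        self.data[key] = value
        if ex is not None:
            self.expire_at[key] = time.time() + ex   # like SETEX key ex value
        else:
            self.expire_at.pop(key, None)            # plain SET clears any TTL

    def ttl(self, key):
        # Redis convention: -2 = key missing, -1 = no expiry, else seconds left
        if key not in self.data:
            return -2
        if key not in self.expire_at:
            return -1
        remaining = self.expire_at[key] - time.time()
        if remaining <= 0:                           # lazily expire the key
            del self.data[key]
            del self.expire_at[key]
            return -2
        return int(remaining)

    def persist(self, key):
        # PERSIST removes the expiry; returns 1 if an expiry was removed
        return 1 if self.expire_at.pop(key, None) is not None else 0

    def get(self, key):
        return None if self.ttl(key) == -2 else self.data[key]

store = ExpiringStore()
store.set("test", "111", ex=10)   # like: SETEX test 10 111
print(store.persist("test"))      # 1: an expiry was removed
print(store.ttl("test"))          # -1: no expiry any more
```

The lazy deletion in `ttl` mirrors the idea that an expired key simply behaves as if it no longer exists.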

3.2 Advanced features

3.2.1 Increment and decrement: INCR, DECR, INCRBY, DECRBY

127.0.0.1:6379> set counter 100
OK
127.0.0.1:6379> incr counter
(integer) 101
127.0.0.1:6379> incr counter
(integer) 102
127.0.0.1:6379> decr counter
(integer) 101

3.2.2 Transactions

127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> set test11 111111
QUEUED
127.0.0.1:6379> set test12 121212
QUEUED
127.0.0.1:6379> incr counter
QUEUED
127.0.0.1:6379> EXEC
1) OK
2) OK
3) (integer) 102
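The MULTI/EXEC behaviour in the transcript above, where commands are queued and then run as one uninterrupted batch, can be sketched in plain Python (the `Transaction` class and helper functions are illustrative, not Redis code):

```python
class Transaction:
    """Queue commands like MULTI, then run them all at once like EXEC."""
    def __init__(self, store):
        self.store = store
        self.queue = []

    def enqueue(self, fn, *args):
        self.queue.append((fn, args))
        return "QUEUED"

    def exec(self):
        # All queued commands run back to back; because Redis is
        # single-threaded, no other client's command can interleave.
        results = [fn(*args) for fn, args in self.queue]
        self.queue.clear()
        return results

store = {}
def set_(k, v): store[k] = v; return "OK"
def incr(k): store[k] = int(store.get(k, 0)) + 1; return store[k]

tx = Transaction(store)
tx.enqueue(set_, "test11", "111111")   # -> QUEUED
tx.enqueue(set_, "test12", "121212")   # -> QUEUED
tx.enqueue(incr, "counter")            # -> QUEUED
print(tx.exec())                       # ['OK', 'OK', 1]
```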

3.2.3 HyperLogLogs

Redis added the HyperLogLog algorithm in version 2.8.9.
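The idea behind HyperLogLog, estimating set cardinality in a fixed amount of memory from the longest run of leading zero bits seen per hash bucket, can be sketched as follows (a simplified illustration of the algorithm, not Redis's actual implementation):

```python
import hashlib

class TinyHyperLogLog:
    """Simplified HyperLogLog: m registers, each keeping the maximum
    'leading-zero run + 1' among hashes routed to that bucket."""
    def __init__(self, p=10):
        self.p = p                # 2^p registers
        self.m = 1 << p
        self.registers = [0] * self.m

    def add(self, item):
        # 64-bit hash of the item
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
        bucket = h & (self.m - 1)                   # low p bits pick the register
        w = h >> self.p                             # remaining 64-p bits
        rank = (64 - self.p) - w.bit_length() + 1   # leading zeros + 1
        self.registers[bucket] = max(self.registers[bucket], rank)

    def count(self):
        # Harmonic-mean estimator with the standard bias-correction constant
        alpha = 0.7213 / (1 + 1.079 / self.m)
        return int(alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers))

hll = TinyHyperLogLog()
for i in range(10000):
    hll.add(f"user:{i}")
print(hll.count())   # roughly 10000, within a few percent
```

Memory stays at m registers no matter how many items are added, which is the point of the PFADD/PFCOUNT commands.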

3.2.4 publish / subscribe function

Redis publish/subscribe (pub/sub) is a messaging pattern: a publisher sends messages and subscribers receive them. Redis clients can subscribe to any number of channels.

Subscribe to the channel redisChat on a client

127.0.0.1:6379> SUBSCRIBE redisChat
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "redisChat"
3) (integer) 1
On another client, publish a message to the channel redisChat; the subscriber will receive it.

Publisher:

127.0.0.1:6379> PUBLISH redisChat "redis haha"
(integer) 1
Subscriber:

127.0.0.1:6379> SUBSCRIBE redisChat
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "redisChat"
3) (integer) 1
1) "message"
2) "redisChat"
3) "redis haha"
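The pub/sub pattern above can be sketched in-process: a broker keeps a list of subscriber queues per channel, and PUBLISH delivers a copy of the message to each (a toy model; real Redis delivers over client connections):

```python
from collections import defaultdict
from queue import Queue

class PubSub:
    def __init__(self):
        self.channels = defaultdict(list)   # channel -> subscriber queues

    def subscribe(self, channel):
        q = Queue()
        self.channels[channel].append(q)
        return q      # the subscriber reads its messages from q

    def publish(self, channel, message):
        for q in self.channels[channel]:
            q.put(("message", channel, message))
        # PUBLISH replies with the number of receivers
        return len(self.channels[channel])

broker = PubSub()
sub = broker.subscribe("redisChat")
n = broker.publish("redisChat", "redis haha")
print(n)           # 1  (one subscriber received it)
print(sub.get())   # ('message', 'redisChat', 'redis haha')
```

A message published to a channel with no subscribers is simply dropped, matching Redis's fire-and-forget pub/sub semantics.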
3.3 View the Redis status

INFO outputs a lot of information; you can ask for a specific section:

127.0.0.1:6379> info stats

127.0.0.1:6379> info memory

used_memory: the total amount of memory allocated by the Redis allocator, in bytes.

used_memory_rss: the amount of memory the operating system has allocated to Redis (commonly known as the resident set size). This value is consistent with the output of top, ps, and similar commands.

If rss > used and the difference between the two values is large, there is (internal or external) memory fragmentation.

The degree of memory fragmentation can be seen from the value of mem_fragmentation_ratio.

If used > rss, part of Redis's memory has been moved to swap space by the operating system; in this case, operations may show significant latency.

used_memory_peak: the peak memory usage; maxmemory should be set higher than this peak.
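The relationship between used_memory, used_memory_rss, and mem_fragmentation_ratio can be expressed directly (the sample byte counts below are made up for illustration):

```python
def mem_fragmentation_ratio(used_memory_rss, used_memory):
    """Redis reports mem_fragmentation_ratio = used_memory_rss / used_memory."""
    return used_memory_rss / used_memory

# rss > used: the gap is fragmentation (a ratio well above 1 is bad)
print(mem_fragmentation_ratio(1_500_000_000, 1_000_000_000))  # 1.5

# used > rss: part of Redis's memory has been swapped out (ratio < 1)
print(mem_fragmentation_ratio(800_000_000, 1_000_000_000))    # 0.8
```

A ratio close to 1.0 is healthy; well above 1 means fragmentation, well below 1 means swapping and likely latency spikes.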

3.4 Other commands

View the number of records

127.0.0.1:6379> dbsize

View all KEYs

127.0.0.1:6379> KEYS *

List all client connections

127.0.0.1:6379> CLIENT LIST

Close the client at ip:port

127.0.0.1:6379> CLIENT KILL 127.0.0.1:11902
Clear all keys for all databases

127.0.0.1:6379> FLUSHALL
Empty all keys in the current database

127.0.0.1:6379> FLUSHDB
Returns the last time the data was successfully saved to disk, expressed in UNIX timestamp format

127.0.0.1:6379> LASTSAVE
Returns the current server time, expressed in UNIX timestamp format

127.0.0.1:6379> TIME
Switch to another database (the default database is 0)

127.0.0.1:6379> SELECT 1
OK
Moves the key of the current database to the specified database

127.0.0.1:6379> MOVE test2 1
(integer) 1

4. Configuration file

4.1 /etc/redis.conf

daemonize no
timeout 0
maxclients 10000

maxmemory
maxmemory-policy volatile-lru
maxmemory-samples 3
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
slowlog-log-slower-than 10000    # slow log threshold, in microseconds (1000000 = 1s)
slowlog-max-len 128              # number of slow log entries kept

4.2 View the maximum number of connections

127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "10000"
Adjust the parameter at runtime

127.0.0.1:6379> config set maxclients 10001
4.3 View slow log

127.0.0.1:6379> SLOWLOG get
127.0.0.1:6379> SLOWLOG get 10
1) 1) (integer) 0
2) (integer) 1448413479
3) (integer) 124211
4) 1) “FLUSHALL”
Check the number of slow log entries

127.0.0.1:6379> SLOWLOG len
Clear the slow log

127.0.0.1:6379> SLOWLOG reset
5. Data persistence

5.1 Snapshots (Snapshot)

5.1.1 Configuring snapshots in the configuration file

save
rdbcompression yes/no
dbfilename dump.rdb
dir /var/lib/redis/

5.1.2 Manually create snapshots

Execute the save or bgsave command at the command line

127.0.0.1:6379> SAVE
OK
5.2 Log backup (Append Only File)

AOF is similar to the MySQL binlog: write operations are recorded in the log. Unlike snapshots it does not sacrifice accuracy, and it is meant to be used in combination with snapshots; using it alone is not recommended. The default sync interval is 1 second and can be modified.
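The append-only idea, record every write and rebuild state by replaying the log, can be sketched as follows (a conceptual model, not Redis's actual AOF format):

```python
class AppendOnlyStore:
    """Every write is appended to a log; state can be rebuilt by replay."""
    def __init__(self):
        self.data = {}
        self.log = []   # in Redis this is the appendonly.aof file on disk

    def set(self, key, value):
        self.log.append(("SET", key, value))
        self.data[key] = value

    def delete(self, key):
        self.log.append(("DEL", key))
        self.data.pop(key, None)

    @classmethod
    def replay(cls, log):
        # Recovery: start empty and re-apply the log in order
        store = cls()
        for op in log:
            if op[0] == "SET":
                store.set(op[1], op[2])
            else:
                store.delete(op[1])
        return store

s = AppendOnlyStore()
s.set("a", "1")
s.set("b", "2")
s.delete("a")
recovered = AppendOnlyStore.replay(s.log)
print(recovered.data)   # {'b': '2'}
```

Replaying the full history reproduces the final state exactly, which is why AOF loses at most the last sync interval of writes.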

5.2.1 Configuring AOF in the configuration file

appendonly yes
appendfilename "appendonly.aof"
appendfsync always/everysec/no
no-appendfsync-on-rewrite no    # whether to skip fsync while a bgsave or AOF rewrite runs

5.3 Restore

To restore Redis data, simply move Redis's backup files (dump.rdb, appendonly.aof) into the Redis directory and start the server.

To find your Redis directory, use the following command:

127.0.0.1:6379> config get dir
1) “dir”
2) “/var/lib/redis”
6. Postscript

This article briefly described the installation and use of Redis; master-slave synchronization and load distribution will be covered later.

Tomcat+Nginx+Memcached

Tomcat+Nginx+Memcached

Tested and passing on Ubuntu 16.04 64-bit

Features: static/dynamic separation, load balancing, clustering, Javolution serialization, high performance, high availability

Environment (all the latest stable versions at the time of writing):
jdk-8u131-linux-x64
apache-tomcat-8.5.14
nginx-1.12.0
memcached-1.4.36

Before we begin:

Originally the kryo serialization framework was intended, but it could not be made to work, so Javolution is used instead.
If things ultimately fail, the problem is usually found in Tomcat; check the logs to resolve it.
Multiple projects can be deployed under Tomcat, and static/dynamic separation still applies.
nginx is configured to forward the jsp, servlet, and do extensions to Tomcat; add more according to your situation.
If you do not run nginx with the root account, the user directive configured for nginx has no effect.
Tomcat and nginx both use optimized configurations; no changes to the original project are needed, just drop it into Tomcat/webapps.

Process:

The process is long; proceed carefully.

# Set the root password (if needed)
sudo passwd
# Use the administrator account for the configuration
sudo su
# Update the software list
apt-get update
# Install the required dependencies
apt-get install gcc zlib1g zlib1g-dev openssl libssl-dev libpcre3 libpcre3-dev libevent-dev
# Reboot (recommended)
reboot

sudo su
# Download the JDK from the page below, then unpack and install it
# http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
tar -zxvf jdk-8u131-linux-x64.tar.gz
mv jdk1.8.0_131 /usr/local/jdk

# Configure JDK environment variables
sed -i '$a ulimit -n 65535' /etc/profile
sed -i '$a export JAVA_HOME=/usr/local/jdk' /etc/profile
sed -i '$a export JRE_HOME=$JAVA_HOME/jre' /etc/profile
sed -i '$a export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH' /etc/profile
sed -i '$a export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH' /etc/profile
source /etc/profile

rm jdk-8u131-linux-x64.tar.gz

# Install and configure memcached
wget http://www.memcached.org/files/memcached-1.4.36.tar.gz
tar -zxvf memcached-1.4.36.tar.gz
cd memcached-1.4.36
./configure --prefix=/usr/local/memcached
make && make install
cd .. && rm -rf memcached-1.4.36 && rm memcached-1.4.36.tar.gz

# Install and configure Tomcat
wget http://apache.fayea.com/tomcat/tomcat-8/v8.5.14/bin/apache-tomcat-8.5.14.tar.gz
tar -zxvf apache-tomcat-8.5.14.tar.gz

# Download the additional lib files that support session sharing
cd apache-tomcat-8.5.14/lib
wget http://central.maven.org/maven2/de/javakaffee/msm/memcached-session-manager/2.1.1/memcached-session-manager-2.1.1.jar
wget http://central.maven.org/maven2/de/javakaffee/msm/memcached-session-manager-tc8/2.1.1/memcached-session-manager-tc8-2.1.1.jar
wget http://central.maven.org/maven2/net/spy/spymemcached/2.11.1/spymemcached-2.11.1.jar
wget http://central.maven.org/maven2/de/javakaffee/msm/msm-javolution-serializer/2.1.1/msm-javolution-serializer-2.1.1.jar
wget http://central.maven.org/maven2/javolution/javolution/5.4.5/javolution-5.4.5.jar
cd .. && cd ..

# Skip TLD scanning for the newly added jar files
sed -i '134c xom-*.jar,javolution-5.4.5.jar,memcached-session-manager-2.1.1.jar,memcached-session-manager-tc8-2.1.1.jar,msm-javolution-serializer-2.1.1.jar,spymemcached-2.11.1.jar' apache-tomcat-8.5.14/conf/catalina.properties
# Optimize Tomcat: replace line 102 of catalina.sh with the JVM options below
sed -i '102c export JAVA_OPTS="-server -Xms1000M -Xmx1000M -Xss512k -XX:+AggressiveOpts -XX:+UseBiasedLocking -XX:+DisableExplicitGC -XX:MaxTenuringThreshold=15 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true"' apache-tomcat-8.5.14/bin/catalina.sh

rm -rf apache-tomcat-8.5.14/webapps
mkdir -vp apache-tomcat-8.5.14/webapps/ROOT
cp -r apache-tomcat-8.5.14 /usr/local/tomcat
mv apache-tomcat-8.5.14 /usr/local/tomcat2
chown ubuntu.ubuntu -R /usr/local/tomcat
chown ubuntu.ubuntu -R /usr/local/tomcat2
rm apache-tomcat-8.5.14.tar.gz

# Create a test page
touch /usr/local/tomcat/webapps/ROOT/index.jsp
echo '<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>Tomcat1<%=session.getId()%>' >/usr/local/tomcat/webapps/ROOT/index.jsp
touch /usr/local/tomcat2/webapps/ROOT/index.jsp
echo '<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>Tomcat2<%=session.getId()%>' >/usr/local/tomcat2/webapps/ROOT/index.jsp

# Configure session sharing
# Since static files are handed to nginx, requestUriIgnorePattern is not configured
# We configure the Javolution serialization framework
vim /usr/local/tomcat/conf/context.xml
# Add the following content inside the <Context> tag
##################################################

##################################################

# Tomcat2 needs the same content; the only difference is that failoverNodes is n2 instead
vim /usr/local/tomcat2/conf/context.xml
##################################################

##################################################

# Modify the port configuration; change the following content
vim /usr/local/tomcat/conf/server.xml
##################################################
# Since our Tomcats run on a single server, add jvmRoute="tomcat" and jvmRoute="tomcat2" to the respective Engine nodes
# This is still an optimized configuration; gzip is not enabled here because nginx already enables it

……..

##################################################

# Make the same port modifications for tomcat2
vim /usr/local/tomcat2/conf/server.xml
##################################################

……..

##################################################
# For Tomcat2, change port 8005 to 8105, 8009 to 8109, and 8080 to 8180; SSL is not configured here because nginx handles that interaction

# Configure the following to optimize Tomcat
# Install and configure APR
wget http://mirror.bit.edu.cn/apache//apr/apr-1.5.2.tar.gz
tar -zxvf apr-1.5.2.tar.gz
cd apr-1.5.2 && ./configure --prefix=/usr/local/apr
make && make install
cd .. && rm -rf apr-1.5.2 && rm apr-1.5.2.tar.gz

# Install apr-util
wget http://mirror.bit.edu.cn/apache//apr/apr-util-1.5.4.tar.gz
tar -zxvf apr-util-1.5.4.tar.gz
cd apr-util-1.5.4 && ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr
make && make install
cd .. && rm -rf apr-util-1.5.4 && rm apr-util-1.5.4.tar.gz

# Install tomcat-native
wget https://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-connectors/native/1.2.12/source/tomcat-native-1.2.12-src.tar.gz
tar -zxvf tomcat-native-1.2.12-src.tar.gz
cd tomcat-native-1.2.12-src/native && ./configure --with-apr=/usr/local/apr
make && make install
cd .. && cd .. && rm -rf tomcat-native-1.2.12-src && rm tomcat-native-1.2.12-src.tar.gz

# Add the tomcat-native library to the load path
sed -i '$a export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/apr/lib' /etc/profile
source /etc/profile

# Finally, install and configure Nginx
wget http://nginx.org/download/nginx-1.12.0.tar.gz
tar -zxvf nginx-1.12.0.tar.gz
cd nginx-1.12.0 && ./configure --user=ubuntu --group=ubuntu --prefix=/usr/local/nginx --with-http_ssl_module --with-http_gzip_static_module
make && make install
chown ubuntu.ubuntu -R /usr/local/nginx
cd .. && rm -rf nginx-1.12.0 && rm nginx-1.12.0.tar.gz

# First place the SSL certificate files in the /usr/local/nginx/conf/ directory as cert.crt and cert.key; skip this if you do not configure SSL
vim /usr/local/nginx/conf/nginx.conf
# Set up nginx.conf; optimization has already been done; if you do not need SSL, change the corresponding configuration
##################################################
user ubuntu ubuntu;
worker_processes auto;
worker_rlimit_nofile 65535;
error_log logs/error.log warn;
pid logs/nginx.pid;

events {
use epoll;
worker_connections 65500;
}

http {
server_tokens off;
include mime.types;
default_type application/octet-stream;
charset utf-8;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;

sendfile on;
tcp_nopush on;
reset_timedout_connection on;
keepalive_timeout 30;

open_file_cache max=65535 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 30s;

gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied any;
gzip_vary on;
gzip_http_version 1.0;
gzip_buffers 4 16k;
gzip_types
text/plain text/css text/xml application/xml text/x-json application/json
image/svg+xml image/png image/jpeg image/x-icon image/gif
text/javascript application/javascript application/x-javascript
application/x-font-truetype application/x-font-woff application/vnd.ms-fontobject;
gzip_disable "MSIE [1-6]\.";

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 32k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

upstream tomcat_server {
server localhost:8080 weight=1;
server localhost:8180 weight=1;
}

server {
listen 80;
server_name localhost;
return 301 https://$host$request_uri;
}

server {
listen 443 ssl;
server_name localhost;
ssl_certificate cert.crt;
ssl_certificate_key cert.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
root /usr/local/tomcat/webapps/ROOT;
index index.html index.jsp index.htm;
expires 30d;
}
location ~ \.(jsp|servlet|do)$ {
index index.html index.jsp index.htm;
proxy_pass http://tomcat_server;
}
error_page 400 404 414 500 502 503 504 /error.html;
}
}
##################################################
# Check that the gzip_types you need are all present; add any that are missing
# If your tomcat has multiple projects configured, you only need to add the following content to nginx.conf
location /your-project-name {
root /usr/local/tomcat/webapps;
index index.html index.jsp index.htm;
expires 30d;
}
##################################################
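The upstream block above distributes requests between the two Tomcat ports with weight 1 each. nginx's smooth weighted round-robin can be sketched like this (an illustration of the algorithm, not nginx source code):

```python
def smooth_weighted_rr(peers, n):
    """peers: list of (name, weight). Returns n picks, nginx-style:
    each round every peer gains its weight; the current leader is
    picked and pays back the total, spreading picks out smoothly."""
    current = {name: 0 for name, _ in peers}
    total = sum(w for _, w in peers)
    picks = []
    for _ in range(n):
        for name, w in peers:
            current[name] += w                       # every peer gains its weight
        best = max(peers, key=lambda p: current[p[0]])[0]
        current[best] -= total                       # the winner pays the total back
        picks.append(best)
    return picks

# Two equal-weight Tomcats simply alternate
print(smooth_weighted_rr([("tomcat:8080", 1), ("tomcat2:8180", 1)], 4))
# ['tomcat:8080', 'tomcat2:8180', 'tomcat:8080', 'tomcat2:8180']

# Unequal weights interleave instead of bursting
print(smooth_weighted_rr([("a", 3), ("b", 1)], 4))
```

With equal weights this degenerates to plain round-robin, which is exactly what the two weight=1 servers in the config produce.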
# After saving, check the configuration with the following command
/usr/local/nginx/sbin/nginx -t
# To reload nginx after a configuration change: sudo /usr/local/nginx/sbin/nginx -s reload

# Switch back to the normal user
su ubuntu

# Start two memcached instances; check them with: ps -ef | grep memcached
/usr/local/memcached/bin/memcached -d -m 64M -u ubuntu -l 127.0.0.1 -p 11211 -c 32750 -P /tmp/memcached-n1.pid
/usr/local/memcached/bin/memcached -d -m 64M -u ubuntu -l 127.0.0.1 -p 11311 -c 32750 -P /tmp/memcached-n2.pid

# Start Tomcat; be sure to run Tomcat as an ordinary user
/usr/local/tomcat/bin/startup.sh && /usr/local/tomcat2/bin/startup.sh

# Start nginx; be sure to run nginx with administrator privileges
sudo /usr/local/nginx/sbin/nginx

ActiveMQ standalone installation and use tutorial

ActiveMQ standalone installation and use tutorial

First, a brief introduction to MQ. MQ stands for MessageQueue, a message queue. Why use it? Plainly put, it is a container that accepts and forwards messages, and it can be used for message push.
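The accept-and-forward role of a message queue can be shown with a minimal in-process sketch (Python stdlib queue; a real broker such as ActiveMQ does this across the network, with persistence and acknowledgements):

```python
from queue import Queue

# A message queue decouples producer and consumer: the producer returns
# immediately after enqueueing; the consumer takes messages when ready.
mq = Queue()

def producer(msg):
    mq.put(msg)       # send and forget

def consumer():
    return mq.get()   # blocks until a message is available

producer("order created: #1001")
producer("order created: #1002")
print(consumer())   # order created: #1001  (FIFO order)
print(consumer())
```

The producer never waits for the consumer, which is what makes queues useful for smoothing load spikes and for push-style delivery.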

ActiveMQ, produced by Apache, is one of the most popular and powerful open source message buses. ActiveMQ is a JMS provider implementation that fully supports the JMS 1.1 and J2EE 1.4 specifications. It is very fast, supports clients and protocols in many languages, can easily be embedded into enterprise application environments, and has many advanced features. Below we install the standalone version of ActiveMQ.

1. Download ActiveMQ from the official website and upload it to the server

2. Unpack the installation

# tar -zxvf apache-activemq-5.11.1-bin.tar.gz
3. If the activemq startup script does not have execute permission, grant it

# chmod 755 /opt/activeMQ/apache-activemq-5.11.1/bin/activemq
4. Configure the ports

The ActiveMQ message port (default 61616) is configured in conf/activemq.xml; the web management console port (default 8161) is configured in conf/jetty.xml.

5. Start ActiveMQ

# /opt/activeMQ/apache-activemq-5.11.1/bin/activemq start

6. Access ActiveMQ

Click "Manage ActiveMQ"; at the account and password verification prompt, the default account / password is admin / admin

7. Security configuration

If ActiveMQ has no security mechanism, anyone who knows the specific address of the message service (IP, port, and the queue or topic address) can send and receive messages without restriction. So we must configure security for ActiveMQ. There are many security configuration strategies; here we take a simple authorization configuration as an example.

7.1 In the conf/activemq.xml file, add the following at the end of the broker tag:

# vim /opt/activeMQ/apache-activemq-5.11.1/conf/activemq.xml





roberto roberto,users,admins
7.2 Make sure that authentication is enabled

authenticate true

7.3 The console login user name and password are stored in the conf/jetty-realm.properties file, as follows:

# vim /opt/activeMQ/apache-activemq-5.11.1/conf/jetty-realm.properties

# Defines users that can access the web (console, demo, etc.)
# username: password [,rolename …]
admin: roberto, admin
user: user, user
Modify the admin user’s password to roberto

7.4 Restart ActiveMQ
# /opt/activeMQ/apache-activemq-5.11.1/bin/activemq restart