1. Introduction to Redis Cluster
At startup, a Redis cluster automatically shards data across a number of nodes. It also provides a degree of availability across those shards: when some Redis nodes fail or the network is interrupted, the cluster can keep working. However, when a large portion of the nodes fail or the network is partitioned (for example, when most of the master nodes are unreachable), the cluster becomes unavailable.
Therefore, from a practical point of view, a Redis cluster provides the following features:
• automatically splitting data across multiple Redis nodes
• continuing to work when some of the nodes fail or become unreachable
2. Redis cluster data sharding
The Redis cluster does not use consistent hashing; instead it uses hash slots. The whole cluster has 16384 hash slots, and the algorithm that decides which slot a key belongs to is: compute CRC16(key) mod 16384.
Each node in the cluster is responsible for a part of the hash slots. With three nodes in the cluster, for example:
• Node A stores hash slots 0 – 5500
• Node B stores hash slots 5501 – 11000
• Node C stores hash slots 11001 – 16383
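To see which slot a given key maps to, the CLUSTER KEYSLOT command can be queried on any running node. A minimal sketch, assuming the node addresses used later in this article and a hypothetical key "user:1000":
# Returns CRC16("user:1000") mod 16384, i.e. the hash slot this key belongs to
redis-cli -h 192.168.1.181 -p 7000 CLUSTER KEYSLOT user:1000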
This distribution makes it easy to add and remove nodes. For example, to add a node D, you only need to move part of the hash slots from A, B and C to node D. Similarly, to remove node A from the cluster, you only need to move the data in node A's hash slots to nodes B and C; once node A no longer holds any data, it can be removed from the cluster completely.
Because moving hash slots from one node to another does not require stopping the service, adding or removing nodes, or changing the hash slots assigned to a node, can all be done without downtime.
The cluster supports operating on multiple keys in a single command (or transaction, or Lua script) as long as those keys belong to the same hash slot. Through the "hash tag" concept, the user can force multiple keys into the same hash slot. Hash tags are described in the cluster specification; briefly: if the key contains curly braces "{}", only the string inside the braces participates in the hash, so "this{foo}" and "another{foo}" are assigned to the same hash slot and can therefore be manipulated together in a single command.
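As a quick check (a sketch that assumes the cluster built later in this article), CLUSTER KEYSLOT shows that only the part inside the braces is hashed, which is what makes a multi-key command on these keys legal in cluster mode:
# Both keys hash only on "foo", so both commands print the same slot number
redis-cli -h 192.168.1.181 -p 7000 CLUSTER KEYSLOT "this{foo}"
redis-cli -h 192.168.1.181 -p 7000 CLUSTER KEYSLOT "another{foo}"
# Because they share a slot, they can be written together in one command
redis-cli -c -h 192.168.1.181 -p 7000 MSET "this{foo}" 1 "another{foo}" 2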
3. Redis master-slave model
To keep the cluster working when some nodes fail or become unreachable over the network, the cluster uses a master-slave model in which every hash slot has 1 master node and N-1 replica (slave) nodes. In our previous example with three nodes A, B and C, the cluster stops working if node B fails, because the data in node B's hash slots can no longer be accessed. However, if we add a slave to each node, the cluster becomes: A, B and C as master nodes, and A1, B1 and C1 as their slaves. Now the cluster can survive the loss of node B: B1 is a replica of B, so if B fails the cluster promotes B1 to master and keeps working normally. However, if B and B1 fail at the same time, the cluster cannot continue to work.
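The roles described above can be inspected, and a controlled promotion tested, with the standard cluster commands. This is only an illustrative sketch, with the node addresses taken from the cluster built in section 7:
# Lists every node with its flags (master/slave) and, for slaves, the master it replicates
redis-cli -h 192.168.1.181 -p 7000 CLUSTER NODES
# Manually promote a replica to master (must be sent to a slave node),
# e.g. 192.168.1.182:7004, which ends up replicating 192.168.1.181:7000 below
redis-cli -h 192.168.1.182 -p 7004 CLUSTER FAILOVER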
Redis cluster consistency guarantees
Redis clusters cannot guarantee strong consistency. Some writes that have already been acknowledged to the client as successful may be lost under certain conditions.
The first reason writes can be lost is that master and slave nodes synchronize data asynchronously.
A write operation goes through the following steps:
1) The client sends a write operation to master node B.
2) Master node B replies to the client that the write succeeded.
3) Master node B propagates the write operation to its slave nodes B1, B2 and B3.
As the process above shows, master node B does not wait for slaves B1, B2 and B3 to confirm the write before replying to the client. Therefore, if master node B fails after telling the client that the write succeeded but before the write reaches the slaves, one of the slaves that never received that write will be promoted to master, and the write is lost forever.
This is similar to a traditional, non-distributed database that flushes writes to disk once per second. Consistency could be improved by replying to the client only after the replicas have confirmed the write, but that sacrifices performance; doing so would amount to the Redis cluster using synchronous replication.
Basically, there is a trade-off between performance and consistency.
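When stronger durability is needed for specific writes, the WAIT command can block until a given number of replicas have acknowledged the preceding writes of the same connection; it narrows, but does not fully close, the window described above. A sketch, using a hypothetical key "balance", typed inside one redis-cli session connected to a master:
SET balance 100
# Block until at least 1 replica has acknowledged the write, or 1000 ms pass;
# the reply is the number of replicas that actually acknowledged it
WAIT 1 1000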
4. Create and use redis clusters
4.1. Download the redis file
[root@test dtadmin]# wget http://download.redis.io/releases/redis-3.2.9.tar.gz
4.2. Unzip the redis to /opt/ directory
[root@test dtadmin]# tar -zxvf redis-3.2.9.tar.gz -C /opt/
4.3. Compile redis
[root@test dtadmin]# cd /opt/redis-3.2.9/
[root@app1 redis-3.2.9]# make && make install
[root@app1 redis-cluster]# yum -y install ruby ruby-devel rubygems rpm-build gcc
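The ruby packages above are needed because the cluster is created in step 7 with redis-trib.rb, which also requires the Ruby redis client gem. Installing it would look like this (on an older system Ruby the gem may need to be pinned to a 3.x release, e.g. -v 3.3.3):
[root@test redis-3.2.9]# gem install redis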
4.4.1. Cluster node layout
#  hostname         ip             software  port              notes
1  test.rmohan.com  192.168.1.181  redis     7000, 7001, 7002
2  app1.rmohan.com  192.168.1.182  redis     7003, 7004, 7005
3  app2.rmohan.com  192.168.1.183  redis     7006, 7007, 7008
4.4.2. Create the redis-cluster directory under /opt/redis-3.2.9/
Create the redis-cluster directory:
[root@test redis-3.2.9]# mkdir redis-cluster
Inside redis-cluster, create one directory per instance (one per port):
[root@test redis-cluster]# mkdir -p 7000 7001 7002
Copy /opt/redis-3.2.9/redis.conf into each of the 7000, 7001 and 7002 directories:
[root@test redis-cluster]# cp /opt/redis-3.2.9/redis.conf /opt/redis-3.2.9/redis-cluster/7000
[root@test redis-cluster]# cp /opt/redis-3.2.9/redis.conf /opt/redis-3.2.9/redis-cluster/7001
[root@test redis-cluster]# cp /opt/redis-3.2.9/redis.conf /opt/redis-3.2.9/redis-cluster/7002
4.4.3. Edit redis.conf in each of /opt/redis-3.2.9/redis-cluster/7000, 7001 and 7002
[root@test redis-cluster]# vim 7000/redis.conf
[root@test redis-cluster]# vim 7001/redis.conf
[root@test redis-cluster]# vim 7002/redis.conf
#######################################
bind 192.168.1.181                  # IP address of this host
port 7000                           # 7001 / 7002 for the other two instances
daemonize yes
pidfile /var/run/redis_7000.pid     # change 7000 to match the instance port
logfile "/var/log/redis-7000.log"   # change 7000 to match the instance port
appendonly yes
cluster-enabled yes
cluster-config-file nodes-7000.conf # change 7000 to match the instance port
cluster-node-timeout 15000
4.4.4. Repeat steps 4.4.2 and 4.4.3 on app1.rmohan.com and app2.rmohan.com for ports 7003, 7004, 7005, 7006, 7007 and 7008 (see the sketch below).
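A possible shortcut for creating those directories and config copies, shown here for app1.rmohan.com as a sketch (the same loop with 7006 7007 7008 applies on app2.rmohan.com):
for port in 7003 7004 7005; do
  mkdir -p /opt/redis-3.2.9/redis-cluster/$port
  cp /opt/redis-3.2.9/redis.conf /opt/redis-3.2.9/redis-cluster/$port/
done
# Then edit each $port/redis.conf as in 4.4.3, changing bind to this host's IP and
# replacing 7000 with the matching port in port, pidfile, logfile and cluster-config-file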
5. Start the redis instances
5.1. Start redis on test.rmohan.com (192.168.1.181)
[root@test redis-cluster]# redis-server 7001/redis.conf
[root@test redis-cluster]# redis-server 7002/redis.conf
[root@test redis-cluster]# redis-server 7000/redis.conf
5.2. Start redis on app1.rmohan.com (192.168.1.182)
[root@app1 redis-cluster]# redis-server 7003/redis.conf
[root@app1 redis-cluster]# redis-server 7004/redis.conf
[root@app1 redis-cluster]# redis-server 7005/redis.conf
5.3. Start redis on app2.rmohan.com (192.168.1.183)
[root@app2 redis-cluster]# redis-server 7006/redis.conf
[root@app2 redis-cluster]# redis-server 7007/redis.conf
[root@app2 redis-cluster]# redis-server 7008/redis.conf
6. Verify the redis instances
6.1. On test.rmohan.com
[root@test redis-cluster]# ps -ef | grep redis
root 18313 1 0 16:44 ? 00:00:00 redis-server 192.168.1.181:7001 [cluster]
root 18325 1 0 16:44 ? 00:00:00 redis-server 192.168.1.181:7002 [cluster]
root 18371 1 0 16:45 ? 00:00:00 redis-server 192.168.1.181:7000 [cluster]
root 18449 2564 0 16:46 pts/0 00:00:00 grep --color=auto redis
[root@test redis-cluster]# netstat -tnlp | grep redis
tcp 0 0 192.168.1.181:7001 0.0.0.0:* LISTEN 18313/redis-server
tcp 0 0 192.168.1.181:7002 0.0.0.0:* LISTEN 18325/redis-server
tcp 0 0 192.168.1.181:17000 0.0.0.0:* LISTEN 18371/redis-server
tcp 0 0 192.168.1.181:17001 0.0.0.0:* LISTEN 18313/redis-server
tcp 0 0 192.168.1.181:17002 0.0.0.0:* LISTEN 18325/redis-server
tcp 0 0 192.168.1.181:7000 0.0.0.0:* LISTEN 18371/redis-server
6.2. On app1.rmohan.com
[root@app1 redis-cluster]# ps -ef | grep redis
root 5351 1 0 16:45 ? 00:00:00 redis-server 192.168.1.182:7003 [cluster]
root 5355 1 0 16:45 ? 00:00:00 redis-server 192.168.1.182:7004 [cluster]
root 5359 1 0 16:46 ? 00:00:00 redis-server 192.168.1.182:7005 [cluster]
[root@app1 redis-cluster]# netstat -tnlp | grep redis
tcp 0 0 192.168.1.182:7003 0.0.0.0:* LISTEN 5351/redis-server 1
tcp 0 0 192.168.1.182:7004 0.0.0.0:* LISTEN 5355/redis-server 1
tcp 0 0 192.168.1.182:7005 0.0.0.0:* LISTEN 5359/redis-server 1
tcp 0 0 192.168.1.182:17003 0.0.0.0:* LISTEN 5351/redis-server 1
tcp 0 0 192.168.1.182:17004 0.0.0.0:* LISTEN 5355/redis-server 1
tcp 0 0 192.168.1.182:17005 0.0.0.0:* LISTEN 5359/redis-server 1
6.3. On app2.rmohan.com
[root@app2 redis-cluster]# ps -ef | grep redis
root 21138 1 0 16:46 ? 00:00:00 redis-server 192.168.1.183:7006 [cluster]
root 21156 1 0 16:46 ? 00:00:00 redis-server 192.168.1.183:7008 [cluster]
root 21387 1 0 16:50 ? 00:00:00 redis-server 192.168.1.183:7007 [cluster]
root 21394 9287 0 16:50 pts/0 00:00:00 grep --color=auto redis
[root@app2 redis-cluster]# netstat -tnlp | grep redis
tcp 0 0 192.168.1.183:7006 0.0.0.0:* LISTEN 2959/redis-server 1
tcp 0 0 192.168.1.183:7007 0.0.0.0:* LISTEN 2971/redis-server 1
tcp 0 0 192.168.1.183:7008 0.0.0.0:* LISTEN 2982/redis-server 1
tcp 0 0 192.168.1.183:17006 0.0.0.0:* LISTEN 2959/redis-server 1
tcp 0 0 192.168.1.183:17007 0.0.0.0:* LISTEN 2971/redis-server 1
tcp 0 0 192.168.1.183:17008 0.0.0.0:* LISTEN 2982/redis-server 1
7. Create the redis cluster with redis-trib.rb
[root@test src]# ./redis-trib.rb create --replicas 1 192.168.1.181:7000 192.168.1.181:7001 192.168.1.181:7002 192.168.1.182:7003 192.168.1.182:7004 192.168.1.182:7005 192.168.1.183:7006 192.168.1.183:7007 192.168.1.183:7008
>>> Creating cluster
>>> Performing hash slots allocation on 9 nodes…
Using 4 masters:
192.168.1.181:7000
192.168.1.182:7003
192.168.1.183:7006
192.168.1.181:7001
Adding replica 192.168.1.182:7004 to 192.168.1.181:7000
Adding replica 192.168.1.183:7007 to 192.168.1.182:7003
Adding replica 192.168.1.181:7002 to 192.168.1.183:7006
Adding replica 192.168.1.182:7005 to 192.168.1.181:7001
Adding replica 192.168.1.183:7008 to 192.168.1.181:7000
M: 4d007a1e8efdc43ca4ec3db77029709b4e8413d0 192.168.1.181:7000
slots:0-4095 (4096 slots) master
M: 0d0b4528f32db0111db2a78b8451567086b66d97 192.168.1.181:7001
slots:12288-16383 (4096 slots) master
S: e7b8ba7a800683ba017401bde9a72bb34ad252d8 192.168.1.181:7002
replicates 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa
M: 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce 192.168.1.182:7003
slots:4096-8191 (4096 slots) master
S: 13863d63aa323fd58e7ceeba1ccc91b6304d0539 192.168.1.182:7004
replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
S: da3556753efe388a64fafc259338ea420a795163 192.168.1.182:7005
replicates 0d0b4528f32db0111db2a78b8451567086b66d97
M: 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa 192.168.1.183:7006
slots:8192-12287 (4096 slots) master
S: ab90ee3ff9834a88416da311011e9bdfaa9a831f 192.168.1.183:7007
replicates 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce
S: b0dda91a2527f71fe555cdd28fa8be4b571a4bed 192.168.1.183:7008
replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join……..
>>> Performing Cluster Check (using node 192.168.1.181:7000)
M: 4d007a1e8efdc43ca4ec3db77029709b4e8413d0 192.168.1.181:7000
slots:0-4095 (4096 slots) master
2 additional replica(s)
S: e7b8ba7a800683ba017401bde9a72bb34ad252d8 192.168.1.181:7002
slots: (0 slots) slave
replicates 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa
S: ab90ee3ff9834a88416da311011e9bdfaa9a831f 192.168.1.183:7007
slots: (0 slots) slave
replicates 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce
M: 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce 192.168.1.182:7003
slots:4096-8191 (4096 slots) master
1 additional replica(s)
M: 0d0b4528f32db0111db2a78b8451567086b66d97 192.168.1.181:7001
slots:12288-16383 (4096 slots) master
1 additional replica(s)
M: 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa 192.168.1.183:7006
slots:8192-12287 (4096 slots) master
1 additional replica(s)
S: b0dda91a2527f71fe555cdd28fa8be4b571a4bed 192.168.1.183:7008
slots: (0 slots) slave
replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
S: 13863d63aa323fd58e7ceeba1ccc91b6304d0539 192.168.1.182:7004
slots: (0 slots) slave
replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
S: da3556753efe388a64fafc259338ea420a795163 192.168.1.182:7005
slots: (0 slots) slave
replicates 0d0b4528f32db0111db2a78b8451567086b66d97
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.
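At this point the cluster can be exercised from any node. A quick, illustrative test using a hypothetical key "foo" (the -c flag makes redis-cli follow MOVED redirects between nodes):
redis-cli -c -h 192.168.1.181 -p 7000 SET foo bar
redis-cli -c -h 192.168.1.183 -p 7006 GET foo
# cluster_state:ok and cluster_slots_assigned:16384 indicate a healthy cluster
redis-cli -h 192.168.1.181 -p 7000 CLUSTER INFO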