012 Redis - Cluster Installation and Dynamic Scaling

        Redis Cluster is Redis's distributed solution, officially introduced in Redis 3.0, and it effectively addresses the need to run Redis in a distributed fashion. When a single instance runs into memory, concurrency, or network-traffic bottlenecks, a Cluster deployment can be used to spread the load.

Data distribution theory: a distributed database must first solve the problem of mapping the entire data set onto multiple nodes according to a partitioning rule, i.e. dividing the data set so that each node is responsible for a subset of the whole. The common partitioning rules are sequential (range) partitioning and hash partitioning. Redis Cluster uses hash partitioning, so the discussion below focuses on that. Common hash partitioning schemes are:

Node-remainder (modulo) partitioning

Consistent hashing partitioning

Virtual slot partitioning:

A fixed set of slots is pre-allocated and each slot maps to a subset of the data; a key's placement is computed by hashing the key with CRC16 and taking the result modulo the number of slots, and the slots are shared out among the nodes (a quick way to check a key's slot is shown right after this list).

Node scaling: adding a node does not disrupt the whole cluster.
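
Once the cluster built below is running, the slot a key maps to can be checked with the CLUSTER KEYSLOT command. A minimal sketch, assuming redis-cli sits at ../redis/bin/ as elsewhere in this article and using example key names:

../redis/bin/redis-cli -p 6380 cluster keyslot user:1000    # returns an integer in 0..16383
../redis/bin/redis-cli -p 6380 cluster keyslot foo          # i.e. CRC16(key) mod 16384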

Configuring the Redis cluster:

Create six nodes: three masters and three slaves.

applet@ZhongDome:~/servers$ mkdir -pv cluster

applet@ZhongDome:~/servers/cluster$ mkdir -pv {6380,6381,6382,6383,6384,6385}/{conf,data,logs}
mkdir: created directory '6380'
mkdir: created directory '6380/conf'
mkdir: created directory '6380/data'
mkdir: created directory '6380/logs'
mkdir: created directory '6381'
mkdir: created directory '6381/conf'
mkdir: created directory '6381/data'
mkdir: created directory '6381/logs'
mkdir: created directory '6382'
mkdir: created directory '6382/conf'
mkdir: created directory '6382/data'
mkdir: created directory '6382/logs'
mkdir: created directory '6383'
mkdir: created directory '6383/conf'
mkdir: created directory '6383/data'
mkdir: created directory '6383/logs'
mkdir: created directory '6384'
mkdir: created directory '6384/conf'
mkdir: created directory '6384/data'
mkdir: created directory '6384/logs'
mkdir: created directory '6385'
mkdir: created directory '6385/conf'
mkdir: created directory '6385/data'
mkdir: created directory '6385/logs'

applet@ZhongDome:~/servers/cluster$ ls
6380  6381  6382  6383  6384  6385
applet@ZhongDome:~/servers$ 

Copy an existing Redis configuration file and modify it:

applet@ZhongDome:~/servers/cluster$ cp ~/servers/master-slave/10190/conf/redis.conf 6380/conf/
applet@ZhongDome:~/servers/cluster$ vim 6380/conf/redis.conf

------------------------------------------------------------------------------------------------------------------------------------------------

daemonize yes

pidfile /home/applet/servers/cluster/6380/redis.pid

port 6380

#bind 127.0.0.1

loglevel notice

protected-mode no

logfile "/home/applet/servers/cluster/6380/logs/redis.log"

dir /home/applet/servers/cluster/6380/data

## cluster configuration

cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file /home/applet/servers/cluster/6380/nodes.conf
 

------------------------------------------------------------------------------------------------------------------------------------------------

After editing, copy this configuration file to the other nodes and adjust the paths and ports (a loop automating these steps is sketched after the transcript below):

applet@ZhongDome:~/servers/cluster$ cp 6380/conf/redis.conf 6381/conf/
applet@ZhongDome:~/servers/cluster$ cp 6380/conf/redis.conf 6382/conf/
applet@ZhongDome:~/servers/cluster$ cp 6380/conf/redis.conf 6383/conf/
applet@ZhongDome:~/servers/cluster$ cp 6380/conf/redis.conf 6384/conf/
applet@ZhongDome:~/servers/cluster$ cp 6380/conf/redis.conf 6385/conf/
applet@ZhongDome:~/servers/cluster$ sed -i 's/6380/6381/g' 6381/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ sed -i 's/6380/6382/g' 6382/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ sed -i 's/6380/6383/g' 6383/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ sed -i 's/6380/6384/g' 6384/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ sed -i 's/6380/6385/g' 6385/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ 
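
The five cp/sed pairs above can also be collapsed into one loop; a sketch, assuming the same directory layout and that it is run from ~/servers/cluster:

for port in 6381 6382 6383 6384 6385; do
    cp 6380/conf/redis.conf ${port}/conf/redis.conf        # copy the template config
    sed -i "s/6380/${port}/g" ${port}/conf/redis.conf      # rewrite port and paths
done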

Setting up the cluster with native commands---------------------------------------------------------------------------------------------------

Start the cluster nodes:

applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6380/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6381/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6382/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6383/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6384/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6385/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ps -ef|grep redis
applet    5112     1  0 09:23 ?        00:00:00 ../redis/bin/redis-server 172.16.65.150:6380 [cluster]  //started in cluster mode, hence the extra [cluster] tag
applet    5117     1  0 09:24 ?        00:00:00 ../redis/bin/redis-server 172.16.65.150:6381 [cluster]
applet    5122     1  0 09:24 ?        00:00:00 ../redis/bin/redis-server 172.16.65.150:6382 [cluster]
applet    5127     1  0 09:24 ?        00:00:00 ../redis/bin/redis-server 172.16.65.150:6383 [cluster]
applet    5132     1  0 09:24 ?        00:00:00 ../redis/bin/redis-server 172.16.65.150:6384 [cluster]
applet    5137     1  0 09:24 ?        00:00:00 ../redis/bin/redis-server 172.16.65.150:6385 [cluster]
applet    5142  4873  0 09:24 pts/0    00:00:00 grep --color=auto redis
applet@ZhongDome:~/servers/cluster$ 

So far 6380~6385 are six nodes that are completely independent of each other.

Connect to one of the nodes and check its status:

applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6380 
127.0.0.1:6380> info
# Server
redis_version:4.0.14
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:3f609e0bdb4a299
redis_mode:cluster
os:Linux 4.4.0-151-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:5.4.0
process_id:5171
run_id:371ab0faa6815b4f751a07746916be9802a509a8
tcp_port:6380
uptime_in_seconds:525
uptime_in_days:0
hz:10
lru_clock:1837001
executable:/home/applet/servers/redis/bin/redis-server
config_file:/home/applet/servers/cluster/6380/conf/redis.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:1431286
used_memory_human:1.36M
used_memory_rss:4362240
used_memory_rss_human:4.16M
used_memory_peak:1431286
used_memory_peak_human:1.36M
used_memory_peak_perc:100.08%
used_memory_overhead:1429754
used_memory_startup:1380116
used_memory_dataset:1532
used_memory_dataset_perc:2.99%
total_system_memory:2097041408
total_system_memory_human:1.95G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:3.05
mem_allocator:libc
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1562117564
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0

# Stats
total_connections_received:2
total_commands_processed:2
instantaneous_ops_per_sec:0
total_net_input_bytes:48
total_net_output_bytes:20326
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:0
master_replid:dca17bec4189e286f237b3db638da34aee7030d4
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:0.24
used_cpu_user:0.25
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Cluster
cluster_enabled:1

# Keyspace
127.0.0.1:6380> CLUSTER nodes
b13a5e68f983a5447b907013110c2b77f958fb28 :6380@16380 myself,master - 0 0 0 connected
127.0.0.1:6380> 

b13a5e68f983a5447b907013110c2b77f958fb28 is this node's cluster ID; the ID never changes.

Cluster mode is enabled (cluster_enabled:1), but the node currently only knows about itself, so the next step is to make the nodes aware of each other:

127.0.0.1:6380> cluster meet 172.16.65.150 6381
OK
127.0.0.1:6380> cluster meet 172.16.65.150 6382
OK
127.0.0.1:6380> cluster meet 172.16.65.150 6383
OK
127.0.0.1:6380> cluster meet 172.16.65.150 6384
OK
127.0.0.1:6380> cluster meet 172.16.65.150 6385
OK
127.0.0.1:6380> cluster nodes
2212885d7193f50808a10055e2d3fa2a5d60c763 172.16.65.150:6385@16385 master - 0 1562118459000 5 connected
7f91df56262fca7bce156ce2cd54aa7a050b18b2 172.16.65.150:6384@16384 master - 0 1562118458891 4 connected
6fdaadb18d3d784e8f897a6836b18b44cd4bf81a 172.16.65.150:6383@16383 master - 0 1562118460894 3 connected
f7df779a8380b46a54f4163c263e969b396e076b 172.16.65.150:6382@16382 master - 0 1562118459000 0 connected
b13a5e68f983a5447b907013110c2b77f958fb28 172.16.65.150:6380@16380 myself,master - 0 1562118459000 2 connected
a47fbd167b83b111cd176a52ac3b4b50cb4a2fb5 172.16.65.150:6381@16381 master - 0 1562118459892 1 connected
127.0.0.1:6380> 
The cluster meet command introduces the nodes to one another; as the output shows, the cluster now contains 6 nodes.
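
The five MEET commands can also be issued from the shell in a single loop; a sketch, assuming the same host (172.16.65.150) and binary path used throughout:

for port in 6381 6382 6383 6384 6385; do
    ../redis/bin/redis-cli -p 6380 cluster meet 172.16.65.150 ${port}
done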

Redis Cluster characteristics:

1. High availability: master/slave roles. In a cluster every master gets at least one slave holding a replica of its data; if the master goes down, a slave takes over, which provides high availability.

2. Data sharding: the data is distributed evenly across the nodes.

Assigning slots: six nodes (three masters, three slaves) are the minimum requirement for a cluster, and the hash slots must be assigned to the three masters.

[There are 16384 slots in total, numbered 0 through 16383.]

applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6380 cluster addslots {0..5461}
OK
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6381 cluster addslots {5462..10922}
OK
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6382 cluster addslots {10923..16383}
OK
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6380 cluster nodes
2212885d7193f50808a10055e2d3fa2a5d60c763 172.16.65.150:6385@16385 master - 0 1562119247000 5 connected
7f91df56262fca7bce156ce2cd54aa7a050b18b2 172.16.65.150:6384@16384 master - 0 1562119247000 4 connected
6fdaadb18d3d784e8f897a6836b18b44cd4bf81a 172.16.65.150:6383@16383 master - 0 1562119247378 3 connected
f7df779a8380b46a54f4163c263e969b396e076b 172.16.65.150:6382@16382 master - 0 1562119244373 0 connected 10923-16383
b13a5e68f983a5447b907013110c2b77f958fb28 172.16.65.150:6380@16380 myself,master - 0 1562119244000 2 connected 0-5461
a47fbd167b83b111cd176a52ac3b4b50cb4a2fb5 172.16.65.150:6381@16381 master - 0 1562119248379 1 connected 5462-10922
applet@ZhongDome:~/servers/cluster$ 

The slots are now assigned, but all six nodes still show up as masters. Next, set up replication: 6380/6381/6382 stay masters and 6383/6384/6385 become their slaves.

Configure replication:

applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6383 cluster replicate b13a5e68f983a5447b907013110c2b77f958fb28
OK
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6384 cluster replicate a47fbd167b83b111cd176a52ac3b4b50cb4a2fb5
OK
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6385 cluster replicate f7df779a8380b46a54f4163c263e969b396e076b
OK
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6380 cluster nodes
2212885d7193f50808a10055e2d3fa2a5d60c763 172.16.65.150:6385@16385 slave f7df779a8380b46a54f4163c263e969b396e076b 0 1562119762000 5 connected
7f91df56262fca7bce156ce2cd54aa7a050b18b2 172.16.65.150:6384@16384 slave a47fbd167b83b111cd176a52ac3b4b50cb4a2fb5 0 1562119762324 4 connected
6fdaadb18d3d784e8f897a6836b18b44cd4bf81a 172.16.65.150:6383@16383 slave b13a5e68f983a5447b907013110c2b77f958fb28 0 1562119760319 3 connected
f7df779a8380b46a54f4163c263e969b396e076b 172.16.65.150:6382@16382 master - 0 1562119761322 0 connected 10923-16383
b13a5e68f983a5447b907013110c2b77f958fb28 172.16.65.150:6380@16380 myself,master - 0 1562119758000 2 connected 0-5461
a47fbd167b83b111cd176a52ac3b4b50cb4a2fb5 172.16.65.150:6381@16381 master - 0 1562119763325 1 connected 5462-10922
applet@ZhongDome:~/servers/cluster$ 
Replication is set up: 6383 is the slave of 6380, 6384 the slave of 6381, and 6385 the slave of 6382.
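
A quick smoke test of the finished cluster: the -c flag makes redis-cli follow MOVED redirections, so a key whose slot lives on another master is written there transparently. A sketch with an arbitrary key name; the redirection notice only appears when the key does not hash into 6380's own slot range:

../redis/bin/redis-cli -c -p 6380 set hello world
../redis/bin/redis-cli -c -p 6380 get hello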

Dynamically adding a node:

1. Copy node 6385's directory to create 6386
applet@ZhongDome:~/servers/cluster$ cp -r 6385 6386
applet@ZhongDome:~/servers/cluster$ ls
6380  6381  6382  6383  6384  6385  6386
applet@ZhongDome:~/servers/cluster$ cd 6386
applet@ZhongDome:~/servers/cluster/6386$ ls
conf  data  logs  nodes.conf  redis.pid
applet@ZhongDome:~/servers/cluster/6386$ rm -rf nodes.conf redis.pid 
applet@ZhongDome:~/servers/cluster/6386$ ls
conf  data  logs
applet@ZhongDome:~/servers/cluster/6386$ rm -rf logs/*
applet@ZhongDome:~/servers/cluster/6386$ rm -rf data/*
applet@ZhongDome:~/servers/cluster/6386$ sed -i 's/6385/6386/g' conf/redis.conf 
applet@ZhongDome:~/servers/cluster/6386$ 
Add the new node to the cluster:

applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6380 cluster meet 172.16.65.150 6386
OK
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6380 cluster nodes
2212885d7193f50808a10055e2d3fa2a5d60c763 172.16.65.150:6385@16385 slave f7df779a8380b46a54f4163c263e969b396e076b 0 1562125343738 5 connected
7f91df56262fca7bce156ce2cd54aa7a050b18b2 172.16.65.150:6384@16384 slave a47fbd167b83b111cd176a52ac3b4b50cb4a2fb5 0 1562125341734 4 connected
356f1b8d7f9cff0ff4fc88a2abc783ca8bbc7ffd 172.16.65.150:6386@16386 master - 0 1562125341000 0 connected
6fdaadb18d3d784e8f897a6836b18b44cd4bf81a 172.16.65.150:6383@16383 slave b13a5e68f983a5447b907013110c2b77f958fb28 0 1562125341000 3 connected
f7df779a8380b46a54f4163c263e969b396e076b 172.16.65.150:6382@16382 master - 0 1562125342736 0 connected 10923-16383
b13a5e68f983a5447b907013110c2b77f958fb28 172.16.65.150:6380@16380 myself,master - 0 1562125340000 2 connected 0-5461
a47fbd167b83b111cd176a52ac3b4b50cb4a2fb5 172.16.65.150:6381@16381 master - 0 1562125342000 1 connected 5462-10922
applet@ZhongDome:~/servers/cluster$ 

Building the cluster with native commands is complete!!!
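
To confirm the cluster as a whole is healthy, cluster info should now report cluster_state:ok, all 16384 slots assigned, and seven known nodes (6380~6386); for example:

../redis/bin/redis-cli -p 6380 cluster info | grep -E 'cluster_state|cluster_slots_assigned|cluster_known_nodes'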

Managing the Redis cluster with the bundled tool:

1. Install a Ruby environment (the tool is written in Ruby and requires version 2.2 or later)

root@ZhongDome:/# apt-get install ruby       //Ruby runtime
Reading package lists... Done
Building dependency tree       
Reading state information... Done
.....................................
Setting up rake (10.5.0-2) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...
root@ZhongDome:/# ruby -v
ruby 2.3.1p112 (2016-04-26) [x86_64-linux-gnu]
root@ZhongDome:/# 

2. Install rubygems and the redis gem (the Ruby client library used to talk to Redis):

root@ZhongDome:/# apt-get install rubygems
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Note, selecting 'ruby' instead of 'rubygems'
ruby is already the newest version (1:2.3.0+1).
The following packages were automatically installed and are no longer required:
  libopts25 ssl-cert
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
root@ZhongDome:/# gem install redis
Fetching: redis-4.1.2.gem (100%)
Successfully installed redis-4.1.2
Parsing documentation for redis-4.1.2
Installing ri documentation for redis-4.1.2
Done installing documentation for redis after 1 seconds
1 gem installed
root@ZhongDome:/# 

3. Once these are installed, the cluster can be managed with redis-trib.rb:

(1) Shut down the cluster:

applet@ZhongDome:~/servers/cluster$ pkill redis
applet@ZhongDome:~/servers/cluster$ ps -ef|grep redis
applet    6252  5567  0 12:00 pts/2    00:00:00 grep --color=auto redis
applet@ZhongDome:~/servers/cluster$

(2) Remove the previous cluster state (do this on every node; a loop covering all six is sketched below):

applet@ZhongDome:~/servers/cluster$ rm -rf 6381/data/* 6381/logs/* 
applet@ZhongDome:~/servers/cluster$ rm -rf 6381/nodes.conf 
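
A sketch covering all six nodes, run from ~/servers/cluster; it wipes each node's data, logs, and nodes.conf, so only use it when the cluster state really should be discarded:

for port in 6380 6381 6382 6383 6384 6385; do
    rm -rf ${port}/data/* ${port}/logs/* ${port}/nodes.conf
done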

(3) Start the cluster nodes and check their state:

applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6380/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6381/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6382/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6383/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6384/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-server 6385/conf/redis.conf 
applet@ZhongDome:~/servers/cluster$ ps -ef|grep redis
applet    6318     1  0 12:07 ?        00:00:00 ../redis/bin/redis-server *:6380 [cluster]
applet    6323     1  0 12:07 ?        00:00:00 ../redis/bin/redis-server *:6381 [cluster]
applet    6328     1  0 12:07 ?        00:00:00 ../redis/bin/redis-server *:6382 [cluster]
applet    6333     1  0 12:07 ?        00:00:00 ../redis/bin/redis-server *:6383 [cluster]
applet    6338     1  0 12:07 ?        00:00:00 ../redis/bin/redis-server *:6384 [cluster]
applet    6343     1  0 12:07 ?        00:00:00 ../redis/bin/redis-server *:6385 [cluster]
applet    6348  5567  0 12:08 pts/2    00:00:00 grep --color=auto redis
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6380 cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6380 cluster nodes
0247b2d461dca4f77b829ab3de92fa47d9d3b07c :6380@16380 myself,master - 0 0 0 connected    //back to the initial state, all nodes independent again
applet@ZhongDome:~/servers/cluster$ 

(4) Create and manage the cluster with redis-trib.rb:

(create the cluster  |  --replicas 1 means one slave per master  |  then list the nodes; the first three become masters)

applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-trib.rb create --replicas 1 172.16.65.150:6380 172.16.65.150:6381 172.16.65.150:6382 172.16.65.150:6383 172.16.65.150:6384 172.16.65.150:6385
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.16.65.150:6380
172.16.65.150:6381
172.16.65.150:6382
Adding replica 172.16.65.150:6384 to 172.16.65.150:6380
Adding replica 172.16.65.150:6385 to 172.16.65.150:6381
Adding replica 172.16.65.150:6383 to 172.16.65.150:6382
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 0247b2d461dca4f77b829ab3de92fa47d9d3b07c 172.16.65.150:6380
   slots:0-5460 (5461 slots) master
M: 8d1481c5549f18cb276fdbd61b98723a9a51eaaa 172.16.65.150:6381
   slots:5461-10922 (5462 slots) master
M: e84e8f70c71965fc24a8df554fa3a3913e2dda4f 172.16.65.150:6382
   slots:10923-16383 (5461 slots) master
S: ee7c1816839eb46a8b99e6c084e8fb1f8d1129b1 172.16.65.150:6383
   replicates e84e8f70c71965fc24a8df554fa3a3913e2dda4f
S: 5ee68cbdd8a5322b90600ee2b09fcfeab61e8962 172.16.65.150:6384
   replicates 0247b2d461dca4f77b829ab3de92fa47d9d3b07c
S: 07a1dae5a9f868cb794b08d966d44bdc9fe65036 172.16.65.150:6385
   replicates 8d1481c5549f18cb276fdbd61b98723a9a51eaaa
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 172.16.65.150:6380)
M: 0247b2d461dca4f77b829ab3de92fa47d9d3b07c 172.16.65.150:6380
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 8d1481c5549f18cb276fdbd61b98723a9a51eaaa 172.16.65.150:6381
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 07a1dae5a9f868cb794b08d966d44bdc9fe65036 172.16.65.150:6385
   slots: (0 slots) slave
   replicates 8d1481c5549f18cb276fdbd61b98723a9a51eaaa
S: 5ee68cbdd8a5322b90600ee2b09fcfeab61e8962 172.16.65.150:6384
   slots: (0 slots) slave
   replicates 0247b2d461dca4f77b829ab3de92fa47d9d3b07c
M: e84e8f70c71965fc24a8df554fa3a3913e2dda4f 172.16.65.150:6382
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: ee7c1816839eb46a8b99e6c084e8fb1f8d1129b1 172.16.65.150:6383
   slots: (0 slots) slave
   replicates e84e8f70c71965fc24a8df554fa3a3913e2dda4f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6380 cluster nodes
8d1481c5549f18cb276fdbd61b98723a9a51eaaa 172.16.65.150:6381@16381 master - 0 1562128168000 2 connected 5461-10922
07a1dae5a9f868cb794b08d966d44bdc9fe65036 172.16.65.150:6385@16385 slave 8d1481c5549f18cb276fdbd61b98723a9a51eaaa 0 1562128171000 6 connected
0247b2d461dca4f77b829ab3de92fa47d9d3b07c 172.16.65.150:6380@16380 myself,master - 0 1562128169000 1 connected 0-5460
5ee68cbdd8a5322b90600ee2b09fcfeab61e8962 172.16.65.150:6384@16384 slave 0247b2d461dca4f77b829ab3de92fa47d9d3b07c 0 1562128170317 5 connected
e84e8f70c71965fc24a8df554fa3a3913e2dda4f 172.16.65.150:6382@16382 master - 0 1562128171318 3 connected 10923-16383
ee7c1816839eb46a8b99e6c084e8fb1f8d1129b1 172.16.65.150:6383@16383 slave e84e8f70c71965fc24a8df554fa3a3913e2dda4f 0 1562128172318 4 connected
applet@ZhongDome:~/servers/cluster$ 

Add 6386 to the cluster:
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-trib.rb add-node 172.16.65.150:6386 172.16.65.150:6380
>>> Adding node 172.16.65.150:6386 to cluster 172.16.65.150:6380
>>> Performing Cluster Check (using node 172.16.65.150:6380)
M: 0247b2d461dca4f77b829ab3de92fa47d9d3b07c 172.16.65.150:6380
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 8d1481c5549f18cb276fdbd61b98723a9a51eaaa 172.16.65.150:6381
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 07a1dae5a9f868cb794b08d966d44bdc9fe65036 172.16.65.150:6385
   slots: (0 slots) slave
   replicates 8d1481c5549f18cb276fdbd61b98723a9a51eaaa
S: 5ee68cbdd8a5322b90600ee2b09fcfeab61e8962 172.16.65.150:6384
   slots: (0 slots) slave
   replicates 0247b2d461dca4f77b829ab3de92fa47d9d3b07c
M: e84e8f70c71965fc24a8df554fa3a3913e2dda4f 172.16.65.150:6382
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: ee7c1816839eb46a8b99e6c084e8fb1f8d1129b1 172.16.65.150:6383
   slots: (0 slots) slave
   replicates e84e8f70c71965fc24a8df554fa3a3913e2dda4f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.16.65.150:6386 to make it join the cluster.
[OK] New node added correctly.
applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-cli -p 6380 cluster nodes
8d1481c5549f18cb276fdbd61b98723a9a51eaaa 172.16.65.150:6381@16381 master - 0 1562128441826 2 connected 5461-10922
07a1dae5a9f868cb794b08d966d44bdc9fe65036 172.16.65.150:6385@16385 slave 8d1481c5549f18cb276fdbd61b98723a9a51eaaa 0 1562128440000 6 connected
0247b2d461dca4f77b829ab3de92fa47d9d3b07c 172.16.65.150:6380@16380 myself,master - 0 1562128438000 1 connected 0-5460
570ade43c540c6ee3e7c32756f5d691659d2f965 172.16.65.150:6386@16386 master - 0 1562128440823 0 connected
5ee68cbdd8a5322b90600ee2b09fcfeab61e8962 172.16.65.150:6384@16384 slave 0247b2d461dca4f77b829ab3de92fa47d9d3b07c 0 1562128438000 5 connected
e84e8f70c71965fc24a8df554fa3a3913e2dda4f 172.16.65.150:6382@16382 master - 0 1562128439000 3 connected 10923-16383
ee7c1816839eb46a8b99e6c084e8fb1f8d1129b1 172.16.65.150:6383@16383 slave e84e8f70c71965fc24a8df554fa3a3913e2dda4f 0 1562128437814 4 connected
applet@ZhongDome:~/servers/cluster$ 
(5) Reshard the slots (dynamic reallocation):

applet@ZhongDome:~/servers/cluster$ ../redis/bin/redis-trib.rb reshard 172.16.65.150:6380

--1-- How many slots to move to the new node; enter 4000 here.

--2-- Which node receives them; the new node is 6386, so enter 6386's node ID.

--3-- Which nodes the slots are taken from: either type "all" to pull evenly from every existing master, or list specific source node IDs and finish with "done".
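The same reshard can also be scripted so the three prompts are answered on the command line; a sketch, assuming redis-trib.rb's reshard options (--from/--to/--slots/--yes) behave as in Redis 4.0 and using 6386's node ID from the cluster nodes output above:

../redis/bin/redis-trib.rb reshard 172.16.65.150:6380 \
    --from all \
    --to 570ade43c540c6ee3e7c32756f5d691659d2f965 \
    --slots 4000 \
    --yes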

Limitations of Redis Cluster:

Limited support for multi-key commands such as MSET and MGET: batch operations only work when all of the keys map to the same slot (see the hash-tag sketch after this list).

Limited support for multi-key transactions: transactions are only supported when all keys involved live on the same node, not when they are spread across nodes.

A key is the smallest unit of partitioning, so a single large value such as a hash or a list cannot be split across nodes.

Multiple databases are not supported: standalone Redis offers 16 databases, but in cluster mode only db 0 is available.
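
One workaround for the multi-key restrictions is hash tags: when a key contains a {...} section, only the part inside the braces is hashed, so keys sharing the same tag always land in the same slot and can be used together in MSET/MGET or a transaction. A minimal sketch with example key names:

../redis/bin/redis-cli -c -p 6380 mset {user:1}:name tom {user:1}:age 20
../redis/bin/redis-cli -c -p 6380 mget {user:1}:name {user:1}:age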

 
