ZooKeeper Cluster Expansion Guide
During the expansion, the original ZooKeeper cluster must remain available: the new nodes first sync data from the old cluster, and only afterwards is the ZooKeeper DNS name repointed at the IPs of the four new nodes.
Expansion steps
Configuration of the original cluster:
# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/zookeeper/log
clientPort=2181
maxClientCnxns=0
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888
Check the current node's role:
# echo srvr | nc HOST_IP 2181
# ./zkServer.sh status
Check the Mode field in the output.
# echo mntr | nc 192.168.1.101 2181
zk_followers 2        # number of follower nodes (leader excluded)
zk_synced_followers 2 # followers whose data is in sync
New nodes must run the same ZooKeeper version as the existing cluster (zookeeper-3.4.11) to avoid unnecessary problems.
Show server configuration:  echo conf | nc ip port
Show client connections:    echo cons | nc ip port
Show environment details:   echo envi | nc ip port
Monitor cluster health:     echo mntr | nc ip port
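These four-letter commands return plain key/value text, so individual values are easy to pull out in a script. A minimal sketch (the host and port in the comment are the ones used throughout this guide):

```shell
# Extract one metric from "mntr" output, which is tab-separated
# key/value lines (e.g. "zk_followers<TAB>2").
# Usage: ... | get_zk_metric zk_followers
get_zk_metric() {
  awk -v key="$1" '$1 == key { print $2 }'
}

# Against a live server:
#   echo mntr | nc 192.168.1.101 2181 | get_zk_metric zk_followers
```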
1. Add node 4 (192.168.1.104)
Check the configuration:
# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/zookeeper/log
clientPort=2181
maxClientCnxns=0
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888
server.4=192.168.1.104:2888:3888
Write the myid file:
echo 4 >/data/zookeeper/myid
Start the service:
${ZOOKEEPER_HOME}/bin/zkServer.sh start
Check the node's Mode:
# echo srvr | nc HOST_IP 2181
Confirm there is only one leader, and that it is 192.168.1.101:
# echo mntr | nc 192.168.1.101 2181
zk_followers 3        # number of follower nodes (leader excluded)
zk_synced_followers 3 # followers whose data is in sync
If two leaders appear (split brain), the whole cluster must be rolled back.
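Since each expansion step only appends one server.N line, the server list in zoo.cfg can be generated rather than hand-edited, which avoids typos in the IPs. A small sketch, assuming the sequential 192.168.1.10x addressing used in this guide:

```shell
# Print the server.N lines for an ensemble of max_id voters,
# following the 192.168.1.(100+N) convention from this runbook.
render_server_lines() {
  max_id=$1
  i=1
  while [ "$i" -le "$max_id" ]; do
    echo "server.${i}=192.168.1.$((100 + i)):2888:3888"
    i=$((i + 1))
  done
}

# e.g. for step 1, append the 4-server list to the static part
# of the config (zoo.cfg path as in this guide):
#   render_server_lines 4 >> zoo.cfg
```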
2. Add node 5 (192.168.1.105)
# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/zookeeper/log
clientPort=2181
maxClientCnxns=0
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888
server.4=192.168.1.104:2888:3888
server.5=192.168.1.105:2888:3888
Write the myid file:
echo 5 >/data/zookeeper/myid
Start the service:
${ZOOKEEPER_HOME}/bin/zkServer.sh start
Check the node's Mode:
# echo srvr | nc HOST_IP 2181
Confirm there is only one leader, and that it is 192.168.1.101:
# echo mntr | nc 192.168.1.101 2181
zk_followers 4        # number of follower nodes (leader excluded)
zk_synced_followers 4 # followers whose data is in sync
3. Add node 6 (192.168.1.106)
# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/zookeeper/log
clientPort=2181
maxClientCnxns=0
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888
server.4=192.168.1.104:2888:3888
server.5=192.168.1.105:2888:3888
server.6=192.168.1.106:2888:3888
Write the myid file:
echo 6 >/data/zookeeper/myid
Start the service:
${ZOOKEEPER_HOME}/bin/zkServer.sh start
Check the node's Mode:
# echo srvr | nc HOST_IP 2181
Confirm there is only one leader, and that it is 192.168.1.101:
# echo mntr | nc 192.168.1.101 2181
zk_followers 5        # number of follower nodes (leader excluded)
zk_synced_followers 5 # followers whose data is in sync
4. Add node 7 (192.168.1.107)
# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/zookeeper/log
clientPort=2181
maxClientCnxns=0
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888
server.4=192.168.1.104:2888:3888
server.5=192.168.1.105:2888:3888
server.6=192.168.1.106:2888:3888
server.7=192.168.1.107:2888:3888
Write the myid file:
echo 7 >/data/zookeeper/myid
Start the service:
${ZOOKEEPER_HOME}/bin/zkServer.sh start
Check the node's Mode:
# echo srvr | nc HOST_IP 2181
Confirm there is only one leader, and that it is 192.168.1.101:
# echo mntr | nc 192.168.1.101 2181
zk_followers 6        # number of follower nodes (leader excluded)
zk_synced_followers 6 # followers whose data is in sync
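Adding the new voters one at a time is what keeps the cluster safe: at every step a majority of the members each running node knows about stays reachable, and the majority needed for N voters is floor(N/2)+1. A quick check of the ensemble sizes this guide passes through:

```shell
# Majority quorum for an ensemble of n voting members.
quorum() { echo $(( $1 / 2 + 1 )); }

# The ensemble grows 3 -> 7 during the expansion:
for n in 3 4 5 6 7; do
  echo "members=$n quorum=$(quorum "$n")"
done
```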
5. Update the configuration on nodes 4, 5, and 6, one node at a time
# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/zookeeper/log
clientPort=2181
maxClientCnxns=0
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888
server.4=192.168.1.104:2888:3888
server.5=192.168.1.105:2888:3888
server.6=192.168.1.106:2888:3888
server.7=192.168.1.107:2888:3888
Restart the service:
${ZOOKEEPER_HOME}/bin/zkServer.sh restart
Check the node's Mode:
# echo srvr | nc HOST_IP 2181
Confirm there is only one leader, and that it is 192.168.1.101.
Check the current cluster state with:
# echo mntr | nc 192.168.1.101 2181
zk_followers 6        # number of follower nodes (leader excluded)
zk_synced_followers 6 # followers whose data is in sync
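The rolling restart itself can be scripted. The sketch below assumes passwordless SSH to each host and the same ZOOKEEPER_HOME everywhere (both are assumptions; adjust to your environment). The helper only inspects srvr output, so the loop never moves to the next host before the restarted node has rejoined as a follower:

```shell
# Succeeds when srvr output (read on stdin) reports follower mode.
is_follower() {
  grep -q '^Mode: follower'
}

# Rolling restart over the hosts from step 5 (live-cluster usage):
#   for host in 192.168.1.104 192.168.1.105 192.168.1.106; do
#     ssh "$host" "\${ZOOKEEPER_HOME}/bin/zkServer.sh restart"
#     until echo srvr | nc "$host" 2181 | is_follower; do sleep 2; done
#   done
```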
6. Update the configuration on old nodes 2 and 3, with a rolling restart
# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/zookeeper/log
clientPort=2181
maxClientCnxns=0
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888
server.4=192.168.1.104:2888:3888
server.5=192.168.1.105:2888:3888
server.6=192.168.1.106:2888:3888
server.7=192.168.1.107:2888:3888
Restart the service:
${ZOOKEEPER_HOME}/bin/zkServer.sh restart
Check the node's Mode:
# echo srvr | nc HOST_IP 2181
Confirm there is only one leader, and that it is 192.168.1.101:
# echo mntr | nc 192.168.1.101 2181
zk_followers 6        # number of follower nodes (leader excluded)
zk_synced_followers 6 # followers whose data is in sync
7. Update the leader node's (node 1) configuration
# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
dataLogDir=/data/zookeeper/log
clientPort=2181
maxClientCnxns=0
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888
server.4=192.168.1.104:2888:3888
server.5=192.168.1.105:2888:3888
server.6=192.168.1.106:2888:3888
server.7=192.168.1.107:2888:3888
Restart the service:
${ZOOKEEPER_HOME}/bin/zkServer.sh restart
Check the node's Mode:
# echo srvr | nc HOST_IP 2181
Restarting node 1 triggers a new leader election; among nodes with equally up-to-date data the highest server id wins, so the leader is expected to land on server.7.
# echo mntr | nc 192.168.1.107 2181
zk_followers 6        # number of follower nodes (leader excluded)
zk_synced_followers 6 # followers whose data is in sync
At this point, the ZooKeeper cluster has been expanded from 3 nodes to 7.
The data has been fully replicated to the new nodes. To retire the old nodes later, apply the same one-node-at-a-time approach in reverse.
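As a final sanity check on the 7-node ensemble, the leader's mntr output should report zk_followers 6 with every follower in sync. A small sketch that validates this directly from the mntr text:

```shell
# Reads the leader's mntr output on stdin; succeeds only when
# zk_followers is present and equals zk_synced_followers.
all_followers_synced() {
  awk '
    $1 == "zk_followers"        { f = $2 }
    $1 == "zk_synced_followers" { s = $2 }
    END { exit !(f != "" && f == s) }
  '
}

# Live usage (leader is on 192.168.1.107 after step 7):
#   echo mntr | nc 192.168.1.107 2181 | all_followers_synced && echo OK
```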