I. Linux cluster environment: continuing from the single-node setup in the previous post, prepare two more machines; together with the earlier machine they form a cluster (if your computer can't handle three machines, run three processes on one machine as a pseudo-cluster instead). I prepared three nodes in total: port 2181 on machine 126, port 2182 on machine 126, and port 2181 on machine 32.
1. Installation: run the following on each of the three machines:
mkdir -p /usr/local/zookeeper
# extract the archive
tar -zxvf apache-zookeeper-3.6.1-bin.tar.gz -C /usr/local/zookeeper/
# create the data and log directories
mkdir -p /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/data
mkdir -p /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/log
Since machine 126 hosts two nodes, I created an additional directory, zookeeper1, for the second one.
2. The myid file: create a file named myid in the data directory containing just the number 1; on the other two nodes, write 2 and 3 respectively.
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/data/
vi myid
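Instead of editing each myid file interactively with vi, the files can be written with echo. A minimal sketch, using hypothetical directories under /tmp for a single-machine pseudo-cluster (on a real multi-machine cluster, each host writes only the single line for its own id into its own data directory):

```shell
#!/bin/sh
# Write a myid file into each node's data directory.
# The /tmp paths are illustrative; substitute your real dataDir paths.
set -e

write_myid() {
    dir="$1"; id="$2"
    mkdir -p "$dir"
    # myid must contain only the server id (an integer from 1 to 255)
    echo "$id" > "$dir/myid"
}

write_myid /tmp/zk-myid-demo/node1/data 1
write_myid /tmp/zk-myid-demo/node2/data 2
write_myid /tmp/zk-myid-demo/node3/data 3
```

The id written here must match the `server.N` entries added to zoo.cfg later.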
3. Edit the configuration file:
# enter the configuration directory
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/conf/
# ZooKeeper loads a file named zoo.cfg by default, so copy the sample under that name
cp zoo_sample.cfg zoo.cfg
# edit the configuration file
vi zoo.cfg
The main settings to change:
- dataDir: the data directory
- dataLogDir: the transaction log directory
- clientPort: the client port (for a single-machine pseudo-cluster, each node needs a different port, e.g. 2181, 2182, 2183)
- the cluster server list (for a single-machine pseudo-cluster, the 2888 and 3888 ports must also differ per node, e.g. 2888/2889/2890 and 3888/3889/3890)
Here is the resulting configuration for nodes 1 and 3:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper/apache-zookeeper-3.6.1-bin/data
dataLogDir=/usr/local/zookeeper/apache-zookeeper-3.6.1-bin/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
# cluster configuration
# the 1 in server.1 matches the contents of that node's myid file; port 2888 is used for intra-cluster communication, 3888 for leader election
server.1=xxx.xx.xxx.126:2888:3888
server.2=xxx.xx.xxx.126:2889:3889
server.3=xxx.xx.xxx.32:2888:3888
The second node on machine 126 differs from the above only in:
dataDir=/usr/local/zookeeper1/apache-zookeeper-3.6.1-bin/data
dataLogDir=/usr/local/zookeeper1/apache-zookeeper-3.6.1-bin/log
# the port at which the clients will connect
clientPort=2182
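For a single-machine pseudo-cluster, generating all three configuration files in one pass makes the per-node differences (clientPort and directories) explicit. A minimal sketch, using hypothetical directories under /tmp and 127.0.0.1 in place of the real machine IPs:

```shell
#!/bin/sh
# Generate zoo.cfg for a three-node pseudo-cluster on one machine.
# Paths and addresses are illustrative; substitute your own.
set -e
for i in 1 2 3; do
    dir=/tmp/zk-cfg-demo/node$i
    mkdir -p "$dir/data" "$dir/log"
    cat > "$dir/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$dir/data
dataLogDir=$dir/log
# each node on the same host needs a distinct client port: 2181, 2182, 2183
clientPort=$((2180 + i))
# distinct quorum/election ports per node: 2888-2890 and 3888-3890
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
EOF
done
```

Note that the server.N list is identical in all three files; only clientPort and the directories differ, which is exactly the pattern shown for the second node on 126 above.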
4. Start ZooKeeper on each of the three nodes:
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/
bin/zkServer.sh start
5. Check the cluster status:
cd /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/
bin/zkServer.sh status
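The status output contains a "Mode:" line showing the node's role (leader, follower, or standalone). A small helper can extract just that field; since real output depends on a running ensemble, the sketch below runs against hypothetical sample text rather than a live node:

```shell
#!/bin/sh
# Extract the role line from `zkServer.sh status` output.
# On a live node you would pipe `bin/zkServer.sh status 2>/dev/null`
# into extract_mode; the sample text here is illustrative only.

extract_mode() {
    # the status output contains a line like "Mode: follower"
    sed -n 's/^Mode: //p'
}

sample="ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/apache-zookeeper-3.6.1-bin/conf/zoo.cfg
Mode: leader"

printf '%s\n' "$sample" | extract_mode
```

Running this against each node in turn is a quick way to confirm that exactly one node reports leader and the rest report follower.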
At this point, the ZooKeeper cluster is up and running.
II. Code test:
provider.xml:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:dubbo="http://code.alibabatech.com/schema/dubbo"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://code.alibabatech.com/schema/dubbo
http://code.alibabatech.com/schema/dubbo/dubbo.xsd">
<dubbo:application name="mysercurity-service"/>
<dubbo:provider timeout="3000" retries="0"/>
<!-- set register to false to skip registering with the registry -->
<dubbo:registry protocol="zookeeper" address="xxx.xx.xxx.32:2181,xxx.xx.xxx.126:2181,xxx.xx.xxx.126:2182"
register="true" check="false"/>
<dubbo:protocol name="dubbo" port="20881"/>
<dubbo:service interface="com.demo.service.UserService" ref="userService"></dubbo:service>
<dubbo:service interface="com.demo.service.RoleService" ref="roleService"></dubbo:service>
<dubbo:service interface="com.demo.service.NavMenuService" ref="navMenuService"></dubbo:service>
</beans>
consumer.xml:
<?xml version="1.0" encoding="UTF-8"?>
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:dubbo="http://code.alibabatech.com/schema/dubbo" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd http://code.alibabatech.com/schema/dubbo http://code.alibabatech.com/schema/dubbo/dubbo.xsd">
<dubbo:application name="mysercurity-test" />
<dubbo:consumer timeout="3000" retries="0" check="false"/>
<dubbo:registry protocol="zookeeper" address="xxx.xx.xxx.32:2181,xxx.xx.xxx.126:2181,xxx.xx.xxx.126:2182" register="true" timeout="100000"/>
<!-- adding url="dubbo://127.0.0.1:20881" to a reference makes it connect directly to the local provider, bypassing the registry -->
<dubbo:reference interface="com.demo.service.UserService" id="userService"/>
<dubbo:reference interface="com.demo.service.RoleService" id="roleService"/>
<dubbo:reference interface="com.demo.service.NavMenuService" id="navMenuService"/>
</beans>
III. Leader failover: in the setup above, the 2182 node on machine 126 was the leader and the other two were followers. After stopping the 126:2182 node, the 2181 node on machine 32 was automatically elected as the new leader.
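This failover works because a ZooKeeper ensemble stays available as long as a majority (quorum) of servers is up: with 3 servers the quorum is 2, so losing any one node, even the leader, still leaves enough members to elect a new leader. A quick sketch of the arithmetic:

```shell
#!/bin/sh
# Quorum size for a ZooKeeper ensemble of n servers: floor(n/2) + 1.
# The ensemble tolerates n - quorum simultaneous failures.
quorum() {
    echo $(( $1 / 2 + 1 ))
}

for n in 1 3 5 7; do
    q=$(quorum "$n")
    echo "ensemble of $n servers: quorum=$q, tolerates $((n - q)) failure(s)"
done
```

This is also why ensembles use an odd number of servers: 4 servers tolerate the same single failure as 3, at higher cost.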