Setting up ZooKeeper standalone and as a cluster
This article records how to set up a zk cluster on a single machine, using macOS. The version used here is the latest stable release at the time of writing, 3.4.13.
Official guide: https://zookeeper.apache.org/doc/r3.4.13/zookeeperStarted.html
Standalone setup
- Download the release
https://archive.apache.org/dist/zookeeper/
- Unpack the archive
- Generate the zoo.cfg file
Go into the directory /zookeeper-3.4.13/conf and copy the sample file: cp ./zoo_sample.cfg ./zoo.cfg
- Start the zk server
./bin/zkServer.sh start
The output looks like:
ZooKeeper JMX enabled by default
Using config: /Users/wangyang/Documents/software/zookeeper/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: standalone
Check the server status:
./bin/zkServer.sh status
Result:
Mode: standalone
Connect to the server:
./bin/zkCli.sh -server localhost:2180
Cluster setup
The cluster configuration differs slightly from the standalone one, but every machine in the cluster uses an identical configuration.
- First, look at the standalone zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# (initial synchronization window: 10 * 2000 ms = 20 s)
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# (in-cluster data synchronization window: 5 * 2000 ms = 10 s)
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# (this is where zk stores its data)
dataDir=/tmp/zookeeper
# the port at which the clients will connect
# (the port exposed to client connections)
clientPort=2180
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
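The timeout annotations above follow directly from the tick arithmetic: initLimit and syncLimit are multiples of tickTime. A quick sanity-check sketch (plain shell arithmetic, not a ZooKeeper tool):

```shell
# Reproduce the timeout arithmetic from the config above.
# tickTime is the base unit in milliseconds; initLimit and
# syncLimit are expressed as multiples of it.
tickTime=2000
initLimit=10
syncLimit=5

init_timeout_ms=$((initLimit * tickTime))   # 10 * 2000 = 20000 ms = 20 s
sync_timeout_ms=$((syncLimit * tickTime))   # 5 * 2000 = 10000 ms = 10 s

echo "initial sync window: ${init_timeout_ms} ms"
echo "in-cluster sync window: ${sync_timeout_ms} ms"
```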
- Make a copy of the zk directory and modify its zoo.cfg
cp -r ./zookeeper-3.4.13 ./zookeeper-3.4.13-back0
Modify zoo.cfg as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
The cluster lines added are:
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
The format is server.id=ip:port1:port2:
id is the server's number within the cluster, in the range 1-255; under that server's dataDir there is a myid file whose only line is this id value
ip is the IP of the zk server
port1 is the port the cluster's servers use for data communication
port2 is the port the cluster's servers use for leader election
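To make the server.id=ip:port1:port2 format concrete, here is a small sketch that splits one of the lines above into its parts using only POSIX shell parameter expansion (the variable names are mine, not ZooKeeper's):

```shell
# Split a server.N line into id, ip, data port, and election port.
line="server.1=127.0.0.1:2888:3888"

id=${line#server.}   # strip the "server." prefix
id=${id%%=*}         # keep what is left of "="   -> 1
rest=${line#*=}      # everything right of "="    -> 127.0.0.1:2888:3888

ip=${rest%%:*}                          # 127.0.0.1
port1=${rest#*:}; port1=${port1%%:*}    # 2888, server-to-server data port
port2=${rest##*:}                       # 3888, leader-election port

echo "id=$id ip=$ip data_port=$port1 election_port=$port2"
```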
- Create a myid file under dataDir
Its only content is the id of the current server
- Repeat the same steps to make three copies
including the zk directory and the myid file
Note
Because the cluster is configured on a single machine, the three servers' ports must not clash, including clientPort, and their dataDir values must also differ. The final clientPort, dataDir, and myid are:
zookeeper-3.4.13-back0  clientPort=2181  dataDir=/tmp/zookeeper0  myid: 1
zookeeper-3.4.13-back1  clientPort=2182  dataDir=/tmp/zookeeper1  myid: 2
zookeeper-3.4.13-back2  clientPort=2183  dataDir=/tmp/zookeeper2  myid: 3
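The dataDir/myid mapping in the table can be scripted. This sketch only creates the data directories and writes the myid files; it assumes the /tmp/zookeeper0..2 paths from the table:

```shell
# Create the three data directories and write each myid file.
# Directory /tmp/zookeeper$i gets id $((i + 1)), matching the table above.
for i in 0 1 2; do
  dir="/tmp/zookeeper$i"
  mkdir -p "$dir"
  echo "$((i + 1))" > "$dir/myid"   # myid: a single line holding the id
done
```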
- Start the three servers
The commands are:
./zookeeper-3.4.13-back0/bin/zkServer.sh start
./zookeeper-3.4.13-back1/bin/zkServer.sh start
./zookeeper-3.4.13-back2/bin/zkServer.sh start
Check the service status
./zookeeper-3.4.13-back0/bin/zkServer.sh status
./zookeeper-3.4.13-back1/bin/zkServer.sh status
./zookeeper-3.4.13-back2/bin/zkServer.sh status
Stop the services
./zookeeper-3.4.13-back0/bin/zkServer.sh stop
./zookeeper-3.4.13-back1/bin/zkServer.sh stop
./zookeeper-3.4.13-back2/bin/zkServer.sh stop
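Running the same action against all three copies can be wrapped in a small loop. zk_all below is a hypothetical helper of mine, not a ZooKeeper script, and it only prints the commands (a dry run) so you can check them before executing:

```shell
# Print the zkServer.sh command for each of the three copies.
zk_all() {
  action="$1"
  for i in 0 1 2; do
    echo "./zookeeper-3.4.13-back$i/bin/zkServer.sh $action"
  done
}

zk_all start    # likewise: zk_all status, zk_all stop
```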