1. Standalone Service
Download: Apache ZooKeeper
The installation directory is /usr/local/zookeeper and the data directory is /var/lib/zookeeper.
# tar -zxf apache-zookeeper-3.8.0-bin.tar.gz
# mv apache-zookeeper-3.8.0-bin /usr/local/zookeeper
# mkdir -p /var/lib/zookeeper
# cat > /usr/local/zookeeper/conf/zoo.cfg << EOF
> tickTime=2000
> dataDir=/var/lib/zookeeper
> clientPort=2181
> admin.serverPort=8081
> EOF
# /usr/local/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
You can now connect to the ZooKeeper port and verify the installation by sending the srvr four-letter command (srvr is the only four-letter command whitelisted by default; any others must be enabled via 4lw.commands.whitelist in zoo.cfg):
# telnet localhost 2181
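If telnet is unavailable, nc does the same job. The reply looks roughly like the sketch below (the version hash and counters will differ on your machine); Mode: standalone confirms the single-server setup:
# echo srvr | nc localhost 2181
Zookeeper version: 3.8.0-..., built on 2022-02-25 08:49 UTC
Latency min/avg/max: 0/0.0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x0
Mode: standalone
Node count: 5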
2. Cluster Service
A real cluster should be deployed across separate servers, but starting that many virtual machines for testing is too memory-hungry, so we usually build a pseudo-cluster instead: all instances run on one machine and are distinguished by port.
Here we will set up a three-node ZooKeeper cluster (a pseudo-cluster).
Extract ZooKeeper, create the /usr/local/zookeeper-cluster directory, and copy the extracted directory into it three times:
[root@localhost ~]# mkdir /usr/local/zookeeper-cluster
[root@localhost ~]# cp -r apache-zookeeper-3.8.0-bin /usr/local/zookeeper-cluster/zookeeper-1
[root@localhost ~]# cp -r apache-zookeeper-3.8.0-bin /usr/local/zookeeper-cluster/zookeeper-2
[root@localhost ~]# cp -r apache-zookeeper-3.8.0-bin /usr/local/zookeeper-cluster/zookeeper-3
Create a data directory in each copy, and rename the conf/zoo_sample.cfg file to zoo.cfg:
[root@localhost ~]# mkdir /usr/local/zookeeper-cluster/zookeeper-1/data
[root@localhost ~]# mkdir /usr/local/zookeeper-cluster/zookeeper-2/data
[root@localhost ~]# mkdir /usr/local/zookeeper-cluster/zookeeper-3/data
[root@localhost ~]# mv /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo_sample.cfg /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg
[root@localhost ~]# mv /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo_sample.cfg /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg
[root@localhost ~]# mv /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo_sample.cfg /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg
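The six commands above can be collapsed into a single loop; a convenience sketch, assuming the three copies already exist under /usr/local/zookeeper-cluster:
[root@localhost ~]# for i in 1 2 3; do
>   mkdir /usr/local/zookeeper-cluster/zookeeper-$i/data
>   mv /usr/local/zookeeper-cluster/zookeeper-$i/conf/zoo_sample.cfg /usr/local/zookeeper-cluster/zookeeper-$i/conf/zoo.cfg
> done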
Configure each instance's dataDir, and set clientPort to 2181, 2182, and 2183 respectively.
Edit /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg:
[root@localhost ~]# vim /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg
clientPort=2181
dataDir=/usr/local/zookeeper-cluster/zookeeper-1/data
Edit /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg:
[root@localhost ~]# vim /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg
clientPort=2182
dataDir=/usr/local/zookeeper-cluster/zookeeper-2/data
Edit /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg:
[root@localhost ~]# vim /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg
clientPort=2183
dataDir=/usr/local/zookeeper-cluster/zookeeper-3/data
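Equivalently, the three edits can be scripted with sed; a minimal sketch, assuming each zoo.cfg still contains the sample file's default dataDir and clientPort lines (218$i expands to 2181-2183):
[root@localhost ~]# for i in 1 2 3; do
>   sed -i "s|^dataDir=.*|dataDir=/usr/local/zookeeper-cluster/zookeeper-$i/data|" /usr/local/zookeeper-cluster/zookeeper-$i/conf/zoo.cfg
>   sed -i "s|^clientPort=.*|clientPort=218$i|" /usr/local/zookeeper-cluster/zookeeper-$i/conf/zoo.cfg
> done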
Create a myid file in each instance's data directory, containing 1, 2, and 3 respectively. This file records each server's ID:
[root@localhost ~]# echo 1 >/usr/local/zookeeper-cluster/zookeeper-1/data/myid
[root@localhost ~]# echo 2 >/usr/local/zookeeper-cluster/zookeeper-2/data/myid
[root@localhost ~]# echo 3 >/usr/local/zookeeper-cluster/zookeeper-3/data/myid
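A quick check that each myid landed where it should:
[root@localhost ~]# for i in 1 2 3; do cat /usr/local/zookeeper-cluster/zookeeper-$i/data/myid; done
1
2
3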
In each instance's zoo.cfg, configure the client access port (clientPort, set above) and the cluster server IP list. The same three server lines go into all three files (a one-pass scripted version appears after the port notes below):
[root@localhost ~]# vim /usr/local/zookeeper-cluster/zookeeper-1/conf/zoo.cfg
[root@localhost ~]# vim /usr/local/zookeeper-cluster/zookeeper-2/conf/zoo.cfg
[root@localhost ~]# vim /usr/local/zookeeper-cluster/zookeeper-3/conf/zoo.cfg
server.1=192.168.149.135:2881:3881
server.2=192.168.149.135:2882:3882
server.3=192.168.149.135:2883:3883
server.<server ID>=<server IP>:<peer communication port>:<leader election port>
Default client port: 2181
Conventional peer-communication port: 2888 (this pseudo-cluster uses 2881-2883 to keep the three instances apart)
Conventional leader-election port: 3888 (here 3881-3883)
ZooKeeper's AdminServer occupies port 8080 by default. Make sure 8080 is free; if it is taken, set admin.serverPort in zoo.cfg to an unused port such as 8081. In a pseudo-cluster all three instances would otherwise contend for 8080, so give each its own admin port.
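Appending the server list, plus a per-instance admin.serverPort so the three AdminServers do not collide on 8080, can be done in one pass. A sketch in the same heredoc style used for the standalone config (808$i expands to 8081-8083):
[root@localhost ~]# for i in 1 2 3; do
> cat >> /usr/local/zookeeper-cluster/zookeeper-$i/conf/zoo.cfg << EOF
> server.1=192.168.149.135:2881:3881
> server.2=192.168.149.135:2882:3882
> server.3=192.168.149.135:2883:3883
> admin.serverPort=808$i
> EOF
> done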
The server ID matters during leader election: among servers whose data is equally up to date (the same zxid), the one with the larger ID wins the vote.
Starting the cluster simply means starting each instance:
/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh start
/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh start
/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh start
After starting, check the status of each instance:
/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh status
/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh status
/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh status
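With the instances started in order 1, 2, 3, instance 2 typically wins the first election (it completes a quorum with instance 1 and has the larger ID), so instance 2 reports Mode: leader and instances 1 and 3 report Mode: follower. The output looks roughly like the sketch below; exact wording varies by release:
/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-cluster/zookeeper-1/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower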
In a three-node cluster, if two followers go down, the remaining leader stops serving as well, because the number of live servers no longer exceeds half of the ensemble. When the leader itself dies, the remaining servers automatically enter the election state and produce a new leader; once a leader exists, new servers joining the cluster do not affect the current leader.
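You can observe this failover with the ZooKeeper CLI. Connect with all three addresses in the connection string, then stop instances one at a time with zkServer.sh stop; the client keeps working as long as a quorum survives. A sketch:
/usr/local/zookeeper-cluster/zookeeper-1/bin/zkCli.sh -server 192.168.149.135:2181,192.168.149.135:2182,192.168.149.135:2183
In another terminal, stop the current leader and ask the survivors who leads now:
/usr/local/zookeeper-cluster/zookeeper-2/bin/zkServer.sh stop
/usr/local/zookeeper-cluster/zookeeper-1/bin/zkServer.sh status
/usr/local/zookeeper-cluster/zookeeper-3/bin/zkServer.sh status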