Setting up a three-server cluster:
Servers: master, slave1, slave2
1. Configure the primary node
[root@Machine2 conf]# cp zoo_sample.cfg zoo.cfg
[root@Machine2 conf]# vim zoo.cfg
Contents of zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/root/app/zookeeper-3.4.5-cdh5.7.0/data
dataLogDir=/root/app/zookeeper-3.4.5-cdh5.7.0/logs
# the port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=slave1:2888:3888
server.2=slave2:2888:3888
server.3=master:2888:3888
Note:
In the configuration above, master, slave1, and slave2 are hostnames.
In entries of the form "server.id=host:port:port":
the first port is the one followers use to connect to the leader,
and the second port is used for leader election.
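As an illustration of that layout, the host and the two ports of a server.N entry can be pulled apart with plain shell string handling (the variable names below are our own, not ZooKeeper conventions):

```shell
# Split one server.N entry from zoo.cfg into its parts.
# 2888 = port followers use to talk to the leader,
# 3888 = port used for leader election.
entry="server.1=slave1:2888:3888"
hostport="${entry#*=}"                          # -> slave1:2888:3888
host="${hostport%%:*}"                          # -> slave1
peer_port="$(echo "$hostport" | cut -d: -f2)"   # -> 2888
election_port="$(echo "$hostport" | cut -d: -f3)" # -> 3888
echo "$host peer=$peer_port election=$election_port"
```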
2. Create the directory specified by dataDir
[root@Machine3 zookeeper-3.4.5-cdh5.7.0]# mkdir data
Next, create a file named myid in the directory specified by dataDir. The file contains a single line: the id of the local host, i.e. the id from the corresponding server.id entry. For example, given server.1=slave1:2888:3888, the myid file on the server slave1 should contain 1.
3. Distribute the installation files
Next, copy the installation directory to the corresponding location on the other machines in the cluster:
haduser@master:~/zookeeper$ scp -r zookeeper-3.4.5/ slave1:/home/haduser/zookeeper/zookeeper-3.4.5
haduser@master:~/zookeeper$ scp -r zookeeper-3.4.5/ slave2:/home/haduser/zookeeper/zookeeper-3.4.5
4. Edit myid
After copying, edit myid on each machine accordingly. For example, on slave1:
haduser@slave1:~/zookeeper/zookeeper-3.4.5$ echo "1" > data/myid
haduser@slave1:~/zookeeper/zookeeper-3.4.5$ cat data/myid
1
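Rather than typing each id by hand, the id for a host can be read straight out of zoo.cfg. The helper below is only a sketch (myid_for is our own name, not a ZooKeeper tool):

```shell
# Print the server id that zoo.cfg assigns to a given hostname.
# Usage: myid_for <hostname> <path-to-zoo.cfg>
myid_for() {
  sed -n "s/^server\.\([0-9][0-9]*\)=$1:.*/\1/p" "$2"
}

# Demonstration against a copy of the server lines shown above:
cat > /tmp/zoo-example.cfg <<'EOF'
server.1=slave1:2888:3888
server.2=slave2:2888:3888
server.3=master:2888:3888
EOF
myid_for slave1 /tmp/zoo-example.cfg    # prints 1
myid_for master /tmp/zoo-example.cfg    # prints 3
```

On each host you could then populate the file with `myid_for "$(hostname)" conf/zoo.cfg > data/myid`.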
5. Start the ZooKeeper cluster
On each node of the ZooKeeper cluster, run the startup script, as shown below:
haduser@master:~/zookeeper/zookeeper-3.4.5$ bin/zkServer.sh start
haduser@slave1:~/zookeeper/zookeeper-3.4.5$ bin/zkServer.sh start
haduser@slave2:~/zookeeper/zookeeper-3.4.5$ bin/zkServer.sh start
6. Verify the deployment
Run jps on each node; QuorumPeerMain is the ZooKeeper process, so its presence means the service started normally. Then check each node's role:
slave1 (leader):
[root@slave1 zookeeper-3.4.5-cdh5.7.0]# bin/zkServer.sh status
JMX enabled by default
Using config: /root/hadoop/zookeeper-3.4.5-cdh5.7.0/bin/../conf/zoo.cfg
Mode: leader
slave2 (follower):
[root@slave2 zookeeper-3.4.5-cdh5.7.0]# bin/zkServer.sh status
JMX enabled by default
Using config: /root/hadoop/zookeeper-3.4.5-cdh5.7.0/bin/../conf/zoo.cfg
Mode: follower
The status output above shows that slave1 is the cluster's Leader and the other two nodes are Followers.
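When checking several nodes, the role is just the Mode line of the status output. The parsing below is shown against canned output, since running `zkServer.sh status` for real needs a live cluster (mode_of is our own helper name):

```shell
# Extract the role from `zkServer.sh status` output.
# $1 = full status output; prints just the value of the Mode line.
mode_of() {
  printf '%s\n' "$1" | sed -n 's/^Mode: //p'
}

# Canned output matching what slave1 printed above:
status_output="JMX enabled by default
Using config: /root/hadoop/zookeeper-3.4.5-cdh5.7.0/bin/../conf/zoo.cfg
Mode: leader"
mode_of "$status_output"    # prints: leader
```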
Note: if startup fails:
Try deleting version-2 and zookeeper_server.pid from the directory specified by dataDir, as well as any ZooKeeper-related directories under /tmp, then restart. This resolves a fair share of startup problems.
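That cleanup can be scripted roughly as follows. This is only a sketch: DATADIR defaults to the path used in this guide (adjust it to your install), and ZooKeeper must be stopped before you run it.

```shell
# Remove stale state that can prevent ZooKeeper from starting.
# Run only while the ZooKeeper process is stopped.
DATADIR="${DATADIR:-/root/app/zookeeper-3.4.5-cdh5.7.0/data}"
rm -rf "$DATADIR/version-2" "$DATADIR/zookeeper_server.pid"
rm -rf /tmp/zookeeper*
echo "cleaned $DATADIR"
```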