CentOS 7 + ZooKeeper: Deploying a Cluster on a Single Machine

Foreword:

I recently needed a ZooKeeper cluster, but with limited hardware the shortfall had to be made up in software. This post records how to deploy multiple ZooKeeper instances on a single CentOS 7 machine, so that if I ever need to build a cluster again, I won't have to spend time hunting for material.

Environment:

CentOS 7 + Java 1.8.

Prerequisites:

The zookeeper-3.4.14.tar.gz package, available from http://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/

Directory layout

|----opt

|---------zookeeper

|-----------------------zk1

|-----------------------zk2

|-----------------------zk3

Part 1:


1. Work with administrator privileges to avoid permission problems.

[administrator@localhost ~]$ su root

2. Change into the /opt directory; if it does not exist, create it first.

[root@localhost administrator]# cd /opt/

3. Create the zookeeper directory

[root@localhost opt]# mkdir zookeeper

4. Enter the zookeeper directory

[root@localhost opt]# cd zookeeper

5. Place the zookeeper-3.4.14.tar.gz file under the zookeeper directory and extract it

[root@localhost zookeeper]# tar -xvf zookeeper-3.4.14.tar.gz

6. Rename the extracted zookeeper-3.4.14 directory to zk1

[root@localhost zookeeper]# mv zookeeper-3.4.14 ./zk1

7. Under zk1, create a zkdata directory to hold this instance's myid

[root@localhost zookeeper]# cd zk1
[root@localhost zk1]# mkdir zkdata
[root@localhost zk1]# vi ./zkdata/myid

Enter the digit 1 as the content of the myid file and save. This value must be unique within the cluster and cannot be repeated (valid range 1-255).
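The myid file can also be written in a single command instead of opening an editor. The sketch below uses a /tmp demo path for illustration; in the actual setup the target is /opt/zookeeper/zk1/zkdata/myid.

```shell
# Demo path; in practice write to /opt/zookeeper/zk1/zkdata/myid.
mkdir -p /tmp/zookeeper/zk1/zkdata
# Each instance gets a unique ID between 1 and 255; zk1 uses 1.
echo 1 > /tmp/zookeeper/zk1/zkdata/myid
cat /tmp/zookeeper/zk1/zkdata/myid
```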

8. Create the zoo.cfg configuration file

[root@localhost zookeeper]# cp /opt/zookeeper/zk1/conf/zoo_sample.cfg /opt/zookeeper/zk1/conf/zoo.cfg

9. Edit the zoo.cfg file

[root@localhost zookeeper]# vi /opt/zookeeper/zk1/conf/zoo.cfg

 

The file contents are as follows:

 


# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/zk1/zkdata
# the port at which the clients will connect
clientPort=2191
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=127.0.0.1:20881:30881
server.2=127.0.0.1:20882:30882
server.3=127.0.0.1:20883:30883


Notes:

dataDir=/opt/zookeeper/zk1/zkdata  # the directory where this instance keeps its snapshot data and its myid file.

clientPort=2191  # the port clients connect to.

server.1=127.0.0.1:20881:30881
server.2=127.0.0.1:20882:30882
server.3=127.0.0.1:20883:30883

A cluster needs at least three servers. Each of the three lines above has the form server.N = address : peer-communication port : leader-election port.

The address is 127.0.0.1 for all three entries because the three instances run on one machine; when deploying across multiple machines, configure each host's actual IP address instead.

 

Part 2:

With one instance already configured, setting up the second and third on top of it is easier, so the steps below are abbreviated.

1. Copy zk1 to create zk2 and zk3

[root@localhost zk1]# cd /opt/zookeeper
[root@localhost zookeeper]# cp -r /opt/zookeeper/zk1 /opt/zookeeper/zk2
[root@localhost zookeeper]# cp -r /opt/zookeeper/zk1 /opt/zookeeper/zk3
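Since zk2 and zk3 differ from zk1 only in the dataDir and clientPort lines, those edits can also be scripted with sed instead of done by hand. A minimal sketch on demo files under /tmp (in practice the targets would be /opt/zookeeper/zk2/conf/zoo.cfg and the zk3 equivalent):

```shell
# Create a demo copy of zk1's two instance-specific settings.
mkdir -p /tmp/zkcfg
printf 'dataDir=/opt/zookeeper/zk1/zkdata\nclientPort=2191\n' > /tmp/zkcfg/zoo1.cfg
# Derive zk2's config: rewrite the instance directory and client port.
sed -e 's|/zk1/|/zk2/|' -e 's|clientPort=2191|clientPort=2192|' \
    /tmp/zkcfg/zoo1.cfg > /tmp/zkcfg/zoo2.cfg
cat /tmp/zkcfg/zoo2.cfg
```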

2. Edit the zoo.cfg files of zk2 and zk3

[root@localhost zookeeper]# vi /opt/zookeeper/zk2/conf/zoo.cfg

The lines that change are dataDir and clientPort:


# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/zk2/zkdata
# the port at which the clients will connect
clientPort=2192
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=127.0.0.1:20881:30881
server.2=127.0.0.1:20882:30882
server.3=127.0.0.1:20883:30883


 

Edit zk3's zoo.cfg

[root@localhost zookeeper]# vi /opt/zookeeper/zk3/conf/zoo.cfg

Again, the lines that change are dataDir and clientPort:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/zk3/zkdata
# the port at which the clients will connect
clientPort=2193
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=127.0.0.1:20881:30881
server.2=127.0.0.1:20882:30882
server.3=127.0.0.1:20883:30883

3. Edit the myid files of zk2 and zk3

[root@localhost zookeeper]# vi /opt/zookeeper/zk2/zkdata/myid

Change the value to 2

[root@localhost zookeeper]# vi /opt/zookeeper/zk3/zkdata/myid

Change the value to 3

At this point all three ZooKeeper instances are configured, but some follow-up work remains.
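Before moving on, a quick sanity check: the three myid values must all be distinct, or the cluster will not form. The sketch below checks demo files under /tmp; in the real layout the glob would be /opt/zookeeper/zk*/zkdata/myid.

```shell
# Demo layout; in practice check /opt/zookeeper/zk*/zkdata/myid.
mkdir -p /tmp/zkids/zk1/zkdata /tmp/zkids/zk2/zkdata /tmp/zkids/zk3/zkdata
echo 1 > /tmp/zkids/zk1/zkdata/myid
echo 2 > /tmp/zkids/zk2/zkdata/myid
echo 3 > /tmp/zkids/zk3/zkdata/myid
# Count distinct IDs; anything less than 3 means a duplicate slipped in.
distinct=$(cat /tmp/zkids/zk*/zkdata/myid | sort -u | wc -l)
echo "$distinct"
```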

Firewall settings

1. A ZooKeeper cluster uses quite a few ports, and if the firewall does not open them the service will misbehave. The configuration above involves nine ports in total, opened below as three ranges.

[root@localhost zookeeper]# firewall-cmd --permanent --zone=public --add-port=2191-2193/tcp
[root@localhost zookeeper]# firewall-cmd --permanent --zone=public --add-port=20881-20883/tcp
[root@localhost zookeeper]# firewall-cmd --permanent --zone=public --add-port=30881-30883/tcp

2. Reload the firewall

[root@localhost zookeeper]# firewall-cmd --reload

3. Verify that the nine ports are now open

[root@localhost zookeeper]# firewall-cmd --list-ports
20881-20883/tcp 30881-30883/tcp 2191-2193/tcp

Startup script

1. For day-to-day convenience, write a small script that starts all three instances

[root@localhost zookeeper]# vi zkStart.sh

The contents are as follows:


#!/bin/sh
cd /opt/zookeeper/zk1/bin
./zkServer.sh start ../conf/zoo.cfg

cd /opt/zookeeper/zk2/bin
./zkServer.sh start ../conf/zoo.cfg

cd /opt/zookeeper/zk3/bin
./zkServer.sh start ../conf/zoo.cfg



2. Make the script executable, then run it to start all three instances

[root@localhost zookeeper]# chmod +x ./zkStart.sh
[root@localhost zookeeper]# ./zkStart.sh

3. The resulting output

ZooKeeper JMX enabled by default
Using config: ../conf/zoo.cfg
Starting zookeeper ... STARTED
ZooKeeper JMX enabled by default
Using config: ../conf/zoo.cfg
Starting zookeeper ... STARTED
ZooKeeper JMX enabled by default
Using config: ../conf/zoo.cfg
Starting zookeeper ... STARTED

As the output shows, all three ZooKeeper instances have started successfully.

4. Check the running cluster processes

[root@localhost zookeeper]# ps -ef|grep zookeeper|grep -v grep|wc -l

The console should print the number 3.
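A matching stop script saves the same trouble at shutdown. This companion is my own addition mirroring zkStart.sh above, not part of the original setup; the same loop form would also work for zkStart.sh itself. Demo path under /tmp; in practice it would live in /opt/zookeeper.

```shell
# Write a companion zkStop.sh (demo path; in practice /opt/zookeeper/zkStop.sh).
cat > /tmp/zkStop.sh <<'EOF'
#!/bin/sh
# Stop all three instances; each zkServer.sh reads its own conf/zoo.cfg.
for i in 1 2 3; do
  /opt/zookeeper/zk$i/bin/zkServer.sh stop
done
EOF
chmod +x /tmp/zkStop.sh
```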

 

With that, the setup is complete. Done!

 
