Node Planning
IP | Hostname | Role |
192.168.222.183 | zookeeper1 | cluster node |
192.168.222.184 | zookeeper2 | cluster node |
192.168.222.182 | zookeeper3 | cluster node |
Prerequisites
Use the CentOS-7-x86_64-DVD-1804.iso image; each virtual machine has 2 vCPUs, 4 GB of memory, and a 50 GB disk.
Implementation
1. Basic Environment Configuration
(1) Configure hostnames
zookeeper1 node:
[root@localhost ~]# hostnamectl set-hostname zookeeper1
zookeeper2 node:
[root@localhost ~]# hostnamectl set-hostname zookeeper2
zookeeper3 node:
[root@localhost ~]# hostnamectl set-hostname zookeeper3
After the change, verify the hostname on each node:
zookeeper1 node:
[root@zookeeper1 ~]# hostnamectl
Static hostname: zookeeper1
Icon name: computer-vm
Chassis: vm
Machine ID: 3571224c1e6742ec91206bf5882f5b4a
Boot ID: cc776c60183b407087ba9489f099a7fc
Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-862.el7.x86_64
Architecture: x86-64
zookeeper2 node:
[root@zookeeper2 ~]# hostnamectl
Static hostname: zookeeper2
Icon name: computer-vm
Chassis: vm
Machine ID: 3571224c1e6742ec91206bf5882f5b4a
Boot ID: f011b5430f7749f0a77e4cf47b746be5
Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-862.el7.x86_64
Architecture: x86-64
zookeeper3 node:
[root@zookeeper3 ~]# hostnamectl
Static hostname: zookeeper3
Icon name: computer-vm
Chassis: vm
Machine ID: 3571224c1e6742ec91206bf5882f5b4a
Boot ID: 19e294d005c84871b6dda7334ef53cc4
Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-862.el7.x86_64
Architecture: x86-64
(2) Configure the hosts file
Edit the /etc/hosts file on all three nodes, adding the following entries on each:
# vi /etc/hosts
192.168.222.183 zookeeper1
192.168.222.184 zookeeper2
192.168.222.182 zookeeper3
(3) Mount the image
Mount the image to /opt/centos on all three nodes:
# mkdir /opt/centos
# mount /dev/cdrom /opt/centos
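Note that this mount does not survive a reboot. If persistence is wanted, an entry like the following could be added to /etc/fstab on each node (an optional sketch, assuming the ISO stays attached as /dev/cdrom):

```
/dev/cdrom  /opt/centos  iso9660  defaults,ro  0  0
```

After adding the line, `mount -a` applies it without rebooting.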
(4) Configure YUM repositories
Upload the gpmall-repo directory to the /opt directory on all three nodes. First, move the existing files under /etc/yum.repos.d on each node to /media:
# mv /etc/yum.repos.d/* /media/
On all three nodes, create /etc/yum.repos.d/local.repo with the following content:
# cat /etc/yum.repos.d/local.repo
[gpmall]
name=gpmall
baseurl=file:///opt/gpmall-repo
gpgcheck=0
enabled=1
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
# yum clean all
# yum list
2. Build the ZooKeeper Cluster
(1) Install the JDK
Install the Java JDK on all three nodes by running:
# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
(2) Extract the ZooKeeper package
Upload the zookeeper-3.4.14.tar.gz package to the /root directory on all three nodes and extract it on each node:
# tar -zxvf zookeeper-3.4.14.tar.gz
(3) Edit the configuration file on all three nodes
Taking zookeeper1 as an example, enter the zookeeper-3.4.14/conf directory, rename zoo_sample.cfg to zoo.cfg, and edit it so that the active settings are:
[root@zookeeper1 conf]# vi zoo.cfg
[root@zookeeper1 conf]# grep -n '^[a-zA-Z]' zoo.cfg
2:tickTime=2000
5:initLimit=10
8:syncLimit=5
12:dataDir=/tmp/zookeeper
14:clientPort=2181
29:server.1=192.168.222.183:2888:3888
30:server.2=192.168.222.184:2888:3888
31:server.3=192.168.222.182:2888:3888
Note: the operations and configuration changes on zookeeper2 and zookeeper3 are identical to those on zookeeper1.
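For reference, the active parameters mean the following (the comments below are annotations for this guide, not part of the shipped file):

```
tickTime=2000            # base time unit in ms; heartbeats are sent every tick
initLimit=10             # ticks a follower may take to connect to and sync with the leader (20 s here)
syncLimit=5              # ticks a follower may lag behind the leader before being dropped (10 s here)
dataDir=/tmp/zookeeper   # snapshot and myid location; /tmp is cleared on reboot, so it only suits a lab
clientPort=2181          # port that clients connect to
server.N=host:2888:3888  # N matches that node's myid; 2888 is the quorum port, 3888 the leader-election port
```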
(4) Create the myid files
In the dataDir directory (/tmp/zookeeper here) on each of the three machines, create a myid file containing a single line with only the number 1, 2, or 3 respectively. That number is exactly the server.N value assigned to the node in the zoo.cfg file above; ZooKeeper uses this file to determine each machine's identity in the cluster.
Create the myid file as follows:
zookeeper1 node:
[root@zookeeper1 ~]# mkdir /tmp/zookeeper
[root@zookeeper1 ~]# vi /tmp/zookeeper/myid
[root@zookeeper1 ~]# cat /tmp/zookeeper/myid
1
zookeeper2 node:
[root@zookeeper2 ~]# mkdir /tmp/zookeeper
[root@zookeeper2 ~]# vi /tmp/zookeeper/myid
[root@zookeeper2 ~]# cat /tmp/zookeeper/myid
2
zookeeper3 node:
[root@zookeeper3 ~]# mkdir /tmp/zookeeper
[root@zookeeper3 ~]# vi /tmp/zookeeper/myid
[root@zookeeper3 ~]# cat /tmp/zookeeper/myid
3
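The three per-node transcripts above reduce to the same short command sequence; a minimal sketch (set ID to 2 on zookeeper2 and 3 on zookeeper3):

```shell
ID=1                        # 1 on zookeeper1, 2 on zookeeper2, 3 on zookeeper3
mkdir -p /tmp/zookeeper     # the dataDir configured in zoo.cfg
echo "$ID" > /tmp/zookeeper/myid
cat /tmp/zookeeper/myid     # prints the node's ID back for verification
```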
(5) Start the ZooKeeper service
In the zookeeper-3.4.14/bin directory on each of the three machines, run:
zookeeper1 node:
[root@zookeeper1 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper1 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
zookeeper2 node:
[root@zookeeper2 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... already running as process 10175.
[root@zookeeper2 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader
zookeeper3 node:
[root@zookeeper3 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper3 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
As the output shows, of the three nodes zookeeper2 is the leader and the other two are followers. With three nodes, a quorum of two is required, so the cluster keeps serving even if any single node fails.
At this point, the ZooKeeper cluster setup is complete.