Setting Up a ZooKeeper Cluster and Learning the Distributed Lock Use Case
1. ZooKeeper Overview
ZooKeeper is an open-source distributed coordination service, created at Yahoo! as an open-source implementation of Google's Chubby.
Distributed applications can build on ZooKeeper to implement features such as data publish/subscribe, load balancing, naming services, distributed coordination/notification, cluster management, Master election, configuration maintenance, distributed synchronization, distributed locks, and distributed queues.
In short: ZooKeeper = file system + notification mechanism
2. ZooKeeper Basic Concepts
2.1 Cluster roles
A ZooKeeper ensemble has three roles: (1) Leader, (2) Follower, (3) Observer. By default, however, an ensemble runs only Leaders and Followers; Observers are not used unless explicitly configured.
2.2 Division of labor among nodes
1. All machines in a ZooKeeper ensemble elect one machine, called the Leader, through a Leader election process; the Leader serves both read and write requests from clients.
2. Followers and Observers both serve reads but not writes. The only difference between them is that an Observer takes part in neither Leader election nor the "majority write" quorum, so Observers can scale the ensemble's read throughput without hurting write performance.
2.3 Session
A session is a client session. Before discussing sessions, let's look at client connections: in ZooKeeper, a client connection is a long-lived TCP connection between a client and a ZooKeeper server.
ZooKeeper's default client-facing port is 2181. On startup the client first establishes a TCP connection to a server, and the session's lifetime begins with that first connection. Over this connection the client keeps the session alive through heartbeats, sends requests to the ZooKeeper server and receives responses, and also receives Watch event notifications from the server.
A session's sessionTimeout sets the client session's timeout. When the connection drops for any reason (server overload, network failure, or the client disconnecting), the previously created session remains valid as long as the client reconnects to any server in the ensemble within the sessionTimeout.
2.4 Data nodes
ZooKeeper has four types of data nodes (znodes):
- PERSISTENT: the node remains after the client disconnects from ZooKeeper
- PERSISTENT_SEQUENTIAL: the node remains after the client disconnects, and ZooKeeper appends a monotonically increasing sequence number to its name
- EPHEMERAL: the node is deleted when the client's session ends
- EPHEMERAL_SEQUENTIAL: the node is deleted when the client's session ends, and ZooKeeper appends a sequence number to its name
2.5 Status information
Besides its data, each node also stores some status information about itself.
The get command returns both a node's data and its status information:
[zk: 192.168.147.130:2183(CONNECTED) 3] get /search
bim
cZxid = 0x100000018
ctime = Mon Sep 03 04:34:20 EDT 2018
mZxid = 0x100000018
mtime = Mon Sep 03 04:34:20 EDT 2018
pZxid = 0x100000018
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 0
In ZooKeeper, the version attributes implement the write validation of an optimistic locking scheme (guaranteeing atomicity of distributed data operations).
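The version check behind ZooKeeper's conditional writes can be sketched as follows. This is an in-memory simulation, not the real client API: a write succeeds only if the caller's expected version matches the node's current dataVersion (or is -1, meaning "any version"), and each successful write bumps dataVersion by one, mirroring the semantics of `setData(path, data, version)`.

```java
// In-memory sketch of optimistic locking via dataVersion (not the real client).
public class VersionedZnode {
    private byte[] data;
    private int dataVersion = 0;

    public VersionedZnode(byte[] data) { this.data = data; }

    public byte[] getData() { return data; }

    public int getDataVersion() { return dataVersion; }

    /** Returns true if the write was applied, false on a version conflict. */
    public synchronized boolean setData(byte[] newData, int expectedVersion) {
        if (expectedVersion != -1 && expectedVersion != dataVersion) {
            return false; // another client updated the node first
        }
        data = newData;
        dataVersion++;   // every successful write increments dataVersion
        return true;
    }
}
```

A client that read the node at version 0 and tries to write with expectedVersion 0 after someone else's update gets a failure instead of silently overwriting, which is exactly the write-validation behavior described above.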
2.6 Transactions
In ZooKeeper, any operation that changes the server's state is called a transaction. Transactions generally include creating and deleting data nodes, updating node data, and creating and expiring client sessions. ZooKeeper assigns each transaction request a globally unique transaction ID, written ZXID, which is typically a 64-bit number. Each ZXID corresponds to one update, so from the ZXIDs one can indirectly recover the global order in which ZooKeeper processed the transaction requests.
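In ZooKeeper's implementation the 64-bit ZXID is split in two: the high 32 bits hold the Leader's epoch and the low 32 bits a per-epoch counter. A small helper can decode the cZxid values printed by the CLI, such as the 0x100000018 seen in the get output above:

```java
// Split a 64-bit ZXID into its epoch (high 32 bits) and counter (low 32 bits).
public class Zxid {
    public static long epoch(long zxid)   { return zxid >>> 32; }
    public static long counter(long zxid) { return zxid & 0xFFFFFFFFL; }
}
```

For example, 0x100000018 decodes to epoch 1, counter 0x18 (the 24th transaction of that Leader's term).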
2.7 Watcher (event listener)
Watchers are an important ZooKeeper feature. ZooKeeper lets users register Watchers on specific nodes, and when certain events fire, the ZooKeeper server notifies the interested clients. This mechanism is key to how ZooKeeper implements distributed coordination.
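One property worth remembering is that ZooKeeper watches are one-shot: a watch fires at most once and must be re-registered to keep watching. The sketch below is an in-memory stand-in for that semantics, not the real client; the `Watcher` interface and `fire` method here are illustrative names:

```java
import java.util.ArrayList;
import java.util.List;

// In-memory sketch of ZooKeeper's one-time-trigger watch semantics.
public class OneShotWatch {
    public interface Watcher { void process(String event); }

    private final List<Watcher> watchers = new ArrayList<>();

    public void register(Watcher w) { watchers.add(w); }

    /** Notifies all registered watchers once, then clears them. */
    public int fire(String event) {
        int notified = watchers.size();
        for (Watcher w : watchers) w.process(event);
        watchers.clear(); // one-time trigger: re-register to keep watching
        return notified;
    }
}
```

A second change to the same node notifies nobody unless the client registered again after the first event, which is why real ZooKeeper clients re-arm watches inside their event handlers.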
3. ZooKeeper Use Cases
Note: use cases 3.1-3.4 have not been studied in detail yet, so they are omitted for now.
3.5 Use case: distributed locks
How it works: all nodes in a ZooKeeper ensemble maintain a single shared tree. The tree's structure resembles a Linux directory tree: / is the root, any hierarchy of nodes and leaves can hang below it, and each node can store data. ZooKeeper-based distributed lock services build on this: at any moment, reads of the same node return consistent data on every server (guaranteed by the election mechanism; internally, ZooKeeper uses ZAB, a Paxos-style consensus protocol). Since every node's data is consistent across the ensemble, znodes can serve as the markers for a distributed lock.
Lock services fall into two categories: exclusive locks and shared locks.
For the first, we treat a znode as the lock and implement it with create: all clients try to create the /distribute_lock node, and the one whose create succeeds holds the lock. When finished, it deletes the /distribute_lock node it created, releasing the lock.
For the second, /distribute_lock exists in advance, and every client creates an ephemeral sequential child node under it. As in Master election, the client with the smallest sequence number gets the lock and deletes its node when done, and the next smallest takes over in turn.
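The "smallest sequence number wins" rule for shared locks can be sketched as follows. Given the names of the sequential children under /distribute_lock, the client that created the lexicographically smallest one holds the lock (the class and method names here are illustrative):

```java
import java.util.Collections;
import java.util.List;

// Sketch: among sequential children, the smallest name holds the lock.
public class LockQueue {
    public static String holder(List<String> children) {
        // Sequential znode names share a prefix and a zero-padded counter,
        // so lexicographic order equals creation order.
        return Collections.min(children);
    }
}
```

When the holder deletes its node, the clients re-list the children, the next smallest name becomes the holder, and so on down the queue.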
The process is as follows:
- Give the lock an API with three methods: LOCK (acquire), UNLOCK (release), and ISLOCKED (check whether locked)
- Create a factory (LockFactory) that produces locks
- Assign each lock a path (e.g. /bim/collorationLock)
- Check whether a node already exists at that path in the ZooKeeper ensemble
- If the node exists (a lock is already held), use the requester's identifying data (such as IP/hostname) to decide whether the requester is the lock's holder
- If the lock belongs to someone else, return null: creation fails
- If the lock belongs to the requester, return the lock to the requester
- If the node does not exist, no lock is held yet, so create an ephemeral node, write the requester's identifying data into it, and return the lock.
Through the above process a distributed lock can be created; creation has three possible outcomes:
- Creation fails (null): the lock is held by another requester;
- Creation succeeds and nothing holds the lock: it can be used;
- Creation succeeds but the lock is already held: it cannot be acquired again.
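The acquire/release logic described in the steps above can be sketched with a plain map standing in for the ZooKeeper tree (path → owner identity written into the node). The path /bim/collorationLock and the use of IP/hostname as the owner's identity come from the text; the class name, method signatures, and the map-based store are assumptions for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the exclusive-lock flow: a Map stands in for the znode tree.
public class LockSketch {
    private final Map<String, String> znodes = new HashMap<>();

    /** Returns the owner id on success, or null if someone else holds it. */
    public synchronized String lock(String path, String ownerId) {
        String current = znodes.get(path);
        if (current == null) {
            // No lock yet: create the (ephemeral) node with our identity.
            znodes.put(path, ownerId);
            return ownerId;
        }
        // Node exists: only the recorded owner gets the lock back.
        return current.equals(ownerId) ? ownerId : null;
    }

    public synchronized void unlock(String path, String ownerId) {
        if (ownerId.equals(znodes.get(path))) znodes.remove(path);
    }

    public synchronized boolean isLocked(String path) {
        return znodes.containsKey(path);
    }
}
```

In the real implementation the node would be ephemeral, so a crashed holder's session expiry deletes the node and releases the lock automatically; the map here cannot model that.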
4. Cluster Setup
4.1 Standalone mode
- Download ZooKeeper (from http://www.apache.org/dist/zookeeper/ )
- Adjust the file permissions:
chmod 777 zookeeper-3.4.10.tar.gz
- Extract the archive:
tar -zxvf zookeeper-3.4.10.tar.gz
- Create the config file (copy the sample and drop the _sample suffix):
[root@localhost zookeeper-3.4.10]# cp conf/zoo_sample.cfg conf/zoo.cfg
- Create the data directory:
[root@localhost zookeeper-3.4.10]# mkdir /home/centos/Zookeeper/zoo/zk0
- Edit the config file and point dataDir at the newly created data directory:
vim conf/zoo.cfg -> dataDir=/home/centos/Zookeeper/zoo/zk0
- Start ZooKeeper:
./bin/zkServer.sh start
- Check its status:
./bin/zkServer.sh status
[root@localhost zookeeper-3.4.10]# ./bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/centos/Zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@localhost zookeeper-3.4.10]# jps
4772 QuorumPeerMain
4794 Jps
[root@localhost zookeeper-3.4.10]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/centos/Zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: standalone
4.2 伪分布式模式
所谓 “伪分布式集群” 就是在,在一台PC中,启动多个ZooKeeper的实例。“完全分布式集群” 是每台PC,启动一个ZooKeeper实例。
- Create a data directory for each of the three instances:
[root@localhost zookeeper-3.4.10]# mkdir /home/centos/Zookeeper/zoo/zk1
[root@localhost zookeeper-3.4.10]# mkdir /home/centos/Zookeeper/zoo/zk2
[root@localhost zookeeper-3.4.10]# mkdir /home/centos/Zookeeper/zoo/zk3
- Create a myid file for each instance:
[root@localhost zookeeper-3.4.10]# echo "1" > /home/centos/Zookeeper/zoo/zk1/myid
[root@localhost zookeeper-3.4.10]# echo "2" > /home/centos/Zookeeper/zoo/zk2/myid
[root@localhost zookeeper-3.4.10]# echo "3" > /home/centos/Zookeeper/zoo/zk3/myid
- Create a config file for each instance:
[root@localhost conf]# cp zoo_sample.cfg zk1.cfg
[root@localhost conf]# cp zoo_sample.cfg zk2.cfg
[root@localhost conf]# cp zoo_sample.cfg zk3.cfg
- Update each config file (zk1.cfg is shown below; zk2.cfg and zk3.cfg need their own dataDir and, since all three instances share one host, distinct clientPort values such as 2182 and 2183):
dataDir=/home/centos/Zookeeper/zoo/zk1
# the port at which the clients will connect
clientPort=2181
server.1=192.168.147.130:2888:3888
server.2=192.168.147.130:2889:3889
server.3=192.168.147.130:2890:3890
- Start the cluster
[root@localhost zookeeper-3.4.10]# ./bin/zkServer.sh start zk1.cfg
ZooKeeper JMX enabled by default
Using config: /home/centos/Zookeeper/zookeeper-3.4.10/bin/../conf/zk1.cfg
Starting zookeeper ... STARTED
[root@localhost zookeeper-3.4.10]# ./bin/zkServer.sh start zk2.cfg
ZooKeeper JMX enabled by default
Using config: /home/centos/Zookeeper/zookeeper-3.4.10/bin/../conf/zk2.cfg
Starting zookeeper ... STARTED
[root@localhost zookeeper-3.4.10]# ./bin/zkServer.sh start zk3.cfg
ZooKeeper JMX enabled by default
Using config: /home/centos/Zookeeper/zookeeper-3.4.10/bin/../conf/zk3.cfg
Starting zookeeper ... STARTED
- Check each node's status
[root@localhost zookeeper-3.4.10]# ./bin/zkServer.sh status zk1.cfg
ZooKeeper JMX enabled by default
Using config: /home/centos/Zookeeper/zookeeper-3.4.10/bin/../conf/zk1.cfg
Mode: follower
[root@localhost zookeeper-3.4.10]# ./bin/zkServer.sh status zk2.cfg
ZooKeeper JMX enabled by default
Using config: /home/centos/Zookeeper/zookeeper-3.4.10/bin/../conf/zk2.cfg
Mode: leader
[root@localhost zookeeper-3.4.10]# ./bin/zkServer.sh status zk3.cfg
ZooKeeper JMX enabled by default
Using config: /home/centos/Zookeeper/zookeeper-3.4.10/bin/../conf/zk3.cfg
Mode: follower
- Inspect ZooKeeper's on-disk directory layout with tree
[root@localhost zookeeper-3.4.10]# tree -L 3 /home/centos/Zookeeper/zoo
/home/centos/Zookeeper/zoo
├── zk0
│ └── version-2
├── zk1
│ ├── myid
│ ├── version-2
│ │ ├── acceptedEpoch
│ │ └── currentEpoch
│ └── zookeeper_server.pid
├── zk2
│ ├── myid
│ ├── version-2
│ │ ├── acceptedEpoch
│ │ └── currentEpoch
│ └── zookeeper_server.pid
└── zk3
├── myid
├── version-2
│ ├── acceptedEpoch
│ ├── currentEpoch
│ └── snapshot.100000000
└── zookeeper_server.pid
8 directories, 13 files
If the tree command is not available, install it: yum -y install tree
- Connect to the ZooKeeper cluster with the CLI client
[root@localhost zookeeper-3.4.10]# ./bin/zkCli.sh -server 192.168.147.130:2183
Connecting to 192.168.147.130:2183
2018-09-03 01:55:27,409 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2018-09-03 01:55:27,412 [myid:] - INFO [main:Environment@100] - Client environment:host.name=localhost
2018-09-03 01:55:27,412 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_171
2018-09-03 01:55:27,424 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/local/java/jdk1.8.0_171/jre
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/home/centos/Zookeeper/zookeeper-3.4.10/bin/../build/classes:/home/centos/Zookeeper/zookeeper-3.4.10/bin/../build/lib/*.jar:/home/centos/Zookeeper/zookeeper-3.4.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/centos/Zookeeper/zookeeper-3.4.10/bin/../lib/slf4j-api-1.6.1.jar:/home/centos/Zookeeper/zookeeper-3.4.10/bin/../lib/netty-3.10.5.Final.jar:/home/centos/Zookeeper/zookeeper-3.4.10/bin/../lib/log4j-1.2.16.jar:/home/centos/Zookeeper/zookeeper-3.4.10/bin/../lib/jline-0.9.94.jar:/home/centos/Zookeeper/zookeeper-3.4.10/bin/../zookeeper-3.4.10.jar:/home/centos/Zookeeper/zookeeper-3.4.10/bin/../src/java/lib/*.jar:/home/centos/Zookeeper/zookeeper-3.4.10/bin/../conf:.:/usr/local/java/jdk1.8.0_171/jre/lib/rt.jar:/usr/local/java/jdk1.8.0_171/lib/dt.jar:/usr/local/java/jdk1.8.0_171/lib/tools.jar
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:os.version=3.10.0-862.el7.x86_64
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
2018-09-03 01:55:27,425 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/home/centos/Zookeeper/zookeeper-3.4.10
2018-09-03 01:55:27,426 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=192.168.147.130:2183 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@69d0a921
Welcome to ZooKeeper!
2018-09-03 01:55:27,498 [myid:] - INFO [main-SendThread(192.168.147.130:2183):ClientCnxn$SendThread@1032] - Opening socket connection to server 192.168.147.130/192.168.147.130:2183. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2018-09-03 01:55:27,570 [myid:] - INFO [main-SendThread(192.168.147.130:2183):ClientCnxn$SendThread@876] - Socket connection established to 192.168.147.130/192.168.147.130:2183, initiating session
2018-09-03 01:55:27,597 [myid:] - INFO [main-SendThread(192.168.147.130:2183):ClientCnxn$SendThread@1299] - Session establishment complete on server 192.168.147.130/192.168.147.130:2183, sessionid = 0x3659df341960000, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
- ZooKeeper command-line operations:
[zk: 192.168.147.130:2183(CONNECTED) 0] help
ZooKeeper -server host:port cmd args
stat path [watch]
set path data [version]
ls path [watch]
delquota [-n|-b] path
ls2 path [watch]
setAcl path acl
setquota -n|-b val path
history
redo cmdno
printwatches on|off
delete path [version]
sync path
listquota path
rmr path
get path [watch]
create [-s] [-e] path data acl
addauth scheme auth
quit
getAcl path
close
connect host:port
# ls: list the contents of /
[zk: 192.168.147.130:2183(CONNECTED) 1] ls /
[zookeeper]
# create: create a znode
[zk: 192.168.147.130:2183(CONNECTED) 5] create /node bim
Created /node
# ls: view / after the update
[zk: 192.168.147.130:2183(CONNECTED) 7] ls /
[node, zookeeper]
# get: read the data stored at /node
[zk: 192.168.147.130:2183(CONNECTED) 8] get /node
bim
cZxid = 0x100000002
ctime = Mon Sep 03 02:02:42 EDT 2018
mZxid = 0x100000002
mtime = Mon Sep 03 02:02:42 EDT 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 0
# Create a node
[zk: 192.168.147.130:2183(CONNECTED) 8] create /FristNode "myfristnode"
Created /FristNode
[zk: 192.168.147.130:2183(CONNECTED) 9] ls /
[search, zookeeper, FristNode]
# Create a sequential node
[zk: 192.168.147.130:2183(CONNECTED) 10] create -s /FristNode "myfristsqunode"
Created /FristNode0000000005
# Create an ephemeral node (deleted when the session ends)
[zk: 192.168.147.130:2183(CONNECTED) 11] create -e /SecondNode "mySecondnode"
Created /SecondNode
# Create a child node
[zk: 192.168.147.130:2183(CONNECTED) 31] create /search/bim5d "bim5d_test"
Created /search/bim5d
# Read the child node's data
[zk: 192.168.147.130:2183(CONNECTED) 32] get /search/bim5d
bim5d_test
cZxid = 0x100000023
ctime = Mon Sep 03 06:56:27 EDT 2018
mZxid = 0x100000023
mtime = Mon Sep 03 06:56:27 EDT 2018
pZxid = 0x100000023
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 10
numChildren = 0
# Add a watch (passing a second argument to get registers a watch on the node)
[zk: 192.168.147.130:2183(CONNECTED) 26] get /SecondNode "ceshi"
test2
cZxid = 0x10000001e
ctime = Mon Sep 03 06:38:10 EDT 2018
mZxid = 0x100000020
mtime = Mon Sep 03 06:45:53 EDT 2018
pZxid = 0x10000001e
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x3659df341960006
dataLength = 5
numChildren = 0
[zk: 192.168.147.130:2183(CONNECTED) 27] set /SecondNode "test_second_1"
cZxid = 0x10000001e
WATCHER::
WatchedEvent state:SyncConnected type:NodeDataChanged path:/SecondNode
ctime = Mon Sep 03 06:38:10 EDT 2018
mZxid = 0x100000022
mtime = Mon Sep 03 06:52:03 EDT 2018
pZxid = 0x10000001e
cversion = 0
dataVersion = 2
aclVersion = 0
ephemeralOwner = 0x3659df341960006
dataLength = 13
numChildren = 0
# List the nodes
[zk: 192.168.147.130:2183(CONNECTED) 12] ls /
[search, FristNode0000000005, zookeeper, SecondNode, FristNode]
# set: update the data
[zk: 192.168.147.130:2183(CONNECTED) 9] set /node bim5d
cZxid = 0x100000002
ctime = Mon Sep 03 02:02:42 EDT 2018
mZxid = 0x100000003
mtime = Mon Sep 03 02:03:36 EDT 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
# get: /node's data has been updated to bim5d
[zk: 192.168.147.130:2183(CONNECTED) 10] get /node
bim5d
cZxid = 0x100000002
ctime = Mon Sep 03 02:02:42 EDT 2018
mZxid = 0x100000003
mtime = Mon Sep 03 02:03:36 EDT 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
# Check node status with stat
[zk: 192.168.147.130:2183(CONNECTED) 2] stat /search
cZxid = 0x100000018
ctime = Mon Sep 03 04:34:20 EDT 2018
mZxid = 0x100000018
mtime = Mon Sep 03 04:34:20 EDT 2018
pZxid = 0x100000023
cversion = 2
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 2
[zk: 192.168.147.130:2183(CONNECTED) 5] stat /FristNode
cZxid = 0x10000001c
ctime = Mon Sep 03 06:37:00 EDT 2018
mZxid = 0x100000021
mtime = Mon Sep 03 06:46:28 EDT 2018
pZxid = 0x10000001c
cversion = 0
dataVersion = 2
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 10
numChildren = 0
# delete: remove /node (rmr removes a node and its children recursively)
[zk: 192.168.147.130:2183(CONNECTED) 11] delete /node
[zk: 192.168.147.130:2183(CONNECTED) 3] rmr /FristNode0000000005
[zk: 192.168.147.130:2183(CONNECTED) 4] ls /
[search, zookeeper, FristNode]
# List / after the deletes
[zk: 192.168.147.130:2183(CONNECTED) 13] ls /
[zookeeper]
# quit: close the client connection
[zk: 192.168.147.130:2183(CONNECTED) 14] quit
Quitting...
2018-09-03 02:14:32,319 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x3659df341960000 closed
2018-09-03 02:14:32,330 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x3659df341960000
- Connecting to ZooKeeper from Java (a Maven project; add the dependency to pom.xml)
<dependencies>
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>3.4.10</version>
<exclusions>
<exclusion>
<groupId>javax.jms</groupId>
<artifactId>jms</artifactId>
</exclusion>
<exclusion>
<groupId>com.sun.jdmk</groupId>
<artifactId>jmxtools</artifactId>
</exclusion>
<exclusion>
<groupId>com.sun.jmx</groupId>
<artifactId>jmxri</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
package com.glodon.guanl.zookeeper.demo;
import org.apache.zookeeper.*;
import java.io.IOException;
/**
* @author: guanl-c
* @date: 2018/9/3 14:22
* @description:
*/
public class ZookeeperDemo {
public static void main(String[] args) throws IOException, KeeperException, InterruptedException {
ZooKeeper zooKeeper = new ZooKeeper("192.168.147.130:2183", 60000, new Watcher() {
public void process(WatchedEvent watchedEvent) {
System.out.println("Event:" + watchedEvent.getType());
}
});
System.out.println("ls / => " + zooKeeper.getChildren("/", true));
// Create a persistent znode
if (zooKeeper.exists("/node", true) == null) {
zooKeeper.create("/node", "bim".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
System.out.println("create /node bim");
System.out.println("get /node => " + new String(zooKeeper.getData("/node", false, null)));
System.out.println("ls / => " + zooKeeper.getChildren("/", true));
}
// Update the node's data (-1 matches any version)
if (zooKeeper.exists("/node", true) != null) {
zooKeeper.setData("/node", "changed".getBytes(), -1);
// Read back /node
System.out.println("get /node => " + new String(zooKeeper.getData("/node", false, null)));
}
// Delete the nodes: remove children first, since a znode with
// children cannot be deleted directly
if (zooKeeper.exists("/node/sub1", true) != null) {
    zooKeeper.delete("/node/sub1", -1);
}
if (zooKeeper.exists("/node", true) != null) {
    zooKeeper.delete("/node", -1);
    // List the root's children again
    System.out.println("ls / => " + zooKeeper.getChildren("/", true));
}
// Close the connection
zooKeeper.close();
}
}
Output:
Event:None
ls / => [zookeeper]
Event:NodeCreated
Event:NodeChildrenChanged
create /node bim
get /node => bim
ls / => [node, zookeeper]
Event:NodeDataChanged
get /node => changed
Event:NodeDeleted
Event:NodeChildrenChanged
ls / => [zookeeper]
Process finished with exit code 0
References:
- w3cschool ZooKeeper tutorial (concepts and commands): https://www.w3cschool.cn/zookeeper/zookeeper_fundamentals.html
- CSDN blog (concepts): https://blog.csdn.net/weijifeng_/article/details/79775738
- CSDN blog (concepts): https://blog.csdn.net/xqb_756148978/article/details/52259381
- fens.me blog (cluster setup): http://blog.fens.me/hadoop-zookeeper-intro/