ZooKeeper: From Getting Started to Giving Up

1. Standalone Installation

1.1 Download

Download ZooKeeper 3.6.2.
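For example, the release can be fetched from the Apache archive (the URL below follows the archive's standard layout and is assumed to still be live):

wget https://archive.apache.org/dist/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz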

1.2 Unpack ZooKeeper

tar -xzvf apache-zookeeper-3.6.2-bin.tar.gz
mv apache-zookeeper-3.6.2-bin zookeeper

1.3 Edit the configuration file

  • Rename the sample configuration file
mv zookeeper/conf/zoo_sample.cfg zookeeper/conf/zoo.cfg
  • Edit the file's contents
# The number of milliseconds of each tick (the 2-second heartbeat interval)
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take (max heartbeats allowed at startup)
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement (max during normal operation)
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/app/kafka/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

1.4 Start the server

sh zookeeper/bin/zkServer.sh start

1.5 Common commands

  • Start the server
    zkServer.sh start
  • Stop the server
    zkServer.sh stop
  • Start a client
    zkCli.sh
  • Check whether the server process is running
    jps
  • Check the server's status
    zkServer.sh status
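As a quick sanity check, jps should list the server's main class, QuorumPeerMain (the PIDs below are illustrative):

[root@VM-0-6-centos ~]# jps
12345 QuorumPeerMain
12400 Jps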

2. How It Works

2.1 Election mechanism

  • Majority rule: the cluster stays available as long as more than half of its servers are alive, which is why ZooKeeper is best deployed on an odd number of servers (see the worked example below)
  • A ZooKeeper ensemble has one leader; all other servers are followers. The leader is chosen by an internal election
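A quick worked example of the majority rule: with 3 servers the quorum is 2, so the cluster tolerates 1 failure; with 4 servers the quorum is 3, which still tolerates only 1 failure, so the fourth machine adds cost without adding fault tolerance. To survive 2 failures you need 5 servers (quorum 3).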

2.2 Znode types

  • Persistent: the node survives after the client disconnects from the server
    1. Persistent node
      The node still exists after the client disconnects from ZooKeeper
    2. Persistent sequential node
      The node still exists after the client disconnects; ZooKeeper appends a monotonically increasing sequence number to its name
  • Ephemeral: the node is deleted automatically when the creating client's session ends
    1. Ephemeral node
      The node is deleted after the client disconnects from ZooKeeper
    2. Ephemeral sequential node
      The node is deleted after the client disconnects; ZooKeeper appends a sequence number to its name (the zkCli flags for each type are sketched below)
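A minimal sketch of how the four types map to zkCli create flags (the paths are illustrative):

create       /p1    # persistent
create -s    /p2    # persistent sequential, created as e.g. /p20000000000
create -e    /e1    # ephemeral: removed when the creating session ends
create -s -e /e2    # ephemeral sequential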

3. Cluster Installation

3.1 Notes

Due to limited resources, this walkthrough builds all three nodes on a single machine, i.e. a pseudo-cluster.

3.2 Cluster preparation

3.2.1 Create the directories

Node   Data directory               ZooKeeper directory
1      /app/kafka/cluster_data_01   /app/kafka/zookeeper_1/
2      /app/kafka/cluster_data_02   /app/kafka/zookeeper_2/
3      /app/kafka/cluster_data_03   /app/kafka/zookeeper_3/
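One way to lay this out, assuming the distribution unpacked in section 1.2 is still at ./zookeeper:

mkdir -p /app/kafka/cluster_data_01 /app/kafka/cluster_data_02 /app/kafka/cluster_data_03
cp -r zookeeper /app/kafka/zookeeper_1
cp -r zookeeper /app/kafka/zookeeper_2
cp -r zookeeper /app/kafka/zookeeper_3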

3.2.2 Create the myid files

Node   myid file path                    myid content
1      /app/kafka/cluster_data_01/myid   1
2      /app/kafka/cluster_data_02/myid   2
3      /app/kafka/cluster_data_03/myid   3
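Each file is a one-liner:

echo 1 > /app/kafka/cluster_data_01/myid
echo 2 > /app/kafka/cluster_data_02/myid
echo 3 > /app/kafka/cluster_data_03/myid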

3.2.3 Edit the configuration files

The three configurations are as follows.

  • Node 1
# The number of milliseconds of each tick (the 2-second heartbeat interval)
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take (max heartbeats allowed at startup)
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement (max during normal operation)
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/app/kafka/cluster_data_01
# the port at which the clients will connect
clientPort=2182

# cluster config
server.1=localhost:2188:3188
server.2=localhost:2288:3288
server.3=localhost:2388:3388
  • Node 2
# The number of milliseconds of each tick (the 2-second heartbeat interval)
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take (max heartbeats allowed at startup)
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement (max during normal operation)
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/app/kafka/cluster_data_02
# the port at which the clients will connect
clientPort=2183

# cluster config
server.1=localhost:2188:3188
server.2=localhost:2288:3288
server.3=localhost:2388:3388
  • Node 3
# The number of milliseconds of each tick (the 2-second heartbeat interval)
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take (max heartbeats allowed at startup)
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement (max during normal operation)
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/app/kafka/cluster_data_03
# the port at which the clients will connect
clientPort=2184

# cluster config
server.1=localhost:2188:3188
server.2=localhost:2288:3288
server.3=localhost:2388:3388

How to read this configuration:
server.A=B:C:D
A: a number identifying the server. In cluster mode each server has a myid file in its dataDir whose content is exactly this value; at startup ZooKeeper reads the file and matches it against the server.A entries in zoo.cfg to determine which server it is.

B: the server's IP address (or hostname).

C: the port this server uses to exchange data with the cluster's leader.

D: the port used to hold a fresh election when the leader goes down.
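For example, server.2=localhost:2288:3288 above reads: the server whose myid file contains 2 runs on localhost, exchanges data with the leader on port 2288, and uses port 3288 for leader elections.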

3.2.4 Start the servers

sh /app/kafka/zookeeper_1/bin/zkServer.sh start
sh /app/kafka/zookeeper_2/bin/zkServer.sh start 
sh /app/kafka/zookeeper_3/bin/zkServer.sh start 
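Or, equivalently, as a loop:

for i in 1 2 3; do sh /app/kafka/zookeeper_$i/bin/zkServer.sh start; done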

3.2.5 Check the status

# node 1: follower
[root@VM-0-6-centos bin]# sh /app/kafka/zookeeper_1/bin/zkServer.sh status
/app/jdk/jdk1.8.0_111/bin/java
ZooKeeper JMX enabled by default
Using config: /app/kafka/zookeeper_1/bin/../conf/zoo.cfg
Client port found: 2182. Client address: localhost. Client SSL: false.
Mode: follower


# node 2: leader
[root@VM-0-6-centos bin]# sh /app/kafka/zookeeper_2/bin/zkServer.sh status
/app/jdk/jdk1.8.0_111/bin/java
ZooKeeper JMX enabled by default
Using config: /app/kafka/zookeeper_2/bin/../conf/zoo.cfg
Client port found: 2183. Client address: localhost. Client SSL: false.
Mode: leader


# node 3: follower
[root@VM-0-6-centos bin]# sh /app/kafka/zookeeper_3/bin/zkServer.sh status
/app/jdk/jdk1.8.0_111/bin/java
ZooKeeper JMX enabled by default
Using config: /app/kafka/zookeeper_3/bin/../conf/zoo.cfg
Client port found: 2184. Client address: localhost. Client SSL: false.
Mode: follower


3.3 Common client commands

3.3.1 Start a client

sh zkCli.sh -r -server localhost:2184

3.3.2 Show all available commands: help

[zk: localhost:2184(CONNECTED) 1] help

3.3.3 List a znode's children

[zk: localhost:2184(CONNECTED) 2] ls /
[zookeeper]

3.3.4 Create a node

[zk: localhost:2184(CONNECTED) 2] create /sanguo
Created /sanguo
[zk: localhost:2184(CONNECTED) 6] ls /
[sanguo, zookeeper]

# create a child node under /sanguo
[zk: localhost:2184(CONNECTED) 11] create /sanguo/shuguo
Created /sanguo/shuguo
[zk: localhost:2184(CONNECTED) 12] ls /
[sanguo, zookeeper]
[zk: localhost:2184(CONNECTED) 14] ls /sanguo
[shuguo]


3.3.5 Set and get data on a node

[zk: localhost:2184(CONNECTED) 22] set /sanguo/shuguo "liubei"
[zk: localhost:2184(CONNECTED) 23] get /sanguo/shuguo
liubei

3.3.6 Create an ephemeral node

[zk: localhost:2184(CONNECTED) 24]  create -e /sanguo/wuguo
Created /sanguo/wuguo
[zk: localhost:2184(CONNECTED) 25] ls /sanguo
[shuguo, wuguo]

# quit the client
quit

# reconnect
sh zkCli.sh -r -server localhost:2184

# list the children
[zk: localhost:2184(CONNECTED) 0] ls /sanguo
[shuguo]

# the ephemeral node "wuguo" created earlier is gone

3.3.7 Create sequential (numbered) nodes

[zk: localhost:2184(CONNECTED) 1] create -s /sanguo/weiguo
Created /sanguo/weiguo0000000002
[zk: localhost:2184(CONNECTED) 2] create -s /sanguo/weiguo
Created /sanguo/weiguo0000000003
[zk: localhost:2184(CONNECTED) 3] create -s /sanguo/weiguo
Created /sanguo/weiguo0000000004
[zk: localhost:2184(CONNECTED) 4] ls /sanguo
[shuguo, weiguo0000000002, weiguo0000000003, weiguo0000000004]

# each create produces the next sequence number

3.3.8 Update data

# first set
[zk: localhost:2184(CONNECTED) 8] set /sanguo/shuguo "liubei"
# first get
[zk: localhost:2184(CONNECTED) 9] get /sanguo/shuguo
liubei

# second set
[zk: localhost:2184(CONNECTED) 10] set /sanguo/shuguo "liushan"
# second get
[zk: localhost:2184(CONNECTED) 11] get /sanguo/shuguo
liushan

3.3.9 Watch a node for data changes

# first client session
[root@VM-0-6-centos bin]# sh zkCli.sh -r -server localhost:2182

# add a watch on /sanguo
[zk: localhost:2182(CONNECTED) 6] addWatch /sanguo
[zk: localhost:2182(CONNECTED) 7] 

# second client session
[root@VM-0-6-centos bin]# sh zkCli.sh -r -server localhost:2184
# set data on /sanguo
set /sanguo "jingjing"
set /sanguo "jingjing123"
# back in the first session, two events fire:
WATCHER::

WatchedEvent state:SyncConnected type:NodeDataChanged path:/sanguo

WATCHER::

WatchedEvent state:SyncConnected type:NodeDataChanged path:/sanguo

3.3.10 Watch for child-node changes


# first session
[zk: localhost:2182(CONNECTED) 6] addWatch /sanguo

# second session
create /sanguo/wuguo

# first session sees:
WatchedEvent state:SyncConnected type:NodeCreated path:/sanguo/wuguo

3.3.11 Delete a node


[zk: localhost:2184(CONNECTED) 19] ls /sanguo
[shuguo, weiguo0000000002, weiguo0000000003, weiguo0000000004, wuguo]
[zk: localhost:2184(CONNECTED) 20] delete /sanguo/wuguo
[zk: localhost:2184(CONNECTED) 21] ls /sanguo
[shuguo, weiguo0000000002, weiguo0000000003, weiguo0000000004]

3.3.12 View a node's status

[zk: localhost:2184(CONNECTED) 22] stat /sanguo
cZxid = 0x300000004
ctime = Sun Nov 01 16:50:17 CST 2020
mZxid = 0x300000014
mtime = Sun Nov 01 17:23:43 CST 2020
pZxid = 0x300000018
cversion = 8
dataVersion = 2
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 4

3.3.13 Delete a node together with its children

[zk: localhost:2184(CONNECTED) 24] deleteall /sanguo
[zk: localhost:2184(CONNECTED) 25] ls /sanguo
Node does not exist: /sanguo
[zk: localhost:2184(CONNECTED) 26] ls /
[zookeeper]

3.4 The stat structure

  1. cZxid: transaction id of the node's creation
    Every change to ZooKeeper's state receives a zxid-style stamp, i.e. a ZooKeeper transaction id.
    Transaction ids define the total order of all changes in ZooKeeper; each change has a unique zxid, and if zxid1 is smaller than zxid2, then zxid1 happened before zxid2.

  2. ctime = Sun Nov 01 16:50:17 CST 2020: creation time

  3. mZxid = 0x300000014: transaction id of the last update

  4. mtime = Sun Nov 01 17:23:43 CST 2020: time of the node's last modification

  5. pZxid = 0x300000018: zxid of the last change to the node's children

  6. cversion = 8: child version number, i.e. how many times the node's children have changed

  7. dataVersion = 2: version number of the node's data

  8. aclVersion = 0: version number of the node's access-control list

  9. ephemeralOwner = 0x0: for an ephemeral node, the session id of its owner; 0 otherwise

  10. dataLength = 9: length of the node's data

  11. numChildren = 4: number of child nodes

4. How Watches Work

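In outline: a client registers a watcher when it reads a node (getData/exists/getChildren, or addWatch since 3.6); the server records the watch against that node; when the node changes, the server sends the client a one-time notification, and the client's event thread invokes the watcher's process() callback. Standard watches fire once and must be re-registered. A minimal sketch with the raw ZooKeeper API (the connection string and path are illustrative):

import org.apache.zookeeper.ZooKeeper;

public class WatchDemo {
    public static void main(String[] args) throws Exception {
        // connect; the second argument is the session timeout in ms
        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, null);
        // reading the data registers a one-shot watch on /sanguo
        zk.getData("/sanguo", event ->
                System.out.println("event: " + event), null);
        Thread.sleep(60_000); // keep the session alive to receive the event
        zk.close();
    }
}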

5. How Writes Work

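In outline: a client may send a write to any server in the ensemble. If that server is not the leader, it forwards the request to the leader; the leader broadcasts the proposal to the followers, and once more than half of the servers (leader included) have acknowledged it, the leader commits the change and instructs the followers to commit. The server that received the original request then responds to the client.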

6. Testing the Connection from Code

6.1 Add the dependencies

<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>4.0.1</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-client</artifactId>
    <version>4.0.1</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.1</version>
</dependency>
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.10</version>
</dependency>
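Note: Curator 4.x transitively pulls in a ZooKeeper 3.5.x client. When pinning org.apache.zookeeper:zookeeper to 3.4.10 as above, the transitive zookeeper dependency should normally be excluded from the curator artifacts (Curator's documented ZooKeeper 3.4 compatibility mode) to avoid two client versions on the classpath; alternatively, drop the explicit 3.4.10 entry and use the client version Curator brings in, which also works against the 3.6.2 server installed earlier.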

6.2 Test code

import java.util.List;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.api.CuratorWatcher;
import org.apache.curator.retry.RetryNTimes;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;

public class ZkClientTest {

    // address of the standalone server from section 1; adjust as needed
    private static final String zkaddr = "localhost:2181";

    public static void main(String[] args) throws Exception {
        getNodes();
    }

    public static void getNodes() throws Exception {
        CuratorFramework client = CuratorFrameworkFactory
                .newClient(zkaddr, 1000 * 60, 1000 * 15, new RetryNTimes(10, 5000));

        // register listeners before start() and before any operations,
        // otherwise events fired earlier (or after close()) are missed
        client.getCuratorListenable().addListener((c, event) ->
                System.out.println("event: " + event));
        client.getConnectionStateListenable().addListener((c, newState) ->
                System.out.println("connection state: " + newState));
        client.getUnhandledErrorListenable().addListener((message, e) ->
                System.out.println("unhandled error: " + message));

        client.start(); // connect
        System.out.println(client.getState());

        // list the root's children and watch them for changes
        List<String> children = client.getChildren()
                .usingWatcher((CuratorWatcher) event ->
                        System.out.println("watch: " + event))
                .forPath("/");
        System.out.println(children);

        // create a persistent node with initial data
        String result = client.create()
                .withMode(CreateMode.PERSISTENT)
                .withACL(ZooDefs.Ids.OPEN_ACL_UNSAFE)
                .forPath("/test", "Data".getBytes());
        System.out.println(result);

        // update the node's data
        client.setData().forPath("/test", "111".getBytes());
        client.setData().forPath("/test", "222".getBytes());

        // delete the node; version -1 matches any version
        System.out.println(client.checkExists().forPath("/test")); // a Stat
        client.delete().withVersion(-1).forPath("/test");
        System.out.println(client.checkExists().forPath("/test")); // null

        client.close();
        System.out.println("OK!");
    }
}