Two key ZooKeeper features:
Watchers: if a client registers a watcher on a ZooKeeper data node (znode), the server sends that subscribed client a change notification whenever the node's data or its list of children changes.
Ephemeral nodes: an ephemeral znode created on ZooKeeper is removed automatically as soon as the session between its client and the server expires.
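Both behaviors can be observed from the bundled zkCli.sh shell (a sketch against a live node; the `/demo` paths are illustrative, and 3.4.x uses a trailing `watch` keyword rather than the `-w` flag of later releases):

```
# Connect to any ensemble member
/usr/local/zookeeper/bin/zkCli.sh -server 10.1.234.110:2181

# --- watcher: one-shot change notification ---
create /demo "v1"          # a plain (persistent) znode
get /demo watch            # register a data watch (3.4.x syntax)
# from another session:  set /demo "v2"
# the watching session then receives WatchedEvent NodeDataChanged

# --- ephemeral node: lifetime bound to this session ---
create -e /demo/worker-1 ""
# quit (or let the session expire) and /demo/worker-1 disappears
```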
ZooKeeper cluster (requests are processed in FIFO order, pipelined):
With three nodes a, b and c, one node acts as the leader and handles all writes; the rest are followers. If a client's write request lands on a follower, that follower forwards it to the leader; the leader replicates the update to the other two nodes, and once more than half of the ensemble has written it, the client is told the write succeeded (officially, a 3-node ensemble handles roughly 40,000-50,000 concurrent requests).
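The "more than half" rule is just integer arithmetic: an ensemble of n servers commits a write after floor(n/2)+1 acknowledgements, so 3 nodes tolerate 1 failure and 5 tolerate 2. A quick sketch:

```shell
#!/bin/sh
# Quorum size for an ensemble of n servers: floor(n/2) + 1
quorum() {
    echo $(( $1 / 2 + 1 ))
}

for n in 1 3 5 7; do
    echo "ensemble=$n quorum=$(quorum $n) tolerates=$(( $n - ($n / 2 + 1) )) failures"
done
```

This is also why even-sized ensembles buy nothing: 4 nodes still need 3 acks and still tolerate only 1 failure.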
The setup is as follows.
Virtual machines:
OS: CentOS 7.6    ZooKeeper: zookeeper-3.4.14
10.1.234.110 10.1.234.111 10.1.234.112
Installation: one-command deployment with Ansible.
[root@ansible-11 zookeeper]# tree
.
├── conf
│   └── zoo.cfg
├── hosts
├── pkg
│   └── zookeeper-3.4.14.tar.gz
├── zoo1.yaml
├── zoo2.yaml
├── zoo3.yaml
├── zoo_class.yaml
└── zookeeper.yaml
sh install_zookeeper.sh
[root@ansible-11 zookeeper]# cat install_zookeeper.sh
#!/bin/bash
ansible-playbook -i hosts zookeeper.yaml
ansible-playbook -i hosts zoo1.yaml
ansible-playbook -i hosts zoo2.yaml
ansible-playbook -i hosts zoo3.yaml
ansible-playbook -i hosts zoo_class.yaml
[root@ansible-11 zookeeper]# cat conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=15
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=128
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1
server.1=10.1.234.110:2888:3888
server.2=10.1.234.111:2888:3888
server.3=10.1.234.112:2888:3888
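Each `server.N` line names a member: 2888 is the follower-to-leader data port and 3888 the leader-election port, and the id N must match the `myid` file on that host (written by the per-host playbooks below). A small sketch that extracts the declared ids from a config (the temp file stands in for the real zoo.cfg):

```shell
#!/bin/sh
# Extract the server ids declared in a zoo.cfg; each id must appear
# verbatim in /usr/local/zookeeper/data/myid on the matching host.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
server.1=10.1.234.110:2888:3888
server.2=10.1.234.111:2888:3888
server.3=10.1.234.112:2888:3888
EOF

# "server.1=..." -> "1"
ids=$(sed -n 's/^server\.\([0-9]*\)=.*/\1/p' "$cfg")
echo $ids          # -> 1 2 3
rm -f "$cfg"
```

A mismatched or duplicated id is the most common reason a node refuses to join the ensemble.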
[root@ansible-11 zookeeper]# cat hosts
[zoo]
10.1.234.110 zoo1_name=zoo1
10.1.234.111 zoo1_name=zoo2
10.1.234.112 zoo1_name=zoo3
[zoo1]
10.1.234.110
[zoo2]
10.1.234.111
[zoo3]
10.1.234.112
[root@ansible-11 zookeeper]# cat zookeeper.yaml
---
- hosts: zoo
  remote_user: root
  gather_facts: false
  tasks:
    - name: Install OpenJDK 1.8
      yum: name=java-1.8.0*
    - name: Create the data directory
      file: dest=/usr/local/zookeeper/data state=directory
    - name: Distribute the ZooKeeper tarball
      unarchive: src=pkg/zookeeper-3.4.14.tar.gz dest=/tmp
    - name: Move the contents into place
      # /usr/local/zookeeper already exists (created above), so move the
      # directory's contents rather than the directory itself
      shell: mv /tmp/zookeeper-3.4.14/* /usr/local/zookeeper/
    - name: Copy the config file
      copy: src=conf/zoo.cfg dest=/usr/local/zookeeper/conf/zoo.cfg
[root@ansible-11 zookeeper]# cat zoo1.yaml
---
- hosts: zoo1
  remote_user: root
  gather_facts: false
  tasks:
    - name: Set the cluster id
      shell: echo '1' > /usr/local/zookeeper/data/myid
[root@ansible-11 zookeeper]# cat zoo2.yaml
---
- hosts: zoo2
  remote_user: root
  gather_facts: false
  tasks:
    - name: Set the cluster id
      shell: echo '2' > /usr/local/zookeeper/data/myid
[root@ansible-11 zookeeper]# cat zoo3.yaml
---
- hosts: zoo3
  remote_user: root
  gather_facts: false
  tasks:
    - name: Set the cluster id
      shell: echo '3' > /usr/local/zookeeper/data/myid
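The three per-host playbooks differ only in the id they write, so they could be collapsed into a single play by attaching a `myid` variable to each host in the inventory (e.g. `10.1.234.110 myid=1`) — a sketch, not the layout used above:

```
---
- hosts: zoo
  remote_user: root
  gather_facts: false
  tasks:
    - name: Write the cluster id from an inventory variable
      shell: echo '{{ myid }}' > /usr/local/zookeeper/data/myid
```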
[root@ansible-11 zookeeper]# cat zoo_class.yaml
---
- hosts: zoo
  remote_user: root
  gather_facts: false
  tasks:
    - name: Start the service
      shell: /usr/local/zookeeper/bin/zkServer.sh start
    - name: Check the listening port and register the result
      shell: ss -nutlp | grep 2181
      register: zookeeper
    - name: Print the result
      debug: var=zookeeper.stdout_lines
    - name: Check cluster status and register the result
      shell: /usr/local/zookeeper/bin/zkServer.sh status
      register: zookeeper_class
    - name: Print the result
      debug: var=zookeeper_class.stdout_lines
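Beyond `zkServer.sh status`, a running 3.4.x ensemble can be probed with ZooKeeper's four-letter-word commands over the client port (addresses as above; requires the cluster to be up):

```
# "srvr" reports version, latency and Mode: (leader or follower)
echo srvr | nc 10.1.234.110 2181 | grep Mode

# "ruok" should answer "imok" on a healthy server
echo ruok | nc 10.1.234.111 2181

# Exactly one of the three hosts should report Mode: leader
for ip in 10.1.234.110 10.1.234.111 10.1.234.112; do
    echo "$ip: $(echo srvr | nc $ip 2181 | grep Mode)"
done
```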