[Deploying a ZooKeeper Cluster the Traditional Way, and Migrating It to K8s]

ZooKeeper overview:

	zk mainly serves distributed systems: configuration management, service registry (naming), cluster management, and similar coordination tasks.

	Why migrate the ZooKeeper cluster? It backs our Kafka cluster: ZooKeeper stores Kafka's metadata (how many broker nodes exist, the topic names) and coordinates Kafka's normal operation.

	That Kafka cluster is part of an ELK + Kafka pipeline used to collect k8s logs.
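
For a concrete sense of what Kafka keeps in ZooKeeper, the standard Kafka znodes can be listed with zkCli.sh. A quick sketch, assuming a Kafka cluster is already registered against this ZooKeeper (the paths are Kafka's well-known znodes; the address is node1 from the environment below):

/opt/zookeeper/bin/zkCli.sh -server 192.168.79.34:2181 ls /brokers/ids      # live broker IDs
/opt/zookeeper/bin/zkCli.sh -server 192.168.79.34:2181 ls /brokers/topics   # topic names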

I. Deploying the ZooKeeper Cluster the Traditional Way

Environment

192.168.79.34 node1
192.168.79.35 node2
192.168.79.36 node3

1. Run on all nodes:

# Install Java on every node, then download and unpack ZooKeeper
yum install java -y
wget https://dlcdn.apache.org/zookeeper/zookeeper-3.8.0/apache-zookeeper-3.8.0-bin.tar.gz --no-check-certificate
tar xf apache-zookeeper-3.8.0-bin.tar.gz -C /opt
ln -s /opt/apache-zookeeper-3.8.0-bin/ /opt/zookeeper
mkdir /opt/zookeeper/logs
mkdir /opt/zookeeper/data
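
Instead of repeating these commands by hand on each node, the setup can be pushed out from node1. A minimal sketch, assuming password-free SSH from node1 to the other nodes (a convenience not set up in the original):

for ip in 192.168.79.35 192.168.79.36; do
    ssh root@${ip} "yum install -y java"
    scp apache-zookeeper-3.8.0-bin.tar.gz root@${ip}:/tmp/
    ssh root@${ip} "tar xf /tmp/apache-zookeeper-3.8.0-bin.tar.gz -C /opt \
        && ln -s /opt/apache-zookeeper-3.8.0-bin/ /opt/zookeeper \
        && mkdir -p /opt/zookeeper/{data,logs}"
done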

Edit zoo.cfg. The file is identical on all three nodes; only the myid file (created in the next step) differs per node.

cat /opt/zookeeper/conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=../data
dataLogDir=../logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=60

# Allow clients to query the server's current status and related info (four-letter commands)
4lw.commands.whitelist=*

# The three cluster members, format: server.<id>=<host>:<leader-follower port>:<election port>
server.1=192.168.79.34:2888:3888
server.2=192.168.79.35:2888:3888
server.3=192.168.79.36:2888:3888

Create each node's ID marker (myid)

# On node1
echo "1" > /opt/zookeeper/data/myid
# On node2
echo "2" > /opt/zookeeper/data/myid
# On node3
echo "3" > /opt/zookeeper/data/myid

Start ZooKeeper

cd /opt/zookeeper/bin/
./zkServer.sh start

Check the cluster status

[root@node4 bin]# bash zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower


[root@node5-db bin]# bash zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader

[root@node6 bin]# bash zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
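
Because 4lw.commands.whitelist=* is enabled in zoo.cfg, the same information is also available over the client port via four-letter commands. A quick sketch, assuming nc (ncat) is installed on the nodes:

echo stat | nc 192.168.79.34 2181   # role, client connections, zxid
echo mntr | nc 192.168.79.35 2181   # monitoring metrics as key/value pairs
echo ruok | nc 192.168.79.36 2181   # prints "imok" if the server is running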

Troubleshooting

2023-04-26 17:33:28,909 [myid:] - ERROR [ListenerHandler-/192.168.19.35:3888:o.a.z.s.q.QuorumCnxManager$Listener$ListenerHandler@1099] - Exception while listening to address /192.168.19.35:3888
java.net.BindException: Cannot assign requested address (Bind failed)
	at java.net.PlainSocketImpl.socketBind(Native Method)
	at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:513)


A: The listen address was mistyped. Wrong: 192.168.19.35; correct: 192.168.79.35.

II. Building the ZK Cluster Image

2.1 Write the Dockerfile

FROM openjdk:8-jre
# Set the timezone; then copy in the zk tarball and zoo.cfg, and rename the directory.
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    echo 'Asia/Shanghai' > /etc/timezone

ENV VERSION=3.8.0
ADD ./apache-zookeeper-${VERSION}-bin.tar.gz /
ADD ./zoo.cfg /apache-zookeeper-${VERSION}-bin/conf

RUN mv /apache-zookeeper-${VERSION}-bin /zookeeper

ADD ./entrypoint.sh /entrypoint.sh

# Ports exposed by ZK: [client port, leader-follower port, election port]
EXPOSE 2181 2888 3888  
CMD ["/bin/bash","/entrypoint.sh"] 

2.2 Write zoo.cfg (as a template)

# Heartbeat interval between servers, and between clients and servers, in milliseconds
tickTime={ZOOK_TICKTIME}

# Max heartbeats a follower (F) may take to initially connect and sync with the leader (L);
# 10 * tickTime = 20s with the defaults
initLimit={ZOOK_INIT_LIMIT}

# Max heartbeats tolerated between a follower's request and the leader's acknowledgement;
# 5 * tickTime = 10s with the defaults
syncLimit={ZOOK_SYNC_LIMIT}

# Data (snapshot) directory
dataDir={ZOOK_DATA_DIR}

# Transaction log directory
dataLogDir={ZOOK_LOG_DIR}

# Client connection port
clientPort={ZOOK_CLIENT_PORT}

# Maximum number of client connections (default 60)
maxClientCnxns={ZOOK_MAX_CLIENT_CNXNS}

# Allow clients to query the server's current status and related info (four-letter commands)
4lw.commands.whitelist=*

# Cluster members, format: server.<id>=<host>:<leader-follower port>:<election port>
# Do not hard-code the addresses here; the entrypoint script appends them from an env variable

2.3 Write entrypoint.sh


#!/bin/bash
# 1. Set variables (ZOOK_CONF_DIR points at the zoo.cfg file itself)
ZOOK_BIN_DIR=/zookeeper/bin
ZOOK_CONF_DIR=/zookeeper/conf/zoo.cfg

# 2. Substitute the placeholders in the config file, falling back to sane defaults
sed -i "s@{ZOOK_TICKTIME}@${ZOOK_TICKTIME:-2000}@g" ${ZOOK_CONF_DIR}
sed -i "s@{ZOOK_INIT_LIMIT}@${ZOOK_INIT_LIMIT:-10}@g" ${ZOOK_CONF_DIR}
sed -i "s@{ZOOK_SYNC_LIMIT}@${ZOOK_SYNC_LIMIT:-5}@g" ${ZOOK_CONF_DIR}
sed -i "s@{ZOOK_DATA_DIR}@${ZOOK_DATA_DIR:-/data}@g" ${ZOOK_CONF_DIR}
sed -i "s@{ZOOK_LOG_DIR}@${ZOOK_LOG_DIR:-/logs}@g" ${ZOOK_CONF_DIR}
sed -i "s@{ZOOK_CLIENT_PORT}@${ZOOK_CLIENT_PORT:-2181}@g" ${ZOOK_CONF_DIR}
sed -i "s@{ZOOK_MAX_CLIENT_CNXNS}@${ZOOK_MAX_CLIENT_CNXNS:-60}@g" ${ZOOK_CONF_DIR}

# 3. Append the cluster member list; injected later via the ZOOK_SERVERS env variable
for server in ${ZOOK_SERVERS}
do
	echo ${server} >> ${ZOOK_CONF_DIR}
done

# 4. Write the myid file into the data dir. K8s deploys zk with a StatefulSet, so the pod
#    hostname ends in an ordinal; myid = ordinal + 1.
#    Example: echo -e $(( $(echo "zk-1" | sed -r 's#.*-##') + 1 ))   ->  2
mkdir -p ${ZOOK_DATA_DIR:-/data}
ZOOK_MYID=$(( $(hostname | sed 's#.*-##g') + 1 ))
echo ${ZOOK_MYID:-99} > ${ZOOK_DATA_DIR:-/data}/myid

# 5. Run ZooKeeper in the foreground so the container stays up and receives signals
cd ${ZOOK_BIN_DIR}
exec ./zkServer.sh start-foreground

2.4 Build the image and push it to the registry

[root@node4 zk-dockerfile]# ls
apache-zookeeper-3.8.0-bin.tar.gz  Dockerfile  entrypoint.sh  zoo.cfg

docker build -t harbor.oldxu.net/base/zookeeper:3.8.0 .
docker push harbor.oldxu.net/base/zookeeper:3.8.0
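
Before moving to K8s, the image can be smoke-tested standalone. A minimal sketch: --hostname zk-0 matters because entrypoint.sh derives myid from the digits after the last "-" in the hostname, and with ZOOK_SERVERS unset no server.N lines are appended, so ZooKeeper starts in standalone mode:

docker run -d --name zk-test --hostname zk-0 -p 2181:2181 \
    harbor.oldxu.net/base/zookeeper:3.8.0
docker exec zk-test /zookeeper/bin/zkServer.sh status   # expect "Mode: standalone"
docker rm -f zk-test                                    # clean up after the test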

III. Migrating the ZK Cluster to K8s

3.1 Migration considerations

1. ZooKeeper is a stateful service;
2. A ZooKeeper cluster has distinct roles (leader/follower);
3. Each cluster member stores its own data;
4. Each cluster member needs a unique, stable address;

These requirements map onto a StatefulSet backed by a headless Service, created below.

3.2 Create the headless Service

01-zookeeper-headless.yaml

apiVersion: v1
kind: Service
metadata:
  name: zk-svc
spec:
  clusterIP: None
  selector:
    app: zk
  ports:
  - name: client
    port: 2181
    targetPort: 2181
  - name: leader-follower
    port: 2888
    targetPort: 2888
  - name: selection
    port: 3888
    targetPort: 3888
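
Apply it and confirm the Service really is headless (CLUSTER-IP shows None):

kubectl apply -f 01-zookeeper-headless.yaml
kubectl get svc zk-svc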

3.3 Create the StatefulSet

02-zk-sts.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  serviceName: "zk-svc"
  replicas: 3
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values: ["zk"]
              topologyKey: "kubernetes.io/hostname"
      imagePullSecrets:
      - name: harbor-login
      
      containers:
      - name: zk
        image: harbor.oldxu.net/base/zookeeper:3.8.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: client
          containerPort: 2181
        - name: leader-follower
          containerPort: 2888
        - name: selection
          containerPort: 3888
        
        env:
        - name: ZOOK_SERVERS
          value: "server.1=zookeeper-0.zk-svc.default.svc.cluster.local:2888:3888 server.2=zookeeper-1.zk-svc.default.svc.cluster.local:2888:3888 server.3=zookeeper-2.zk-svc.default.svc.cluster.local:2888:3888" 
        
        readinessProbe:     # readiness probe: a pod that is not ready receives no traffic
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - '[[ "$(/zookeeper/bin/zkServer.sh status 2>/dev/null | grep 2181)" ]] && exit 0 || exit 1'
          initialDelaySeconds: 5
        
        livenessProbe:    # liveness probe: a failing pod is restarted per the restart policy
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - '[[ "$(/zookeeper/bin/zkServer.sh status 2>/dev/null | grep 2181)" ]] && exit 0 || exit 1'            
          initialDelaySeconds: 5
        
        volumeMounts:
        - name: data
          mountPath: /data
          
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: "nfs"
      resources:
        requests:
          storage: 20Gi
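
Apply the manifest and watch the pods come up; a StatefulSet creates zookeeper-0, zookeeper-1, and zookeeper-2 one at a time, in order:

kubectl apply -f 02-zk-sts.yaml
kubectl get pods -l app=zk -w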

3.4 Verify the ZooKeeper cluster

1. Check the Pods and the Service

service/zk-svc              ClusterIP      None             <none>                                 2181/TCP,2888/TCP,3888/TCP   16m

2. Check the cluster state

[root@master01 zookeeperProject]# kubectl exec -it zookeeper-0 -- /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower


[root@master01 zookeeperProject]# kubectl exec -it zookeeper-1 -- /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader


[root@master01 zookeeperProject]# kubectl exec -it zookeeper-2 -- /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

Also verify DNS resolution for the headless Service and the per-pod records:

[root@master01 zookeeperProject]# dig @10.96.0.10 zk-svc.default.svc.cluster.local +short
10.244.0.4
10.244.2.228
10.244.1.239

[root@master01 zookeeperProject]# dig @10.96.0.10 zookeeper-0.zk-svc.default.svc.cluster.local +short
10.244.2.228

[root@master01 zookeeperProject]# dig @10.96.0.10 zookeeper-1.zk-svc.default.svc.cluster.local +short
10.244.1.239

[root@master01 zookeeperProject]# dig @10.96.0.10 zookeeper-2.zk-svc.default.svc.cluster.local +short
10.244.0.4


3. Connect to the ZooKeeper cluster

[root@master01 zookeeperProject]# kubectl exec -it zookeeper-2 -- /bin/bash

root@zookeeper-2:/# /zookeeper/bin/zkCli.sh -server zk-svc
Connecting to zk-svc

[zk: zk-svc(CONNECTED) 0] create /hello lss
Created /hello
[zk: zk-svc(CONNECTED) 1] get /hello 
lss
[zk: zk-svc(CONNECTED) 2] 
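
Since /hello was written through zookeeper-2's session, reading it back from a different member confirms the data replicated across the cluster. A quick check (zkCli.sh accepts a single command as trailing arguments):

kubectl exec -it zookeeper-0 -- /zookeeper/bin/zkCli.sh -server localhost:2181 get /hello
# expected output includes: lss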
