Deploying ZooKeeper to k8s as a DaemonSet


Foreword

Reworking the ZooKeeper deployment on k8s: converting the existing StatefulSet deployment into a DaemonSet.


1. Preparation

First, on each node that will run a ZooKeeper container, create the data, transaction-log, and configuration directories.

1.1 Create the directories

mkdir -p /tmp/data
mkdir -p /tmp/datalog
mkdir -p /tmp/dp
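The official zookeeper image writes the `myid` file into the data directory from the `ZOO_MY_ID` environment variable, but since `/data` is a hostPath it can also be pre-seeded on the node; a sketch (setting `N` per node is an assumption of this sketch, not part of the original steps):

```shell
# Run once per node; set N to 1, 2, or 3 to match that node's ZOO_MY_ID.
N=1
mkdir -p /tmp/data /tmp/datalog /tmp/dp
echo "$N" > /tmp/data/myid   # ZooKeeper reads its server id from dataDir/myid
cat /tmp/data/myid           # → 1
```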

1.2 Create the configuration files

Node 1: vi /tmp/dp/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=2000
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=12
server.1=0.0.0.0:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
Node 2: vi /tmp/dp/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=2000
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=12
server.1=zk1:2888:3888
server.2=0.0.0.0:2888:3888
server.3=zk3:2888:3888
Node 3: vi /tmp/dp/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=2000
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=12
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=0.0.0.0:2888:3888
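The three files are identical except for which server line reads 0.0.0.0. Rather than maintaining three copies by hand, the correct variant can be generated from the node's id; a sketch (the `MYID`/`OUT` variables are illustrative, not part of the original setup):

```shell
#!/bin/sh
# Generate zoo.cfg for one ensemble member: the local server's address
# must be 0.0.0.0, while the peers keep their Service hostnames.
MYID="${MYID:-1}"                # set to 1, 2, or 3 on each node
OUT="${OUT:-/tmp/dp/zoo.cfg}"

mkdir -p "$(dirname "$OUT")"
{
  cat <<'EOF'
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=2000
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=12
EOF
  for i in 1 2 3; do
    if [ "$i" = "$MYID" ]; then
      echo "server.$i=0.0.0.0:2888:3888"   # this node: bind all interfaces
    else
      echo "server.$i=zk$i:2888:3888"      # peers: reachable via their Services
    fi
  done
} > "$OUT"
```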

2. Create the DaemonSet YAML files

2.1 Configuration for the first node

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: zk1
  name: zk1
  namespace: default
spec:
#  replicas: 1
  selector:
    matchLabels:
      app: zk1
#  strategy:
#    rollingUpdate:
#      maxSurge: 1
#      maxUnavailable: 0
#    type: RollingUpdate
  template:
    metadata:
      labels:
        app: zk1
    spec:
      nodeSelector:
        kubernetes.io/hostname: fat2master.fat2master
      tolerations:
      - operator: Exists
      containers:
      - env:
        - name: ZOO_MY_ID
          value: '1'
#       - name: ZOO_SERVERS
#         value: server.1=zk1:2888:3888;2181 server.2=zk2:2888:3888;2181 server.3=zk3:2888:3888;2181
        image: zookeeper:3.4.10
        imagePullPolicy: Always
#        nodeSelector:
#          kubernetes.io/hostname: fat2master.fat2master
        name: zk1
        ports:
          - name: http
            containerPort: 2181
          - name: server
            containerPort: 2888
          - name: leader-election
            containerPort: 3888
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /datalog
          name: log
        - mountPath: /conf/zoo.cfg
          name: conf
        resources:
          requests:
            cpu: "1000m"
            memory: "2048Mi"
          limits:
            cpu: "1000m"
            memory: "2048Mi"          
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
      - name: data
        hostPath:
          path: /tmp/data
      - name: log
        hostPath:
          path: /tmp/datalog
      - name: conf
        hostPath:
          path: /tmp/dp/zoo.cfg
          type: File

---
apiVersion: v1
kind: Service
metadata:
  name: zk1
  labels:
    app: zk1
spec:
  ports:
    - port: 2181
      name: client
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  selector:
    app: zk1

2.2 Configuration for the second node

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: zk2
  name: zk2
  namespace: default
spec:
#  replicas: 1
  selector:
    matchLabels:
      app: zk2
#  strategy:
#    rollingUpdate:
#      maxSurge: 1
#      maxUnavailable: 0
#    type: RollingUpdate
  template:
    metadata:
      labels:
        app: zk2
    spec:
      nodeSelector:
        kubernetes.io/hostname: fat2slave1.fat2slave1
      tolerations:
      - operator: Exists
      containers:
      - env:
        - name: ZOO_MY_ID
          value: '2'
#       - name: ZOO_SERVERS
#         value: server.1=zk1:2888:3888;2181 server.2=zk2:2888:3888;2181 server.3=zk3:2888:3888;2181
        image: zookeeper:3.4.10
        imagePullPolicy: Always
#        nodeSelector:
#          kubernetes.io/hostname: fat2master.fat2master
        name: zk2
        ports:
          - name: http
            containerPort: 2181
          - name: server
            containerPort: 2888
          - name: leader-election
            containerPort: 3888
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /datalog
          name: log
        - mountPath: /conf/zoo.cfg
          name: conf
        resources:
          requests:
            cpu: "1000m"
            memory: "2048Mi"
          limits:
            cpu: "1000m"
            memory: "2048Mi"          
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
      - name: data
        hostPath:
          path: /tmp/data
      - name: log
        hostPath:
          path: /tmp/datalog
      - name: conf
        hostPath:
          path: /tmp/dp/zoo.cfg
          type: File

---
apiVersion: v1
kind: Service
metadata:
  name: zk2
  labels:
    app: zk2
spec:
  ports:
    - port: 2181
      name: client
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  selector:
    app: zk2

2.3 Configuration for the third node

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: zk3
  name: zk3
  namespace: default
spec:
#  replicas: 1
  selector:
    matchLabels:
      app: zk3
#  strategy:
#    rollingUpdate:
#      maxSurge: 1
#      maxUnavailable: 0
#    type: RollingUpdate
  template:
    metadata:
      labels:
        app: zk3
    spec:
      nodeSelector:
        kubernetes.io/hostname: fat2slave2.fat2slave2
      tolerations:
      - operator: Exists
      containers:
      - env:
        - name: ZOO_MY_ID
          value: '3'
#       - name: ZOO_SERVERS
#         value: server.1=zk1:2888:3888;2181 server.2=zk2:2888:3888;2181 server.3=zk3:2888:3888;2181
        image: zookeeper:3.4.10
        imagePullPolicy: Always
#        nodeSelector:
#          kubernetes.io/hostname: fat2master.fat2master
        name: zk3
        ports:
          - name: http
            containerPort: 2181
          - name: server
            containerPort: 2888
          - name: leader-election
            containerPort: 3888
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /datalog
          name: log
        - mountPath: /conf/zoo.cfg
          name: conf
        resources:
          requests:
            cpu: "1000m"
            memory: "2048Mi"
          limits:
            cpu: "1000m"
            memory: "2048Mi"          
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
      - name: data
        hostPath:
          path: /tmp/data
      - name: log
        hostPath:
          path: /tmp/datalog
      - name: conf
        hostPath:
          path: /tmp/dp/zoo.cfg
          type: File

---
apiVersion: v1
kind: Service
metadata:
  name: zk3
  labels:
    app: zk3
spec:
  ports:
    - port: 2181
      name: client
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  selector:
    app: zk3

3. Summary

Points to watch:

1. The listening address: the pod does not itself carry the zk1/zk2/zk3 hostname, so the local server's listen address must be changed to 0.0.0.0.

#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=2000
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=12
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=0.0.0.0:2888:3888

As the last line above shows, this file is deployed on the third node, so that node's own address is written as 0.0.0.0; the other two nodes are configured analogously.
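If you prefer to keep one canonical zoo.cfg listing every peer by hostname and patch it per node, the substitution can be scripted; a sketch (the `MYID` value and file path are illustrative, and `sed -i` assumes GNU sed on a Linux node):

```shell
# Start from a config that names all three peers, then rewrite this
# node's own entry so it binds on all interfaces.
MYID=3
CFG=/tmp/dp/zoo.cfg
mkdir -p /tmp/dp
printf '%s\n' 'server.1=zk1:2888:3888' \
              'server.2=zk2:2888:3888' \
              'server.3=zk3:2888:3888' > "$CFG"
sed -i "s/^server\.$MYID=.*/server.$MYID=0.0.0.0:2888:3888/" "$CFG"
cat "$CFG"
```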

2. The nodeSelector: each DaemonSet must be pinned to its designated node, e.g.

nodeSelector:
  kubernetes.io/hostname: fat2master.fat2master