Deploying a Stateful ZooKeeper Cluster on Kubernetes

I. Preparation
1. Pull the image

docker pull mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10
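Because the StatefulSet below uses imagePullPolicy: Always, every node pulls the image from the registry at pod start; the manual pull above only warms the local cache. To confirm the image is present on a node (assuming Docker is the container runtime there):

docker images mirrorgooglecontainers/kubernetes-zookeeper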

2. The ZooKeeper ensemble needs storage, so we first prepare PersistentVolumes (PVs). Here I create three PVs from a YAML file, for the PersistentVolumeClaims (PVCs) of the three ZooKeeper nodes to bind to. All three PVs point at the same hostPath; note that hostPath PVs are not pinned to a particular node, so this layout relies on the StatefulSet's anti-affinity rule (below) to spread the pods across nodes, and each replica's data lives on whichever node its pod runs on.
3. persistent-volume.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk1
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"    #此目录不需要自行创建,系统会自动创建
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk2
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk3
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle

4. Create the PVs with the following command

kubectl create -f persistent-volume.yaml
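Equivalently, since the three PV definitions differ only in metadata.name, you can generate and create them in one shell loop (a sketch assuming bash; it replaces both the YAML file and the command above):

for i in 1 2 3; do
cat <<EOF | kubectl create -f -
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk$i    # only the name varies between the three PVs
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
EOF
done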

5. Check the result. (The listing below was captured after the PVCs created in later steps had already bound; immediately after creation the STATUS column shows Available and CLAIM is empty.)

[root@k8s-node1 zookeeper]# kubectl get pv 
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                  STORAGECLASS   REASON    AGE
k8s-pv-zk1   1Gi        RWO            Recycle          Bound     default/datadir-zk-0   anything                 42m
k8s-pv-zk2   1Gi        RWO            Recycle          Bound     default/datadir-zk-2   anything                 42m
k8s-pv-zk3   1Gi        RWO            Recycle          Bound     default/datadir-zk-1   anything                 42m
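To script against the binding state rather than read the table, kubectl's jsonpath output gives the phase directly:

kubectl get pv k8s-pv-zk1 -o jsonpath='{.status.phase}'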

6. Create the ZooKeeper cluster
6.1. zookeeper.yaml
The manifest below defines four objects: a headless Service (zk-hs) that gives each pod a stable DNS name for peer discovery, a NodePort Service (zk-cs) for clients, a PodDisruptionBudget (maxUnavailable: 1, so a voluntary disruption can never drop the three-server ensemble below its two-server quorum), and the StatefulSet itself.

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  type: NodePort
  ports:
  - port: 2181
    targetPort: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

7. Create everything with the following command

kubectl create -f zookeeper.yaml
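The pods are created in parallel (podManagementPolicy: Parallel in the StatefulSet). You can watch them come up with:

kubectl get pods -w -l app=zk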

Note: after creating the StatefulSet you will hit a problem: none of the ZooKeeper pods start. The pod logs point to a permissions error on /var/lib/zookeeper: the hostPath directory is owned by root, while the container runs as UID 1000 (the runAsUser/fsGroup set in the StatefulSet). Change the ownership manually on every node that hosts one of the PVs. The commands below work when the freshly created zookeeper user receives UID 1000; otherwise chown to the numeric IDs 1000:1000 directly.

useradd zookeeper
chown -R zookeeper:zookeeper /var/lib/zookeeper/
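A hedged alternative to touching every node by hand is an initContainer that fixes the ownership before ZooKeeper starts. This is a sketch to merge into the StatefulSet's pod spec alongside containers: (the busybox image and tag are assumptions; any image with chown works):

      initContainers:
      - name: fix-permissions
        image: busybox:1.31           # assumed utility image
        command: ["sh", "-c", "chown -R 1000:1000 /var/lib/zookeeper"]
        securityContext:
          runAsUser: 0                # chown needs root; the main container still runs as UID 1000
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper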

8. Check pod status

kubectl get pod -o wide
zk-0                                   1/1       Running   0          37m       10.2.11.40   192.168.29.182
zk-1                                   1/1       Running   0          37m       10.2.1.31    192.168.29.176
zk-2                                   1/1       Running   0          37m       10.2.11.43   192.168.29.183
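The StatefulSet as a whole can be checked the same way:

kubectl get statefulset zk
kubectl rollout status statefulset zk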

9. Check the PVs again; each persistent volume claim is now bound

[root@k8s-node1 zookeeper]# kubectl get pv -o wide
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                  STORAGECLASS   REASON    AGE
k8s-pv-zk1   1Gi        RWO            Recycle          Bound     default/datadir-zk-0   anything                 50m
k8s-pv-zk2   1Gi        RWO            Recycle          Bound     default/datadir-zk-2   anything                 50m
k8s-pv-zk3   1Gi        RWO            Recycle          Bound     default/datadir-zk-1   anything                 50m

10. Check PVC status. The names follow the pattern <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>, hence datadir-zk-0 through datadir-zk-2.

[root@k8s-node1 zookeeper]# kubectl get pvc -o wide
NAME           STATUS    VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-zk-0   Bound     k8s-pv-zk1   1Gi        RWO            anything       47m
datadir-zk-1   Bound     k8s-pv-zk3   1Gi        RWO            anything       47m
datadir-zk-2   Bound     k8s-pv-zk2   1Gi        RWO            anything       47m

11. Verify that the ZooKeeper ensemble is healthy by checking each server's role

for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done

Each instance reports its Mode: one leader and two followers, which confirms the ZooKeeper ensemble has been created successfully in Kubernetes.
12. Check the hostnames

[root@k8s-node1 zookeeper]# for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
zk-0
zk-1
zk-2

13. Check each server's myid (the start script derives it from the pod's ordinal in the hostname, as ordinal + 1)

[root@k8s-node1 zookeeper]# for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3

14. Check the fully qualified domain names

[root@k8s-node1 zookeeper]# for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
zk-0.zk-hs.default.svc.cluster.local.
zk-1.zk-hs.default.svc.cluster.local.
zk-2.zk-hs.default.svc.cluster.local.
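These per-pod names come from the headless service zk-hs and are what the servers use to reach each other on ports 2888/3888. Ordinary clients should instead go through the zk-cs service; for example, running the stock CLI against it from one of the pods (zkCli.sh's -server flag is standard in ZooKeeper 3.4):

kubectl exec zk-0 -- zkCli.sh -server zk-cs.default.svc.cluster.local:2181 ls /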

15. Inspect the generated ZooKeeper configuration

[root@k8s-node1 ~]# kubectl exec zk-0  -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.cluster.local.:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local.:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local.:2888:3888

16. Test the cluster end to end

[root@k8s-node1 ~]# kubectl exec -ti zk-1 -- bash
zookeeper@zk-1:/$ cd /opt/zookeeper
zookeeper@zk-1:/opt/zookeeper$ cd bin/
zookeeper@zk-1:/opt/zookeeper/bin$ zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 1] create /hello world
Created /hello
[zk: localhost:2181(CONNECTED) 2] get /hello         
world
cZxid = 0x200000079
ctime = Fri Nov 22 01:24:40 UTC 2019
mZxid = 0x200000079
mtime = Fri Nov 22 01:24:40 UTC 2019
pZxid = 0x200000079
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0

17. Check on the other nodes that the data has replicated:
17.1. For example, read the data on zk-0:

[root@k8s-node1 ~]# kubectl exec -ti zk-0 -- bash
zookeeper@zk-0:/$ cd /opt/zookeeper/bin/
zookeeper@zk-0:/opt/zookeeper/bin$ zkCli.sh 
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, controller_epoch, controller, brokers, zookeeper, admin, isr_change_notification, consumers, hello, config]
[zk: localhost:2181(CONNECTED) 1] ls /hello
[]
[zk: localhost:2181(CONNECTED) 2] get /hello
world
cZxid = 0x200000079
ctime = Fri Nov 22 01:24:40 UTC 2019
mZxid = 0x200000079
mtime = Fri Nov 22 01:24:40 UTC 2019
pZxid = 0x200000079
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
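The interactive session above can also be compressed into one non-interactive check across all three replicas; zkCli.sh executes a command passed on its command line and then exits:

for i in 0 1 2; do kubectl exec zk-$i -- zkCli.sh get /hello; done

Each replica should print world (along with the znode's stat block), confirming the write made on zk-1 replicated everywhere.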