Deploying a redis-cluster + predixy proxy cluster on Kubernetes 1.22.3


I. Environment Preparation

CentOS Linux release 7.7.1908 (Core)  3.10.0-1062.el7.x86_64 

kubeadm-1.22.3-0.x86_64
kubelet-1.22.3-0.x86_64
kubectl-1.22.3-0.x86_64
kubernetes-cni-0.8.7-0.x86_64
 

Hostname        IP                VIP / Role
k8s-master01    192.168.30.106    192.168.30.115 (VIP)
k8s-master02    192.168.30.107
k8s-master03    192.168.30.108
k8s-node01      192.168.30.109
k8s-node02      192.168.30.110
k8s-pv          192.168.30.114    NFS storage

II. Introduction to Redis

  • Redis is a key-value store. Compared with Memcached, it supports a richer set of value types, including string, list, set, zset (sorted set), and hash.
  • These types support push/pop, add/remove, set intersection/union/difference, and many other operations, all of which are atomic.
  • Like Memcached, data is kept in memory for performance. Unlike Memcached, Redis periodically writes updated data to disk or appends write operations to a log file, and builds master-slave replication on top of that.

III. What is Redis Cluster

Redis Cluster uses a decentralized architecture: every node holds data and the full cluster state, and every node is connected to every other node.

Its key characteristics:
1. All Redis nodes are interconnected (PING-PONG mechanism) and use a binary protocol internally to optimize transfer speed and bandwidth.
2. A node is only marked as failed when more than half of the nodes in the cluster detect the failure.
3. Clients connect directly to Redis nodes without an intermediate proxy layer; a client does not need to connect to every node, any reachable node in the cluster is enough.
4. Redis Cluster maps all physical nodes onto slots [0-16383] (not necessarily evenly), and the cluster maintains the node <-> slot <-> value mapping.
5. The cluster pre-allocates 16384 hash slots. When a key-value pair is stored, CRC16(key) mod 16384 decides which slot the key goes into.
6. Redis Cluster requires at least 3 masters to form a cluster, and each master should have at least one slave.
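
To see the slot mapping in action, you can ask any cluster node which slot a given key hashes to (a quick illustration; user:1001 is just an example key name and the node address is a placeholder):

# the reply is an integer between 0 and 16383
redis-cli -c -h <any-cluster-node-ip> -p 6379 CLUSTER KEYSLOT user:1001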

More details are available in the Redis Cluster documentation online.

IV. Preparing to deploy redis-cluster on Kubernetes

redis-cluster is a stateful service, so on Kubernetes it has to be deployed with a StatefulSet, combining the StatefulSet controller with PersistentVolumes for persistent storage.

StatefulSets and PersistentVolumes themselves are not covered in detail here; see my earlier article (k8s-1.22.3版本部署持久化存储之StorageClass+NFS_niko0598的博客-CSDN博客).

V. Steps for creating the StorageClass

This deployment uses the namespace k8s-redis.
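
If the k8s-redis namespace does not exist yet, create it first:

kubectl create namespace k8s-redis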

1. Configure the NFS server storage

See my previous article for the details.

2. The NFS shared directory

# exportfs
/data/volumes   192.168.30.0/24
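
The matching /etc/exports entry on the NFS server looks roughly like the line below (the export options are an assumption; use whatever was configured in the previous article):

/data/volumes 192.168.30.0/24(rw,sync,no_root_squash)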

3. Create the RBAC objects for the NFS provisioner

# ssh to k8s-master01
cd /root/k8s-cluster/redis-cluster

vim nfs-rbac.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-redis-provisioner
  # replace with namespace where provisioner is deployed
  namespace: k8s-redis        # set the namespace to match your environment; the same applies below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-redis-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-redis-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-redis-provisioner
    # replace with namespace where provisioner is deployed
    namespace: k8s-redis
roleRef:
  kind: ClusterRole
  name: nfs-redis-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-redis-provisioner
    # replace with namespace where provisioner is deployed
  namespace: k8s-redis
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-redis-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-redis-provisioner
    # replace with namespace where provisioner is deployed
    namespace: k8s-redis
roleRef:
  kind: Role
  name: leader-locking-nfs-redis-provisioner
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f nfs-rbac.yaml
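
To confirm the RBAC objects were created, you can list them, for example:

kubectl get serviceaccount nfs-redis-provisioner -n k8s-redis
kubectl get clusterrole,clusterrolebinding | grep nfs-redis-provisioner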

4. Create the StorageClass for the redis-cluster

vim nfs-storageClass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-redis-nfs-storage
  namespace: k8s-redis
provisioner: redis-nfs-storage # this name must match the PROVISIONER_NAME environment variable in the provisioner deployment
parameters:
#  archiveOnDelete: "false"
  archiveOnDelete: "true"
reclaimPolicy: Retain

 kubectl apply -f nfs-storageClass.yaml

Check the StorageClass:

# kubectl get sc -n k8s-redis
NAME                            PROVISIONER         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-redis-nfs-storage       redis-nfs-storage   Retain          Immediate           false                  5d23h

5. Deploy the NFS provisioner for the redis-cluster

vim nfs-provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-redis-provisioner
  labels:
    app: nfs-redis-provisioner
  # replace with namespace where provisioner is deployed
  namespace: k8s-redis  # must match the namespace used in the RBAC file
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-redis-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-redis-provisioner
    spec:
      serviceAccountName: nfs-redis-provisioner
      containers:
        - name: nfs-redis-provisioner
          #image: quay.io/external_storage/nfs-redis-provisioner:latest
          image: registry-op.test.cn/nfs-subdir-external-provisioner:v4.0.1
          volumeMounts:
            - name: nfs-redis-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: redis-nfs-storage  # provisioner name; must match the provisioner field in nfs-storageClass.yaml
            - name: NFS_SERVER
              value: 192.168.30.114   # NFS server IP address
            - name: NFS_PATH
              value: "/data/volumes"    # NFS export path
      volumes:
        - name: nfs-redis-root
          nfs:
            server: 192.168.30.114  # NFS server IP address
            path: "/data/volumes"     # NFS export path
      imagePullSecrets:
      - name: registry-op.test.cn

 kubectl apply -f nfs-provisioner.yaml

# kubectl get pods -n k8s-redis
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-redis-provisioner-5bb559cc64-z7frn   1/1     Running   0          5d23h
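
If PVCs created later stay in Pending, the provisioner log is the first place to look:

# kubectl logs deploy/nfs-redis-provisioner -n k8s-redis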

VI. Deploy the redis-cluster StatefulSet

1. Prepare the Redis image

Creating the redis-cluster uses the redis-trib.rb tool, which ships with the official Redis source code; one way to fetch it is sketched a few lines below.

1) Create a Dockerfile

mkdir -p /opt/docker-build/redis
cd /opt/docker-build/redis
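
A sketch for fetching redis-trib.rb into this directory (the 4.0.14 release below is illustrative; the 4.0.x source trees still ship a fully functional redis-trib.rb, whereas Redis 5 folded its features into redis-cli --cluster):

wget https://download.redis.io/releases/redis-4.0.14.tar.gz
tar xzf redis-4.0.14.tar.gz
cp redis-4.0.14/src/redis-trib.rb ./redis-trib.rb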

vim Dockerfile

FROM redis:5.0
RUN apt-get update -y
RUN apt-get install -y  ruby rubygems
RUN apt-get clean all
RUN gem install redis
RUN apt-get install dnsutils -y
COPY ./redis-trib.rb /usr/local/bin/

2) Build the image and push it to my image registry

docker build -t registry-op.test.cn/redis:5.0 .
docker push registry-op.test.cn/redis:5.0

2. Create the ConfigMap for the redis-cluster

Note: the fix-ip.sh script exists because a rebuilt redis pod gets a new Pod IP; the script replaces the old Pod IP with the new one in /data/nodes.conf. Without it, the cluster breaks.

vim redis-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  namespace: k8s-redis
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/data/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e '/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/'${POD_IP}'/' ${CLUSTER_CONFIG}
    fi
    exec "$@"
  redis.conf: |
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    cluster-node-timeout 10000
    protected-mode no
    daemonize no
    pidfile /var/run/redis.pid
    port 6379
    tcp-backlog 511
    bind 0.0.0.0
    timeout 3600
    tcp-keepalive 1
    loglevel verbose
    logfile /data/redis.log
    databases 16
    save 900 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir /data
    #requirepass yl123456
    appendonly yes
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    lua-time-limit 20000
    slowlog-log-slower-than 10000
    slowlog-max-len 128
    #rename-command FLUSHALL  ""
    latency-monitor-threshold 0
    notify-keyspace-events ""
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64
    list-max-ziplist-entries 512
    list-max-ziplist-value 64
    set-max-intset-entries 512
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
    hll-sparse-max-bytes 3000
    activerehashing yes
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit slave 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
    hz 10
    aof-rewrite-incremental-fsync yes

kubectl apply -f redis-configmap.yaml
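
A quick check that the ConfigMap rendered as expected:

# kubectl describe configmap redis-cluster -n k8s-redis | head -n 20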

3. Create the StatefulSet and the headless Service

vim redis-statefulset.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: k8s-redis
  name: redis-cluster
spec:
  clusterIP: None
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: k8s-redis
  name: redis-cluster
spec:
  serviceName: redis-cluster
  podManagementPolicy: OrderedReady
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: registry-op.test.cn/redis:5.0
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/etc/redis/fix-ip.sh", "redis-server", "/etc/redis/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis/
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
      imagePullSecrets:
      - name: registry-op.test.cn
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-redis-nfs-storage"
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi

kubectl apply -f redis-statefulset.yaml

# Check pod and storage status

# kubectl get pods -n k8s-redis|grep redis-cluster
NAME                                     READY   STATUS    RESTARTS   AGE
redis-cluster-0                          1/1     Running   0          5d20h
redis-cluster-1                          1/1     Running   0          5d20h
redis-cluster-2                          1/1     Running   0          5d20h
redis-cluster-3                          1/1     Running   0          5d20h
redis-cluster-4                          1/1     Running   0          5d20h
redis-cluster-5                          1/1     Running   0          5d20h

# kubectl get svc -n k8s-redis|grep redis-cluster
redis-cluster   ClusterIP   None             <none>        6379/TCP,16379/TCP   5d20h


# kubectl get pv -n k8s-redis|grep redis-cluster
pvc-38976b68-b516-4e43-9dd8-bba57e241a22   10Gi       RWX            Retain           Bound    k8s-redis/data-redis-cluster-4   managed-redis-nfs-storage            5d21h
pvc-60f98315-75d2-46ce-9b29-fb0eecce733b   10Gi       RWX            Retain           Bound    k8s-redis/data-redis-cluster-2   managed-redis-nfs-storage            5d22h
pvc-6a2d3792-cf3c-450d-ac87-deda6621fe84   10Gi       RWX            Retain           Bound    k8s-redis/data-redis-cluster-3   managed-redis-nfs-storage            5d21h
pvc-aad10550-3ddc-4b57-b7c4-d51bf82216bc   10Gi       RWX            Retain           Bound    k8s-redis/data-redis-cluster-0   managed-redis-nfs-storage            5d23h
pvc-b2d822cd-9e7a-41f3-9b1e-93f1b8340165   10Gi       RWX            Retain           Bound    k8s-redis/data-redis-cluster-5   managed-redis-nfs-storage            5d21h
pvc-e6e81482-1ec6-4e1c-8726-3cbb52177929   10Gi       RWX            Retain           Bound    k8s-redis/data-redis-cluster-1   managed-redis-nfs-storage            5d22h


# kubectl get pvc -n k8s-redis|grep redis-cluster
data-redis-cluster-0   Bound    pvc-aad10550-3ddc-4b57-b7c4-d51bf82216bc   10Gi       RWX            managed-redis-nfs-storage   5d23h
data-redis-cluster-1   Bound    pvc-e6e81482-1ec6-4e1c-8726-3cbb52177929   10Gi       RWX            managed-redis-nfs-storage   5d22h
data-redis-cluster-2   Bound    pvc-60f98315-75d2-46ce-9b29-fb0eecce733b   10Gi       RWX            managed-redis-nfs-storage   5d22h
data-redis-cluster-3   Bound    pvc-6a2d3792-cf3c-450d-ac87-deda6621fe84   10Gi       RWX            managed-redis-nfs-storage   5d21h
data-redis-cluster-4   Bound    pvc-38976b68-b516-4e43-9dd8-bba57e241a22   10Gi       RWX            managed-redis-nfs-storage   5d21h
data-redis-cluster-5   Bound    pvc-b2d822cd-9e7a-41f3-9b1e-93f1b8340165   10Gi       RWX            managed-redis-nfs-storage   5d21h

4. Check the NFS storage

ssh to 192.168.30.114

# ll /data/volumes/|grep redis-cluster
drwxrwxrwx 2 root root      4096 Dec  8 14:35 k8s-redis-data-redis-cluster-0-pvc-aad10550-3ddc-4b57-b7c4-d51bf82216bc
drwxrwxrwx 2 root root      4096 Dec  8 14:35 k8s-redis-data-redis-cluster-1-pvc-e6e81482-1ec6-4e1c-8726-3cbb52177929
drwxrwxrwx 2 root root      4096 Dec 13 15:35 k8s-redis-data-redis-cluster-2-pvc-60f98315-75d2-46ce-9b29-fb0eecce733b
drwxrwxrwx 2 root root      4096 Dec  8 14:35 k8s-redis-data-redis-cluster-3-pvc-6a2d3792-cf3c-450d-ac87-deda6621fe84
drwxrwxrwx 2 root root      4096 Dec  8 14:35 k8s-redis-data-redis-cluster-4-pvc-38976b68-b516-4e43-9dd8-bba57e241a22
drwxrwxrwx 2 root root      4096 Dec 13 15:35 k8s-redis-data-redis-cluster-5-pvc-b2d822cd-9e7a-41f3-9b1e-93f1b8340165
# find k8s-redis-data-redis-cluster-*
k8s-redis-data-redis-cluster-0-pvc-aad10550-3ddc-4b57-b7c4-d51bf82216bc
k8s-redis-data-redis-cluster-0-pvc-aad10550-3ddc-4b57-b7c4-d51bf82216bc/appendonly.aof
k8s-redis-data-redis-cluster-0-pvc-aad10550-3ddc-4b57-b7c4-d51bf82216bc/nodes.conf.bak
k8s-redis-data-redis-cluster-0-pvc-aad10550-3ddc-4b57-b7c4-d51bf82216bc/dump.rdb
k8s-redis-data-redis-cluster-0-pvc-aad10550-3ddc-4b57-b7c4-d51bf82216bc/nodes.conf
k8s-redis-data-redis-cluster-0-pvc-aad10550-3ddc-4b57-b7c4-d51bf82216bc/redis.log
k8s-redis-data-redis-cluster-1-pvc-e6e81482-1ec6-4e1c-8726-3cbb52177929
k8s-redis-data-redis-cluster-1-pvc-e6e81482-1ec6-4e1c-8726-3cbb52177929/appendonly.aof
k8s-redis-data-redis-cluster-1-pvc-e6e81482-1ec6-4e1c-8726-3cbb52177929/nodes.conf.bak
k8s-redis-data-redis-cluster-1-pvc-e6e81482-1ec6-4e1c-8726-3cbb52177929/dump.rdb
k8s-redis-data-redis-cluster-1-pvc-e6e81482-1ec6-4e1c-8726-3cbb52177929/nodes.conf
k8s-redis-data-redis-cluster-1-pvc-e6e81482-1ec6-4e1c-8726-3cbb52177929/redis.log
k8s-redis-data-redis-cluster-2-pvc-60f98315-75d2-46ce-9b29-fb0eecce733b
k8s-redis-data-redis-cluster-2-pvc-60f98315-75d2-46ce-9b29-fb0eecce733b/appendonly.aof
k8s-redis-data-redis-cluster-2-pvc-60f98315-75d2-46ce-9b29-fb0eecce733b/nodes.conf.bak
k8s-redis-data-redis-cluster-2-pvc-60f98315-75d2-46ce-9b29-fb0eecce733b/dump.rdb
k8s-redis-data-redis-cluster-2-pvc-60f98315-75d2-46ce-9b29-fb0eecce733b/nodes.conf
k8s-redis-data-redis-cluster-2-pvc-60f98315-75d2-46ce-9b29-fb0eecce733b/redis.log
k8s-redis-data-redis-cluster-3-pvc-6a2d3792-cf3c-450d-ac87-deda6621fe84
k8s-redis-data-redis-cluster-3-pvc-6a2d3792-cf3c-450d-ac87-deda6621fe84/appendonly.aof
k8s-redis-data-redis-cluster-3-pvc-6a2d3792-cf3c-450d-ac87-deda6621fe84/nodes.conf.bak
k8s-redis-data-redis-cluster-3-pvc-6a2d3792-cf3c-450d-ac87-deda6621fe84/dump.rdb
k8s-redis-data-redis-cluster-3-pvc-6a2d3792-cf3c-450d-ac87-deda6621fe84/nodes.conf
k8s-redis-data-redis-cluster-3-pvc-6a2d3792-cf3c-450d-ac87-deda6621fe84/redis.log
k8s-redis-data-redis-cluster-4-pvc-38976b68-b516-4e43-9dd8-bba57e241a22
k8s-redis-data-redis-cluster-4-pvc-38976b68-b516-4e43-9dd8-bba57e241a22/appendonly.aof
k8s-redis-data-redis-cluster-4-pvc-38976b68-b516-4e43-9dd8-bba57e241a22/nodes.conf.bak
k8s-redis-data-redis-cluster-4-pvc-38976b68-b516-4e43-9dd8-bba57e241a22/dump.rdb
k8s-redis-data-redis-cluster-4-pvc-38976b68-b516-4e43-9dd8-bba57e241a22/nodes.conf
k8s-redis-data-redis-cluster-4-pvc-38976b68-b516-4e43-9dd8-bba57e241a22/redis.log
k8s-redis-data-redis-cluster-5-pvc-b2d822cd-9e7a-41f3-9b1e-93f1b8340165
k8s-redis-data-redis-cluster-5-pvc-b2d822cd-9e7a-41f3-9b1e-93f1b8340165/appendonly.aof
k8s-redis-data-redis-cluster-5-pvc-b2d822cd-9e7a-41f3-9b1e-93f1b8340165/nodes.conf.bak
k8s-redis-data-redis-cluster-5-pvc-b2d822cd-9e7a-41f3-9b1e-93f1b8340165/dump.rdb
k8s-redis-data-redis-cluster-5-pvc-b2d822cd-9e7a-41f3-9b1e-93f1b8340165/nodes.conf
k8s-redis-data-redis-cluster-5-pvc-b2d822cd-9e7a-41f3-9b1e-93f1b8340165/redis.log

5. Initialize the redis-cluster

Next, run the command below; it will prompt you to type "yes" to form the redis-cluster. The resulting cluster layout is:

three master-->slave pairs: three masters and three slaves.

Note:

redis-trib.rb must be given IP addresses when initializing the cluster; using hostnames produces an error like the following:

******/redis/client.rb:126:in `call’: ERR Invalid node address specified: redis-cluster-0.redis-headless.sts-app.svc.cluster.local:6379 (Redis::CommandError)

# The initialization command below first collects the IPs of the 6 redis pods, then initializes the cluster

kubectl exec -it redis-cluster-0 -n k8s-redis -- redis-trib.rb create --replicas 1 $(kubectl get pods -l app=redis-cluster -n k8s-redis -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')

# View the IPs of the 6 redis pods with the following command

# kubectl get pods -n k8s-redis  -o wide|grep redis-cluster
redis-cluster-0                          1/1     Running   0          5d21h   10.244.3.45   k8s-node02   <none>              <none>
redis-cluster-1                          1/1     Running   0          5d21h   10.244.5.41   k8s-node01   <none>              <none>
redis-cluster-2                          1/1     Running   0          5d21h   10.244.3.46   k8s-node02   <none>              <none>
redis-cluster-3                          1/1     Running   0          5d21h   10.244.5.42   k8s-node01   <none>              <none>
redis-cluster-4                          1/1     Running   0          5d21h   10.244.3.47   k8s-node02   <none>              <none>
redis-cluster-5                          1/1     Running   0          5d21h   10.244.5.43   k8s-node01   <none>              <none>

# Collect the 6 redis pod IPs

Note: there is a space before the closing single quote.

# kubectl get pods -l app=redis-cluster -n k8s-redis -o jsonpath='{range.items[*]}{.status.podIP}:6379 '
10.244.3.45:6379 10.244.5.41:6379 10.244.3.46:6379 10.244.5.42:6379 10.244.3.47:6379 10.244.5.43:6379 

# Run the command to start the initialization

#kubectl exec -it redis-cluster-0 -n k8s-redis -- redis-trib.rb create --replicas 1 $(kubectl get pods -l app=redis-cluster -n k8s-redis -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')


Adding replica 10.244.3.45:6379 to 10.244.5.42:6379
M: e5a3154a17131075f35fb32953b8cf8d6cfc7df0 10.244.3.47:6379
   slots:0-5460 (5461 slots) master
M: 961398483262f505a115957e7e4eda7ff3e64900 10.244.5.43:6379
   slots:5461-10922 (5462 slots) master
M: 2d1440e37ea4f4e9f6d39d240367deaa609d324d 10.244.5.42:6379
   slots:10923-16383 (5461 slots) master
S: 0d7bf40bf18d474509116437959b65551cd68b03 10.244.3.46:6379
   replicates e5a3154a17131075f35fb32953b8cf8d6cfc7df0
S: 8cbf699a850c0dafe51524127a594fdbf0a27784 10.244.5.41:6379
   replicates 961398483262f505a115957e7e4eda7ff3e64900
S: 2987a33f4ce2e412dcc11c1c1daa2538591cd930 10.244.3.45:6379
   replicates 2d1440e37ea4f4e9f6d39d240367deaa609d324d
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join......
>>> Performing Cluster Check (using node 10.244.3.47:6379)
M: e5a3154a17131075f35fb32953b8cf8d6cfc7df0 10.244.3.47:6379
   slots:0-5460 (5461 slots) master
M: 961398483262f505a115957e7e4eda7ff3e64900 10.244.5.43:6379
   slots:5461-10922 (5462 slots) master
M: 2d1440e37ea4f4e9f6d39d240367deaa609d324d 10.244.5.42:6379
   slots:10923-16383 (5461 slots) master
M: 0d7bf40bf18d474509116437959b65551cd68b03 10.244.3.46:6379
   slots: (0 slots) master
   replicates e5a3154a17131075f35fb32953b8cf8d6cfc7df0
M: 8cbf699a850c0dafe51524127a594fdbf0a27784 10.244.5.41:6379
   slots: (0 slots) master
   replicates 961398483262f505a115957e7e4eda7ff3e64900
M: 2987a33f4ce2e412dcc11c1c1daa2538591cd930 10.244.3.45:6379
   slots: (0 slots) master
   replicates 2d1440e37ea4f4e9f6d39d240367deaa609d324d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered. 

From the initialization output and the role checks further below, the cluster pairs up as follows (master/slave roles within a pair can swap over time after failovers):
redis-cluster-0 is a master, with redis-cluster-3 as its slave.
redis-cluster-1 is a master, with redis-cluster-4 as its slave.
redis-cluster-2 is a master, with redis-cluster-5 as its slave.
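
This master/slave layout can be confirmed at any time with CLUSTER NODES, run from any pod:

# kubectl exec -it redis-cluster-0 -n k8s-redis -- redis-cli cluster nodes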

6. Verify the Redis Cluster deployment

# kubectl exec -it redis-cluster-0 -n k8s-redis -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:586272
cluster_stats_messages_pong_sent:580527
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:1166800
cluster_stats_messages_ping_received:580521
cluster_stats_messages_pong_received:586273
cluster_stats_messages_meet_received:6
cluster_stats_messages_received:1166800
#  for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -n k8s-redis -- redis-cli role; echo; done
redis-cluster-0
master
707196
10.244.5.42
6379
707196

redis-cluster-1
master
707154
10.244.3.47
6379
707140

redis-cluster-2
master
707021
10.244.5.43
6379
707021

redis-cluster-3
slave
10.244.3.45
6379
connected
707196

redis-cluster-4
slave
10.244.5.41
6379
connected
707154

redis-cluster-5
slave
10.244.3.46
6379
connected
707021

At this point the whole redis-cluster is deployed. Before using it in production, though, several issues remain and rigorous testing is still needed:

  • How to respond if the backing NFS storage fails
  • How to expand capacity when the cluster runs out of space, and so on
  • How to avoid scheduling a redis master and its slave pod onto the same physical node (see the sketch after this list)
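
For the last point, one option is to add pod anti-affinity to the StatefulSet pod template so that redis-cluster pods prefer different nodes. A minimal sketch (not part of the manifests above; spreading pods does not by itself guarantee that a master and its own slave never share a node):

      # add under spec.template.spec of the redis-cluster StatefulSet
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: redis-cluster
              topologyKey: kubernetes.io/hostname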

7. Using the cluster

Inside the Kubernetes cluster, the service can be reached directly through CoreDNS; you can test this from a pod that has curl/nslookup installed.

Resolving redis-cluster.k8s-redis.svc.cluster.local through DNS returns the IP list of the whole cluster:

# kubectl exec -it curl -n default -- /bin/sh

[ root@curl:/ ]$ nslookup redis-cluster.k8s-redis.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      redis-cluster.k8s-redis.svc.cluster.local
Address 1: 10.244.3.47 redis-cluster-4.redis-cluster.k8s-redis.svc.cluster.local
Address 2: 10.244.5.42 redis-cluster-3.redis-cluster.k8s-redis.svc.cluster.local
Address 3: 10.244.5.41 redis-cluster-1.redis-cluster.k8s-redis.svc.cluster.local
Address 4: 10.244.5.43 redis-cluster-5.redis-cluster.k8s-redis.svc.cluster.local
Address 5: 10.244.3.45 redis-cluster-0.redis-cluster.k8s-redis.svc.cluster.local
Address 6: 10.244.3.46 redis-cluster-2.redis-cluster.k8s-redis.svc.cluster.local

VII. Deploy the predixy proxy for the redis-cluster

1. Why put a predixy proxy in front of the Redis cluster

1) Redis pod restarts can change pod IPs

2) Redis bears a heavy connection-handling load

3) Cluster scale-out/scale-in should be transparent to clients

4) Data security risks

For more background, see the article 小米 Redis 的 K8S 容器化部署实践.

2. Build the predixy image

1) Download the predixy source

mkdir -p /opt/docker-build/predixy
cd /opt/docker-build/predixy
wget -O predixy-master.zip https://codeload.github.com/joyieldInc/predixy/zip/refs/heads/master
unzip predixy-master.zip
mv predixy-master predixy-1.0.5
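
The Dockerfile below copies a pre-built binary from ./predixy-1.0.5/src/predixy, so build it on the host first (a sketch, assuming gcc-c++ and make are available on the build host):

cd predixy-1.0.5
make
cd ..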

2) Prepare the Dockerfile

Note: Redis is installed in the image for easier debugging, mainly to have the redis-cli tool available.

vim Dockerfile

FROM centos:7

RUN yum install -y epel-release net-tools
RUN yum install -y redis
RUN yum install -y libstdc++-static gcc gcc-c++ make
RUN mkdir /opt/predixy-1.0.5
RUN mkdir /etc/predixy
COPY ./predixy-1.0.5/src/predixy  /usr/local/bin/
COPY ./predixy-1.0.5/conf/*  /etc/predixy/
ENTRYPOINT ["/usr/local/bin/predixy","/etc/predixy/predixy.conf"]

3) Build the image and push it to the image registry

docker build -t registry-op.test.cn/predixy:1.0.5 .
docker push registry-op.test.cn/predixy:1.0.5

3. Deploy predixy

1) Create the predixy ConfigMap

vim predixy-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: predixy-config
  namespace: k8s-redis  # change the namespace to suit your environment
data:
  predixy.conf: |

    ################################### GENERAL ####################################
    ## Predixy configuration file example
    ## Specify a name for this predixy service
    ## redis command INFO can get this
    Name Predixy-DefaultNS
    ## Specify listen address, support IPV4, IPV6, Unix socket
    ## Examples:
    # Bind 127.0.0.1:7617
    # Bind 0.0.0.0:7617
    # Bind /tmp/predixy
    ## Default is 0.0.0.0:7617
    Bind 0.0.0.0:7617
    ## Worker threads
    WorkerThreads 4
    ## Memory limit, 0 means unlimited
    ## Examples:
    # MaxMemory 100M
    # MaxMemory 1G
    # MaxMemory 0
    ## MaxMemory can change online by CONFIG SET MaxMemory xxx
    ## Default is 0
    # MaxMemory 0
    ## Close the connection after a client is idle for N seconds (0 to disable)
    ## ClientTimeout can change online by CONFIG SET ClientTimeout N
    ## Default is 0, which disables this feature (clients are never proactively disconnected)
    ClientTimeout 0
    ## IO buffer size
    ## Default is 4096
    # BufSize 4096
    ################################### LOG ########################################
    ## Log file path
    ## Unspecify will log to stdout
    ## Default is Unspecified
    Log /data/predixy.log

    ## LogRotate support

    ## 1d rotate log every day
    ## nh rotate log every n hours   1 <= n <= 24
    ## nm rotate log every n minutes 1 <= n <= 1440
    ## nG rotate log every nG bytes
    ## nM rotate log every nM bytes
    ## time rotate and size rotate can combine eg 1h 2G, means 1h or 2G roate a time
    ## Examples:
    # LogRotate 1d 2G
    # LogRotate 1d
    LogRotate 1d

    ## Default is disable LogRotate


    ## In multi-threads, worker thread log need lock,
    ## AllowMissLog can reduce lock time for improve performance
    ## AllowMissLog can change online by CONFIG SET AllowMissLog true|false
    ## Default is true
    # AllowMissLog false

    ## LogLevelSample, output a log every N
    ## all level sample can change online by CONFIG SET LogXXXSample N
    LogVerbSample 0
    LogDebugSample 0
    LogInfoSample 100
    LogNoticeSample 1
    LogWarnSample 1
    LogErrorSample 1


    ################################### AUTHORITY ##################################
    # Include auth.conf

    ################################### SERVERS ####################################
    #Include cluster.conf
    # Include sentinel.conf
    # Include try.conf
    ###############################################################################
    # this ClusterServerPool block can also go in cluster.conf, depending on your preference
    ClusterServerPool {
      MasterReadPriority 60
      StaticSlaveReadPriority 50
      DynamicSlaveReadPriority 60
      RefreshInterval 1
      ServerTimeout 1
      ServerFailureLimit 10
      ServerRetryTimeout 1
      KeepAlive 120
      Servers {
        # important: this is the redis-server cluster address; it must resolve (verify with the nslookup test from the curl pod above)
        + redis-cluster.k8s-redis.svc.cluster.local:6379  
        #+ 10.244.5.43:6379
       }
     }
    ################################### DATACENTER #################################
    ## LocalDC specify current machine dc
    # LocalDC bj

    ## see dc.conf
    # Include dc.conf


    ################################### COMMAND ####################################
    ## Custom command define, see command.conf
    #Include command.conf
    ################################### LATENCY ####################################
    ## Latency monitor define, see latency.conf
    #Include latency.conf

kubectl apply -f predixy-configmap.yaml

2) Create static storage (the proxy writes its logs to this storage)

# Create the PV

vim pv-nfs.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  namespace: k8s-redis
  labels:
    type: nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: "/data/volumes"
    server: 192.168.30.114 # NFS server (k8s-pv)
    readOnly: false

 kubectl apply -f pv-nfs.yaml

# Create the PVC

vim pvc-nfs.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  namespace: k8s-redis
spec:
  accessModes:
  - ReadWriteMany
  resources:
     requests:
       storage: 5Gi
  storageClassName: nfs

kubectl apply -f pvc-nfs.yaml

# Check the status

# kubectl get pv -A|grep redis-pv
redis-pv                                   5Gi        RWX            Recycle          Bound    k8s-redis/redis-pvc              nfs                                  28h

# kubectl get pvc -A|grep redis-pvc
k8s-redis   redis-pvc              Bound    redis-pv                                   5Gi        RWX            nfs                         28h

3) Deploy the predixy Deployment

Note: predixy is a stateless service.

vim predixy-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '1'
  name: predixy
  namespace: k8s-redis
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      #k8s.cn/name: predixy
      app: predixy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: predixy
    spec:
      containers:
        - command:
            - predixy
            - /etc/predixy/predixy.conf
          #image: haandol/predixy
          #imagePullPolicy: IfNotPresent
          image: registry-op.test.cn/predixy:1.0.5
          name: predixy
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          resources:
            requests:
              cpu: 100m
              memory: 30Mi
            limits:
              cpu: 100m
              memory: 30Mi
          volumeMounts:
            - mountPath: /etc/predixy/
              name: predixy-config-dir
              readOnly: true
            - mountPath: /data/
              name: predixy-data-dir
      imagePullSecrets:
      - name: registry-op.test.cn
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 420
            name: predixy-config
          name: predixy-config-dir
        - name: predixy-data-dir
          persistentVolumeClaim:
            claimName: redis-pvc

---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: predixy
  namespace: k8s-redis
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: predixy
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
       name: cpu
       target:
          type: Utilization
          averageUtilization: 50
  - type: Resource
    resource:
       name: memory
       target:
          type: AverageValue
          averageValue: 80Mi

---
apiVersion: v1
kind: Service
metadata:
  name: predixy
  namespace: k8s-redis
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: predixy-port
      nodePort: 30617
      port: 7617
      protocol: TCP
      targetPort: 7617
  selector:
    app: predixy
  sessionAffinity: None
  type: NodePort

 kubectl apply -f predixy-deployment.yaml

# Check the status

# kubectl get pod -A|grep predixy
k8s-redis              predixy-695b7ccb56-wbjrd                    1/1     Running     0              22h
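
The NodePort Service can be checked the same way (it should expose port 30617):

# kubectl get svc predixy -n k8s-redis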

4) Test functionality

Connect manually from a machine that has the redis client installed and run INFO:

# redis-cli -h 192.168.30.109 -p 30617
# Proxy
Version:1.0.5
Name:Predixy-DefaultNS
Bind:0.0.0.0:7617
RedisMode:proxy
SingleThread:false
WorkerThreads:4
Uptime:1639386772
UptimeSince:2021-12-13 09:12:52

# SystemResource
UsedMemory:1338824
MaxMemory:0
MaxRSS:18563072
UsedCpuSys:3447.022
UsedCpuUser:608.583

# Stats
Accept:417
ClientConnections:1
TotalRequests:34307607
TotalResponses:34307599
TotalRecvClientBytes:1232140329
TotalSendServerBytes:1234417512
TotalRecvServerBytes:233152024
TotalSendClientBytes:171171181

# Servers
Server:redis-cluster.k8s-redis.svc.cluster.local:6379
Role:slave
Group:565909bf0235ddd449aacb3fd2843901c98620c0
DC:
CurrentIsFail:0
Connections:4
Connect:5
Requests:11777
Responses:11776
SendBytes:330348
RecvBytes:8934460

Server:10.244.5.42:6379
Role:slave
Group:a17b6ac1a65322f252623b3e9fcc8d5d6fc61d73
DC:
CurrentIsFail:0
Connections:4
Connect:4
Requests:11524
Responses:11524
SendBytes:322576
RecvBytes:8810124

Server:10.244.3.46:6379
Role:master
Group:755726d8c9cfbfc63e46725c8d085c0a1485356f
DC:
CurrentIsFail:0
Connections:4
Connect:84
Requests:14683354
Responses:14683322
SendBytes:528502172
RecvBytes:82358322

Server:10.244.3.47:6379
Role:slave
Group:565909bf0235ddd449aacb3fd2843901c98620c0
DC:
CurrentIsFail:0
Connections:4
Connect:4
Requests:11329
Responses:11329
SendBytes:317116
RecvBytes:8660937

Server:10.244.5.41:6379
Role:master
Group:565909bf0235ddd449aacb3fd2843901c98620c0
DC:
CurrentIsFail:0
Connections:4
Connect:4
Requests:11569
Responses:11569
SendBytes:323836
RecvBytes:8844489

Server:10.244.5.43:6379
Role:slave
Group:755726d8c9cfbfc63e46725c8d085c0a1485356f
DC:
CurrentIsFail:0
Connections:4
Connect:82
Requests:19566293
Responses:19566293
SendBytes:704291604
RecvBytes:106536210

Server:10.244.3.45:6379
Role:master
Group:a17b6ac1a65322f252623b3e9fcc8d5d6fc61d73
DC:
CurrentIsFail:0
Connections:4
Connect:5
Requests:11785
Responses:11784
SendBytes:329860
RecvBytes:9007482


# LatencyMonitor
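
Beyond INFO, a quick functional check is to write and read a key through the proxy (the key name k8s:test is arbitrary):

# redis-cli -h 192.168.30.109 -p 30617 set k8s:test hello
OK
# redis-cli -h 192.168.30.109 -p 30617 get k8s:test
"hello"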

5) Run a quick load test with the bundled redis-benchmark tool

# redis-benchmark -h 192.168.30.110 -p 30617 -c 50 -n 10000 -t get
====== GET ======
  10000 requests completed in 20.70 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.56% <= 1 milliseconds
7.80% <= 2 milliseconds
12.93% <= 3 milliseconds
17.89% <= 4 milliseconds
20.73% <= 5 milliseconds
21.61% <= 6 milliseconds
21.90% <= 7 milliseconds
22.21% <= 8 milliseconds
22.45% <= 9 milliseconds
22.48% <= 88 milliseconds
22.54% <= 89 milliseconds
22.57% <= 90 milliseconds
22.65% <= 91 milliseconds
23.06% <= 92 milliseconds
23.35% <= 93 milliseconds
23.93% <= 94 milliseconds
25.58% <= 95 milliseconds
28.82% <= 96 milliseconds
34.55% <= 97 milliseconds
41.28% <= 98 milliseconds
49.08% <= 99 milliseconds
58.52% <= 100 milliseconds
65.94% <= 101 milliseconds
71.38% <= 102 milliseconds
74.56% <= 103 milliseconds
76.19% <= 104 milliseconds
77.25% <= 105 milliseconds
77.57% <= 106 milliseconds
77.69% <= 107 milliseconds
77.71% <= 108 milliseconds
77.73% <= 109 milliseconds
77.79% <= 190 milliseconds
77.91% <= 191 milliseconds
78.18% <= 192 milliseconds
78.48% <= 193 milliseconds
79.25% <= 194 milliseconds
80.55% <= 195 milliseconds
82.86% <= 196 milliseconds
85.94% <= 197 milliseconds
89.23% <= 198 milliseconds
91.97% <= 199 milliseconds
94.09% <= 200 milliseconds
95.46% <= 201 milliseconds
96.12% <= 202 milliseconds
96.69% <= 203 milliseconds
96.94% <= 204 milliseconds
97.00% <= 205 milliseconds
97.08% <= 206 milliseconds
97.13% <= 207 milliseconds
97.15% <= 208 milliseconds
97.17% <= 291 milliseconds
97.18% <= 292 milliseconds
97.22% <= 293 milliseconds
97.26% <= 294 milliseconds
97.66% <= 295 milliseconds
97.92% <= 296 milliseconds
98.44% <= 297 milliseconds
98.86% <= 298 milliseconds
99.07% <= 299 milliseconds
99.29% <= 300 milliseconds
99.48% <= 301 milliseconds
99.53% <= 303 milliseconds
99.54% <= 304 milliseconds
99.55% <= 305 milliseconds
99.56% <= 392 milliseconds
99.58% <= 393 milliseconds
99.66% <= 394 milliseconds
99.71% <= 395 milliseconds
99.72% <= 396 milliseconds
99.87% <= 397 milliseconds
99.88% <= 399 milliseconds
99.94% <= 400 milliseconds
100.00% <= 400 milliseconds
483.19 requests per second

If you found this useful, please like and bookmark. Thanks!
