Building a 3-master, 3-slave Redis cluster on k8s

1. NFS

Install NFS

yum -y install nfs-utils rpcbind

Configure the export paths (the subnet determines which networks may access the shares)

vim /etc/exports
/data/k8s/redis/pv1 192.168.100.0/24(rw,sync,no_root_squash)
/data/k8s/redis/pv2 192.168.100.0/24(rw,sync,no_root_squash)
/data/k8s/redis/pv3 192.168.100.0/24(rw,sync,no_root_squash)
/data/k8s/redis/pv4 192.168.100.0/24(rw,sync,no_root_squash)
/data/k8s/redis/pv5 192.168.100.0/24(rw,sync,no_root_squash)
/data/k8s/redis/pv6 192.168.100.0/24(rw,sync,no_root_squash)

Create the directories

mkdir -p /data/k8s/redis/pv{1..6}

Start the services

Be sure to start rpcbind first: NFS registers its RPC services (mountd and friends) through rpcbind, so NFS will not work without it.

systemctl restart rpcbind
systemctl restart nfs
systemctl enable nfs
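
To double-check that rpcbind is doing its job, the registered RPC services can be listed (a quick sketch; the exact output varies by system):

# Confirm rpcbind has registered the NFS-related RPC services (ports will vary)
rpcinfo -p localhost | grep -E 'nfs|mountd'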

Verify the exports are active

exportfs -v

# Output
/data/k8s/redis/pv1
		192.168.100.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/data/k8s/redis/pv2
		192.168.100.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/data/k8s/redis/pv3
		192.168.100.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/data/k8s/redis/pv4
		192.168.100.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/data/k8s/redis/pv5
		192.168.100.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/data/k8s/redis/pv6
		192.168.100.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

Client side

yum -y install nfs-utils

List the shares exported by the storage server (the IP is the NFS server's address)

showmount -e 192.168.100.150

# Output
Export list for 192.168.100.150:
/data/k8s/redis/pv6 192.168.100.0/24
/data/k8s/redis/pv5 192.168.100.0/24
/data/k8s/redis/pv4 192.168.100.0/24
/data/k8s/redis/pv3 192.168.100.0/24
/data/k8s/redis/pv2 192.168.100.0/24
/data/k8s/redis/pv1 192.168.100.0/24

Create a mount point

mkdir /nfs

Mount the share and check the mount

mount 192.168.100.150:/data /nfs
df -Th |grep nfs

# Output
192.168.100.150:/data   nfs4       17G  1.6G   16G  10% /nfs

Verify sharing works

# On the client
cd /nfs/k8s/redis/pv1/
mkdir 1

# On the server
ls /data/k8s/redis/pv1/
1

NFS is working as expected.
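
Optionally, remove the test directory so the PV directories start out empty (the /nfs mount is kept, since it is used again later to verify persistence):

# On the server: remove the directory created during the test
rm -rf /data/k8s/redis/pv1/1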

2. Create the PVs

Pay attention to path, server, capacity, access mode, and reclaim policy; all of these affect whether a PV can bind to a PVC.

vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata: #------------------------------------------------------metadata
  name: redis-pv1 #----------------------------------------------PV name (PVs are cluster-scoped, so no namespace is needed)
spec: #----------------------------------------------------------PV spec
  capacity: #----------------------------------------------------PV capacity
    storage: 5Gi #-----------------------------------------------size
  accessModes: #-------------------------------------------------access modes
  - ReadWriteOnce #----------------------------------------------read-write by a single node
  persistentVolumeReclaimPolicy: Recycle #-----------------------reclaim policy (Recycle: scrub the data and make the PV Available again)
  storageClassName: "nfs" #--------------------------------------storage class
  nfs: #---------------------------------------------------------NFS backend
    path: /data/k8s/redis/pv1 #----------------------------------exported path
    server: 192.168.100.150 #------------------------------------NFS server IP
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "nfs"
  nfs:
    path: /data/k8s/redis/pv2
    server: 192.168.100.150
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "nfs"
  nfs:
    path: /data/k8s/redis/pv3
    server: 192.168.100.150
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv4
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "nfs"
  nfs:
    path: /data/k8s/redis/pv4
    server: 192.168.100.150
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv5
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "nfs"
  nfs:
    path: /data/k8s/redis/pv5
    server: 192.168.100.150
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv6
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "nfs"
  nfs:
    path: /data/k8s/redis/pv6
    server: 192.168.100.150
# Create the PVs
kubectl create -f pv.yaml

# List the PVs
kubectl get pv
# Output
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
redis-pv1   5Gi        RWO            Recycle          Available           nfs                     12m
redis-pv2   5Gi        RWO            Recycle          Available           nfs                     12m
redis-pv3   5Gi        RWO            Recycle          Available           nfs                     12m
redis-pv4   5Gi        RWO            Recycle          Available           nfs                     12m
redis-pv5   5Gi        RWO            Recycle          Available           nfs                     12m
redis-pv6   5Gi        RWO            Recycle          Available           nfs                     12m
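
Since the six PV definitions differ only in their name and path, the same manifest could also be generated with a small shell loop (a sketch, assuming the names, paths, and server IP used above):

# Generate pv.yaml covering redis-pv1..redis-pv6
> pv.yaml
for i in 1 2 3 4 5 6; do
cat >> pv.yaml << EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv$i
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "nfs"
  nfs:
    path: /data/k8s/redis/pv$i
    server: 192.168.100.150
EOF
done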

Redis configuration file

Create the Redis configuration as a ConfigMap.

vim redis.conf 
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

# Create the ConfigMap
kubectl create configmap redis-conf --from-file=redis.conf

# Inspect it
kubectl describe cm redis-conf

# Output
Name:         redis-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379


Events:  <none>
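
If the configuration needs to change later, one common approach is to regenerate the ConfigMap from the edited file and apply it in place (a sketch; --dry-run=client requires a reasonably recent kubectl, older versions use --dry-run):

# Rebuild the ConfigMap from the edited redis.conf and apply it over the existing one
kubectl create configmap redis-conf --from-file=redis.conf --dry-run=client -o yaml | kubectl apply -f -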


Create the headless Service

A StatefulSet relies on a headless Service as the basis for giving each Pod a stable network identity.
Note:
clusterIP: None (this is what makes it a headless Service)

vim headless-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster

# Check
kubectl get svc
# Output
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis-service   ClusterIP   None         <none>        6379/TCP   12h
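
Once the StatefulSet Pods exist (created in the next step), each one gets a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local through this headless Service. A quick way to see that resolution (a sketch using a throwaway busybox Pod) is:

# Resolve a Pod's stable DNS name via the headless Service (run after the Pods exist)
kubectl run dns-test --image=busybox --restart=Never -- nslookup redis-app-0.redis-service.default.svc.cluster.local
kubectl logs dns-test
kubectl delete pod dns-test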

Create the Redis cluster

The three parameters in the PVC template (access mode, storageClassName, and requested size) must match the PVs, otherwise the PVCs cannot bind.

vim redis.yaml
apiVersion: apps/v1 #------------------------------API group/version (apps/v1beta1 is long deprecated)
kind: StatefulSet #--------------------------------workload controller type
metadata: #----------------------------------------resource metadata
  name: redis-app #--------------------------------resource name, must be unique within the namespace
spec: #--------------------------------------------resource spec
  serviceName: "redis-service" #-------------------the governing headless Service created above
  replicas: 6 #------------------------------------number of Pod replicas
  selector: #--------------------------------------required by apps/v1; must match the template labels
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template: #--------------------------------------Pod template
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec: #----------------------------------------Pod spec
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity: #-------------------------prefer spreading the Pods across nodes
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers: #--------------------------------container settings
      - name: redis
        image: redis #-----------------------------no tag specified, so the latest image is pulled
        command:
          - "redis-server"
        args:
          - "/etc/redis/redis.conf"
          - "--protected-mode"
          - "no"
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        ports:
          - name: redis
            containerPort: 6379
            protocol: "TCP"
          - name: cluster
            containerPort: 16379
            protocol: "TCP"
        volumeMounts:
          - name: "redis-conf"
            mountPath: "/etc/redis"
          - name: "redis-data"
            mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"
        configMap: #--------------------------------mount the ConfigMap created earlier
          name: "redis-conf"
          items:
            - key: "redis.conf"
              path: "redis.conf"
  volumeClaimTemplates: #---------------------------PVC template
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteOnce" ] #-----------access mode, must match the PVs
      storageClassName: "nfs" #--------------------storageClassName, must match the PVs
      resources:
        requests:
          storage: 5Gi #---------------------------requested size, must match the PVs

# Create the StatefulSet
kubectl create -f redis.yaml

# Check the PVCs
kubectl get pvc
# Output
NAME                     STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-data-redis-app-0   Bound    redis-pv6   5Gi        RWO            nfs            11h
redis-data-redis-app-1   Bound    redis-pv2   5Gi        RWO            nfs            11h
redis-data-redis-app-2   Bound    redis-pv3   5Gi        RWO            nfs            11h
redis-data-redis-app-3   Bound    redis-pv4   5Gi        RWO            nfs            11h
redis-data-redis-app-4   Bound    redis-pv5   5Gi        RWO            nfs            11h
redis-data-redis-app-5   Bound    redis-pv1   5Gi        RWO            nfs            11h

Check the Pods

kubectl get pod
# Output
NAME                                READY   STATUS    RESTARTS   AGE
redis-app-0                         1/1     Running   0          11h
redis-app-1                         1/1     Running   0          11h
redis-app-2                         1/1     Running   0          11h
redis-app-3                         1/1     Running   0          11h
redis-app-4                         1/1     Running   0          11h
redis-app-5                         1/1     Running   0          11h

As shown above, six Redis nodes (Pods) were created in total: three will serve as masters and the other three as their slaves. The Redis configuration is mounted into each container at /etc/redis/redis.conf from the redis-conf ConfigMap via a volume, and the data directory is declared through volumeClaimTemplates (i.e. PVCs), which bind to the PVs created earlier.
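
A quick way to confirm that the ConfigMap and the PVC-backed data directory are mounted where expected (a sketch) is:

# The ConfigMap should show up as /etc/redis/redis.conf inside the container
kubectl exec redis-app-0 -- cat /etc/redis/redis.conf

# The data directory is backed by one of the NFS PVs
kubectl exec redis-app-0 -- ls /var/lib/redis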

Initialization

Create an Ubuntu container

kubectl run -it ubuntu --image=ubuntu --restart=Never /bin/bash

Inside the container, switch the apt sources to the Aliyun mirror

cat > /etc/apt/sources.list << EOF
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
 
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
EOF

Install the basic tooling

apt-get update
apt-get install -y vim wget python2.7 python-pip redis-tools dnsutils

Initialize the cluster
First, install redis-trib:

pip install redis-trib==0.5.1
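
Before building the cluster, it is worth confirming that every Pod's DNS name resolves from inside this container (dig comes from the dnsutils package installed above); a small loop (a sketch) is:

# Each line should print exactly one Pod IP
for i in 0 1 2 3 4 5; do
  dig +short redis-app-$i.redis-service.default.svc.cluster.local
done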

Create the masters

redis-trib.py create \
  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-1.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-2.redis-service.default.svc.cluster.local`:6379

Then add a slave to each master

redis-trib.py replicate \
  --master-addr `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-app-3.redis-service.default.svc.cluster.local`:6379

redis-trib.py replicate \
  --master-addr `dig +short redis-app-1.redis-service.default.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-app-4.redis-service.default.svc.cluster.local`:6379

redis-trib.py replicate \
  --master-addr `dig +short redis-app-2.redis-service.default.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-app-5.redis-service.default.svc.cluster.local`:6379
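
The result can also be checked directly from this Ubuntu container, since redis-tools was installed above (a sketch):

# Query cluster status and topology through any node's stable DNS name
redis-cli -h redis-app-0.redis-service.default.svc.cluster.local cluster info
redis-cli -h redis-app-0.redis-service.default.svc.cluster.local cluster nodes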

Verify the cluster

Check the cluster status

kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# /usr/local/bin/redis-cli -c
127.0.0.1:6379> cluster nodes
# Three masters and three slaves are visible
5df9715324e3abeb35efd4ed7b12da44744010d9 10.244.2.64:6379@16379 slave 74af0c26e4aa29a697272e16bcc254231bfc6f01 0 1623208036142 2 connected
382a85b79974d46a83e9d6313a61bb3e8ddfa653 10.244.3.15:6379@16379 slave b192548c2c54b9950ed6d8c5123275037338f36c 0 1623208035000 4 connected
74af0c26e4aa29a697272e16bcc254231bfc6f01 10.244.1.76:6379@16379 myself,master - 0 1623208036000 2 connected 5462-10922
85eda6a8c524eaaf2c90534abebda4b2f4acafe2 10.244.2.63:6379@16379 master - 0 1623208036040 1 connected 0-5461
ff3d2e3b3d477c016ef4ce6cc35c6536227778c2 10.244.1.77:6379@16379 slave 85eda6a8c524eaaf2c90534abebda4b2f4acafe2 0 1623208037151 1 connected
b192548c2c54b9950ed6d8c5123275037338f36c 10.244.3.14:6379@16379 master - 0 1623208035133 4 connected 10923-16383
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:4
cluster_my_epoch:2
cluster_stats_messages_ping_sent:1491
cluster_stats_messages_pong_sent:1511
cluster_stats_messages_meet_sent:2
cluster_stats_messages_sent:3004
cluster_stats_messages_ping_received:1511
cluster_stats_messages_pong_received:1491
cluster_stats_messages_received:3002
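
A simple write/read round trip shows the cluster redirecting the client across slots (the key and value here are arbitrary examples):

# -c makes redis-cli follow MOVED redirections to the slot owner
kubectl exec redis-app-2 -- redis-cli -c set hello world
kubectl exec redis-app-2 -- redis-cli -c get hello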

Verify NFS persistence

The Redis data files are visible on the share, which proves the data is being persisted over NFS.

ls /nfs/k8s/redis/pv1/
appendonly.aof  dump.rdb  nodes.conf

Create a Service for in-cluster access

The headless Service does not provide a single load-balanced virtual IP, so we create a regular ClusterIP Service for in-cluster access and load balancing.

cat svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  type: ClusterIP
  clusterIP: 10.1.0.100
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis
    appCluster: redis-cluster

# Check
kubectl get svc
# Output
NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)              AGE
redis-cluster      ClusterIP   10.1.0.100    <none>        6379/TCP,16379/TCP   12h
redis-service      ClusterIP   None          <none>        6379/TCP             13h

As shown above, this Service is named redis-cluster, exposes port 6379 inside the K8S cluster, and load-balances across the Pods labeled app: redis and appCluster: redis-cluster.
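
To verify the fixed ClusterIP actually reaches Redis, a quick ping through it from any Pod (a sketch, using the IP assigned above) works:

# Connect through the ClusterIP Service from inside the cluster
kubectl exec redis-app-0 -- redis-cli -h 10.1.0.100 -p 6379 ping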

Test master/slave failover

kubectl exec -it redis-app-0 /bin/bash
root@redis-app-0:/data#  /usr/local/bin/redis-cli -c
127.0.0.1:6379> role
1) "master"
2) (integer) 1890
3) 1) 1) "10.244.3.15"
      2) "6379"
      3) "1890"

As shown, redis-app-0 is a master and its slave's IP is 10.244.3.15.

Delete the master Pod

kubectl delete pod redis-app-0

# Wait for the Pod to be recreated, then exec into it.
kubectl exec -it redis-app-0 /bin/bash
/usr/local/bin/redis-cli -c
127.0.0.1:6379> role
1) "slave"
2) "10.244.3.15"
3) (integer) 6379
4) "connected"
5) (integer) 2212

As shown, the former master has come back as a slave.

Check the new master

redis-app-0's slave was 10.244.3.15; after the master went down, that slave was promoted to master. Exec into it to confirm.

kubectl exec -it redis-app-3 bash
root@redis-app-3:/data# /usr/local/bin/redis-cli -c
127.0.0.1:6379> role
1) "master"
2) (integer) 2534
3) 1) 1) "10.244.3.18"
      2) "6379"
      3) "2534"

As shown, the master/slave failover completed.
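
The failover can also be confirmed from any other node by listing the cluster topology again (a sketch):

# redis-app-0's node ID should now appear as a slave of the promoted master
kubectl exec redis-app-1 -- redis-cli cluster nodes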
