Deploying Redis Clusters on Kubernetes

Part 1: Creating a distributed Redis cluster (cluster mode)

mkdir -p /root/redis-cluster
cd /root/redis-cluster

a. Install NFS (shared storage)

On CentOS, install with yum:

yum -y install nfs-utils rpcbind

Create six directories, one per PV:

mkdir -p /usr/local/kubernetes/redis/{pv1,pv2,pv3,pv4,pv5,pv6}

Edit the exports file and add the following entries:

vim /etc/exports
/usr/local/kubernetes/redis/pv1 *(rw,no_root_squash,no_all_squash,sync)
/usr/local/kubernetes/redis/pv2 *(rw,no_root_squash,no_all_squash,sync)
/usr/local/kubernetes/redis/pv3 *(rw,no_root_squash,no_all_squash,sync)
/usr/local/kubernetes/redis/pv4 *(rw,no_root_squash,no_all_squash,sync)
/usr/local/kubernetes/redis/pv5 *(rw,no_root_squash,no_all_squash,sync)
/usr/local/kubernetes/redis/pv6 *(rw,no_root_squash,no_all_squash,sync)

b. Start the nfs and rpcbind services

sudo service nfs-server start

Run sudo service nfs-server status to check that the service is up.

Install the NFS client on all nodes: install the rpcbind and nfs-utils packages, then start the services:

yum -y install nfs-utils rpcbind
chkconfig rpcbind on
systemctl restart rpcbind
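Before moving on, it is worth verifying from a worker node that the exports are actually reachable. A quick check (assuming the NFS server sits at 192.168.1.32, the address used in the PV manifests below):

# on any worker node: the six Redis export paths should be listed
showmount -e 192.168.1.32

# optional: test-mount one export, then remove it again
mount -t nfs 192.168.1.32:/usr/local/kubernetes/redis/pv1 /mnt
umount /mnt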

c. Create the PVs: six of them, for the PVCs to bind to

vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
  labels:
    app: redis        # matched by the selector in the volumeClaimTemplates below
spec:
  storageClassName: nfs-storage   # must match the StatefulSet's volumeClaimTemplates
  capacity:
    storage: 2000M    # 2000M of disk
  accessModes:
    - ReadWriteMany   # readable and writable by multiple clients
  nfs:
    server: 192.168.1.32
    path: "/usr/local/kubernetes/redis/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
  labels:
    app: redis
spec:
  storageClassName: nfs-storage
  capacity:
    storage: 2000M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.32
    path: "/usr/local/kubernetes/redis/pv2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv3
  labels:
    app: redis
spec:
  storageClassName: nfs-storage
  capacity:
    storage: 2000M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.32
    path: "/usr/local/kubernetes/redis/pv3"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv4
  labels:
    app: redis
spec:
  storageClassName: nfs-storage
  capacity:
    storage: 2000M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.32
    path: "/usr/local/kubernetes/redis/pv4"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv5
  labels:
    app: redis
spec:
  storageClassName: nfs-storage
  capacity:
    storage: 2000M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.32
    path: "/usr/local/kubernetes/redis/pv5"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv6
  labels:
    app: redis
spec:
  storageClassName: nfs-storage
  capacity:
    storage: 2000M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.32
    path: "/usr/local/kubernetes/redis/pv6"

As above, change server to your own IP; apart from the name and mount path, all six PVs are identical. Create them:

[root@master redis]# kubectl create -f pv.yaml
persistentvolume "nfs-pv1" created
persistentvolume "nfs-pv2" created
persistentvolume "nfs-pv3" created
persistentvolume "nfs-pv4" created
persistentvolume "nfs-pv5" created
persistentvolume "nfs-pv6" created

d. Create the ConfigMap

First create a file redis.conf with the following content:

appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Create a ConfigMap named redis-conf from redis.conf:

kubectl create configmap redis-conf --from-file=redis.conf
Inspect it with kubectl describe cm redis-conf:
Name: redis-conf
Namespace: default
Labels: <none>
Annotations: <none> 

Data

redis.conf:

appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
Events: <none>

e. Create the headless Service

The headless Service is what gives the StatefulSet its stable network identities, so it must be created first. Prepare headless-service.yml as follows:

apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster

Create it with kubectl create -f headless-service.yml; verify with kubectl get svc.
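Through this headless Service, each StatefulSet pod gets a stable DNS name of the form <pod-name>.redis-service.<namespace>.svc.cluster.local. Once the pods from the next step are running, resolution can be spot-checked with a throwaway busybox pod (dns-test is an arbitrary name; redis-app-0 assumes the StatefulSet defined below, deployed in the default namespace):

# busybox:1.28 ships a working nslookup; --rm removes the pod on exit
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup redis-app-0.redis-service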

f. Create the Redis cluster nodes

With the headless Service in place, we can use a StatefulSet to create the Redis cluster nodes, which is the core of this article. First create redis.yml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-app
spec:
  serviceName: "redis-service"   # must match the headless Service created above
  replicas: 6
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster   # required so the headless Service selector matches
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis
        command:
        - "redis-server"
        args:
        - "/etc/redis/redis.conf"
        - "--protected-mode"
        - "no"
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        ports:
        - name: redis
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379
          protocol: "TCP"
        volumeMounts:
        - name: "redis-conf"
          mountPath: "/etc/redis"
        - name: "redis-data"
          mountPath: "/var/lib/redis"   # matches the dir setting in redis.conf
      volumes:
      - name: "redis-conf"
        configMap:
          name: "redis-conf"
          items:
          - key: "redis.conf"
            path: "redis.conf"
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteMany" ]   # must be offered by the PVs above
      resources:
        requests:
          storage: 200M
      storageClassName: nfs-storage
      selector:
        matchLabels:
          app: redis

As shown above, this creates six Redis nodes (pods) in total: three will serve as masters, and the other three as their slaves. The Redis configuration comes from the redis-conf ConfigMap generated earlier, mounted into the container as /etc/redis/redis.conf. The data directory is declared through volumeClaimTemplates (i.e., PVCs), which bind to the PVs we created earlier.
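Apply the manifest, then confirm that all six pods start and that each PVC binds to one of the PVs created earlier (redis.yml is the file name used above):

kubectl create -f redis.yml

# six pods, redis-app-0 through redis-app-5, should reach Running
kubectl get pods -l app=redis -o wide

# six claims (redis-data-redis-app-0 ... -5), each Bound to an nfs-pv volume
kubectl get pvc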

g. Join the nodes into a cluster

The nodes can be joined into master/slave relationships with a single redis-cli command, which requires at least six nodes:

kubectl exec -it redis-app-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')
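redis-cli prints the proposed master/replica assignment and asks for confirmation (type yes). Afterwards, the cluster state can be checked from any node:

# expect cluster_state:ok and all 16384 slots covered
kubectl exec -it redis-app-0 -- redis-cli cluster info

# lists the six nodes and their master/slave roles
kubectl exec -it redis-app-0 -- redis-cli cluster nodes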

h. Problems

The steps above complete the cluster build, but accessing it from outside the Kubernetes cluster is still problematic, and this approach carries some risk, so we do not use it for now.

Part 2: Creating a sentinel-mode Redis cluster

This deployment runs a one-master, two-slave, three-sentinel Redis cluster on Kubernetes.

a. Set up NFS storage

(1) Install nfs-utils and rpcbind with yum on all four machines:

yum install -y nfs-utils rpcbind

(2) On the .176 host (the NFS server), configure the shared directories by adding the following to the exports file:

vi /etc/exports
/k8s/redis-sentinel/0 *(rw,sync,no_subtree_check,no_root_squash)
/k8s/redis-sentinel/1 *(rw,sync,no_subtree_check,no_root_squash)
/k8s/redis-sentinel/2 *(rw,sync,no_subtree_check,no_root_squash)

(3) On the NFS server, create the data storage path:

mkdir -p /k8s/redis-sentinel

(4) Enter the directory:

cd /k8s/redis-sentinel

(5) Create the three per-instance subdirectories:

mkdir 0 1 2

(6) Set permissions on the Redis data directories:

chmod -R 755 /k8s/redis-sentinel

(7) Apply the newly added NFS exports:

exportfs -arv

(8) Start rpcbind and enable it at boot:

systemctl enable rpcbind && systemctl start rpcbind

(9) Start NFS and enable it at boot:

systemctl enable nfs && systemctl start nfs
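A quick server-side check that the exports took effect:

# run on the NFS server; all three /k8s/redis-sentinel/{0,1,2} paths should be listed
showmount -e localhost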

All of the following operations are performed as root on the master host.

b. Create the PVs

(1) Create redis-sentinel-pv.yaml. Note: change the server IP in the file to your own; everything else can stay the same.
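The file's contents are not reproduced in the original. A minimal sketch, modeled on the part-1 PV definitions and the three exports above (the server address and the nfs-storage class name are assumptions inferred from the surrounding text), might look like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-sentinel-pv0
spec:
  capacity:
    storage: 2Gi                  # covers the 2Gi requested by the StatefulSets below
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage   # matches the volumeClaimTemplates below
  nfs:
    server: 192.168.212.176       # change to your NFS server's IP
    path: /k8s/redis-sentinel/0
---
# repeat as redis-sentinel-pv1 and redis-sentinel-pv2,
# pointing at /k8s/redis-sentinel/1 and /k8s/redis-sentinel/2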

(2) Apply redis-sentinel-pv.yaml:

kubectl apply -f redis-sentinel-pv.yaml

(3) Check that the PVs were created successfully:

kubectl get pv | grep redis

c. Create the ConfigMap

(1) Create redis-sentinel-configmap.yaml. Note: in redis-sentinel.conf, change the IP in the sentinel monitor mymaster line from 192.168.212.176 to your own.
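The file's contents are likewise not reproduced. A minimal sketch, inferred from the three keys the StatefulSets below mount from a ConfigMap named redis-sentinel-config; the replicaof target, ports, and sentinel thresholds are assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-sentinel-config
data:
  redis-master.conf: |
    port 6379
    dir /data
    appendonly yes
  redis-slave.conf: |
    port 6379
    dir /data
    appendonly yes
    # assumption: replicate from the master's stable DNS name
    replicaof redis-sentinel-master-ss-0.redis-sentinel-master-ss.default.svc.cluster.local 6379
  redis-sentinel.conf: |
    port 26379
    dir /data
    # per the note above, replace 192.168.212.176 with your own IP
    sentinel monitor mymaster 192.168.212.176 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel parallel-syncs mymaster 1
    sentinel failover-timeout mymaster 60000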

(2) Apply redis-sentinel-configmap.yaml:

kubectl apply -f redis-sentinel-configmap.yaml

(3) Check that the ConfigMap was created successfully:

kubectl get configmap -n default

d. Create the Services

redis-sentinel-service-master.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    app: redis-sentinel-master-ss
  name: redis-sentinel-master-ss
spec:
  clusterIP: None
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  selector:
    app: redis-sentinel-master-ss

redis-sentinel-service-slave.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    app: redis-sentinel-slave-ss
  name: redis-sentinel-slave-ss
spec:
  clusterIP: None
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  selector:
    app: redis-sentinel-slave-ss

redis-sentinel-svc.yml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-sentinel-master-ss
  name: redis-sentinel-master-ss-svc
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: redis
    nodePort: 32379
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis-sentinel-master-ss
  sessionAffinity: None
  type: NodePort

sentinel-svc.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-sentinel-sentinel-ss
  name: redis-sentinel-sentinel-ss-svc
spec:
  ports:
  - name: redis
    port: 26379
    protocol: TCP
    targetPort: 26379
  selector:
    app: redis-sentinel-sentinel-ss
  sessionAffinity: None
  type: NodePort

Apply the four files above:

kubectl apply -f redis-sentinel-service-master.yaml -f redis-sentinel-service-slave.yaml -f redis-sentinel-svc.yml -f sentinel-svc.yaml
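Confirm that the four Services exist, two headless and two NodePort:

# redis-sentinel-master-ss and redis-sentinel-slave-ss should show CLUSTER-IP None;
# redis-sentinel-master-ss-svc and redis-sentinel-sentinel-ss-svc should be type NodePort
kubectl get svc | grep redis-sentinel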

e. Create the StatefulSets

redis-sentinel-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: redis-sentinel
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: redis-sentinel
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: redis-sentinel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: redis-sentinel
subjects:
- kind: ServiceAccount
  name: redis-sentinel
  namespace: default

redis-sentinel-ss-master.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  labels:
    app: redis-sentinel-master-ss
  name: redis-sentinel-master-ss
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-sentinel-master-ss
  serviceName: redis-sentinel-master-ss
  template:
    metadata:
      labels:
        app: redis-sentinel-master-ss
    spec:
      containers:
      - args:
        - -c
        - cp /mnt/redis-master.conf /data/ ; redis-server /data/redis-master.conf
        command:
        - sh
        image: redis:5.0.5-alpine
        imagePullPolicy: IfNotPresent
        name: redis-master
        ports:
        - containerPort: 6379
          name: masterport
          protocol: TCP
        volumeMounts:
        - mountPath: /mnt/
          name: config-volume
          readOnly: false
        - mountPath: /data/
          name: redis-sentinel-master-storage
          readOnly: false
      serviceAccountName: redis-sentinel
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          items:
          - key: redis-master.conf
            path: redis-master.conf
          name: redis-sentinel-config
        name: config-volume
  volumeClaimTemplates:
  - metadata:
      name: redis-sentinel-master-storage
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: "nfs-storage"
      resources:
        requests:
          storage: 2Gi

redis-sentinel-ss-slave.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  labels:
    app: redis-sentinel-slave-ss
  name: redis-sentinel-slave-ss
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis-sentinel-slave-ss
  serviceName: redis-sentinel-slave-ss
  template:
    metadata:
      labels:
        app: redis-sentinel-slave-ss
    spec:
      containers:
      - args:
        - -c
        - cp /mnt/redis-slave.conf /data/ ; redis-server /data/redis-slave.conf
        command:
        - sh
        image: redis:5.0.5-alpine
        imagePullPolicy: IfNotPresent
        name: redis-slave
        ports:
        - containerPort: 6379
          name: slaveport
          protocol: TCP
        volumeMounts:
        - mountPath: /mnt/
          name: config-volume
          readOnly: false
        - mountPath: /data/
          name: redis-sentinel-slave-storage
          readOnly: false
      serviceAccountName: redis-sentinel
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          items:
          - key: redis-slave.conf
            path: redis-slave.conf
          name: redis-sentinel-config
        name: config-volume
  volumeClaimTemplates:
  - metadata:
      name: redis-sentinel-slave-storage
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: "nfs-storage"
      resources:
        requests:
          storage: 2Gi

Apply the three files above:

kubectl apply -f redis-sentinel-rbac.yaml -f redis-sentinel-ss-master.yaml -f redis-sentinel-ss-slave.yaml

Check the pods in the default namespace; everything is healthy if every STATUS is Running:

kubectl get pods -n default

Exec into a container and check replication status (the -a password here is specific to the author's environment; omit it if requirepass is not set):

kubectl exec -ti redis-sentinel-slave-ss-1 -n default -- redis-cli -a 'Idc&57$S6z' -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.default.svc.cluster.local info replication

At this point, the one-master, two-slave setup is complete.

f. Create the sentinels

redis-sentinel-ss-sentinel.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  labels:
    app: redis-sentinel-sentinel-ss
  name: redis-sentinel-sentinel-ss
spec:
  replicas: 3
  selector:
    matchLabels:
      app: redis-sentinel-sentinel-ss
  serviceName: redis-sentinel-sentinel-ss
  template:
    metadata:
      labels:
        app: redis-sentinel-sentinel-ss
    spec:
      containers:
      - args:
        - -c
        - cp /mnt/redis-sentinel.conf /data/ ; redis-sentinel /data/redis-sentinel.conf
        command:
        - sh
        image: redis:5.0.5-alpine
        imagePullPolicy: IfNotPresent
        name: redis-sentinel
        ports:
        - containerPort: 26379
          name: sentinel-port
          protocol: TCP
        volumeMounts:
        - mountPath: /mnt/
          name: config-volume
          readOnly: false
      serviceAccountName: redis-sentinel
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          items:
          - key: redis-sentinel.conf
            path: redis-sentinel.conf
          name: redis-sentinel-config
        name: config-volume

redis-sentinel-service-sentinel.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    app: redis-sentinel-sentinel-ss
  name: redis-sentinel-sentinel-ss
spec:
  clusterIP: None
  ports:
  - name: redis
    port: 26379
    targetPort: 26379
  selector:
    app: redis-sentinel-sentinel-ss

Apply the two files above:

kubectl apply -f redis-sentinel-ss-sentinel.yaml -f redis-sentinel-service-sentinel.yaml

Check the Services in the namespace:

kubectl get service -n default

The sentinels are now deployed. We then verify the setup; a quick external check is shown below, followed by in-cluster tests.
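From outside the cluster, clients should locate the current master through the sentinels rather than through a fixed address. A quick check through the sentinel NodePort service (the placeholders are not from the original: <node-ip> is any node's address, <node-port> is the port kubectl get svc reports for redis-sentinel-sentinel-ss-svc):

redis-cli -h <node-ip> -p <node-port> sentinel get-master-addr-by-name mymaster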

g. Verification

Pod connectivity tests.

Master-to-slave connectivity:

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-master-ss-0 -n default -- redis-cli -h redis-sentinel-slave-ss-0.redis-sentinel-slave-ss.default.svc.cluster.local  ping
PONG
[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-master-ss-0 -n default -- redis-cli -h redis-sentinel-slave-ss-1.redis-sentinel-slave-ss.default.svc.cluster.local  ping 
PONG

Slave-to-master connectivity:

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-0 -n default -- redis-cli -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.default.svc.cluster.local  ping 
PONG 
[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n default -- redis-cli -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.default.svc.cluster.local  ping 
PONG

Check replication status:

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n default -- redis-cli -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.default.svc.cluster.local  info replication 

# Replication
role:master
connected_slaves:2
slave0:ip=172.168.5.94,port=6379,state=online,offset=80410,lag=1
slave1:ip=172.168.6.113,port=6379,state=online,offset=80410,lag=0
master_replid:ad4341815b25f12d4aeb390a19a8bd8452875879
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:80410
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:80410

Replication test:

Write data on the master:

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n default -- redis-cli -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.default.svc.cluster.local set test test_data
OK

Read it back on the master:

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n default -- redis-cli -h redis-sentinel-master-ss-0.redis-sentinel-master-ss.default.svc.cluster.local get test
"test_data"

Read it on a slave:

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n default -- redis-cli get test
"test_data"
Slaves refuse writes:

[root@k8s-master01 redis-sentinel]# kubectl exec -ti redis-sentinel-slave-ss-1 -n default -- redis-cli set k v
(error) READONLY You can't write against a read only replica.

Check the data files on the NFS server:

[root@nfs redis-sentinel]# tree .
.
├── 0
│   └── dump.rdb
├── 1
│   └── dump.rdb
└── 2
    └── dump.rdb

3 directories, 3 files

Failover test

Check the current data:

[root@k8s-master01 ~]# kubectl exec -ti redis-sentinel-master-ss-0 -n default  -- redis-cli -h 127.0.0.1 -p 6379 get test 
"test_data"

Take the master down, then check pod status (see the sketch below for one way to do it):
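The original does not show how the master was taken down. One way that matches the pod listing below (the master pod gone and not restarted) is scaling its StatefulSet to zero, which is an assumption:

# assumption: simulate master failure by scaling the master StatefulSet to 0
kubectl scale statefulset redis-sentinel-master-ss --replicas=0 -n default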
[root@k8s-master01 ~]# kubectl get pods -n default
NAME                           READY   STATUS    RESTARTS   AGE
redis-sentinel-sentinel-ss-0   1/1     Running   0          22m
redis-sentinel-sentinel-ss-1   1/1     Running   0          22m
redis-sentinel-sentinel-ss-2   1/1     Running   0          22m
redis-sentinel-slave-ss-0      1/1     Running   0          17h
redis-sentinel-slave-ss-1      1/1     Running   0          17h

Check sentinel status:

[root@k8s-master01 redis]# kubectl exec -ti redis-sentinel-sentinel-ss-2 -n default -- redis-cli -h 127.0.0.1 -p 26379 info sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=172.168.6.116:6379,slaves=2,sentinels=3
[root@k8s-master01 redis]# kubectl exec -ti redis-sentinel-slave-ss-0 -n default -- redis-cli -h 127.0.0.1 -p 6379 info replication
# Replication
role:slave
master_host:172.168.6.116
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:82961
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:4097ccd725a7ffc6f3767f7c726fc883baf3d7ef
master_replid2:603280e5266e0a6b0f299d2b33384c1fd8c3ee64
master_repl_offset:82961
second_repl_offset:68647
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:82961

The sentinels now report the master at 172.168.6.116, one of the former slaves, confirming that automatic failover took place.