Background
In Kubernetes, a Redis cluster (Cluster mode, which requires at least three masters and three replicas) is usually deployed as six replicas that all start out in master mode. This section automates the master/replica assignment.
The assignment is handled by a Job, which is cleaned up automatically after it finishes, so it occupies no resources afterwards.
1. Environment Preparation
- Redis version installed: 7.4.1
- Image handling
[root@master01 ~]# docker pull redis:7.4.1
[root@master01 ~]# docker tag redis:7.4.1 base.troila.com:9000/library/redis:7.4.1
[root@master01 ~]# docker push base.troila.com:9000/library/redis:7.4.1
- Redis password handling
- The install in this section does not create this secret itself, because in my overall project the users and passwords of all middleware are assigned up front and packaged into the base chart; decide for yourself whether you need a password
- The sections below install with the password enabled
[root@master01 ~]# kubectl -n troila-ticp-dev get secrets troila-ticp-dev-secret -o yaml
apiVersion: v1
data:
  mysql.username: cm9vdA==
  mysql.password: VHJvaWxhMTIz
  nacos.username: bmFjb3M=
  nacos.password: bmFjb3M=
  redis.password: MTIzNDU2
  seata.username: c2VhdGE=
  seata.password: c2VhdGE=
  sentinel.username: c2VudGluZWwK
  sentinel.password: c2VudGluZWwK
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: troila-ticp-base-dev
    meta.helm.sh/release-namespace: troila-ticp-dev
  creationTimestamp: "2024-11-03T09:12:15Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    troila.kubernetes.io/company: dev.troila.com
    troila.kubernetes.io/product: troila
    troila.kubernetes.io/type: component
    troila.kubernetes.io/version: 0.0.1
  name: troila-ticp-dev-secret
  namespace: troila-ticp-dev
  resourceVersion: "59956"
  uid: 140c3197-67fb-45b6-af5e-d8ab42a64e20
type: Opaque
[root@master01 ~]#
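The values under `data` are base64-encoded. Decoding `redis.password` confirms the password used throughout the rest of this article (the encoded value is copied from the dump above):

```shell
# Decode the base64-encoded redis password from the secret dump above
REDIS_PASSWORD=$(echo 'MTIzNDU2' | base64 -d)
echo "${REDIS_PASSWORD}"  # -> 123456
```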
2. Installation Setup
Create the helm chart
- Directory layout:
troila-ticp-redis-dev
├── Chart.yaml
├── templates
│   ├── _helpers.tpl
│   ├── database
│   │   ├── configmap.yaml
│   │   ├── service.yaml
│   │   └── statefulset.yaml
│   ├── job
│   │   └── init-cluster-job.yaml
│   └── NOTES.txt
└── values.yaml
- Chart.yaml
- appVersion: the version of the packaged application
- version: the version of the chart itself
- name: the chart name
apiVersion: v2
name: troila-ticp-redis-dev
description: "Install the Redis application with Helm"
type: application
# version of this chart
version: 0.0.1
# version of the application
appVersion: "7.4.1"
- values.yaml
- On top of the original kube manifests, node anti-affinity was added
- The basic configuration is extracted here: names, replica counts, resource quotas, image information, and so on
# Global shared parameters
global:
  warehouse: base.troila.com:9000
  libraryObject: library
  libraryApp: troila
  # storageClass.create: [false, true]
  #   false: the customer provides NFS and the storageClass, we do not create them;
  #   true: we create the storage ourselves
  storageClass:
    create: false
    name: troila-ticp-dev-nfs-storage
    provisionerName: dev.troila.com/nfs
    nfs:
      server: 172.27.109.6
      path: /home/nfs/develop
  # serviceAccount.create: [false, true]
  #   false: the customer provides the serviceAccount, we do not create it; put the provided name into serviceAccount.name;
  #   true: we create it ourselves
  # ------
  # serviceAccount.secret.create: [false, true]
  #   false: the customer provides the secret (the registry username/password), we do not create it;
  #   true: we create it ourselves
  serviceAccount:
    create: false
    name: troila-ticp-dev-rbac
    secret:
      create: false
      name: troila-ticp-dev-rbac-secret
      username: admin
      password: Troila@123
      email: jiangxincan@troila.com
nameOverride: "troila-ticp-redis"
fullnameOverride: "troila-ticp-redis"
## Cluster size: 3 master nodes, each with 1 replica
## total pods := master * node + master
## Note: node must stay 1, because this chart's Redis Cluster layout pairs exactly 1 replica with each master
master: 3
node: 1
terminationGracePeriodSeconds: 20
app:
  type: component
  active: "dev"
image:
  repository: redis
  pullPolicy: IfNotPresent
  tag: "7.4.1"
## Whether to initialize the cluster automatically after installation
## Defaults to true
## If both annotations (hooks) and ttlSecondsAfterFinished are set, the hooks take precedence
job:
  enabled: true
  annotations:
    helm.sh/hook: post-install,post-upgrade  # run after helm install/upgrade (helm-based install)
    helm.sh/hook-delete-policy: hook-succeeded,hook-failed  # delete the Job once it finishes (helm-based install)
    helm.sh/hook-weight: "5"  # make sure the Job runs after the StatefulSet
  ttlSecondsAfterFinished: 60  # auto-delete the Job 60 seconds (1 minute) after completion (plain-manifest install)
persistence:
  enabled: true
  size: 1Gi
# Note: with NodePort access, each pod needs a unique node port, derived from the base ports below plus the pod index
service:
  type: NodePort
  client:
    port: 6379
    nodePort: 32530
  server:
    port: 16379
    nodePort: 32540
resources:
  limits:
    cpu: 512m
    memory: 1Gi
  requests:
    cpu: 256m
    memory: 512Mi
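The pod-count formula from the comments above can be checked quickly; with master=3 and node=1 the chart renders six pods:

```shell
# total pods := master * node + master (the same formula the chart templates use)
MASTER=3
NODE=1
TOTAL=$((MASTER * NODE + MASTER))
echo "total pods: ${TOTAL}"  # -> total pods: 6
```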
- templates/_helpers.tpl
- Custom global helper functions (labels, image pulling, and so on)
{{/*
Expand the name of the chart.
*/}}
{{- define "redis.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "redis.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- printf "%s-%s" .Values.fullnameOverride .Values.app.active | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- printf "%s-%s" .Release.Name .Values.app.active | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s-%s" .Release.Name $name .Values.app.active | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "redis.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "redis.labels" -}}
{{ include "redis.selectorLabels" . }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "redis.selectorLabels" -}}
troila.kubernetes.io/managed-by: {{ .Release.Service }}
troila.kubernetes.io/company: test.troila.com
troila.kubernetes.io/product: {{ .Release.Namespace }}
troila.kubernetes.io/version: {{ .Chart.Version }}
troila.kubernetes.io/app: {{ include "redis.fullname" . }}
{{- if .Chart.AppVersion }}
troila.kubernetes.io/appVersion: {{ .Chart.AppVersion | quote }}
{{- end }}
troila.kubernetes.io/type: {{ .Values.app.type }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "redis.proxy.labels" -}}
{{ include "redis.proxy.selectorLabels" . }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "redis.proxy.selectorLabels" -}}
troila.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if eq .Values.app.active "" }}
troila.kubernetes.io/company: troila.com
{{- else }}
troila.kubernetes.io/company: {{ .Values.app.active }}.troila.com
{{- end }}
troila.kubernetes.io/product: {{ .Release.Namespace }}
troila.kubernetes.io/version: {{ .Chart.Version }}
troila.kubernetes.io/app: {{ include "redis.fullname" . }}-proxy
troila.kubernetes.io/appVersion: {{ .Values.proxy.image.tag | quote }}
troila.kubernetes.io/type: {{ .Values.app.type }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "redis.serviceAccountName" -}}
{{- if .Values.global.serviceAccount.create }}
{{- default (include "redis.fullname" .) .Values.global.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.global.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Create Secret, download image
*/}}
{{- define "redis.imagePullSecret" }}
{{- with .Values.global.serviceAccount.secret }}
{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" $.Values.global.warehouse .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
{{- end }}
{{- end }}
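The `redis.imagePullSecret` helper renders a standard `.dockerconfigjson` payload. The same structure can be reproduced in shell, using the sample credentials from values.yaml (a sketch for illustration only; the final template additionally base64-encodes the whole JSON):

```shell
# Reproduce what redis.imagePullSecret renders, before the final outer base64 step
WAREHOUSE='base.troila.com:9000'
USERNAME='admin'
PASSWORD='Troila@123'
EMAIL='jiangxincan@troila.com'
# "auth" is "username:password", base64-encoded
AUTH=$(printf '%s:%s' "${USERNAME}" "${PASSWORD}" | base64)
printf '{"auths":{"%s":{"username":"%s","password":"%s","email":"%s","auth":"%s"}}}\n' \
  "${WAREHOUSE}" "${USERNAME}" "${PASSWORD}" "${EMAIL}" "${AUTH}"
```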
- templates/database
- configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "redis.fullname" . }}-config
  labels:
    {{- include "redis.labels" . | nindent 4 }}
data:
  redis-config: |+
    bind 0.0.0.0
    appendonly yes
    appendfsync everysec
    protected-mode no
    dir /data
    port 6379
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    cluster-node-timeout 15000
    cluster-require-full-coverage no
    cluster-migration-barrier 1
    cluster-replica-no-failover no
- service.yaml
- Creates one headless service
- Then creates one Service per replica; (with NodePort) the node port for each pod is computed dynamically
apiVersion: v1
kind: Service
metadata:
  name: {{ include "redis.fullname" . }}-headless
  labels:
    {{- include "redis.labels" . | nindent 4 }}
spec:
  clusterIP: None
  ports:
    - name: client
      port: {{ .Values.service.client.port }}
      targetPort: {{ .Values.service.client.port }}
      protocol: TCP
    - name: server
      port: {{ .Values.service.server.port }}
      targetPort: {{ .Values.service.server.port }}
      protocol: TCP
  selector:
    {{- include "redis.selectorLabels" . | nindent 4 }}
---
{{- range $i := until ((add (mul .Values.master .Values.node) .Values.master) | int) }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "redis.fullname" $ }}-{{ $i }}-service
  labels:
    {{- include "redis.labels" $ | nindent 4 }}
spec:
  {{- if eq $.Values.service.type "NodePort" }}
  type: {{ $.Values.service.type }}
  {{- else }}
  type: ClusterIP
  {{- end }}
  ports:
    - name: client
      port: {{ $.Values.service.client.port }}
      targetPort: {{ $.Values.service.client.port }}
      protocol: TCP
      {{- if eq $.Values.service.type "NodePort" }}
      nodePort: {{ add $.Values.service.client.nodePort $i }}
      {{- end }}
    - name: server
      port: {{ $.Values.service.server.port }}
      targetPort: {{ $.Values.service.server.port }}
      protocol: TCP
      {{- if eq $.Values.service.type "NodePort" }}
      nodePort: {{ add $.Values.service.server.nodePort $i }}
      {{- end }}
  selector:
    {{- include "redis.selectorLabels" $ | nindent 4 }}
    statefulset.kubernetes.io/pod-name: {{ include "redis.fullname" $ }}-{{ $i }}
{{- end }}
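Each per-pod Service offsets the base nodePort by the pod ordinal, which is what keeps the node ports unique across the cluster. With the base ports from values.yaml, the mapping works out as:

```shell
# NodePort allocation: base port + pod ordinal, one Service per pod
CLIENT_BASE=32530
SERVER_BASE=32540
for i in 0 1 2 3 4 5; do
  echo "troila-ticp-redis-dev-${i}-service: client $((CLIENT_BASE + i)), server $((SERVER_BASE + i))"
done
```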
- statefulset.yaml
- Redis is password-protected here and the password is stored in the secret, so the startup command must append
  --requirepass ${REDIS_PASSWORD} --masterauth ${REDIS_PASSWORD} --cluster-announce-ip $(POD_IP)
{{- $globalConfig := printf "%s-%s" .Release.Namespace "config" }}
{{- $globalSecret := printf "%s-%s" .Release.Namespace "secret" }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "redis.fullname" . }}
  labels:
    {{- include "redis.labels" . | nindent 4 }}
spec:
  replicas: {{ add (mul .Values.master .Values.node) .Values.master }}
  serviceName: {{ include "redis.fullname" . }}-headless
  selector:
    matchLabels:
      {{- include "redis.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "redis.selectorLabels" . | nindent 8 }}
    spec:
      imagePullSecrets:
        - name: {{ .Values.global.serviceAccount.secret.name }}
      serviceAccountName: {{ include "redis.serviceAccountName" . }}
      terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: troila.kubernetes.io/app
                      operator: In
                      values:
                        - {{ include "redis.fullname" . }}
                topologyKey: kubernetes.io/hostname
      containers:
        - name: {{ include "redis.fullname" . }}
          image: "{{ .Values.global.warehouse }}/{{ .Values.global.libraryObject }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: client
              containerPort: {{ .Values.service.client.port }}
            - name: server
              containerPort: {{ .Values.service.server.port }}
          command:
            - "bash"
            - "-c"
            - |
              redis-server /etc/redis/redis.conf --requirepass ${REDIS_PASSWORD} --masterauth ${REDIS_PASSWORD} --cluster-announce-ip $(POD_IP)
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ $globalSecret }}
                  key: redis.password
          volumeMounts:
            - name: config
              mountPath: /etc/redis
            - name: data
              mountPath: /data
      volumes:
        - name: config
          configMap:
            name: {{ include "redis.fullname" . }}-config
            items:
              - key: redis-config
                path: redis.conf
        {{- if not .Values.persistence.enabled }}
        - name: data
          emptyDir: {}
        {{- end }}
  {{- if .Values.persistence.enabled }}
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: {{ .Values.global.storageClass.name }}
        resources:
          requests:
            storage: {{ .Values.persistence.size | quote }}
  {{- end }}
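The `image:` field above concatenates the global registry settings with the image values, which should reproduce exactly the reference that was pushed in the preparation section:

```shell
# Image reference assembled by the template: warehouse/libraryObject/repository:tag
WAREHOUSE='base.troila.com:9000'
LIBRARY='library'
REPO='redis'
TAG='7.4.1'
echo "${WAREHOUSE}/${LIBRARY}/${REPO}:${TAG}"  # -> base.troila.com:9000/library/redis:7.4.1
```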
- templates/job
- init-cluster-job.yaml
- Creates a Job that wires up the cluster once the redis replicas are running
{{- if .Values.job.enabled }}
{{- $globalConfig := printf "%s-%s" .Release.Namespace "config" }}
{{- $globalSecret := printf "%s-%s" .Release.Namespace "secret" }}
{{- $jobName := printf "%s-job" (include "redis.fullname" .) }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ $jobName }}
  labels:
    {{- include "redis.labels" . | nindent 4 }}
  ## If both the hooks and ttlSecondsAfterFinished are set, the hooks take precedence
  annotations:
    {{- toYaml .Values.job.annotations | nindent 4 }}
spec:
  ttlSecondsAfterFinished: {{ .Values.job.ttlSecondsAfterFinished }}  # auto-delete the Job this many seconds after it finishes
  template:
    metadata:
      labels:
        {{- include "redis.labels" . | nindent 8 }}
    spec:
      restartPolicy: OnFailure  # retry the pod in place if the init script fails
      imagePullSecrets:
        - name: {{ .Values.global.serviceAccount.secret.name }}
      serviceAccountName: {{ include "redis.serviceAccountName" . }}
      containers:
        - name: {{ $jobName }}
          image: "{{ .Values.global.warehouse }}/{{ .Values.global.libraryObject }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            - "/bin/bash"
            - "-c"
            - |
              NUM_PODS={{ add (mul .Values.master .Values.node) .Values.master }}  # total number of pods
              MAX_RETRIES=60   # maximum number of retries (guards against an endless loop)
              SLEEP_TIME=10    # seconds to wait between retries
              SUCCESS_COUNT=0  # reachable-pod counter
              SERVICE={{ include "redis.fullname" . }}-headless.{{ .Release.Namespace }}.svc.cluster.local  # headless service name
              while true; do
                # reset the counter at the start of every round
                SUCCESS_COUNT=0
                # probe every pod and count the reachable ones, until all of them answer
                for i in $(seq 0 $((NUM_PODS - 1))); do
                  if redis-cli -h {{ include "redis.fullname" . }}-${i}.${SERVICE} -a "${REDIS_PASSWORD}" ping &>/dev/null; then
                    echo "{{ include "redis.fullname" . }}-${i} is reachable"
                    SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
                  else
                    echo "{{ include "redis.fullname" . }}-${i} is not reachable yet"
                  fi
                done
                # check whether every pod answered
                if [ ${SUCCESS_COUNT} -eq ${NUM_PODS} ]; then
                  echo "All Redis Pods are reachable. Proceeding with cluster initialization..."
                  break  # leave the outer while loop on success
                else
                  echo "Only ${SUCCESS_COUNT}/${NUM_PODS} Pods are reachable. Retrying in ${SLEEP_TIME} seconds..."
                  MAX_RETRIES=$((MAX_RETRIES - 1))
                  if [ ${MAX_RETRIES} -eq 0 ]; then
                    echo "Max retries reached. Some Pods are still not reachable."
                    exit 1
                  fi
                  sleep ${SLEEP_TIME}  # wait before retrying
                fi
              done
              # initialize the Redis cluster
              echo "Initializing Redis cluster with replicas: {{ .Values.node }}"
              REDIS_NODES=$(echo "${REDIS_NODES}" | tr '\n' ' ')
              redis-cli -h {{ include "redis.fullname" . }}-0.${SERVICE} -a "${REDIS_PASSWORD}" --cluster create ${REDIS_NODES} --cluster-replicas {{ .Values.node }} --cluster-yes
              # print the resulting cluster topology
              redis-cli -h {{ include "redis.fullname" . }}-0.${SERVICE} -a "${REDIS_PASSWORD}" CLUSTER NODES
              if [ $? -eq 0 ]; then
                echo "Redis cluster initialized successfully."
                exit 0  # success
              else
                echo "Failed to initialize Redis cluster."
                exit 1  # failure
              fi
          resources:
            limits:
              cpu: 256m
              memory: 512Mi
            requests:
              cpu: 128m
              memory: 256Mi
          env:
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ $globalSecret }}
                  key: redis.password
            - name: REDIS_NODES
              value: |
                {{- range $i := until ((add (mul .Values.master .Values.node) .Values.master) | int) }}
                {{ include "redis.fullname" $ }}-{{ $i }}.{{ include "redis.fullname" $ }}-headless.{{ $.Release.Namespace }}.svc.cluster.local:{{ $.Values.service.client.port }}
                {{- end }}
{{- end }}
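The wait loop in the job can be exercised locally by stubbing out the redis-cli probe. A minimal sketch of the same retry logic, where the hypothetical `probe_pod` function stands in for the real `redis-cli ... ping` call:

```shell
# Retry-loop sketch: count reachable pods, break when all answer, fail after MAX_RETRIES rounds
NUM_PODS=6
MAX_RETRIES=60
probe_pod() { return 0; }  # stub: pretend every pod answers PING immediately

while true; do
  SUCCESS_COUNT=0
  for i in $(seq 0 $((NUM_PODS - 1))); do
    probe_pod "${i}" && SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
  done
  if [ "${SUCCESS_COUNT}" -eq "${NUM_PODS}" ]; then
    echo "all ${NUM_PODS} pods reachable"  # -> all 6 pods reachable
    break
  fi
  MAX_RETRIES=$((MAX_RETRIES - 1))
  if [ "${MAX_RETRIES}" -eq 0 ]; then
    echo "max retries reached" >&2
    exit 1
  fi
done
```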
With that, the Helm part is complete.
3. Installation and Deployment
- First create the namespace
[root@master01 ~]# kubectl create ns troila-ticp-dev
- One-shot install with helm (the last argument is the chart directory)
[root@master01 ~]# helm -n troila-ticp-dev install troila-ticp-redis-dev ./troila-ticp-redis-dev
NAME: troila-ticp-redis-dev
LAST DEPLOYED: Tue May 20 15:07:50 2025
NAMESPACE: troila-ticp-dev
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
View node roles:
for x in $(seq 0 5 ); do echo "troila-ticp-redis-dev-$x"; kubectl -n troila-ticp-dev exec -it troila-ticp-redis-dev-$x -- redis-cli -c -h troila-ticp-redis-dev-$x -a 123456 role 2>/dev/null | tail -n 5; echo; done
Inspect the cluster:
kubectl -n troila-ticp-dev exec -it troila-ticp-redis-dev-0 -- redis-cli -a 123456 cluster nodes
kubectl -n troila-ticp-dev exec -it troila-ticp-redis-dev-0 -- redis-cli -a 123456 cluster info
Uninstall:
1. Uninstall the chart
helm -n troila-ticp-dev uninstall troila-ticp-redis-dev
2. Delete the PVCs
kubectl -n troila-ticp-dev delete pvc $(kubectl -n troila-ticp-dev get pvc | grep troila-ticp-redis-dev | awk '{print $1}')
- The install result is shown below
- I use NodePort here so the cluster can be reached from outside for testing
[root@master01 ~]# helm -n troila-ticp-dev ls | grep redis
troila-ticp-redis-dev troila-ticp-dev 1 2025-05-26 10:44:33.643357 +0800 CST deployed troila-ticp-redis-dev-0.0.1 v7.4.1
[root@master01 ~]#
[root@master01 ~]# kubectl -n troila-ticp-dev get svc | grep redis
troila-ticp-redis-dev-0-service NodePort 10.96.3.178 <none> 6379:32530/TCP,16379:32540/TCP 23h
troila-ticp-redis-dev-1-service NodePort 10.96.3.96 <none> 6379:32531/TCP,16379:32541/TCP 23h
troila-ticp-redis-dev-2-service NodePort 10.96.1.101 <none> 6379:32532/TCP,16379:32542/TCP 23h
troila-ticp-redis-dev-3-service NodePort 10.96.2.26 <none> 6379:32533/TCP,16379:32543/TCP 23h
troila-ticp-redis-dev-4-service NodePort 10.96.2.247 <none> 6379:32534/TCP,16379:32544/TCP 23h
troila-ticp-redis-dev-5-service NodePort 10.96.2.122 <none> 6379:32535/TCP,16379:32545/TCP 23h
troila-ticp-redis-dev-headless ClusterIP None <none> 6379/TCP,16379/TCP 23h
[root@master01 ~]# kubectl -n troila-ticp-dev get pod -o wide | grep redis
troila-ticp-redis-dev-0 1/1 Running 0 24h 100.78.211.196 slave03.troila.com <none> <none>
troila-ticp-redis-dev-1 1/1 Running 0 24h 100.106.213.136 slave05.troila.com <none> <none>
troila-ticp-redis-dev-2 1/1 Running 0 24h 100.115.50.105 slave14.troila.com <none> <none>
troila-ticp-redis-dev-3 1/1 Running 0 24h 100.123.19.208 slave02.troila.com <none> <none>
troila-ticp-redis-dev-4 1/1 Running 0 24h 100.123.104.188 slave04.troila.com <none> <none>
troila-ticp-redis-dev-5 1/1 Running 0 24h 100.97.128.121 slave15.troila.com <none> <none>
[root@master01 ~]#
## Node 0: connectivity, master/replica assignment, and slot allocation are all correct, slots 0-5460
[root@master01 ~]# kubectl -n troila-ticp-dev exec -it troila-ticp-redis-dev-0 -- redis-cli -a 123456 cluster nodes
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
942c5c56c6667fd9093058c9df1303d3a57310aa 100.106.213.136:6379@16379 master - 0 1748314705942 2 connected 5461-10922
98c2a174ee48264d771ba51a13b684baccecf502 100.78.211.196:6379@16379 myself,master - 0 0 1 connected 0-5460
89fd5e96c5cf7ec962c259497b2bd92f3af26669 100.123.104.188:6379@16379 slave 98c2a174ee48264d771ba51a13b684baccecf502 0 1748314707958 1 connected
9c8c725b4b3c15f240cf7ad054e5d0becce512da 100.115.50.105:6379@16379 master - 0 1748314705000 3 connected 10923-16383
d631aad2283cb78e1d31bb6b1696b2c518243417 100.123.19.208:6379@16379 slave 9c8c725b4b3c15f240cf7ad054e5d0becce512da 0 1748314706000 3 connected
611074531aff1d2fa4e7d5ffa676977456ba7fdc 100.97.128.121:6379@16379 slave 942c5c56c6667fd9093058c9df1303d3a57310aa 0 1748314706949 2 connected
## Node 1: connectivity, master/replica assignment, and slot allocation are all correct, slots 5461-10922
[root@master01 ~]# kubectl -n troila-ticp-dev exec -it troila-ticp-redis-dev-1 -- redis-cli -a 123456 cluster nodes
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
9c8c725b4b3c15f240cf7ad054e5d0becce512da 100.115.50.105:6379@16379 master - 0 1748314776945 3 connected 10923-16383
611074531aff1d2fa4e7d5ffa676977456ba7fdc 100.97.128.121:6379@16379 slave 942c5c56c6667fd9093058c9df1303d3a57310aa 0 1748314776000 2 connected
98c2a174ee48264d771ba51a13b684baccecf502 100.78.211.196:6379@16379 master - 0 1748314777000 1 connected 0-5460
d631aad2283cb78e1d31bb6b1696b2c518243417 100.123.19.208:6379@16379 slave 9c8c725b4b3c15f240cf7ad054e5d0becce512da 0 1748314775000 3 connected
942c5c56c6667fd9093058c9df1303d3a57310aa 100.106.213.136:6379@16379 myself,master - 0 0 2 connected 5461-10922
89fd5e96c5cf7ec962c259497b2bd92f3af26669 100.123.104.188:6379@16379 slave 98c2a174ee48264d771ba51a13b684baccecf502 0 1748314777945 1 connected
## Node 2: connectivity, master/replica assignment, and slot allocation are all correct, slots 10923-16383
[root@master01 ~]# kubectl -n troila-ticp-dev exec -it troila-ticp-redis-dev-2 -- redis-cli -a 123456 cluster nodes
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
9c8c725b4b3c15f240cf7ad054e5d0becce512da 100.115.50.105:6379@16379 myself,master - 0 0 3 connected 10923-16383
98c2a174ee48264d771ba51a13b684baccecf502 100.78.211.196:6379@16379 master - 0 1748314782000 1 connected 0-5460
611074531aff1d2fa4e7d5ffa676977456ba7fdc 100.97.128.121:6379@16379 slave 942c5c56c6667fd9093058c9df1303d3a57310aa 0 1748314783315 2 connected
942c5c56c6667fd9093058c9df1303d3a57310aa 100.106.213.136:6379@16379 master - 0 1748314782000 2 connected 5461-10922
d631aad2283cb78e1d31bb6b1696b2c518243417 100.123.19.208:6379@16379 slave 9c8c725b4b3c15f240cf7ad054e5d0becce512da 0 1748314782308 3 connected
89fd5e96c5cf7ec962c259497b2bd92f3af26669 100.123.104.188:6379@16379 slave 98c2a174ee48264d771ba51a13b684baccecf502 0 1748314784323 1 connected
[root@master01 ~]#
## Check the role of every node
[root@master01 ~]# for x in $(seq 0 5 ); do echo "troila-ticp-redis-dev-$x"; kubectl -n troila-ticp-dev exec -it troila-ticp-redis-dev-$x -- redis-cli -c -h troila-ticp-redis-dev-$x -a 123456 role 2>/dev/null | tail -n 5; echo; done
troila-ticp-redis-dev-0
1) "master"
2) (integer) 937946
3) 1) 1) "100.123.104.188"
2) "6379"
3) "937946"
troila-ticp-redis-dev-1
1) "master"
2) (integer) 9559314
3) 1) 1) "100.97.128.121"
2) "6379"
3) "9559314"
troila-ticp-redis-dev-2
1) "master"
2) (integer) 205532
3) 1) 1) "100.123.19.208"
2) "6379"
3) "205532"
troila-ticp-redis-dev-3
1) "slave"
2) "100.115.50.105"
3) (integer) 6379
4) "connected"
5) (integer) 205532
troila-ticp-redis-dev-4
1) "slave"
2) "100.78.211.196"
3) (integer) 6379
4) "connected"
5) (integer) 937946
troila-ticp-redis-dev-5
1) "slave"
2) "100.106.213.136"
3) (integer) 6379
4) "connected"
5) (integer) 9559314
[root@master01 ~]#
OK, everything is complete. If a pod dies later, the cluster will automatically backfill it and re-elect masters as needed.