I. Stateful Workloads
1. The basic approach
Create mysql-statefulset.yaml.
Headless Service definition
A headless Service is used mainly for internal communication within a stateful workload. It is not assigned a virtual IP (ClusterIP); instead it exposes the addresses of the backing Pods directly.
When clusterIP is set to None:
- DNS queries for the Service return the individual Pod IPs to the client
- each Pod can be reached directly via DNS, in the form pod-name.service-name.namespace.svc.cluster.local
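A quick way to verify this once the Service below exists (a sketch, assuming the default namespace; busybox is only used as a throwaway DNS client):
# A headless Service resolves to the individual Pod IPs instead of a single virtual IP
kubectl run dns-test --rm -it --restart=Never --image=busybox:latest -- \
  nslookup mysql-headless.default.svc.cluster.local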
# Headless Service definition
apiVersion: v1
kind: Service
metadata:                 # resource name and labels
  name: mysql-headless
  labels:
    app: mysql
spec:
  clusterIP: None         # None makes this a headless Service
  selector:
    app: mysql            # select target Pods by matching this label
  ports:
    - port: 3306          # Service port that clients connect to
      targetPort: 3306    # traffic hitting the Service port is forwarded to this container port
Define the corresponding StatefulSet
Here MySQL is used to create the stateful workload.
# StatefulSet definition
apiVersion: apps/v1               # API version for StatefulSet
kind: StatefulSet                 # declare a stateful workload
metadata:                         # resource name and labels
  name: mysql                     # resource name
  labels:
    app: mysql                    # resource label
spec:
  selector:
    matchLabels:
      app: mysql                  # must match the Service selector
  serviceName: mysql-headless     # bind to the headless Service
  replicas: 3                     # number of replicas
  template:                       # Pod template; every replica is generated from it
    metadata:
      labels:
        app: mysql
    spec:
      containers:                 # container settings
        - name: mysql             # container name
          image: 192.168.2.21/data/repository/mysql:8.4.3   # container image
          ports:                  # exposed port
            - containerPort: 3306
          env:                    # runtime environment variables
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"     # root password
          volumeMounts:           # mount the data volume
            - name: mysql-data    # refers to the claim template named mysql-data
              mountPath: /var/lib/mysql   # mount path inside the container
  volumeClaimTemplates:           # per-replica persistent storage claims
    - metadata:
        name: mysql-data          # base name of the generated PVCs
      spec:
        accessModes: ["ReadWriteOnce"]    # each volume is mounted read-write by a single node
        resources:
          requests:
            storage: 10Gi         # storage request per replica
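A StatefulSet gives every replica a stable ordinal name and, through the headless Service, a stable DNS record. A minimal check, assuming the default namespace:
# Pods are named <statefulset-name>-<ordinal>: mysql-0, mysql-1, mysql-2
kubectl get pods -l app=mysql
# From inside the cluster each replica is individually reachable, e.g.
#   mysql-0.mysql-headless.default.svc.cluster.local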
Combine everything into a single YAML file for creation, separating the resources with ---.
mysql-statefulset.yaml
# Headless Service definition
apiVersion: v1
kind: Service
metadata:                 # resource name and labels
  name: mysql-headless
  labels:
    app: mysql
spec:
  clusterIP: None         # None makes this a headless Service
  selector:
    app: mysql            # select target Pods by matching this label
  ports:
    - port: 3306          # Service port that clients connect to
      targetPort: 3306    # traffic hitting the Service port is forwarded to this container port
---
# PersistentVolume definition
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-local-pv    # PV name
spec:                     # resource description
  capacity:
    storage: 35Gi         # storage capacity
  volumeMode: Filesystem  # volume mode (Filesystem or Block)
  accessModes:
    - ReadWriteOnce       # access mode
  persistentVolumeReclaimPolicy: Retain   # reclaim policy
  storageClassName: default-storage       # associated StorageClass
  local:                  # local storage
    path: /mnt/data       # path on the node's disk
  nodeAffinity:           # node affinity
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:     # nodes this volume may bind on
                - k8snode1
                - k8snode2
---
# StatefulSet definition
apiVersion: apps/v1               # API version for StatefulSet
kind: StatefulSet                 # declare a stateful workload
metadata:                         # resource name and labels
  name: mysql                     # resource name
  labels:
    app: mysql                    # resource label
spec:
  selector:
    matchLabels:
      app: mysql                  # must match the Service selector
  serviceName: mysql-headless     # bind to the headless Service
  replicas: 3                     # number of replicas
  template:                       # Pod template; every replica is generated from it
    metadata:
      labels:
        app: mysql
    spec:
      containers:                 # container settings
        - name: mysql             # container name
          image: 192.168.2.21/data/repository/mysql:8.4.3   # container image
          ports:                  # exposed port
            - containerPort: 3306
          env:                    # runtime environment variables
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"     # root password
          volumeMounts:           # mount the data volume
            - name: mysql-local-pvc   # refers to the claim template named mysql-local-pvc
              mountPath: /var/lib/mysql   # mount path inside the container
  volumeClaimTemplates:           # per-replica persistent storage claims
    - metadata:
        name: mysql-local-pvc     # base name of the generated PVCs
      spec:
        accessModes: ["ReadWriteOnce"]    # each volume is mounted read-write by a single node
        resources:
          requests:
            storage: 10Gi         # storage request per replica
        storageClassName: default-storage   # StorageClass to use
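Create the resources and watch the replicas come up in order (a sketch, assuming the file name above):
kubectl apply -f mysql-statefulset.yaml
# Pods are created sequentially: mysql-0, then mysql-1, then mysql-2
kubectl get pods -l app=mysql -w
# One PVC per replica is generated from the claim template, named
# <template-name>-<pod-name>, e.g. mysql-local-pvc-mysql-0
kubectl get pvc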
Commands
# Delete the StatefulSet named mysql without deleting its Pods and PVCs;
# --cascade=orphan leaves the dependent objects in place
kubectl delete statefulset mysql --cascade=orphan
# Delete the generated Pods, e.g. mysql-0, mysql-1, mysql-2
kubectl delete pod -l app=mysql
# Delete the PVCs
kubectl delete pvc -l app=mysql
# Delete the Service
kubectl delete service mysql-headless
# Confirm that everything has been removed
kubectl get statefulset,pod,pvc,service
# Inspect a Pod's details
kubectl describe pod <pod-name>
Problems with this approach:
With local storage, if several replicas point at the same PV, the first replica's claim binds the entire PV, leaving nothing for the second replica to bind; the remaining replicas therefore fail to be created.
Workaround: allocate manually. Before creating the replicas, pre-create one PVC per replica and bind each PVC to its own PV, as sketched below.
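A minimal sketch of the manual pre-binding (one prepared PV per replica; volumeName here reuses the PV defined above). A pre-created PVC is picked up by the StatefulSet as long as its name follows the pattern <claim-template-name>-<statefulset-name>-<ordinal>:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-local-pvc-mysql-0   # the claim the StatefulSet expects for mysql-0
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: default-storage
  volumeName: mysql-local-pv      # bind explicitly to a specific prepared PV
  resources:
    requests:
      storage: 10Gi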
2. Automatic storage provisioning
Use the Local Persistent Volume plugin (local-path-provisioner, dynamic local volume management).
Plugin installation steps:
(1) Pull the Local Persistent Volume images from Docker Hub and push them to the local image registry.
local-path-provisioner image: https://hub.docker.com/r/rancher/local-path-provisioner/tags
busybox image: https://hub.docker.com/_/busybox/tags
# Pull from the public Docker registry
docker pull rancher/local-path-provisioner:v0.0.30
docker pull busybox:latest
# Tag for the local registry
docker tag rancher/local-path-provisioner:v0.0.30 192.168.31.243/plug/repository/rancher/local-path-provisioner:v0.0.30
docker tag busybox:latest 192.168.31.243/plug/repository/busybox:latest
# Push to the local registry for the cluster to use; log in to the registry with docker login first
docker push 192.168.31.243/plug/repository/rancher/local-path-provisioner:v0.0.30
docker push 192.168.31.243/plug/repository/busybox:latest
# Optionally pull on each node afterwards to verify that the registry is reachable
(2) Prepare the YAML configuration file
GitHub repository: https://github.com/rancher/local-path-provisioner
Manifest: https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: local-path-provisioner-role
  namespace: local-path-storage
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps", "pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: local-path-provisioner-bind
  namespace: local-path-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: 192.168.31.243/plug/repository/rancher/local-path-provisioner:v0.0.30   # changed to the image pushed to the local registry
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CONFIG_MOUNT_PATH
              value: /etc/config/
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/app/pod/mysql/data"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      priorityClassName: system-node-critical
      tolerations:
        - key: node.kubernetes.io/disk-pressure
          operator: Exists
          effect: NoSchedule
      containers:
        - name: helper-pod
          image: 192.168.31.243/plug/repository/busybox:latest   # the plugin needs this helper image; changed to the local registry
          imagePullPolicy: IfNotPresent
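Deploy the plugin (assuming the manifest above is saved as local-path-storage.yaml):
kubectl apply -f local-path-storage.yaml
Note that nodePathMap also accepts per-node entries alongside the DEFAULT_PATH_FOR_NON_LISTED_NODES fallback, so individual nodes can use different data directories, e.g. {"node":"k8snode1","paths":["/data1/local-path"]} (node name illustrative).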
Commands:
# Check the running state
kubectl get deployment -n local-path-storage
kubectl get pods -n local-path-storage
# View the logs
kubectl logs -n local-path-storage -l app=local-path-provisioner
# Verify the StorageClass
kubectl describe storageclass local-path
# Check Pod status and placement
kubectl get pods -n local-path-storage -o wide
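Optionally run a quick smoke test before deploying MySQL (a sketch; the PVC name is illustrative):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 128Mi
EOF
# With volumeBindingMode: WaitForFirstConsumer the PVC stays Pending
# until a Pod that uses it is scheduled; that is expected
kubectl get pvc local-path-test
kubectl delete pvc local-path-test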
Uninstall
# Delete the config
kubectl delete configmap local-path-config -n local-path-storage
# Delete the Deployment
kubectl delete deployment local-path-provisioner -n local-path-storage
# Delete any remaining Pods
kubectl delete pod -l app=local-path-provisioner -n local-path-storage
# Delete the namespace
kubectl delete namespace local-path-storage
# Delete the StorageClass
kubectl delete storageclass local-path
# Clean up provisioned PVs and PVCs (adjust the namespace and selectors to match your resources)
kubectl delete pvc -n <namespace> -l app=local-path-provisioner
kubectl delete pv -l app=local-path-provisioner
After the plugin is installed, redeploy the MySQL workload.
mysql-statefulset.yaml
# Headless Service definition
apiVersion: v1
kind: Service
metadata:                 # resource name and labels
  name: mysql-headless
  labels:
    app: mysql
spec:
  clusterIP: None         # None makes this a headless Service
  selector:
    app: mysql            # select target Pods by matching this label
  ports:
    - port: 3306          # Service port that clients connect to
      targetPort: 3306    # traffic hitting the Service port is forwarded to this container port
---
# StatefulSet definition
apiVersion: apps/v1               # API version for StatefulSet
kind: StatefulSet                 # declare a stateful workload
metadata:                         # resource name and labels
  name: mysql                     # resource name
  labels:
    app: mysql                    # resource label
spec:
  selector:
    matchLabels:
      app: mysql                  # must match the Service selector
  serviceName: mysql-headless     # bind to the headless Service
  replicas: 3                     # number of replicas
  template:                       # Pod template; every replica is generated from it
    metadata:
      labels:
        app: mysql
    spec:
      containers:                 # container settings
        - name: mysql             # container name
          image: 192.168.31.243/data/repository/mysql:8.4.3   # container image
          ports:                  # exposed port
            - containerPort: 3306
          env:                    # runtime environment variables
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"     # root password
          volumeMounts:           # mount the data volume
            - name: mysql-local-pvc   # refers to the claim template named mysql-local-pvc
              mountPath: /var/lib/mysql   # mount path inside the container
  volumeClaimTemplates:           # dynamic per-replica persistent storage
    - metadata:
        name: mysql-local-pvc     # base name of the generated PVCs
      spec:
        accessModes: ["ReadWriteOnce"]    # each volume is mounted read-write by a single node
        resources:
          requests:
            storage: 10Gi         # storage request per replica
        storageClassName: local-path      # StorageClass provided by the local-path-provisioner plugin
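Apply the manifest and verify that each replica now receives a dynamically provisioned volume (stored under /app/pod/mysql/data on the node):
kubectl apply -f mysql-statefulset.yaml
# Every PVC should be Bound to an automatically created PV
kubectl get pvc,pv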
Configure external access
Method 1: Use a NodePort Service to expose a port for access from outside the cluster.
mysql-out-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-external
  namespace: default
spec:
  type: NodePort
  selector:
    app: mysql            # match the MySQL Pods' label
  ports:
    - port: 3306          # internal Service port for MySQL
      targetPort: 3306    # MySQL container port in the Pod
      nodePort: 30000     # port exposed on every node
Commands:
# Create the Service
kubectl apply -f mysql-out-service.yaml
# Check the exposed port
kubectl get svc
Connect with Navicat using any node's IP plus the NodePort; any node in the cluster will do:
<node-ip>:port
username + password
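The same endpoint also works from the MySQL command-line client (assuming it is installed on the client machine):
mysql -h <node-ip> -P 30000 -u root -p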
Method 2: Use a LoadBalancer Service for load balancing; this requires a load balancer provided by a cloud provider.
mysql-external-service-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-external-lb
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: mysql            # match the MySQL Pods' label
  ports:
    - port: 3306          # MySQL service port
      targetPort: 3306    # MySQL container port in the Pod
Commands:
# Create the Service
kubectl apply -f mysql-external-service-lb.yaml
# Check the Service
kubectl get svc
# Inspect the details
kubectl describe svc mysql-external-lb
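Once the provider assigns an address, connect through the EXTERNAL-IP column shown by kubectl get svc; the address can also be read directly:
kubectl get svc mysql-external-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'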
Method 3: Use an Ingress, as sketched below.
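Ingress resources route HTTP/HTTPS only, so exposing a raw TCP service such as MySQL depends on the ingress controller. A sketch assuming the NGINX Ingress Controller, which maps TCP ports through its tcp-services ConfigMap (the ConfigMap name and namespace depend on how the controller was installed, and the controller's own Service must additionally expose port 3306):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "3306": "default/mysql-headless:3306"   # external port -> namespace/service:port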