Overview
During testing we found that with plain local storage, when a node machine fails, the etcd data on that machine is lost with it. This post records a successful attempt at storing the data on NFS through Kubernetes PVs and PVCs, kept here for future reference.
For the local-storage variant, see the earlier post: 利用K8S Statefulset搭建Etcd集群 - 本地存储 (Building an Etcd Cluster with a K8S StatefulSet - Local Storage).
Test environment
minikube
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3
NFS setup
- Set up an NFS server following the referenced guide
- Create the export directories
mkdir -p /{etcd0,etcd1,etcd2}
chmod 777 /{etcd0,etcd1,etcd2} # optional; 777 rather than 666, since a directory needs the execute bit to be traversable
- Edit the NFS exports
vim /etc/exports
/etcd0 *(rw,no_root_squash,sync) # "*" allows any host to mount the export
/etcd1 *(rw,no_root_squash,sync)
/etcd2 *(rw,no_root_squash,sync)
- Reload the export configuration
exportfs -rv
YAML configuration
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd0-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /etcd0
    server: 192.168.52.128 # address of the machine hosting the NFS export
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd1-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /etcd1
    server: 192.168.52.128 # address of the machine hosting the NFS export
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd2-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /etcd2
    server: 192.168.52.128 # address of the machine hosting the NFS export
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: etcd0-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: etcd1-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: etcd2-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
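These claims bind to the PVs above statically: Kubernetes matches a claim to a volume with the same storageClassName, access modes that cover the request, and at least the requested capacity. A much-simplified illustration of those matching rules (not the actual controller logic, which also considers selectors, node affinity, and best-fit sizing):

```python
# Simplified sketch of PVC-to-PV matching (illustration only).

def parse_gi(size: str) -> int:
    """Convert a size like '1Gi' to bytes (only Gi handled here)."""
    assert size.endswith("Gi")
    return int(size[:-2]) * 1024 ** 3

def matches(pv: dict, pvc: dict) -> bool:
    return (
        pv["storageClassName"] == pvc["storageClassName"]
        and set(pvc["accessModes"]) <= set(pv["accessModes"])
        and parse_gi(pv["capacity"]) >= parse_gi(pvc["request"])
    )

pvs = [
    {"name": f"etcd{i}-pv", "storageClassName": "nfs",
     "accessModes": ["ReadWriteOnce"], "capacity": "1Gi"}
    for i in range(3)
]
pvc = {"name": "etcd0-pvc", "storageClassName": "nfs",
       "accessModes": ["ReadWriteOnce"], "request": "1Gi"}

candidates = [pv["name"] for pv in pvs if matches(pv, pvc)]
print(candidates)  # all three PVs qualify; the controller picks one of them
```

Because all three PVs here are interchangeable under these rules, which PVC lands on which PV is not guaranteed; that is harmless for this setup, since every pod gets some NFS-backed volume.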
cluster.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd0
spec:
  replicas: 1
  selector:
    matchLabels:
      name: etcd-operator
  serviceName: etcdsrv
  template:
    metadata:
      labels:
        name: etcd-operator
    spec:
      containers:
        - name: app
          image: quay.io/coreos/etcd:v3.5.9
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /data/etcd_data
              name: etcd-volume
          command:
            - /usr/local/bin/etcd
            - --data-dir
            - /data/etcd_data
            - --auto-compaction-retention
            - '1'
            - --quota-backend-bytes
            - '8589934592'
            - --listen-client-urls
            - http://0.0.0.0:2379
            - --advertise-client-urls
            - http://etcd0-0.etcdsrv:2379
            - --listen-peer-urls
            - http://0.0.0.0:2380
            - --initial-advertise-peer-urls
            - http://etcd0-0.etcdsrv:2380
            - --initial-cluster-token
            - etcd-cluster
            - --initial-cluster
            - etcd0=http://etcd0-0.etcdsrv:2380,etcd1=http://etcd1-0.etcdsrv:2380,etcd2=http://etcd2-0.etcdsrv:2380
            - --initial-cluster-state
            - new
            - --enable-pprof
            - --election-timeout
            - '5000'
            - --heartbeat-interval
            - '250'
            - --name
            - etcd0
            - --logger
            - zap
      # volumes:
      #   - name: etcd-volume
      #     hostPath:
      #       path: /var/tmp/etcd2
      #       type: Directory
      volumes:
        - name: etcd-volume
          persistentVolumeClaim:
            claimName: etcd0-pvc
...
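The manifest above relies on pod DNS names such as etcd0-0.etcdsrv, which only resolve if a headless Service named etcdsrv (the serviceName of the StatefulSets) exists in the same namespace. The original listing does not show it; a minimal sketch of what it presumably looks like, assuming the pods carry the name: etcd-operator label used above:

```yaml
# Hypothetical headless Service backing the StatefulSets; not shown in
# the original post, so the field values here are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: etcdsrv
spec:
  clusterIP: None          # headless: gives each pod its own DNS record
  selector:
    name: etcd-operator    # matches the pod template labels above
  ports:
    - name: client
      port: 2379
    - name: peer
      port: 2380
```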
Q&A
Q: The NFS exports were initially configured as follows:
/etcd0 192.168.52.128/24(rw,no_root_squash,sync)
/etcd1 192.168.52.128/24(rw,no_root_squash,sync)
/etcd2 192.168.52.128/24(rw,no_root_squash,sync)
When the PV resources were created, mounting failed with a "connection refused" error.
A:
The containers run by minikube's Docker driver are not on the same subnet as the host IP, but the exports above restrict access to that subnet, so the mounts were rejected.
Changing 192.168.52.128/24 in the exports to "*" fixes the problem.
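The mismatch is easy to verify with Python's ipaddress module. The minikube address below (192.168.49.2) is an assumption for illustration; a typical default for the Docker driver, though the actual address may differ:

```python
# Check whether a client IP is covered by the NFS export's host spec.
import ipaddress

# The export spec 192.168.52.128/24 denotes the network 192.168.52.0/24
# (strict=False lets us pass a host address with a prefix length).
export_net = ipaddress.ip_network("192.168.52.128/24", strict=False)

host_ip = ipaddress.ip_address("192.168.52.128")    # the NFS server itself
minikube_ip = ipaddress.ip_address("192.168.49.2")  # assumed minikube Docker address

print(host_ip in export_net)      # True  -> the host can mount
print(minikube_ip in export_net)  # False -> the mount is refused, hence "*"
```

With the export opened to "*", any client that can reach the server may mount it; in a production setup one would instead list the cluster's actual node and pod subnets.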