Error
After restarting the local test k8s environment, the StatefulSet reported an error.
# Error message
MountVolume.NewMounter initialization failed for volume "pvc-61dedc85-ea5a-4ac7-aaf3-e072e2e46e18" : path "/var/openebs/local/pvc-61dedc85-ea5a-4ac7-aaf3-e072e2e46e18" does not exist
Cause
The observed symptom is that the directory does not exist on the host; in other words, the files written inside the Docker container were never persisted to the local filesystem.
Let's think through the possible causes. The first thing that comes to mind is volumeMounts, since mounting is involved:
volumeMounts:
- name: proxysql-data
  mountPath: /var/lib/proxysql
# Second attempt: add mountPropagation
volumeMounts:
- name: proxysql-data
  mountPath: /var/lib/proxysql
  mountPropagation: HostToContainer
# Still empty afterwards, so this is not the problem. The official docs explain mountPropagation:
# HostToContainer - this volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
ls /var/openebs/local
Now the second candidate: the RECLAIM POLICY that is set when the StorageClass is created.
-> % kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b 1Gi RWO Delete Bound proxysql/proxysql-data-proxysql4406-1 sc-file-hdd 6m58s
pvc-72e28c0a-7c65-4920-917d-a9a47841968c 1Gi RWO Delete Bound proxysql/proxysql-data-proxysql4406-0 sc-file-hdd 7m3s
-> % kubectl patch pv pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b -p "{
\"spec\":{
\"persistentVolumeReclaimPolicy\":\"Retain\"}}"
persistentvolume/pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b patched
-> % kubectl patch pv pvc-72e28c0a-7c65-4920-917d-a9a47841968c -p "{
\"spec\":{
\"persistentVolumeReclaimPolicy\":\"Retain\"}}"
persistentvolume/pvc-72e28c0a-7c65-4920-917d-a9a47841968c patched
-> % kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b 1Gi RWO Retain Bound proxysql/proxysql-data-proxysql4406-1 sc-file-hdd 8m6s
pvc-72e28c0a-7c65-4920-917d-a9a47841968c 1Gi RWO Retain Bound proxysql/proxysql-data-proxysql4406-0 sc-file-hdd 8m11s
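Patching PVs one at a time gets tedious when there are more than a couple. A minimal sketch of a helper that generates the same `kubectl patch` command for every PV name fed on stdin (`gen_retain_patches` is a hypothetical name, and the PV names in the usage line are illustrative):

```shell
# Sketch: emit one `kubectl patch` command per PV name read from stdin.
# Pipe the output to `sh` to actually apply the patches.
gen_retain_patches() {
  while read -r pv; do
    printf 'kubectl patch pv %s -p '\''{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'\''\n' "$pv"
  done
}

# Usage against a real cluster (skip the header row, take the NAME column):
# kubectl get pv | tail -n +2 | awk '{print $1}' | gen_retain_patches | sh
printf 'pvc-aaa\npvc-bbb\n' | gen_retain_patches
```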
Then restarted minikube; the same error appeared.
What RECLAIM POLICY means:
- The default reclaim policy is "Delete": when the user deletes the corresponding PersistentVolumeClaim, the dynamically provisioned volume is deleted automatically.
- With "Retain", the PersistentVolume is not deleted when the user deletes the PersistentVolumeClaim.
So this setting only takes effect on operations against the PVC; it does nothing when k8s itself restarts.
At this point it can only be related to the OpenEBS configuration. Why is the hostpath directory missing on this machine? Inspect the PV:
kubectl get pv pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b -o yaml
local:
  fsType: ""
  path: /var/openebs/local/pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b
-> % ls /var/openebs/local/pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b
ls: /var/openebs/local/pvc-5ea2d181-27b7-4d92-8488-c6f966fab97b: No such file or directory
This is exactly the directory the PV is bound to, yet it does not exist on the local filesystem.
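To gauge how widespread the problem is, one can check every PV's hostpath in one pass instead of running `ls` per volume. A sketch (`check_pv_paths` is a hypothetical helper; it reads one path per line, e.g. extracted from the PV specs with a jsonpath query, which assumes local PVs expose `.spec.local.path` as shown above):

```shell
# Sketch: read hostpaths (one per line) and report which are missing locally.
check_pv_paths() {
  while read -r p; do
    if [ -d "$p" ]; then
      echo "OK      $p"
    else
      echo "MISSING $p"
    fi
  done
}

# Usage against the cluster:
# kubectl get pv -o jsonpath='{range .items[*]}{.spec.local.path}{"\n"}{end}' | check_pv_paths
printf '/tmp\n/var/openebs/local/does-not-exist\n' | check_pv_paths
```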
The initial install was the minimal one, per the official installation instructions:
helm install openebs openebs/openebs -n openebs --create-namespace \
--set legacy.enabled=false \
--set ndm.enabled=false \
--set ndmOperator.enabled=false \
--set localprovisioner.enableDeviceClass=false \
--set localprovisioner.basePath=/var/openebs/test
# Redeploy OpenEBS as recommended by the official docs
If you would like to use only Local PV (hostpath and device), you can install a lite version of OpenEBS using the following command.
kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml
kubectl apply -f https://openebs.github.io/charts/openebs-lite-sc.yaml
# Conclusion: redeploying did not help
-> % ll -h /var/openebs/local/pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8
ls: /var/openebs/local/pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8: No such file or directory
# The next step is to go through the logs of every container involved
-> % kubectl get pods -n openebs | awk '{print $1}' | tail -n +2 | while read pod; do echo "--------------$pod--------------"; kubectl logs $pod --tail=20 -n openebs; echo "\n"; done
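With dozens of mostly-successful lines, it helps to filter out the noise. klog-style lines (as in the output below) start with a severity letter (I/W/E/F) followed by an MMDD timestamp, so a small filter (hypothetical name `klog_problems`) can be appended to the loop above to keep only warnings and errors:

```shell
# Sketch: keep only klog lines whose severity is W (warning), E (error) or F (fatal).
# `|| true` keeps the pipeline's exit status clean when nothing matches.
klog_problems() { grep -E '^[WEF][0-9]{4}' || true; }

# Usage: kubectl logs $pod --tail=200 -n openebs | klog_problems
printf 'I1030 03:30:40.038022 1 start.go:75] Starting Provisioner...\nE1030 03:31:00.000001 1 foo.go:10] something failed\n' | klog_problems
```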
## These are all success logs: "Successfully provisioned volume pvc-6361f69b-a77e-4a18-9851-be8a2b8183a8". With no errors at all, it's hard to know where to start digging.
--------------openebs-localpv-provisioner-6f686f7697-dvjvb-------------
I1030 03:30:40.038022 1 start.go:75] Starting Provisioner...
I1030 03:30:43.162527 1 start.go:139] Leader election enabled for localpv-provisioner via leaderElectionKey
I1030 03:30:43.358356 1 leaderelection.go:248] attempting to acquire leader lease openebs/openebs.io-local...
I1030