kubelet_volumes.go:128] Orphaned pod "86d60ee9-9fae-11e8-8cfc-525400290b20" found, but volume paths are still present on disk. : There were a total of 1 errors similar to this. Turn up verbosity to see them.
Using that ID, enter the matching directory under the kubelet data dir. It still holds the pod's container data, and the etc-hosts file still records the pod name.
# cd /var/lib/kubelet/pods/86d60ee9-9fae-11e8-8cfc-525400290b20
/var/lib/kubelet/pods/86d60ee9-9fae-11e8-8cfc-525400290b20# ls
containers etc-hosts plugins volumes
/var/lib/kubelet/pods/86d60ee9-9fae-11e8-8cfc-525400290b20# cat etc-hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
172.16.1.180 omc-test-2509590746-mw56s
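Rather than inspecting one UID at a time, the directories under /var/lib/kubelet/pods can be compared against the pod UIDs the API server still knows about. A minimal sketch, assuming `kubectl` access from the node; the `find_orphans` helper is hypothetical, not part of kubelet:

```shell
# find_orphans DIR "uid1 uid2 ..." -- print directory names under DIR
# whose basename is not in the space-separated list of known UIDs.
find_orphans() {
  dir=$1; known=$2
  for d in "$dir"/*; do
    [ -d "$d" ] || continue
    uid=$(basename "$d")
    case " $known " in
      *" $uid "*) ;;            # UID still known to the API server
      *) echo "$uid" ;;         # candidate orphan
    esac
  done
}

# Usage on the node (jsonpath pulls every pod UID in the cluster):
#   find_orphans /var/lib/kubelet/pods \
#     "$(kubectl get pods -A -o jsonpath='{.items[*].metadata.uid}')"
```

Any UID this prints is only a candidate: verify it against the log message and the etc-hosts content before acting on it.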
Fixing the problem
First, using the pod name from the etc-hosts file, we confirmed that no matching instance was still running. Then, following the hint in the issue, delete the pod's directory:
rm -rf 86d60ee9-9fae-11e8-8cfc-525400290b20
However, this method carries some risk: it is not yet confirmed whether it can cause data loss. Run it only after you have verified it is safe, or look in the issue for a better solution.
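One way to reduce the risk before running `rm -rf` is to check that nothing is still mounted under the pod's directory (a mounted volume there would suggest live data). A sketch; the `check_pod_mounts` helper is hypothetical:

```shell
# check_pod_mounts DIR -- succeed (exit 0) only if no filesystem is
# currently mounted at or below DIR.
check_pod_mounts() {
  if mount | grep -qF "$1"; then
    echo "still mounted under $1 -- do not delete" >&2
    return 1
  fi
  echo "no active mounts under $1"
}

# Usage on the node: only remove the directory if the check passes.
#   check_pod_mounts /var/lib/kubelet/pods/86d60ee9-9fae-11e8-8cfc-525400290b20 \
#     && rm -rf /var/lib/kubelet/pods/86d60ee9-9fae-11e8-8cfc-525400290b20
```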
Check the logs again, and you will find that syslog no longer spams messages like these.