0-glusterfs: failed to set volfile server: File exists

Scenario: Kubernetes + GlusterFS | GlusterFS as the storage backend | dynamic provisioning via StorageClass
The ZooKeeper pod behind the headless service, kafka-zookeeper-0, was stuck in ContainerCreating. kubectl describe pod kafka-zookeeper-0
kept printing the following error in a loop…

Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/625c27d3-1951-428c-b4b3-dd5c341a5b73/volumes/kubernetes.io~glusterfs/pvc-2715b689-9d36-46ee-a429-39a350af0dfe --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.2.34:10.0.2.35:10.0.2.33,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-2715b689-9d36-46ee-a429-39a350af0dfe/kafka-zookeeper-0-glusterfs.log,log-level=ERROR 10.0.2.34:vol_2ca30e3ac75b9ae09b0cb1a0add66dbe /var/lib/kubelet/pods/625c27d3-1951-428c-b4b3-dd5c341a5b73/volumes/kubernetes.io~glusterfs/pvc-2715b689-9d36-46ee-a429-39a350af0dfe
Output: Running scope as unit run-8680.scope.
[2022-01-24 09:30:40.006404] E [glusterfsd.c:828:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Check the log file /var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-2715b689-9d36-46ee-a429-39a350af0dfe/kafka-zookeeper-0-glusterfs.log for more details.
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2022-01-24 09:30:40.019085] E [MSGID: 108006] [afr-common.c:5313:__afr_handle_child_down_event] 0-vol_2ca30e3ac75b9ae09b0cb1a0add66dbe-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2022-01-24 09:30:42.430281] E [fuse-bridge.c:5298:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)
  Warning  FailedMount  119s  kubelet  MountVolume.SetUp failed for volume "pvc-2715b689-9d36-46ee-a429-39a350af0dfe" : mount failed: mount failed: exit status 1
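
Before digging further into Kubernetes, it can help to take kubelet out of the picture and try the same mount by hand from the worker node. A minimal sketch, assuming the glusterfs-fuse client is installed on the node and /mnt/glustertest is just a hypothetical scratch directory:

# Manual mount test, using the server and volume name from the log above
mkdir -p /mnt/glustertest
mount -t glusterfs 10.0.2.34:vol_2ca30e3ac75b9ae09b0cb1a0add66dbe /mnt/glustertest
# If this also fails with "All subvolumes are down", the problem is on the
# GlusterFS side rather than in kubelet; clean up afterwards
umount /mnt/glustertest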

https://stackoverflow.com/questions/60683873/0-glusterfs-failed-to-set-volfile-server-file-exists
According to that thread, the "failed to set volfile server: File exists" line itself appears to be noise: it is logged when the primary volfile server (10.0.2.34 here) is also listed in backup-volfile-servers, so glusterfs refuses to add it a second time. The real failure is the "All subvolumes are down" / "Transport endpoint is not connected" part, i.e. the bricks behind the volume were unreachable.

On my own machine the service would not come up, and on top of that the connection kept dropping on me automatically. So I simply uninstalled and reinstalled (without deleting the PVCs/PVs).
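
If the workload was installed with Helm, a redeploy that keeps the PVCs/PVs roughly looks like the sketch below; the release and chart names are hypothetical and need to be adapted:

# Hypothetical Helm redeploy; PVCs created from volumeClaimTemplates survive it
helm uninstall kafka -n middleware
helm install kafka bitnami/kafka -n middleware
# helm uninstall does not remove PVCs owned by a StatefulSet's volumeClaimTemplates,
# so the existing GlusterFS-backed volumes are reattached on the next start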

… and it came up. A baffling problem.

[root@k8s-storage-02 /]# gluster volume info vol_ed8913602dbbc0510a6ecf8ad3bf941b

Volume Name: vol_ed8913602dbbc0510a6ecf8ad3bf941b
Type: Replicate
Volume ID: 7f334d29-70fc-49e8-aec3-d74b470865df
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.0.2.45:/var/lib/heketi/mounts/vg_094f45cfcd0ff5815c3914723876c69e/brick_5a398d6130306007bd2c72d0cf53d947/brick
Brick2: 10.0.2.46:/var/lib/heketi/mounts/vg_1f3f18f36dae1a1d968713e583b7972a/brick_44590fc606e77a08c405fe789abbe514/brick
Brick3: 10.0.2.44:/var/lib/heketi/mounts/vg_a04ab1f3fafb5b88709b6354df22ea3e/brick_5a423d1cdee47f93f0f3a62b7a329ef2/brick
Options Reconfigured:
user.heketi.id: ed8913602dbbc0510a6ecf8ad3bf941b
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@k8s-storage-02 /]# gluster volume status vol_ed8913602dbbc0510a6ecf8ad3bf941b
Status of volume: vol_ed8913602dbbc0510a6ecf8ad3bf941b
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.2.45:/var/lib/heketi/mounts/vg_0
94f45cfcd0ff5815c3914723876c69e/brick_5a398
d6130306007bd2c72d0cf53d947/brick           N/A       N/A        N       N/A
Brick 10.0.2.44:/var/lib/heketi/mounts/vg_a
04ab1f3fafb5b88709b6354df22ea3e/brick_5a423
d1cdee47f93f0f3a62b7a329ef2/brick           49531     0          Y       4117
Self-heal Daemon on localhost               N/A       N/A        N       N/A
Self-heal Daemon on k8s-storage-01          N/A       N/A        N       N/A

Task Status of Volume vol_ed8913602dbbc0510a6ecf8ad3bf941b
------------------------------------------------------------------------------
There are no active volume tasks

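In this status output two of the three replicas are unhealthy: the brick on 10.0.2.45 is Online N, the brick on 10.0.2.46 does not show up at all, and both self-heal daemons are down, which matches the client-side "All subvolumes are down" error. Stopping and starting the volume (as done below) respawns the brick processes. As a gentler alternative sketch, not what was done here, GlusterFS can also restart just the dead bricks without taking the volume offline:

# Restart brick processes that are not running, without interrupting healthy bricks
gluster volume start vol_ed8913602dbbc0510a6ecf8ad3bf941b force
# Also worth confirming that glusterd itself is alive on every storage node
systemctl status glusterd
gluster peer status
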
[root@k8s-storage-02 /]# gluster volume stop vol_ed8913602dbbc0510a6ecf8ad3bf941b
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol_ed8913602dbbc0510a6ecf8ad3bf941b: success
[root@k8s-storage-02 /]# gluster volume status vol_ed8913602dbbc0510a6ecf8ad3bf941b
Volume vol_ed8913602dbbc0510a6ecf8ad3bf941b is not started
[root@k8s-storage-02 /]# gluster volume start vol_ed8913602dbbc0510a6ecf8ad3bf941b
volume start: vol_ed8913602dbbc0510a6ecf8ad3bf941b: success
[root@k8s-storage-02 /]# gluster volume status vol_ed8913602dbbc0510a6ecf8ad3bf941b
Status of volume: vol_ed8913602dbbc0510a6ecf8ad3bf941b
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.2.45:/var/lib/heketi/mounts/vg_0
94f45cfcd0ff5815c3914723876c69e/brick_5a398
d6130306007bd2c72d0cf53d947/brick           49167     0          Y       75428
Brick 10.0.2.44:/var/lib/heketi/mounts/vg_a
04ab1f3fafb5b88709b6354df22ea3e/brick_5a423
d1cdee47f93f0f3a62b7a329ef2/brick           49531     0          Y       76051
Self-heal Daemon on localhost               N/A       N/A        Y       75449
Self-heal Daemon on k8s-storage-01          N/A       N/A        N       N/A

Task Status of Volume vol_ed8913602dbbc0510a6ecf8ad3bf941b
------------------------------------------------------------------------------
There are no active volume tasks
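
After the restart both bricks are back Online Y, although the self-heal daemon on k8s-storage-01 still shows N. A small sketch for checking that the replicas converge again after the outage (volume name taken from above):

# List entries that still need healing after the bricks were down
gluster volume heal vol_ed8913602dbbc0510a6ecf8ad3bf941b info
# Condensed view, if the installed GlusterFS version supports it
gluster volume heal vol_ed8913602dbbc0510a6ecf8ad3bf941b info summary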

#####################

flink-jobmanager-bak-0                                      1/1     Running     0                 7s
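
With the bricks back online, the previously stuck pod should be able to mount its PVC again. Kubelet retries the mount on its own, but deleting the pod forces an immediate retry; a sketch, with the namespace left to adjust:

# Force the StatefulSet controller to recreate the pod and retry the GlusterFS mount
kubectl delete pod kafka-zookeeper-0
kubectl get pod kafka-zookeeper-0 -w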

