【2023-07-27】After changing the k8s network range, redeploying pods fails with: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24

After changing the pod network range in K8s, newly deployed Pods get stuck in ContainerCreating. The events show that the CNI plugin cannot set the bridge address while wiring up the Pod network, because the cni0 bridge still carries an IP from the old range. The fix is to delete the stale cni0 interface and let it be recreated automatically; once the bridge picks up an address from the new range, the Pods run normally.
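For context, on a kubeadm cluster running the standard flannel manifest (the kube-flannel namespace in the output below points to that setup), the pod network range is usually declared in two places. A minimal sketch of how to check both; the 10.100.0.0/16 value is an assumption inferred from the 10.100.0.1/24 gateway in the error:

# Flannel's view of the pod network (ConfigMap name/namespace as in the upstream manifest)
kubectl -n kube-flannel get configmap kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'
# expected after the change: something like { "Network": "10.100.0.0/16", "Backend": { "Type": "vxlan" } }

# The controller-manager's cluster CIDR on a kubeadm control-plane node
grep cluster-cidr /etc/kubernetes/manifests/kube-controller-manager.yaml

Even when both are updated consistently, each node keeps CNI state created under the old range, which is what triggers the failure below.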

After the network range change, redeploying a pod fails:

[root@master ~]$kubectl  get pods -A -o wide
NAMESPACE      NAME                             READY   STATUS              RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
default        test-k8s-68bb74d654-5mvwd        1/1     Running             0          2d19h   10.244.0.6     node1    <none>           <none>
default        test-k8s-68bb74d654-b4ntt        1/1     Running             0          2d19h   10.244.0.6     node3    <none>           <none>
default        test-k8s-68bb74d654-cfgh7        1/1     Running             0          2d19h   10.244.0.7     node3    <none>           <none>
default        test-k8s-68bb74d654-lzdv7        1/1     Running             0          2d19h   10.244.0.5     node1    <none>           <none>
default        test-k8s-68bb74d654-sc5js        1/1     Running             0          2d19h   10.244.0.7     node2    <none>           <none>
default        test-pod                         0/1     ContainerCreating   0          4m5s    <none>         node2    <none>           <none>
default        testapp                          0/1     ContainerCreating   0          81s     <none>         node1    <none>           <none>
kube-flannel   kube-flannel-ds-gcm48            1/1     Running             0          15m     10.10.26.118   node3    <none>           <none>
kube-flannel   kube-flannel-ds-lfh57            1/1     Running             0          15m     10.10.26.116   node1    <none>           <none>
kube-flannel   kube-flannel-ds-q68vj            1/1     Running             0          15m     10.10.26.117   node2    <none>           <none>
kube-flannel   kube-flannel-ds-vxpd4            1/1     Running             0          15m     10.10.26.115   master   <none>           <none>
kube-system    coredns-7f6cbbb7b8-5smwq         1/1     Running             0          8d      10.244.0.2     master   <none>           <none>
kube-system    coredns-7f6cbbb7b8-fnqq8         1/1     Running             0          8d      10.244.0.3     master   <none>           <none>
kube-system    etcd-master                      1/1     Running             0          8d      10.10.26.115   master   <none>           <none>
kube-system    kube-apiserver-master            1/1     Running             0          8d      10.10.26.115   master   <none>           <none>
kube-system    kube-controller-manager-master   1/1     Running             0          8d      10.10.26.115   master   <none>           <none>
kube-system    kube-proxy-8f955                 1/1     Running             0          8d      10.10.26.115   master   <none>           <none>
kube-system    kube-proxy-knvqv                 1/1     Running             0          8d      10.10.26.118   node3    <none>           <none>
kube-system    kube-proxy-tx75f                 1/1     Running             0          8d      10.10.26.116   node1    <none>           <none>
kube-system    kube-proxy-wjdvl                 1/1     Running             0          8d      10.10.26.117   node2    <none>           <none>
kube-system    kube-scheduler-master            1/1     Running             0          8d      10.10.26.115   master   <none>           <none>

Describe the pod and check its events; the error shows up there:

[root@master ~]$kubectl describe pod testapp
Name:         testapp
Namespace:    default
Priority:     0
Node:         node1/10.10.26.116
Start Time:   Thu, 27 Jul 2023 11:12:24 +0800
Labels:       run=testapp
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Containers:
  testapp:
    Container ID:
    Image:          ccr.ccs.tencentyun.com/k8s-tutorial/test-k8s:v1
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvfw7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-qvfw7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                  From               Message
  ----     ------                  ----                 ----               -------
  Normal   Scheduled               110s                 default-scheduler  Successfully assigned default/testapp to node1
  Warning  FailedCreatePodSandBox  109s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1ff628b02425798ccc88f2880db61848e64605480ca9ea3ccb97a19193b71ac5" network for pod "testapp": networkPlugin cni failed to set up pod "testapp_default" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24
  Warning  FailedCreatePodSandBox  108s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "09e6e121e1acba0422a63d9de1930583fd807a435ae9a19da1e50eacc4af34a7" network for pod "testapp": networkPlugin cni failed to set up pod "testapp_default" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24
  Warning  FailedCreatePodSandBox  107s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "90defd5fae80935e7a0b25f0354faf7607b47fced7e3a2e235d2f06098214897" network for pod "testapp": networkPlugin cni failed to set up pod "testapp_default" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24
  Warning  FailedCreatePodSandBox  106s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3a3a816d57bb7f0e95f8d2f8b6abc3de743cc93e56459b0462fd072b79dc0951" network for pod "testapp": networkPlugin cni failed to set up pod "testapp_default" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24
  Warning  FailedCreatePodSandBox  104s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a67dede7c0d84ba12b1becc8a2964da879666dbc119f407799cc0050c59301e9" network for pod "testapp": networkPlugin cni failed to set up pod "testapp_default" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24
  Warning  FailedCreatePodSandBox  103s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "807d42dcc6c6cc7a7c14b69ac5b2e966a57d413afe86fbdd28dd97be6a90160e" network for pod "testapp": networkPlugin cni failed to set up pod "testapp_default" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24
  Warning  FailedCreatePodSandBox  102s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "53804a5085258232c1395f89d2ba877c6a810a4db14bcb681c1b8dedbd7bd8d8" network for pod "testapp": networkPlugin cni failed to set up pod "testapp_default" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24
  Warning  FailedCreatePodSandBox  101s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ec8c2ae4b1f2679c202fa40967c789b8371b4cd945f7179bd33a1b90eb3ee875" network for pod "testapp": networkPlugin cni failed to set up pod "testapp_default" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24
  Warning  FailedCreatePodSandBox  100s                 kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "336db115dbc1be122101a5d21882c2005b3d887f55051dd37852360171d19757" network for pod "testapp": networkPlugin cni failed to set up pod "testapp_default" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24
  Normal   SandboxChanged          97s (x12 over 108s)  kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  96s (x4 over 99s)    kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0a50914bccdd5f177115f1309f2f2bd1767d9dcb23f210c8fede297c15e7dbb1" network for pod "testapp": networkPlugin cni failed to set up pod "testapp_default" network: failed to delegate add: failed to set bridge addr: "cni0" already has an IP address different from 10.100.0.1/24
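
The same failure also shows up directly on the node in the kubelet log (a hedged example, assuming a systemd-managed kubelet):

# On node1: confirm the CNI error without going through kubectl
journalctl -u kubelet --since "15 minutes ago" | grep "failed to set bridge addr"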

Check the interface addresses on the node: cni0 still has the old IP, 10.244.0.1:

[root@node1 ~]$ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 10.244.0.255
        inet6 fe80::f880:cff:fed5:dda5  prefixlen 64  scopeid 0x20<link>
        ether fa:80:0c:d5:dd:a5  txqueuelen 1000  (Ethernet)
        RX packets 498  bytes 14104 (13.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 425  bytes 26730 (26.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:e7:db:fd:9e  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
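
The bridge CNI plugin never re-addresses an existing cni0; when the gateway it is told to use (10.100.0.1/24) differs from what the bridge already has, it simply fails. A quick way to see the mismatch on the node (paths assume the standard flannel CNI install):

# Subnet flannel allocated to this node under the new range
cat /run/flannel/subnet.env                # FLANNEL_SUBNET should now be 10.100.x.0/24
# CNI config that delegates to the bridge plugin
cat /etc/cni/net.d/10-flannel.conflist
# Address the bridge actually carries
ip -br -4 addr show cni0                   # still 10.244.0.1/24, hence the error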

Fix: delete the misconfigured interface and let it be rebuilt automatically.


[root@node1 ~]$ifconfig cni0 down
[root@node1 ~]$ip link delete cni0
[root@node1 ~]$
[root@node1 ~]$ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.0.1  netmask 255.255.255.0  broadcast 10.100.0.255
        inet6 fe80::849c:66ff:fef0:50ce  prefixlen 64  scopeid 0x20<link>
        ether 86:9c:66:f0:50:ce  txqueuelen 1000  (Ethernet)
        RX packets 1  bytes 28 (28.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5  bytes 446 (446.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:e7:db:fd:9e  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
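
The same cleanup has to be repeated on every node whose cni0 still carries the old subnet. If a node keeps failing even after cni0 is recreated, a fuller reset of the local CNI state sometimes helps (a hedged sketch, not part of the run above; do it with the node drained, or at least with kubelet stopped):

systemctl stop kubelet
ip link delete cni0 2>/dev/null
ip link delete flannel.1 2>/dev/null                 # flannel's VXLAN interface
rm -rf /var/lib/cni/networks /var/lib/cni/flannel    # host-local IPAM leases and per-pod flannel state
systemctl start kubelet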

Checking again, everything is back to normal:

[root@master ~]$kubectl  get pods -A -o wide
NAMESPACE      NAME                             READY   STATUS    RESTARTS      AGE     IP             NODE     NOMINATED NODE   READINESS GATES
default        test-k8s-68bb74d654-5mvwd        1/1     Running   0             2d19h   10.244.0.6     node1    <none>           <none>
default        test-k8s-68bb74d654-b4ntt        1/1     Running   0             2d19h   10.244.0.6     node3    <none>           <none>
default        test-k8s-68bb74d654-cfgh7        1/1     Running   0             2d19h   10.244.0.7     node3    <none>           <none>
default        test-k8s-68bb74d654-lzdv7        1/1     Running   0             2d19h   10.244.0.5     node1    <none>           <none>
default        test-k8s-68bb74d654-sc5js        1/1     Running   0             2d19h   10.244.0.7     node2    <none>           <none>
default        test-pod                         1/1     Running   0             16m     10.100.0.149   node2    <none>           <none>
default        testapp                          1/1     Running   0             14m     10.100.0.2     node1    <none>           <none>
kube-flannel   kube-flannel-ds-gcm48            1/1     Running   0             28m     10.10.26.118   node3    <none>           <none>
kube-flannel   kube-flannel-ds-lfh57            1/1     Running   0             28m     10.10.26.116   node1    <none>           <none>
kube-flannel   kube-flannel-ds-q68vj            1/1     Running   0             28m     10.10.26.117   node2    <none>           <none>
kube-flannel   kube-flannel-ds-vxpd4            1/1     Running   0             28m     10.10.26.115   master   <none>           <none>
kube-system    coredns-7f6cbbb7b8-5smwq         0/1     Running   3 (28s ago)   8d      10.244.0.2     master   <none>           <none>
kube-system    coredns-7f6cbbb7b8-fnqq8         0/1     Running   3 (52s ago)   8d      10.244.0.3     master   <none>           <none>
kube-system    etcd-master                      1/1     Running   0             8d      10.10.26.115   master   <none>           <none>
kube-system    kube-apiserver-master            1/1     Running   0             8d      10.10.26.115   master   <none>           <none>
kube-system    kube-controller-manager-master   1/1     Running   0             8d      10.10.26.115   master   <none>           <none>
kube-system    kube-proxy-8f955                 1/1     Running   0             8d      10.10.26.115   master   <none>           <none>
kube-system    kube-proxy-knvqv                 1/1     Running   0             8d      10.10.26.118   node3    <none>           <none>
kube-system    kube-proxy-tx75f                 1/1     Running   0             8d      10.10.26.116   node1    <none>           <none>
kube-system    kube-proxy-wjdvl                 1/1     Running   0             8d      10.10.26.117   node2    <none>           <none>
kube-system    kube-scheduler-master            1/1     Running   0             8d      10.10.26.115   master   <none>           <none>
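
Note that the two coredns pods still hold addresses from the old 10.244.0.0/16 range and are not Ready. If they do not recover on their own once the master's cni0 is fixed the same way, recreating them so they pick up new-range IPs is a reasonable follow-up (a hedged suggestion, not shown in the run above):

kubectl -n kube-system rollout restart deployment coredns
# or: kubectl -n kube-system delete pod -l k8s-app=kube-dns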
