Resolving "Failed to initialize NVML: Unknown Error" for an NVIDIA A100 in a Kubernetes Pod

Problem Description

The project needed to run GPU workloads on Kubernetes, preferably on NVIDIA A100 cards. After a straightforward deployment of the official plugin (GitHub - NVIDIA/k8s-device-plugin: NVIDIA device plugin for Kubernetes) following its documentation, pods lost access to the GPU after running for a while: inside the container, nvidia-smi failed with "Failed to initialize NVML: Unknown Error". After deleting and recreating the container, nvidia-smi worked at first, but after roughly 10 seconds the same error came back.

Problem Analysis

Several people have reported the same problem on GitHub, for example:

nvidia-smi command in container returns “Failed to initialize NVML: Unknown Error” after couple of times · Issue #1678 · NVIDIA/nvidia-docker · GitHub

“Failed to initialize NVML: Unknown Error” after random amount of time · Issue #1671 · NVIDIA/nvidia-docker · GitHub

Those discussions show the same symptoms we observed: after some time, the GPU device entries disappear from devices.list (path: /sys/fs/cgroup/devices/devices.list), which is why nvidia-smi stops working.
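
A quick way to observe this (a sketch; the pod name is a placeholder, and the path assumes cgroup v1 with the devices controller, as in the issues above):

    # Right after the container starts, nvidia-smi works and the NVIDIA character
    # devices (major number 195) are present in the device allow-list; once those
    # entries disappear, NVML can no longer initialize.
    kubectl exec -it <gpu-pod> -- sh -c 'nvidia-smi -L; grep 195 /sys/fs/cgroup/devices/devices.list'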

The trigger is Kubernetes' CPU manager policy being set to static: changing the policy to none does resolve the problem, and if you do not have strict requirements on the CPU manager policy, you can stop there. Our setup, however, requires the static policy, so we traced the issue further to the following GitHub issue.

Updating cpu-manager-policy=static causes NVML unknown error · Issue #966 · NVIDIA/nvidia-docker · GitHub

For an explanation of the underlying cause, see https://zhuanlan.zhihu.com/p/344561710

In https://github.com/NVIDIA/nvidia-docker/issues/966#issuecomment-610928514 the author describes the fix, and the official plugin has shipped the corresponding option for several releases: deploy the device plugin with the --pass-device-specs=true flag. Re-reading the official deployment documentation, we did find this flag documented. After redeploying, however, the problem persisted. Reading the discussion again revealed a constraint on the runc version (https://github.com/NVIDIA/nvidia-docker/issues/1671#issuecomment-1330466432); ours was 1.1.4, and after downgrading runc the problem was finally resolved.

Solution Steps

  1. Check the runc version; if it is lower than 1.1.3, skip directly to step 3

    # runc -v
    runc version 1.1.4
    commit: v1.1.4-0-xxxxx
    spec: 1.0.2-dev
    go: go1.17.10
    libseccomp: 2.5.3
    
  2. Downgrade runc:

    • Download the desired runc release; this article uses v1.1.2 (https://github.com/opencontainers/runc/releases/tag/v1.1.2)


    • Upload the downloaded runc.amd64 file to the server, rename it, and make it executable

      mv runc.amd64 runc && chmod +x runc
      
    • Back up the existing runc binary

    mv /usr/bin/runc /home/runcbak
    
    • Stop Docker

      systemctl stop docker
      
    • Replace runc with the downloaded binary

      cp runc /usr/bin/runc
      
    • Start Docker

      systemctl start docker
      
    • Verify that runc was replaced successfully

      # runc -v
      runc version 1.1.2
      commit: v1.1.2-0-ga916309f
      spec: 1.0.2-dev
      go: go1.17.10
      libseccomp: 2.5.3
      
  3. Install the NVIDIA GPU device plugin

    • Create plugin.yml; compared with a standard deployment, the key difference in this manifest is the PASS_DEVICE_SPECS environment variable

      # You may obtain a copy of the License at
      #
      #     http://www.apache.org/licenses/LICENSE-2.0
      #
      # Unless required by applicable law or agreed to in writing, software
      # distributed under the License is distributed on an "AS IS" BASIS,
      # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      # See the License for the specific language governing permissions and
      # limitations under the License.
      
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: nvidia-device-plugin-daemonset
        namespace: kube-system
      spec:
        selector:
          matchLabels:
            name: nvidia-device-plugin-ds
        updateStrategy:
          type: RollingUpdate
        template:
          metadata:
            labels:
              name: nvidia-device-plugin-ds
          spec:
            tolerations:
            - key: nvidia.com/gpu
              operator: Exists
              effect: NoSchedule
            # Mark this pod as a critical add-on; when enabled, the critical add-on
            # scheduler reserves resources for critical add-on pods so that they can
            # be rescheduled after a failure.
            # See https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
            priorityClassName: "system-node-critical"
            containers:
            - image: nvcr.io/nvidia/k8s-device-plugin:v0.13.0
              name: nvidia-device-plugin-ctr
              env:
                - name: FAIL_ON_INIT_ERROR
                  value: "false"
                - name: PASS_DEVICE_SPECS
                  value: "true"
              securityContext:
                privileged: true
              volumeMounts:
              - name: device-plugin
                mountPath: /var/lib/kubelet/device-plugins
            volumes:
            - name: device-plugin
              hostPath:
                path: /var/lib/kubelet/device-plugins
      
    • Deploy the plugin

      $ kubectl create -f plugin.yml
      
  4. Create a GPU pod and verify, for example:
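
    A minimal sketch of this step (gpu-test.yml, the pod name, the image tag, and the node name are placeholders; any CUDA base image reachable from the node will do). The container runs nvidia-smi in a loop so that the original failure, which appeared after roughly 10 seconds, would show up in the pod logs if it were still present.

    • Create gpu-test.yml

      apiVersion: v1
      kind: Pod
      metadata:
        name: gpu-nvml-test
      spec:
        restartPolicy: Never
        containers:
        - name: cuda
          image: nvidia/cuda:11.8.0-base-ubuntu20.04  # placeholder; any CUDA base image works
          # Query the GPU every 10 seconds for about 5 minutes; exit non-zero on the first failure
          command: ["sh", "-c", "for i in $(seq 1 30); do date; nvidia-smi -L || exit 1; sleep 10; done"]
          resources:
            limits:
              nvidia.com/gpu: 1

    • Create the pod and check the result

      kubectl apply -f gpu-test.yml
      kubectl describe node <gpu-node> | grep -A 2 'nvidia.com/gpu'   # the node should advertise the GPU resource
      kubectl logs -f gpu-nvml-test                                   # nvidia-smi should keep succeeding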

Switching the CPU Manager Policy

  1. Stop kubelet

    systemctl stop kubelet
    
  2. Delete cpu_manager_state

    rm /var/lib/kubelet/cpu_manager_state
    
  3. Edit config.yaml

    vi /var/lib/kubelet/config.yaml
    
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 0s
        cacheUnauthorizedTTL: 0s
    cgroupDriver: systemd
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
    
    # Set the CPU manager policy: none or static
    cpuManagerPolicy: static
    
    cpuManagerReconcilePeriod: 0s
    evictionPressureTransitionPeriod: 0s
    featureGates:
      TopologyManager: true
    fileCheckFrequency: 0s
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 0s
    imageMinimumGCAge: 0s
    kind: KubeletConfiguration
    logging: {}
    memorySwap: {}
    nodeStatusReportFrequency: 0s
    nodeStatusUpdateFrequency: 0s
    podPidsLimit: 4096
    reservedSystemCPUs: 0,1
    resolvConf: /run/systemd/resolve/resolv.conf
    rotateCertificates: true
    runtimeRequestTimeout: 0s
    shutdownGracePeriod: 0s
    shutdownGracePeriodCriticalPods: 0s
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 0s
    syncFrequency: 0s
    tlsCipherSuites:
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    tlsMinVersion: VersionTLS12
    topologyManagerPolicy: best-effort
    volumeStatsAggPeriod: 0s
    
  4. Start kubelet

    systemctl start kubelet
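
    As an optional sanity check (a sketch; the exact JSON layout of the state file may differ between kubelet versions), the regenerated cpu_manager_state should now report the configured policy:

      cat /var/lib/kubelet/cpu_manager_state
      # e.g. {"policyName":"static","defaultCpuSet":"...",...}
      systemctl status kubelet   # confirm kubelet restarted cleanly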
    

Changing the containerd Version

https://github.com/NVIDIA/nvidia-docker/issues/1671#issuecomment-1238644201

Reference: https://blog.csdn.net/Ivan_Wz/article/details/111932120

  1. Download the containerd binary release from GitHub (https://github.com/containerd/containerd/releases/tag/v1.6.16)

  2. Extract the containerd archive

    tar -zxvf containerd-1.6.16-linux-amd64.tar.gz 
    
  3. Check the current containerd version

    docker info 
    containerd -v
    
  4. Stop Docker

    systemctl stop docker
    
  5. Replace the containerd binaries

    cp containerd /usr/bin/containerd
    cp containerd-shim /usr/bin/containerd-shim
    cp containerd-shim-runc-v1 /usr/bin/containerd-shim-runc-v1
    cp containerd-shim-runc-v2 /usr/bin/containerd-shim-runc-v2
    cp ctr /usr/bin/ctr
    
  6. Restart Docker and verify that the containerd replacement succeeded, for example:
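
    A short sketch of this step (note that docker info reports the containerd commit in use rather than a version number):

      systemctl start docker
      containerd -v   # should now report v1.6.16
      docker info     # check the "containerd version" line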
