$ chmod +x NVIDIA-Linux-x86_64-450.80.02.run && ./NVIDIA-Linux-x86_64-450.80.02.run
- Verification: run the following command to check whether the installation succeeded: nvidia-smi. If it prints a table of GPU and driver information, the driver was installed successfully.
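As an extra quick check, nvidia-smi can print just the device name and driver version; the GPU model shown below is only a sample:
$ nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
GeForce RTX 2080 Ti, 450.80.02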
2. CUDA Driver
===========
CUDA (Compute Unified Device Architecture) is a computing platform from the GPU vendor NVIDIA. CUDA™ is a general-purpose parallel computing architecture that enables GPUs to solve complex computational problems; it comprises the CUDA instruction set architecture (ISA) and the parallel compute engine inside the GPU. The installation procedure here is similar to that of the graphics driver.
- Visit the **official website**[2] to download the installer
- Download the release matching your system and driver version
- Configure the environment variables
$ echo 'export PATH=/usr/local/cuda/bin:$PATH' | sudo tee /etc/profile.d/cuda.sh
$ source /etc/profile
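To confirm the CUDA toolkit is now on the PATH, you can print the compiler version; the release number below is only illustrative, and the output is trimmed:
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 10.2, V10.2.89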
3. nvidia-container-runtime
============================
nvidia-container-runtime builds on runc by adding the nvidia-container-runtime-hook (now called nvidia-container-toolkit). This hook runs after the container has started (namespaces already created) but before the container's own command (Entrypoint) is executed. If it detects the NVIDIA_VISIBLE_DEVICES environment variable, it calls libnvidia-container to mount the GPU devices and the CUDA driver; if NVIDIA_VISIBLE_DEVICES is absent, it simply falls through to the default runc behavior.
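A rough sketch of how the hook is gated by that variable (values as documented for libnvidia-container; a summary, not an exhaustive list):
# NVIDIA_VISIBLE_DEVICES=all   -> expose every GPU on the host
# NVIDIA_VISIBLE_DEVICES=0,1   -> expose only GPUs 0 and 1
# NVIDIA_VISIBLE_DEVICES unset -> behave exactly like plain runc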
The installation takes two steps.
First set up the repository and GPG key:
$ curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-container-runtime/$(. /etc/os-release; echo $ID$VERSION_ID)/nvidia-container-runtime.list | sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
Then install it:
$ apt install nvidia-container-runtime -y
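To sanity-check the install, verify that the runtime binary is present and that libnvidia-container can see the driver (the output varies by version, so it is omitted here):
$ which nvidia-container-runtime
$ nvidia-container-cli info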
Configure containerd to use the NVIDIA container runtime
=========================================
If the /etc/containerd directory doesn't exist, create it first:
$ mkdir /etc/containerd
Generate the default configuration:
$ containerd config default > /etc/containerd/config.toml
Kubernetes uses Device Plugins[3] to give Pods access to special hardware features such as GPUs, but only if the default OCI runtime is changed to nvidia-container-runtime. The required changes are:
/etc/containerd/config.toml
…
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  default_runtime_name = "runc"
  no_pivot = false
…
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runtime.v1.linux" # change runtime_type here to io.containerd.runtime.v1.linux
…
[plugins."io.containerd.runtime.v1.linux"]
  shim = "containerd-shim"
  runtime = "nvidia-container-runtime" # change runtime here to nvidia-container-runtime
…
Restart the containerd service:
$ systemctl restart containerd
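To double-check that containerd picked up the change, you can grep the live configuration for the key set above; the grep target is just the value we configured:
$ containerd config dump | grep nvidia-container-runtime
    runtime = "nvidia-container-runtime"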
4. Deploy the NVIDIA GPU Device Plugin
======================
One command settles it:
$ kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.7.1/nvidia-device-plugin.yml
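The plugin runs as a DaemonSet in the kube-system namespace; the pod name suffix and timings below are placeholders that will differ on your cluster:
$ kubectl -n kube-system get pods | grep nvidia-device-plugin
nvidia-device-plugin-daemonset-xxx   1/1   Running   0   30s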
Check its logs:
$ kubectl -n kube-system logs nvidia-device-plugin-daemonset-xxx
2020/12/04 06:30:28 Loading NVML
2020/12/04 06:30:28 Starting FS watcher.
2020/12/04 06:30:28 Starting OS watcher.
2020/12/04 06:30:28 Retreiving plugins.
2020/12/04 06:30:28 Starting GRPC server for 'nvidia.com/gpu'
2020/12/04 06:30:28 Starting to serve 'nvidia.com/gpu' on /var/lib/kubelet/device-plugins/nvidia-gpu.sock
2020/12/04 06:30:28 Registered device plugin for 'nvidia.com/gpu' with Kubelet
The device plugin deployed successfully. On the node you can see its socket in the device plugin directory:
$ ll /var/lib/kubelet/device-plugins/
total 12
drwxr-xr-x 2 root root 4096 Dec 4 01:30 ./
drwxr-xr-x 8 root root 4096 Dec 3 05:05 ../
-rw-r--r-- 1 root root 0 Dec 4 01:11 DEPRECATION
-rw------- 1 root root 3804 Dec 4 01:30 kubelet_internal_checkpoint
srwxr-xr-x 1 root root 0 Dec 4 01:11 kubelet.sock=
srwxr-xr-x 1 root root 0 Dec 4 01:11 kubevirt-kvm.sock=
srwxr-xr-x 1 root root 0 Dec 4 01:11 kubevirt-tun.sock=
srwxr-xr-x 1 root root 0 Dec 4 01:11 kubevirt-vhost-net.sock=
srwxr-xr-x 1 root root 0 Dec 4 01:30 nvidia-gpu.sock=
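The node should now also advertise the GPU as an allocatable resource; substitute your own node name (output trimmed to the relevant line):
$ kubectl describe node <node-name> | grep nvidia.com/gpu
 nvidia.com/gpu:  1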
5. Test the GPU
==========
First test with the local command-line tool ctr; this part should be trouble-free:
$ ctr images pull docker.io/nvidia/cuda:9.0-base
$ ctr run --rm -t --gpus 0 docker.io/nvidia/cuda:9.0-base nvidia-smi nvidia-smi
Fri Dec  4 07:01:38 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.95.01    Driver Version: 440.95.01    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  Off  | 00000000:A1:00.0 Off |                  N/A |
| 30%   33C    P8     9W / 250W |      0MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Finally, the ultimate test: check GPU availability from inside a Pod. First create the manifest:
gpu-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1
Run kubectl apply -f ./gpu-pod.yaml to create the Pod, then use kubectl get pod to confirm it ran to completion successfully:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
cuda-vector-add 0/1 Completed 0 3s
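The image runs NVIDIA's vector-add CUDA sample, so if the GPU is reachable from the Pod, the log should end with a pass message (abridged sample output):
$ kubectl logs cuda-vector-add
[Vector addition of 50000 elements]
...
Test PASSED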