1. nvidia-docker2 run vs docker run --runtime=nvidia
Example: docker run -d -it -p 8888:8888 tensorflow/tensorflow:2.0.0-gpu-py3-jupyter bash
A container started this way cannot access the NVIDIA GPU; running nvidia-smi inside it will fail with an error.
To enable NVIDIA GPU support, start the container with one of the following commands:
nvidia-docker run
or
docker run --runtime=nvidia
docker run --runtime nvidia nvidia/cuda:9.0-base nvidia-smi
You can also use the following commands (the --gpus flag, available in Docker 19.03 and later):
#### Test nvidia-smi with an official CUDA image
$ docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
# Start a GPU enabled container on two GPUs
$ docker run --gpus 2 nvidia/cuda:9.0-base nvidia-smi
# Starting a GPU enabled container on specific GPUs
$ docker run --gpus '"device=1,2"' nvidia/cuda:9.0-base nvidia-smi
$ docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:9.0-base nvidia-smi
# Specifying a capability (graphics, compute, ...) for my container
# Note: this is rarely, if ever, used this way
$ docker run --gpus all,capabilities=utility nvidia/cuda:9.0-base nvidia-smi
Any of these commands will start a container with NVIDIA GPU support.
Backward compatibility

To help transition code from 1.0 to 2.0, a bash script is provided at /usr/bin/nvidia-docker for backward compatibility. It automatically injects the --runtime=nvidia argument and converts NV_GPU to NVIDIA_VISIBLE_DEVICES.
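As a sketch (the device indices and image tag here are just examples), the legacy 1.0-style call and what the wrapper effectively runs under 2.0 look like this:

```shell
# Legacy 1.0-style call: the /usr/bin/nvidia-docker wrapper script
# injects --runtime=nvidia and converts NV_GPU for you.
NV_GPU=0,1 nvidia-docker run --rm nvidia/cuda:9.0-base nvidia-smi

# Equivalent 2.0-style invocation, written out explicitly:
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=0,1 \
  nvidia/cuda:9.0-base nvidia-smi
```

Both commands should list only GPUs 0 and 1 in the nvidia-smi output.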
Existing daemon.json

If you have a custom /etc/docker/daemon.json, the nvidia-docker2 package will override it. In that case, it is recommended to install nvidia-container-runtime instead and register the new runtime manually.
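A typical manual registration looks like the following (a sketch: merge this into your existing /etc/docker/daemon.json rather than replacing the file, then restart the Docker daemon):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```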
Default runtime

The default runtime used by the Docker® Engine is runc; the NVIDIA runtime can be made the default by configuring the docker daemon with --default-runtime=nvidia. Doing so removes the need to pass the --runtime=nvidia argument to docker run. It is also the only way to have GPU access during docker build.
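In /etc/docker/daemon.json, this amounts to something like the following sketch (assuming the nvidia runtime is registered as in the manual-registration case above):

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

After restarting the daemon, plain `docker run nvidia/cuda:9.0-base nvidia-smi` works without the --runtime flag.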
Environment variables

The behavior of the runtime can be modified through environment variables (such as NVIDIA_VISIBLE_DEVICES). These variables are consumed by nvidia-container-runtime and are documented in the nvidia-container-runtime project. The official CUDA images use default values for these variables.
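For example, to expose only GPU 0 and request just the utility driver capability (which is what provides nvidia-smi inside the container), you could run something like:

```shell
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  -e NVIDIA_DRIVER_CAPABILITIES=utility \
  nvidia/cuda:9.0-base nvidia-smi
```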