nvidia-docker: Frequently Asked Questions

Setting up

How do I register the new runtime to the Docker daemon?

Refer to the documentation of nvidia-container-runtime.
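
For example, the typical registration (per the nvidia-container-runtime documentation) adds a "runtimes" entry to /etc/docker/daemon.json and reloads the daemon. Treat this as a sketch: adapt the runtime path to your installation and merge with any existing daemon.json contents.

sudo tee /etc/docker/daemon.json <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo pkill -SIGHUP dockerd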

Which Docker packages are supported?
  • All the stable releases of docker-ce (edge releases are not supported).
  • The legacy official package docker-engine.
  • The package provided by Canonical: docker.io.
  • The package provided by Red Hat: docker.
How do I install 2.0 if I'm not using the latest Docker version?

You must pin the versions of both nvidia-docker2 and nvidia-container-runtime when installing, for instance:

sudo apt-get install -y nvidia-docker2=2.0.2+docker1.12.6-1 nvidia-container-runtime=1.1.1+docker1.12.6-1

To list the available versions:

apt-cache madison nvidia-docker2 nvidia-container-runtime             # Debian-based distributions
yum search --showduplicates nvidia-docker2 nvidia-container-runtime   # RHEL-based distributions

What is the minimum supported Docker version?

Docker 1.12, which added support for custom container runtimes.

How do I install the NVIDIA driver?

The recommended way is to use your package manager and install the cuda-drivers package (or equivalent).
When no packages are available, you should use an official "runfile".
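
For example, on an Ubuntu host with the CUDA package repository already configured (a sketch; the package name varies by distribution):

sudo apt-get install -y cuda-drivers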

Can I use 2.0 and 1.0 side-by-side?

Yes, but packages nvidia-docker2 and nvidia-docker conflict. You need to install nvidia-container-runtime instead of nvidia-docker2 and register the new runtime manually.

Why do I get the error Unknown runtime specified nvidia?

Make sure the runtime was registered to dockerd. You also need to reload the configuration of the Docker daemon.
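
For example, assuming the runtime is declared in /etc/docker/daemon.json as shown earlier, either reload the configuration or restart the service:

sudo pkill -SIGHUP dockerd        # reload the daemon configuration
sudo systemctl restart docker     # or restart the service entirely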

Why do I get the error flag provided but not defined: -console?

Your version of nvidia-container-runtime probably doesn't match your version of Docker. You need to pin the version of nvidia-container-runtime when installing the package.

Why do I get the error Depends: docker [...] but it is not installable or nothing provides docker [...]?

This issue usually occurs in one of the following circumstances:

  • Docker is not installed on your machine and/or the official Docker package repository hasn't been set up (see also prerequisites).
  • Docker is installed or is about to be upgraded and its version is not supported by NVIDIA Docker (see also supported Docker packages).
  • Docker is installed and its version is supported, but it isn't the latest version available in the Docker package repository. In this case, package pinning is required (see also not the latest Docker version and older version of Docker).

Platform support

Is macOS supported?

No, we do not support macOS (regardless of the version). However, you can use the native macOS Docker client to deploy your containers remotely (refer to the dockerd documentation).

Is Microsoft Windows supported?

No, we do not support Microsoft Windows (regardless of the version). However, you can use the native Microsoft Windows Docker client to deploy your containers remotely (refer to the dockerd documentation).

Do you support Microsoft native container technologies (e.g. Windows Server, Hyper-V)?

No, we do not support native Microsoft container technologies.

Do you support Optimus (i.e. NVIDIA dGPU + Intel iGPU)?

Yes, from the CUDA perspective there is no difference as long as your dGPU is powered on and you are following the official driver instructions.

Do you support Tegra platforms (arm64)?

No, we do not support Tegra platforms and can’t easily port the code to it.
The driver stack on arm64 is radically different and would require a complete architecture overhaul.

What distributions are officially supported?

For your host distribution, the list of supported platforms is available here.
For your container images, both the Docker Hub and NGC registry images are officially supported.

Do you support PowerPC64 (ppc64)?

Not yet for 2.0 but we are actively working with IBM on this.

How do I use this on my cloud service provider (e.g. AWS, Azure, GCP)?

We have a tutorial for AWS and a tutorial for Azure. They haven't been updated for 2.0 yet, but we are working on it, and we plan to release a similar tutorial for GCP soon.
Alternatively, you can leverage NGC to deploy optimized container images on AWS.

Container runtime

Does it have a performance impact on my GPU workload?

No; the impact is usually less than 1% and hardly noticeable.
However, be aware of the following (non-exhaustive list):

  • GPU topology and CPU affinity
    You can query the topology using nvidia-smi topo and use Docker CPU sets to pin CPU cores (see the sketch after this list).
  • Compiling your code for your device architecture
    Your container might be compiled for the wrong architecture and could fall back to JIT compilation of PTX code (refer to the official documentation for more information).
    Note that you can express these constraints in your container image.
  • Container I/O overhead
    By default Docker containers rely on an overlay filesystem and bridged/NATed networking.
    Depending on your workload, this can be a bottleneck; we recommend using Docker volumes and experimenting with different Docker networking options.
  • Linux kernel accounting and security overhead
    In rare cases, you may notice that some kernel subsystems induce overhead.
    This will likely depend on your kernel version and can include things like: cgroups, LSMs, seccomp filters, netfilter...
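
To illustrate the CPU-affinity point above, a minimal sketch (the device index and core range are illustrative; derive them from your actual topology):

nvidia-smi topo -m                # inspect the GPU/CPU topology matrix
docker run --runtime=nvidia --cpuset-cpus=0-7 -e NVIDIA_VISIBLE_DEVICES=0 nvidia/cuda nvidia-smi
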
Is OpenGL supported?

Yes, EGL is supported for headless rendering, but this is a beta feature. There is no plan to support GLX in the near future.
Images are available at nvidia/opengl. If you need CUDA+OpenGL, use nvidia/cudagl.
If you are an NGC subscriber and require GLX for your workflow, please fill out a feature request for support consideration.

How do I fix unsatisfied condition: cuda >= X.Y?

Your CUDA container image is incompatible with your driver version.
Upgrade your driver or choose an image tag which is supported by your driver (see also CUDA requirements).
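
For example, if your driver only supports CUDA 8.0, pull a matching tag instead of the latest one (the tag shown is illustrative):

docker run --runtime=nvidia nvidia/cuda:8.0-runtime nvidia-smi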

Do you support CUDA Multi Process Service (a.k.a. MPS)?

No, MPS is not supported at the moment. However we plan on supporting this feature in the future, and this issue will be updated accordingly.

Do you support running a GPU-accelerated X server inside the container?

No, running an X server inside the container is not supported at the moment, and there is no plan to support it in the near future (see also OpenGL support).

I have multiple GPU devices, how can I isolate them between my containers?

GPU isolation is achieved through a container environment variable called NVIDIA_VISIBLE_DEVICES.
Devices can be referenced by index (following the PCI bus order) or by UUID (refer to the documentation).
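
For example (a sketch using device indices):

docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0,1 nvidia/cuda nvidia-smi   # expose only the first two GPUs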

Why is nvidia-smi inside the container not listing the running processes?

nvidia-smi and NVML are not compatible with PID namespaces.
We recommend monitoring your processes on the host or inside a container using --pid=host.
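
For example:

docker run --runtime=nvidia --pid=host nvidia/cuda nvidia-smi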

Can I share a GPU between multiple containers?

Yes. This is no different than sharing a GPU between multiple processes outside of containers.
Scheduling and compute preemption vary from one GPU architecture to another (e.g. CTA-level, instruction-level).

Can I limit the GPU resources (e.g. bandwidth, memory, CUDA cores) taken by a container?

No. Your only option is to set the GPU clocks at a lower frequency before starting the container.
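
For example, on GPUs that support application clocks (a sketch; query the supported pairs first, the values below are illustrative):

nvidia-smi -q -d SUPPORTED_CLOCKS      # list supported memory,graphics clock pairs
sudo nvidia-smi -ac 2505,875           # set application clocks (memory,graphics in MHz)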

Can I enforce exclusive access for a GPU?

This is not currently supported but you can enforce it:

  • At the container orchestration layer (Kubernetes, Swarm, Mesos, Slurm…) since this is tied to resource allocation.
  • At the driver level by setting the compute mode of the GPU, as shown below.
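
For the driver-level option, a minimal sketch (EXCLUSIVE_PROCESS allows only one process at a time on the GPU):

sudo nvidia-smi -i 0 -c EXCLUSIVE_PROCESS
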
Why is my container slow to start with 2.0?

You probably need to enable persistence mode to keep the kernel modules loaded and the GPUs initialized.
The recommended way is to start the nvidia-persistenced daemon on your host.
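
For example, on a systemd-based host (a sketch; the unit name and availability depend on how your driver was packaged):

sudo systemctl enable nvidia-persistenced
sudo systemctl start nvidia-persistenced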

Can I use it with Docker-in-Docker (a.k.a. DinD)?

If you are running a Docker client inside a container: simply mount the Docker socket and proceed as usual.
If you are running a Docker daemon inside a container: this case is untested.
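
For the first (client-only) case, a minimal sketch; note that containers started this way are siblings on the host daemon, not nested:

docker run -v /var/run/docker.sock:/var/run/docker.sock docker:latest docker ps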

Why is my application inside the container slow to initialize?

Your application was probably not compiled for the compute architecture of your GPU, and thus the driver must JIT-compile all the CUDA kernels from PTX. In addition to a slow start, the JIT compiler might generate less efficient code than directly targeting your compute architecture (see also performance impact).
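
For example, when building with nvcc you can target your device's compute architecture directly (a sketch; sm_61 is a Pascal-class example and app.cu is a placeholder):

nvcc -gencode arch=compute_61,code=sm_61 -gencode arch=compute_61,code=compute_61 -o app app.cu

The second -gencode entry also embeds PTX, which avoids the invalid device function error discussed below, at the cost of JIT compilation on newer devices.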

Is the JIT cache shared between containers?

No. You would have to handle this manually with Docker volumes.
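
A minimal sketch using a named volume together with CUDA_CACHE_PATH (CUDA's documented variable for relocating the JIT cache); my_app is a placeholder:

docker run --runtime=nvidia -e CUDA_CACHE_PATH=/jitcache -v jitcache:/jitcache nvidia/cuda ./my_app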

What is causing the CUDA invalid device function error?

Your application was not compiled for the compute architecture of your GPU, and no PTX was generated during build time. Thus, JIT compiling is impossible (see also slow to initialize).

Why do I get Insufficient Permissions for some nvidia-smi operations?

Some device management operations require extra privileges (e.g. setting clocks frequency).
Once you understand the security implications of doing so, you can add extra capabilities to your container using --cap-add on the command line (--cap-add=SYS_ADMIN will allow most operations).
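
For example (a sketch; the clock values are illustrative):

docker run --runtime=nvidia --cap-add=SYS_ADMIN nvidia/cuda nvidia-smi -ac 2505,875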

Can I profile and debug my GPU code inside a container?

Yes, but as stated above you might need extra privileges: adding capabilities like CAP_SYS_PTRACE, or tweaking the seccomp profile used by Docker to allow certain syscalls.
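
For example, to run cuda-gdb inside a container (a sketch; my_app is a placeholder and the devel image tag is illustrative):

docker run -it --runtime=nvidia --cap-add=SYS_PTRACE --security-opt seccomp=unconfined nvidia/cuda:9.0-devel cuda-gdb ./my_app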

Is OpenCL supported?

Yes, we now provide images on Docker Hub.

Is Vulkan supported?

No, Vulkan is not supported at the moment. However we plan on supporting this feature in the future.

Container images

What do I have to install in my container images?

Library dependencies vary from one application to another. In order to make things easier for developers, we provide a set of official images to base your images on.

Do you provide official Docker images?

Yes, container images are available on Docker Hub and on the NGC registry.

Can I use the GPU during a container build (i.e. docker build)?

Yes. As long as you configure your Docker daemon to use the nvidia runtime as the default, builds will have GPU support. However, be aware that this can render your images non-portable (see also invalid device function).
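
For example, a daemon.json making nvidia the default runtime (a sketch; merge with your existing configuration):

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}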

Are my container images built for version 1.0 compatible with 2.0?

Yes, for most cases. The main difference is that we don't mount all driver libraries by default in 2.0. You might need to set the NVIDIA_DRIVER_CAPABILITIES environment variable in your Dockerfile or when starting the container; check the documentation of nvidia-container-runtime.
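
For example, in your Dockerfile (a sketch; pick only the capabilities your application needs):

ENV NVIDIA_DRIVER_CAPABILITIES compute,utility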

How do I link against driver APIs at build time (e.g. libcuda.so or libnvidia-ml.so)?

Use the library stubs provided in /usr/local/cuda/lib64/stubs/. Our official images already take care of setting LIBRARY_PATH. However, do not set LD_LIBRARY_PATH to this folder; the stubs must not be used at runtime.
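
For example, linking against NVML at build time (a sketch; app.c is a placeholder):

gcc -o app app.c -I/usr/local/cuda/include -L/usr/local/cuda/lib64/stubs -lnvidia-ml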

The official CUDA images are too big, what do I do?

The devel image tags are large since the CUDA toolkit ships with many libraries, a compiler and various command-line tools.
As a general rule of thumb, you shouldn't ship your application with its build-time dependencies. We recommend using multi-stage builds for this purpose (see the sketch below); your final container image should use our runtime or base images.
As of CUDA 9.0, we ship a base image tag which bundles the bare minimum of dependencies.
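
A minimal multi-stage Dockerfile sketch (multi-stage builds require Docker 17.05 or later; the tags and file names are illustrative):

FROM nvidia/cuda:9.0-devel AS build
COPY app.cu /src/
RUN nvcc -o /src/app /src/app.cu

FROM nvidia/cuda:9.0-base
COPY --from=build /src/app /usr/local/bin/app
CMD ["app"]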

Ecosystem enablement

Do you support Docker Swarm mode?

Not currently; SwarmKit support is still being worked on in the upstream Moby project. You can track our progress here.

Do you support Docker Compose?

Yes, as long as you configure your Docker daemon to use the nvidia runtime as the default, you will be able to use docker-compose with GPU support. There is however an issue to relax this requirement.
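
A minimal docker-compose.yml sketch, assuming nvidia is already the daemon's default runtime:

version: '3'
services:
  cuda:
    image: nvidia/cuda
    command: nvidia-smi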

Do you support Kubernetes?

Since Kubernetes 1.8, the recommended way is to use our official device plugin. Note that this is still alpha support.
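
With the device plugin deployed, pods request GPUs through the nvidia.com/gpu extended resource; a minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda
      image: nvidia/cuda
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1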
