Frequently used Docker commands on Ubuntu

Basic Docker Commands

  • docker version

    • Displays Docker version information.
    docker version
    
  • docker info

    • Provides more detailed information about the Docker installation.
    docker info
    
  • docker run

    • Runs a command in a new container; see the lifecycle sketch after this list.
    docker run hello-world
    
  • docker ps

    • Lists running containers. Use -a to list all containers (running and stopped).
    docker ps
    docker ps -a
    
  • docker stop

    • Stops one or more running containers.
    docker stop <container_id_or_name>
    
  • docker start

    • Starts one or more stopped containers.
    docker start <container_id_or_name>
    
  • docker restart

    • Restarts one or more containers.
    docker restart <container_id_or_name>
    
  • docker rm

    • Removes one or more stopped containers (use -f to force-remove a running one).
    docker rm <container_id_or_name>
    
  • docker rmi

    • Removes one or more images.
    docker rmi <image_id_or_name>
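
To see how these basic commands fit together, here is a short lifecycle sketch. The image (nginx), the port mapping, and the container name (web-test) are placeholders chosen for illustration; substitute whatever you actually use.

    # Start a container in the background, publishing host port 8080 to port 80 in the container
    docker run -d --name web-test -p 8080:80 nginx

    # Confirm it is running
    docker ps

    # Stop it, start it again, then stop and remove it
    docker stop web-test
    docker start web-test
    docker stop web-test
    docker rm web-test

    # Remove the image once no container uses it
    docker rmi nginx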
    

Image Management

  • docker images

    • Lists the Docker images available locally.
    docker images
    
  • docker pull

    • Pulls an image or a repository from a registry.
    docker pull ubuntu
    
  • docker build

    • Builds an image from a Dockerfile and a build context; a sample Dockerfile is sketched after this list.
    docker build -t myimage .
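
To make docker build more concrete, the sketch below shows a minimal, hypothetical Dockerfile together with the commands to build and run it. The file contents and the tag myimage are assumptions for illustration only.

    # Dockerfile (saved in the current directory)
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y curl
    CMD ["bash"]

    # Build an image from that Dockerfile and tag it, then verify and run it
    docker build -t myimage .
    docker images
    docker run -it --rm myimage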
    

Network Management

  • docker network ls

    • Lists networks.
    docker network ls
    
  • docker network create

    • Creates a new user-defined network; see the example after this list.
    docker network create my-network
    
  • docker network rm

    • Removes one or more networks.
    docker network rm my-network
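
User-defined networks are mainly useful because containers attached to the same network can reach each other by name. The sketch below illustrates that with the placeholder names my-network, db, and app; adjust the images and names to your setup.

    # Create the network and attach a container to it
    docker network create my-network
    docker run -d --name db --network my-network redis

    # A second container on the same network can resolve the first one by name
    docker run -it --rm --name app --network my-network alpine ping -c 3 db

    # Clean up: the network can only be removed once no containers are attached
    docker stop db && docker rm db
    docker network rm my-network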
    

Docker Compose Commands

  • docker-compose up

    • Builds, (re)creates, starts, and attaches to containers for a service; a minimal compose file is sketched after this list.
    docker-compose up
    
  • docker-compose down

    • Stops containers and removes the containers and networks created by up; add --volumes or --rmi to also remove volumes or images.
    docker-compose down
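
For context, a minimal docker-compose.yml that these commands could operate on might look like the sketch below; the service names and images are illustrative only. Note that on newer Docker installations the same workflow is also available through the Compose plugin as docker compose up and docker compose down.

    # docker-compose.yml (in the project directory)
    services:
      web:
        image: nginx
        ports:
          - "8080:80"
      cache:
        image: redis

    # Run from the same directory
    docker-compose up -d     # start the stack in the background
    docker-compose down      # stop and remove its containers and network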
    

These commands are quite common for daily use in Docker environments and are essential for managing Docker containers and images effectively. Remember to replace placeholders (like <container_id_or_name> or <image_id_or_name>) with actual values from your Docker environment.

With GPU

When you have already created and started a Docker container with NVIDIA GPU support, using it from a terminal works the same way as accessing any other Docker container, as described previously. The difference is that the container must have been set up with GPU access in the first place, which requires the NVIDIA Container Toolkit on the host and a GPU flag (such as --gpus all) when the container was created.
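
If such a container does not exist yet, it would typically be created along the following lines; the image tag and the name my_gpu_container are assumptions, so substitute the GPU-enabled image you actually use.

# Start a detached container with access to all GPUs (image tag is illustrative)
docker run -d --gpus all --name my_gpu_container tensorflow/tensorflow:latest-gpu sleep infinity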

Below are detailed steps on how to access and use your NVIDIA GPU-enabled Docker container from a terminal:

1. Verify GPU Access in the Container

Before diving into accessing the container, it’s useful to first confirm that your container has access to the GPU. You can check this by running a command like nvidia-smi inside the container:

docker exec -it <container_name_or_id> nvidia-smi

This command should output information about the GPU, indicating that the container has access to it. If it does, you can proceed to interact with the container normally.

2. Accessing the Container

To access the container, you use the docker exec command to start an interactive shell session:

docker exec -it <container_name_or_id> /bin/bash

Replace <container_name_or_id> with the actual name or ID of your container. You can find this by listing all running containers with docker ps.

3. Running GPU-Accelerated Programs

Inside the container, you can execute any installed GPU-accelerated programs. For example, if you have TensorFlow installed in a container configured for GPU, you can start a Python session and import TensorFlow to verify it recognizes the GPU:

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))

This Python code should list the available GPUs if TensorFlow is set up correctly to use the GPU.

4. Exiting the Container

To exit the container terminal without stopping the container, you can simply type exit or press Ctrl-D.

Example Session

Here’s a quick recap of how the flow might look:

  1. List Containers (to find your specific container):

    docker ps
    
  2. Check GPU Access (using nvidia-smi):

    docker exec -it my_gpu_container nvidia-smi
    
  3. Access the Container:

    docker exec -it my_gpu_container /bin/bash
    
  4. Run Python and Check TensorFlow GPU (inside the container):

    python
    >>> import tensorflow as tf
    >>> print(tf.config.list_physical_devices('GPU'))
    
  5. Exit When Done:

    exit
    

Troubleshooting

If the nvidia-smi command does not show the GPUs or if TensorFlow does not recognize the GPU, ensure that:

  • Your container was started with the --gpus all flag or a similar GPU specification (a quick host-side test follows this list).
  • The NVIDIA Container Toolkit (which provides the NVIDIA runtime for Docker) is correctly installed and configured on your host system.
  • The Docker image you are using is CUDA-capable and has the necessary NVIDIA libraries.
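
As a quick host-side test, you can run nvidia-smi in a throwaway CUDA base container; if this works, the host configuration is fine and the problem lies with your own container or image. The CUDA image tag below is an assumption and may need to be adjusted to one available for your system:

docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi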

By following these steps, you can effectively use and interact with your NVIDIA GPU-accelerated Docker container from the terminal.
