Basic Docker Commands

- docker version - Displays Docker version information.
  docker version
- docker info - Provides more detailed information about the Docker installation.
  docker info
- docker run - Runs a command in a new container.
  docker run hello-world
- docker ps - Lists running containers. Use -a to list all containers (running and stopped).
  docker ps
  docker ps -a
- docker stop - Stops one or more running containers.
  docker stop <container_id_or_name>
- docker start - Starts one or more stopped containers.
  docker start <container_id_or_name>
- docker restart - Restarts a running container.
  docker restart <container_id_or_name>
- docker rm - Removes one or more containers.
  docker rm <container_id_or_name>
- docker rmi - Removes one or more images.
  docker rmi <image_id_or_name>
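Taken together, the container-lifecycle commands above can be sketched as one short session (a hedged example, assuming a local Docker daemon; the nginx image and the name web are illustrative):

```shell
# Start a detached nginx container with a known name
docker run -d --name web nginx

# Confirm it is running
docker ps

# Stop it; it now shows up only with -a
docker stop web
docker ps -a

# Restart, stop again, and remove the container
docker restart web
docker stop web
docker rm web
```

Naming the container with --name makes the later stop/start/rm commands easier than copying container IDs from docker ps.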
Image Management

- docker images - Lists the Docker images available locally.
  docker images
- docker pull - Pulls an image or a repository from a registry. Omitting a tag (as below) pulls the latest tag; a specific version can be requested with image:tag.
  docker pull ubuntu
- docker build - Builds a Docker image from a Dockerfile.
  docker build -t myimage .
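As a minimal sketch of the build workflow, a one-instruction Dockerfile can be written, built, and run like this (the image name myimage matches the example above; the alpine base and echo command are just placeholders):

```shell
# Write a minimal Dockerfile in the current directory
cat > Dockerfile <<'EOF'
FROM alpine
CMD ["echo", "hello from myimage"]
EOF

# Build it, tagging the result, then run it once and auto-remove the container
docker build -t myimage .
docker run --rm myimage
```

The trailing . in docker build is the build context: the directory whose files (including the Dockerfile) are sent to the Docker daemon.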
Network Management

- docker network ls - Lists networks.
  docker network ls
- docker network create - Creates a new network.
  docker network create my-network
- docker network rm - Removes one or more networks.
  docker network rm my-network
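The main benefit of a user-defined network is that containers attached to it can reach each other by container name. A hedged sketch (the redis image and the names db and my-network are illustrative):

```shell
docker network create my-network

# Start a server on the network, then reach it by name from a second container
docker run -d --name db --network my-network redis
docker run --rm --network my-network redis redis-cli -h db ping

# Clean up the container and the network
docker stop db && docker rm db
docker network rm my-network
```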
Docker Compose Commands

- docker-compose up - Builds, (re)creates, starts, and attaches to the containers for a service.
  docker-compose up
- docker-compose down - Stops and removes the containers and networks created by up. By default it does not remove volumes or images; add -v to also remove volumes and --rmi to remove images.
  docker-compose down
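docker-compose reads its service definitions from a docker-compose.yml file in the current directory. A minimal example (the service name, image, and port mapping are illustrative):

```shell
# Write a minimal compose file defining one nginx service on host port 8080
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
EOF

# Start the service in the background, then tear everything down again
docker-compose up -d
docker-compose down
```

The -d flag detaches up from the terminal, mirroring docker run -d.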
These commands are common in daily Docker work and are essential for managing containers and images effectively. Remember to replace placeholders (like <container_id_or_name> or <image_id_or_name>) with actual values from your Docker environment.
With GPU
When you have already created and started a Docker container with NVIDIA GPU support, using it through a terminal involves a similar process to accessing any Docker container, as described previously. The difference lies in ensuring that the container was properly set up to use the NVIDIA GPU, which involves having the appropriate NVIDIA Docker configurations.
Below are detailed steps on how to access and use your NVIDIA GPU-enabled Docker container from a terminal:
1. Verify GPU Access in the Container
Before diving into accessing the container, it’s useful to first confirm that your container has access to the GPU. You can check this by running nvidia-smi inside the container:
docker exec -it <container_name_or_id> nvidia-smi
This command should output information about the GPU, indicating that the container has access to it. If it does, you can proceed to interact with the container normally.
2. Accessing the Container
To access the container, use the docker exec command to start an interactive shell session:
docker exec -it <container_name_or_id> /bin/bash
Replace <container_name_or_id> with the actual name or ID of your container. You can find this by listing all running containers with docker ps.
3. Running GPU-Accelerated Programs
Inside the container, you can execute any installed GPU-accelerated programs. For example, if you have TensorFlow installed in a container configured for GPU, you can start a Python session and import TensorFlow to verify it recognizes the GPU:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
This Python code should list the available GPUs if TensorFlow is set up correctly to use the GPU.
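The same check can also be run non-interactively from the host, without opening a shell in the container first (the container name my_gpu_container is an example, matching the session below):

```shell
# Run the TensorFlow GPU check directly via docker exec
docker exec my_gpu_container python -c \
  "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

This is convenient for scripting a quick health check of a GPU container.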
4. Exiting the Container
To exit the container terminal without stopping the container, simply type exit or press Ctrl-D.
Example Session
Here’s a quick recap of how the flow might look:
- List containers (to find your specific container):
  docker ps
- Check GPU access (using nvidia-smi):
  docker exec -it my_gpu_container nvidia-smi
- Access the container:
  docker exec -it my_gpu_container /bin/bash
- Run Python and check TensorFlow GPU (inside the container):
  python
  >>> import tensorflow as tf
  >>> print(tf.config.list_physical_devices('GPU'))
- Exit when done:
  exit
Troubleshooting
If the nvidia-smi command does not show the GPUs, or if TensorFlow does not recognize the GPU, ensure that:
- Your container was started with the --gpus all flag or a similar GPU specification.
- The NVIDIA Container Toolkit (the NVIDIA Docker runtime) is correctly installed and configured on your host system.
- The Docker image you are using is CUDA-capable and includes the necessary NVIDIA libraries.
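If in doubt, a quick way to confirm the host-side setup is to start a fresh throwaway container with GPU access and run nvidia-smi directly (the CUDA image tag below is an example; pick one matching your driver and CUDA version):

```shell
# Should print the same GPU table that nvidia-smi prints on the host
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If this fails while nvidia-smi works on the host, the problem is in the Docker/NVIDIA runtime layer rather than in your container.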
By following these steps, you can effectively use and interact with your NVIDIA GPU-accelerated Docker container from the terminal.