Docker Manual

This post demonstrates the concepts of containers and Docker. It also covers the basic operations of Docker, including container data volumes, Dockerfiles, Docker Compose, and other operations related to images and containers.

Overview

In brief, Docker is a platform that enables developers to develop, run, and deliver applications. Docker makes it convenient to deliver tested applications by separating the applications from the infrastructure. By taking advantage of the methodologies Docker provides, shipping, testing, and deploying code becomes significantly simpler.

Principle

Docker is an open-source application container engine that enables developers to pack their applications and dependencies into a portable image for distribution to any other popular Windows, Linux, or Mac machine, as well as for virtualization. Containers are isolated from each other without shared interfaces, so one container cannot influence the operating environment of another. Taking ocean freight as a metaphor, Docker assumes that delivering an operating environment is like ocean shipping: the operating system is like a cargo ship, every piece of software running on the OS is like a container, and users can freely assemble the operating environment through standardized means. Meanwhile, the content of a container can be defined freely by the user, or manufactured by professional crews.

Docker VS VM

Because Docker and virtual machines both provide an isolated operating environment for applications, and because Docker is so flexible, it is easy to mistake Docker for a lightweight virtual machine. However, they are completely different in terms of infrastructure.

Docker can accomplish all the tasks that a VM can. A VM virtualizes an entire independent system kernel when the virtual machine is created; in contrast, all the containers that Docker creates use the system kernel of the host machine, which saves the resources needed to deploy a virtualized environment. Thus, Docker leaves better performance for applications and takes fewer system resources than a VM.

(Image: https://www.adminadminpodcast.co.uk/wp-content/uploads/2016/09/dockervsvm.jpg)

Docker VS VM
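As a quick illustration of the kernel-sharing point above (the centos image here is only an example), you can compare the kernel version reported by the host with the one reported inside a container; because containers reuse the host kernel, the two match:

# Kernel version on the host
uname -r

# Kernel version inside a container: identical, because the container
# shares the host's kernel instead of booting its own
docker run --rm centos uname -r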

The Fundamental Architecture of Docker

Image

An image is like a template: from these templates you can create container services, and a single image can create many containers.
To elaborate, an image is a lightweight, executable, independent software package that packs the environment an application requires, together with the software developed on top of that environment. It contains all the code, libraries, environment variables, and configuration files needed to execute the particular software.

Principle of Image Loading
UnionFS (Union File System)

UnionFS (Union File System) is a layered, lightweight, high-performance file system. It allows files to be modified layer by layer, and it can mount different directories under the same virtual filesystem. UnionFS is the basis of the Docker image system, which is why images can be inherited layer by layer.

All Docker images originate from a fundamental base image layer; when contents are changed or added, the new operation is stacked on top of the parent layers.
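A quick way to observe this layering (the nginx image is only an example) is to list the layers recorded in an image's build history:

# Each line corresponds to one layer committed by a build instruction
docker history nginx:latest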

Container

Docker uses container technology to run a single application or a set of applications; a container is created from an image.

A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

Fundamentally, a container is nothing but a running process, with some added encapsulation features applied to it in order to keep it isolated from the host and from other containers. One of the most important aspects of container isolation is that each container interacts with its own private filesystem.

By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
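A minimal sketch of that last point (the container and file names are invented for illustration): anything written to a container's writable layer is lost once the container is removed, unless it was placed in persistent storage such as a volume.

# Write a file inside a container's private filesystem
docker run -it --name demo centos /bin/bash
# (inside the container)
echo "hello" > /tmp/hello.txt
exit

# Removing the container discards everything in its writable layer
docker rm demo
# A fresh container from the same image starts from a clean filesystem
docker run --rm centos cat /tmp/hello.txt   # fails: no such file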

Repository

A repository is the place where images are stored, and repositories are divided into public repositories and private repositories.
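A minimal sketch of the typical repository workflow (the account name and tag below are placeholders, not values from this post): log in, tag a local image with the repository name, push it, and pull it elsewhere.

docker login                               # authenticate against Docker Hub or a private registry
docker tag centos your-account/centos:v1   # tag a local image with the repository name
docker push your-account/centos:v1         # upload the image to the repository
docker pull your-account/centos:v1         # download it again on any other machine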

How does docker work

Docker has a client-server architecture: the Docker daemon runs on the host and is accessible from the client. When the Docker server receives a command from the Docker client, it executes that command.

The docker client talks to the docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.


Docker Architecture
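Because the client only needs a connection to the daemon, the same docker CLI can also drive a daemon on another machine. A minimal sketch, assuming a remote daemon has already been exposed on TCP port 2375 (something you must configure and secure yourself; the address is a placeholder):

# Point the client at a remote daemon for a single command...
docker -H tcp://192.168.1.100:2375 ps

# ...or for the whole shell session
export DOCKER_HOST=tcp://192.168.1.100:2375
docker info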

Docker Network

Docker0

When Docker starts, a virtual bridge named docker0 is created on the host, and the Docker containers started on this host connect to this virtual bridge. A virtual bridge works like a physical network switch, so all the containers on the host machine are connected to a layer-2 network via this switch.

Next, to assign IPs to the containers, Docker selects an IP address and subnet from the private ranges defined by RFC 1918 and assigns it to docker0. A container connected to docker0 picks an unused IP from that subnet. By default, Docker uses the network segment 172.17.0.0/16 and allocates 172.17.0.1/16 to the docker0 bridge (docker0 can be seen when using the ifconfig command on the host; it can be considered the management interface of the bridge and acts as a virtual network card on the host). The network topology in a stand-alone environment is as follows, with the host address 10.10.0.186/24.


Docker0 Virtual network
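You can check this on your own host with standard commands (the exact output differs from machine to machine); a brief sketch:

# Show the docker0 bridge and the address Docker assigned to it
ip addr show docker0

# Show the same subnet, plus the connected containers, from Docker's side
docker network inspect bridge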

Veth-pair

When the docker1 and docker2 containers are created, the veth-pair technique connects each container's network interface to docker0 (a veth pair is a pair of virtual network interfaces; traffic entering one end comes out the other).


Network Bridge
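One way to observe these pairs (interface names are generated by Docker and will differ on your host): list the veth interfaces on the host and the matching eth0 inside a container.

# On the host: one vethXXXX interface appears per running container
ip link show type veth

# Inside a container (assuming the iproute package is available there):
# eth0 is the other end of that veth pair
docker exec [Container ID] ip addr show eth0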

Network Drivers

Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default and provide core networking functionality (a brief usage sketch follows this list):

  • Bridge

    • The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
      This is usually the best choice when you need multiple containers to communicate on the same Docker host.
  • Host

    • For standalone containers, this will remove network isolation between the container and the Docker host, and use the host’s networking directly. host is only available for swarm services on Docker 17.06 and higher.
      This is the best choice when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
  • None

    • For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services.
  • Network Plugins

    • You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor’s documentation for installing and using a given network plugin.
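A minimal sketch of how you would select one of these drivers when starting a standalone container (the nginx image and the container names are only examples):

# Default bridge network (no --network flag needed)
docker run -d --name web-bridge nginx

# Host networking: the container shares the host's network stack directly
docker run -d --name web-host --network host nginx

# No networking at all
docker run -d --name web-none --network none nginx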

Using Bridging Networks

In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network.

The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. However, bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.

Now, for bridge networks, we can have either the default bridge network or the user-defined bridge network.

When you start Docker, a default bridge network (also called bridge) is created automatically, and newly-started containers connect to it unless otherwise specified. You can also create user-defined custom bridge networks. User-defined bridge networks are superior to the default bridge network.

  • User-defined bridges provide automatic DNS resolution between containers. Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
    Imagine an application with a web front-end and a database back-end. If you call your containers web and db, the web container can connect to the db container at db, no matter which Docker host the application stack is running on.
    If you run the same application stack on the default bridge network, you need to manually create links between the containers (using the legacy --link flag). These links need to be created in both directions, so you can see this gets complex with more than two containers which need to communicate. Alternatively, you can manipulate the /etc/hosts files within the containers, but this creates problems that are difficult to debug.

  • User-defined bridges provide better isolation. All containers without a --network specified are attached to the default bridge network. This can be a risk, as unrelated stacks/services/containers are then able to communicate.
    Using a user-defined network provides a scoped network in which only containers attached to that network are able to communicate.
    If you use a user-defined bridge, during a container’s lifetime, you can connect or disconnect it from user-defined networks on the fly. To remove a container from the default bridge network, you need to stop the container and recreate it with different network options.

In general, containers connected to the same user-defined bridge network effectively expose all ports to each other. For a port to be accessible to containers on other networks, or to non-Docker hosts, that port must be published using the -p or --publish flag.
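A small example of publishing a port (the image, name, and ports are arbitrary): map container port 80 to host port 8080 so clients outside the bridge network can reach the service.

# Publish container port 80 on host port 8080
docker run -d --name published-web -p 8080:80 nginx

# From the host, or from any machine that can reach the host:
curl http://localhost:8080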

Basic Commands


Install Docker

# 1. Uninstall the old version
yum remove docker
# 2. Install the required packages
yum install -y yum-utils
# 3. Set the image repository (using the Aliyun mirror)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# 4. Refresh the yum cache
yum makecache fast
# 5. Install Docker
yum install docker-ce docker-ce-cli containerd.io
# 6. Start Docker
systemctl start docker
# 7. Check the installation of Docker
docker version
docker run hello-world
# 8. Uninstall Docker
yum remove docker-ce docker-ce-cli containerd.io
rm -rf /var/lib/docker   # /var/lib/docker is the default data directory of Docker

The process of “docker run”

docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Operation Commands

Help Commands

docker version
docker info

Image Commands

# 1. Check the information of all the images
[root@wenhao /]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              latest              831691599b88        3 weeks ago         215MB
nginx               latest              2622e6cca7eb        4 weeks ago         132MB
mysql               latest              be0dbf01a0f3        4 weeks ago         541MB
hello-world         latest              bf756fb1ae65        6 months ago        13.3kB
# Parameters:
   -a, --all   # List all the images
   -q, --quiet # Only list the image IDs

# Images Commands
    # Search for particular images
[root@wenhao /]# docker search mysql
NAME                              DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
mysql                             MySQL is a widely used, open-source relation…   9716                [OK]             
mariadb                           MariaDB is a community-developed fork of MyS…   3539                [OK]             
mysql/mysql-server                Optimized MySQL Server Docker images. Create…   708                                     [OK]
centos/mysql-57-centos7           MySQL 5.7 SQL database server                   77                                    
mysql/mysql-cluster               Experimental MySQL Cluster Docker images. Cr…   72                                   
centurylink/mysql                 Image containing mysql. Optimized to be link…   61                                      [OK]
    # Download images
[root@wenhao /]# docker pull mysql
    # Delete images
[root@wenhao /]# docker rmi -f [ID]
[root@wenhao /]# docker rmi -f $(docker images -aq)  #Delete all the images

Container commands

# Run an image as a container
docker run [parameters] image
# Parameters:
--name="Name" The name of the container
-d            Run in background (detached) mode
-it           Run in interactive mode with a terminal
-p            Map a port of the container
              -p [Host port]:[Container port]

# Test and go inside a container
[root@wenhao /]# docker run -it centos /bin/bash
[root@08978cfe5976 /]# ls # Check the internal centos
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@08978cfe5976 /]# exit
exit

# To list the running containers
docker ps
# parameters
-a            # List all containers, including previously run ones
-n=?          # Display the n most recently created containers
-q            # Only display the container IDs
-aq           # Display the IDs of all containers

# Exit from the container
exit          # Stop the container and exit
Ctrl + P + Q  # Leave the container running and exit

# Delete a container
docker rm [Container ID]      # Delete the specified container (a running container cannot be deleted without -f)
docker rm -f $(docker ps -aq) # Delete all containers

# Start and stop a container
docker start [container ID]
docker restart [container ID]
docker stop [container ID]
docker kill [container ID]

# Check the logs
docker logs -f -t --tail [number] [Container ID]
# Parameters
-t, -f           # Show timestamps and follow the log output
--tail [number]  # Show only the last [number] lines of the log

# Check the info of the container
docker inspect [Container ID]

# Go inside a running container
docker exec -it [Container ID] /bin/bash   # Start a new terminal (a new process) inside the container and operate there
docker attach [Container ID]               # Attach to the container's running terminal; does not start a new process

[root@wenhao /]docker run -it centos /bin/bash
[root@d09e549f73bc /]#
[root@wenhao /]docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
d09e549f73bc        centos              "/bin/bash"         19 seconds ago      Up 16 seconds                           cranky_kalam

[root@wenhao /]docker exec -it d09 /bin/bash
[root@d09e549f73bc /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

To copy one file from the container to host machine

docker cp [Container ID]:[Container path] [Host path]

The instance below copies test.java, located at /home in container fff, to the host machine.


cp Command
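The same instance written out as a command (the host destination /home is an assumption based on the screenshot's description):

# Copy /home/test.java out of container "fff" into /home on the host
docker cp fff:/home/test.java /home
ls /home   # test.java now appears on the host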

Commit images

docker commit -m="Description info" [Container ID] [Image name]:[TAG] # Commit the container as a new image
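A small worked example (the container name, message, and tag are placeholders): modify a running container, commit it as a new image, and confirm it shows up locally.

# Create a file inside a running centos container, then commit the result
docker run -it --name mycontainer centos /bin/bash
# (inside the container)
touch /home/added-by-me.txt
exit

docker commit -m="centos with an extra file" mycontainer mycentos:1.0
docker images   # mycentos:1.0 is now listed alongside the original centos image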

Container Data volume

A container data volume is a technique that enables data in a container to be synchronized with the host machine or with other containers. Volumes allow configuration to be passed between containers, and a volume's lifecycle lasts until no container is using it anymore. However, once the data has been synchronized to the host machine, removing the container does not delete the data on the host.

Connection Between Host & Container
# Apply -v to create the container data volume
[root@wenhao /]docker run -it -v [host directory]:[container directory] [image] /bin/bash

For example, to connect a MySQL container with the local machine: connect the directory /home/mysql/conf on the host machine with /etc/mysql/conf.d in the container,
and connect the directory /home/mysql/data on the host machine with /var/lib/mysql in the container.

-e sets a configuration value (an environment variable) for the container.
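A sketch of the full command (the container name, published port, and root password are assumptions; the mount paths are the ones described above):

docker run -d -p 3310:3306 \
  -v /home/mysql/conf:/etc/mysql/conf.d \
  -v /home/mysql/data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=123456 \
  --name mysql01 \
  mysql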

After the connection, the files in the container are synchronized with the host directory connected to them.

Container Data Volume

Container Data Volume Command

To create our own images: Dockerfile

# Make a file named dockerfile
# The content in the file
FROM centos

VOLUME ["volume01","volume02"]

CMD echo "----end----"


Create New Images Using Dockerfile
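A sketch of building and running this Dockerfile (the tag dannycentos:1.0 is an assumption, chosen to match the image name used later in this section):

# -f points at the dockerfile we just wrote, -t names the resulting image
docker build -f dockerfile -t dannycentos:1.0 .

# Run a container from the new image; volume01 and volume02 are mounted automatically
docker run -it dannycentos:1.0 /bin/bash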

Then, when we run a container from the image we just built, we can see the two data volumes we just created.


The Two Volumes

Then, when we inspect the container, the “Mounts” section indicates that the data volume connections are successful.


Two Volumes in Mounts

Next, after creating a file in the container, we find that the file also appears in the connected host directory.


Creating container.TXT in Container


Appearance of the File
Connections Between Two Containers

Having demonstrated data volumes between the host and a container, container data volumes are also frequently applied between multiple containers.

Firstly, create the host (parent) container named docker01 from the image "dannycentos".


create the first host container

Next, create the child container named docker2 using --volumes-from.

create the second child container

After making a file named docker1 in the volume01 directory of the host container, we cd into the volume folder in the child container and find the docker1 file there. The result indicates that the data volume is working.

Check the Connection Between the Two Containers
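A sketch of these two steps as commands (the image tag is assumed from the build above; detach from docker01 with Ctrl + P + Q before starting the second container):

# Parent container owning the volumes defined in the image
docker run -it --name docker01 dannycentos:1.0

# Child container that mounts the same volumes as docker01
docker run -it --name docker2 --volumes-from docker01 dannycentos:1.0

# A file created under volume01 in docker01 is now visible in docker2, and vice versa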

DockerFile

A Dockerfile is the file used to build a Docker image; it contains the command script for the build.
Steps:

  1. Write a Dockerfile
  2. “docker build” the file to turn it into an image
  3. “docker run” to run the image
  4. “docker push” to publish the image (to Docker Hub or an Aliyun image repository)

When writing a Dockerfile,

  1. Every instruction must be written in capital letters
  2. The instructions run from top to bottom
  3. # indicates a comment
  4. Every instruction commits a new image layer.

DockerFile Commands

FROM               # The base image; everything below builds on it
MAINTAINER         # The author of the image, usually name + e-mail
RUN                # Commands to execute while building the image
ADD                # Add files (local archives are extracted) into the image
WORKDIR            # The working directory of the image
VOLUME             # The directories to be mounted as volumes
EXPOSE             # Declare the port the container listens on
CMD                # The command to run when the container starts; only the last CMD takes effect, and it can be replaced at run time
ENTRYPOINT         # The command to run when the container starts; arguments given at run time are appended to it
COPY               # Similar to ADD, copies files into the image
ENV                # Set environment variables
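An example that exercises several of these instructions (the file, port, and e-mail are invented purely for illustration; readme.txt is assumed to exist in the build context):

FROM centos
MAINTAINER danny<danny@example.com>

ENV MYPATH /usr/local
WORKDIR $MYPATH

COPY readme.txt $MYPATH/readme.txt   # assumes readme.txt sits next to the Dockerfile
EXPOSE 8080

CMD echo $MYPATH
CMD echo "----end----"
CMD /bin/bash                        # only this last CMD takes effect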

Create one unique centos:

  1. Create a dockerfile in the home directory and write the file.
    In this file, enable the vim and ifconfig commands (a sketch of such a Dockerfile follows this list).

    Write DockerFile
  2. Build this image and name it mycentos0.1

    Build image
  3. Run it and test it

    Run The Image
  4. Run the original centos image; this shows that the base image does not support the vim and ifconfig commands

    Original Centos
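A sketch of the Dockerfile from step 1 (the package names are assumptions; on CentOS, ifconfig is provided by the net-tools package):

FROM centos
MAINTAINER danny<danny@example.com>

ENV MYPATH /usr/local
WORKDIR $MYPATH

# Install the tools the base image lacks
RUN yum -y install vim
RUN yum -y install net-tools

EXPOSE 80
CMD /bin/bash

Building it as in step 2 would then be: docker build -f dockerfile -t mycentos:0.1 .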


Creating and Removing a user-defined bridge

  • You can use the docker network create <networkName> command to create a user-defined bridge network.

For example:

$ docker network create my-net

You can also specify the subnet, the IP address range, the gateway, and other options. See the docker network create reference or the output of docker network create --help for details.

  • If you don’t need a user-defined bridge, you can use the docker network rm <networkName> command to remove a user-defined bridge network. If containers are currently connected to the network, disconnect them first.

For example:

$ docker network rm my-net

Connecting and Disconnecting from a user-defined bridge

  • You can specify the connection to a user-defined network in two ways:

    • with the --network flag when you create a container

    • with the docker network connect <networkName> <containerName> command when you have a running container

For example:

  • When you create a new container, you can specify one or more --network flags. This example connects an Nginx container to the my-net network. It also publishes port 80 in the container to port 8080 on the Docker host, so external clients can access that port. Any other container connected to the my-net network has access to all ports on the my-nginx container, and vice versa.
$ docker create --name my-nginx \
  --network my-net \
  --publish 8080:80 \
 nginx:latest
  • To connect a running container to an existing user-defined bridge, use the docker network connect command. The following command connects an already-running my-nginx container to an already-existing my-net network:
$ docker network connect my-net my-nginx

Viewing and Configuring a user-defined bridge

  1. First, after you have created your user-defined bridge, you can use docker network ls to view the networks that docker has:

    $ docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    048f791e7d38        bridge              bridge              local
    da8d919a1530        host                host                local
    3ed7ac5fab75        my-net              bridge              local
    72fae1701634        none                null                local
    
  2. Then you can use docker inspect to see details about your bridge:

    $ docker inspect my-net
    [
     {
         "Name": "my-net",
        "Id": "3ed7ac5fab75bec54ccbc6ed2b54aeba9cec66674f56adb3a5e42432cb7fc94b",
        "Created": "2020-07-16T06:32:34.068088939Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
     }
    ]
    

    Here you see that:

    • There are no containers attached to it: "Containers": {}

    • The Gateway is: "Gateway": "172.18.0.1"

  3. Now, to test your user-defined bridge, you can create 4 containers: the first two will be connected to the bridge you created, the third will only be connected to the default bridge, and the fourth will be connected to both:

    $ docker run -dit --name centos1 --network my-net centos /bin/bash
    b3a8427c7365079a124d7058428d363ff8f581a71bdc3a7a9ac2210033bca875
    
    $ docker run -dit --name centos2 --network my-net centos /bin/bash
    57f2af0d01207e3a1b0da4574b9beee129430c26228eae710871346fc4641cc8
    
    $ docker run -dit --name centos3 centos /bin/bash
    a9ab787762990fc741971f22fee228e2ca3eed529e2f5c734e0aeb21dc1dbd31
    
    $ docker run -dit --name centos4 --network my-net centos /bin/bash
    8ef8f86922235bbc7dd51b4db4c447b89930620537000db77fde786be33fde02
    
    $ docker network connect bridge centos4
    
  4. Now, if all 4 containers are running properly, you can then inspect the bridge and my-net networks to see that the containers are included properly in the "Containers" section. You should also be able to see the IP address that each one has been assigned.

  5. Now, you can test their communication ability by connecting each one of them to the terminal (so that you can do the stdin and see the stdout in your terminal) with the docker container attach <container-id> command.

    On user-defined networks like my-net, containers can not only communicate by IP address, but can also resolve a container name to an IP address. This capability is called automatic service discovery. However, you will see that containers not connected to the same bridge cannot communicate with each other (both centos1 and centos2 cannot communicate with centos3).

    C:\Users\pc> docker container attach centos1
    
    [root@b3a8427c7365 /]# ping -c 2 centos2
    
    PING centos2 (172.18.0.3) 56(84) bytes of data.
    64 bytes from centos2.my-net (172.18.0.3): icmp_seq=1   ttl=64 time=0.060 ms
    64 bytes from centos2.my-net (172.18.0.3): icmp_seq=2 ttl=64 time=0.045 ms
    
    --- centos2 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 3ms
    rtt min/avg/max/mdev = 0.045/0.052/0.060/0.010 ms
    
    [root@b3a8427c7365 /]# ping -c 2 centos4
    PING centos4 (172.18.0.4) 56(84) bytes of data.
    64 bytes from centos4.my-net (172.18.0.4): icmp_seq=1 ttl=64 time=0.064 ms
    64 bytes from centos4.my-net (172.18.0.4): icmp_seq=2 ttl=64 time=0.044 ms
    
    --- centos4 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 14ms
    rtt min/avg/max/mdev = 0.044/0.054/0.064/0.010 ms
    
    [root@b3a8427c7365 /]# ping -c 2 centos1
    PING centos1 (172.18.0.2) 56(84) bytes of data.
    64 bytes from b3a8427c7365 (172.18.0.2): icmp_seq=1 ttl=64 time=0.015 ms
    64 bytes from b3a8427c7365 (172.18.0.2): icmp_seq=2 ttl=64 time=0.028 ms
    
    --- centos1 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 2ms
    rtt min/avg/max/mdev = 0.015/0.021/0.028/0.008 ms
    
    [root@b3a8427c7365 /]# ping -c 2 centos3
    ping: centos3: Name or service not known
    
    

    To exit the attached terminal, use Ctrl+p then Ctrl+q.

  6. Finally, remember that centos4 is connected to both the default bridge network and my-net. It should be able to reach all of the other containers. However, you will need to address centos3 by its IP address, which is the behavior of the default bridge. Attach to it and run the tests.

    C:\Users\pc>docker container attach centos4
    
    [root@8ef8f8692223 /]# ping -c 2 centos1
    PING centos1 (172.18.0.2) 56(84) bytes of data.
    64 bytes from centos1.my-net (172.18.0.2): icmp_seq=1 ttl=64 time=0.112 ms
    64 bytes from centos1.my-net (172.18.0.2): icmp_seq=2 ttl=64 time=0.046 ms
    
    --- centos1 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 8ms
    rtt min/avg/max/mdev = 0.046/0.079/0.112/0.033 ms
    
    [root@8ef8f8692223 /]# ping -c 2 centos2
    PING centos2 (172.18.0.3) 56(84) bytes of data.
    64 bytes from centos2.my-net (172.18.0.3): icmp_seq=1 ttl=64 time=0.079 ms
    64 bytes from centos2.my-net (172.18.0.3): icmp_seq=2 ttl=64 time=0.081 ms
    
    --- centos2 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 40ms
    rtt min/avg/max/mdev = 0.079/0.080/0.081/0.001 ms
    
    [root@8ef8f8692223 /]# ping -c 2 centos3
    ping: centos3: Name or service not known
    
    [root@8ef8f8692223 /]# ping -c 2 172.17.0.2
    PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
    
    --- 172.17.0.2 ping statistics ---
    2 packets transmitted, 0 received, 100% packet loss, time 5ms
    
    [root@8ef8f8692223 /]# ping -c 2 centos4
    PING centos4 (172.18.0.4) 56(84) bytes of data.
    64 bytes from 8ef8f8692223 (172.18.0.4): icmp_seq=1 ttl=64 time=0.017 ms
    64 bytes from 8ef8f8692223 (172.18.0.4): icmp_seq=2 ttl=64 time=0.045 ms
    
    --- centos4 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 64ms
    rtt min/avg/max/mdev = 0.017/0.031/0.045/0.014 ms
    
  7. Now that you have finished your tests, you can stop and remove/disconnect all the containers and the my-net network.

    C:\Users\pc>docker stop centos1 centos2 centos3 centos4
    centos1
    centos2
    centos3
    centos4
    
    C:\Users\pc>docker container rm centos1 centos2 centos3 centos4
    centos1
    centos2
    centos3
    centos4
    
    C:\Users\pc>docker network rm my-net
    my-net
    

    or if you still need those containers:

    C:\Users\pc> docker container stop centos1 centos2 centos3 centos4
    
    C:\Users\pc> docker network disconnect my-net centos1
    
    C:\Users\pc> docker network disconnect my-net centos2
    
    C:\Users\pc> docker network disconnect my-net centos4
    
    C:\Users\pc> docker network rm my-net
    

Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Compose works in all environments: production, staging, development, testing, as well as CI workflows.

Using Compose is basically a three-step process:

  1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  3. Run docker-compose up and Compose starts and runs your entire app.

A docker-compose.yml looks like this:

version: '2.0'
services:
  web:
    build: .
    ports:
    - "5000:5000"
    volumes:
    - .:/code
    - logvolume01:/var/log
    links:
    - redis
  redis:
    image: redis
volumes:
  logvolume01: {}

Compose has commands for managing the whole lifecycle of your application (a brief usage sketch follows this list):

  • Start, stop, and rebuild services
  • View the status of running services
  • Stream the log output of running services
  • Run a one-off command on a service
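A sketch of those lifecycle commands, using the docker-compose.yml above (run from the directory that contains the file):

docker-compose up -d          # create and start all the services in the background
docker-compose ps             # view the status of the running services
docker-compose logs -f web    # stream the log output of the web service
docker-compose run web env    # run a one-off command on a service
docker-compose stop           # stop the services
docker-compose down           # stop and remove the containers and networks created by up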