Docker: A Container Management Tool

Docker Ecosystem Architecture and Deployment

1. Docker Ecosystem Architecture

1.1 Docker Containers Are Everywhere


1.2 Ecosystem Architecture


1.2.1 Docker Host

A host with the Docker daemon installed is a Docker Host; on such a host, containers can be run from container images.

1.2.2 Docker daemon

Manages the containers, container images, container networks, and so on of the Docker Host; the containers themselves are provided by containerd (containerd.io).

1.2.3 Registry

A container image registry stores generated container run templates (images). Users can pull a container image, i.e. a run template, directly from the registry and then run the application it contains. Docker Hub is one example; Harbor can be used to build a private, enterprise-internal image registry.

1.2.4 Docker client

The client tool for the Docker daemon. It communicates with the Docker daemon to carry out user commands. It can be installed on the Docker Host itself or on another host; it only needs to be able to reach the Docker daemon.

1.2.5 Image

A template file of immutable infrastructure created by packaging an application's runtime environment and resources; its main purpose is to serve as the basis for starting containers.

1.2.6 Container

Created from a container image, a container is the environment in which the application runs. It contains all the files from the image plus any files the user adds afterwards; it is the readable and writable layer created on top of the image, and it is where the application actually lives.

1.2.7 Docker Dashboard

Available only on macOS and Windows.

Docker Dashboard provides a simple interface that lets you manage your containers, applications, and images directly from your machine, without using the CLI for core operations.


1.3 Docker Editions

  • Docker-ce: the Docker Community Edition, mainly for individual developers and testing; free of charge
  • Docker-ee: the Docker Enterprise Edition, mainly for enterprise development and application deployment; a paid product with a one-month free trial. In 2020 its availability to Chinese companies was restricted for a time for international political reasons.

2. Docker Deployment

Here we install the Docker CE edition.

2.1 Deploying from a YUM Repository

You can configure the official YUM repository, the Tsinghua University open-source mirror, or the Alibaba Cloud open-source mirror. The Alibaba Cloud mirror is recommended because it is the fastest.

2.1.1 Get the YUM repo file from the Alibaba Cloud open-source mirror


On the Docker host, simply download it into /etc/yum.repos.d with wget.
# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


2.1.2 Install Docker CE

Install it on the Docker host. This uses the stable version from the YUM repository; since versions are updated continuously, the version you get depends on when you install, but usage is essentially the same.

Install docker-ce directly; this package provides the Docker daemon, and yum installs all dependencies automatically, including the Docker client.
# yum -y install docker-ce


2.1.3 Configure the Docker daemon unit file

Because Docker changes the default policy of the FORWARD chain in the CentOS iptables firewall, and because the Docker daemon must pick up a user-supplied daemon.json file, the unit file needs to be adjusted to match your requirements.

# vim /usr/lib/systemd/system/docker.service

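A minimal sketch of the kind of change typically made in the [Service] section (the ExecStartPost line is an assumption about the adjustment described above; dockerd reads /etc/docker/daemon.json by default, so no extra flag is needed for that):

[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT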

2.1.4 Start the Docker service and check the installed version

Reload the systemd unit files
# systemctl daemon-reload

Start the Docker daemon
# systemctl start docker

Enable start on boot
# systemctl enable docker
Check the installed version with the docker version client command
# docker version
Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:41 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:44:05 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

2.2 Deploying from Static Binaries

The official documentation does not recommend this method, mainly because it cannot be updated automatically; use it only when circumstances leave no better option.

Binary installation reference: https://docs.docker.com/engine/install/binaries/


Download the binary archive; it contains the dockerd and docker binaries (among others).
# wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz
Extract the downloaded archive
# tar xf docker-20.10.9.tgz
List the extracted directory
# ls docker
containerd       containerd-shim-runc-v2  docker   docker-init   runc
containerd-shim  ctr                      dockerd  docker-proxy
Install all of the extracted binaries
# cp docker/* /usr/bin/
Run the daemon
# dockerd &

A large amount of output is printed; once it stops, press Enter and Docker is ready to use.

If you need to start the daemon with other options, modify the command above accordingly, or create and edit /etc/docker/daemon.json to add custom configuration options (a sketch follows).
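As a sketch of such a custom configuration (the option values are illustrative), daemon.json might cap container log size like this:

# cat /etc/docker/daemon.json
{
        "log-driver": "json-file",
        "log-opts": {
                "max-size": "10m",
                "max-file": "3"
        }
}
Restart dockerd afterwards for the change to take effect.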

Confirm that the docker client command is available
# which docker
/usr/bin/docker

Check the version with the binary-installed docker client
# docker version
Client:
 Version:           20.10.9
 API version:       1.41
 Go version:        go1.16.8
 Git commit:        c2ea9bc
 Built:             Mon Oct  4 16:03:22 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.9
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.8
  Git commit:       79ea9d3
  Built:            Mon Oct  4 16:07:30 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.4.11
  GitCommit:        5b46e404f6b9f661a205e28d59c982d3634148f8
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Running the Nginx Application in a Container, and Docker Commands

1. Running the Nginx Application in a Container

1.1 Run the Nginx application with docker run

1.1.1 Observe the image download process

Docker first looks for the container image locally.

Pull in progress, snapshot 1: downloading the container image
# docker run -d nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
a2abf6c4d29d: Downloading  1.966MB/31.36MB
a9edb18cadd1: Downloading  1.572MB/25.35MB
589b7251471a: Download complete
186b1aaa4aa6: Download complete
b4df32aa5a72: Waiting
a0bcbecc962e: Waiting
Pull in progress, snapshot 2: downloading the container image
[root@localhost ~]# docker run -d nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
a2abf6c4d29d: Downloading  22.87MB/31.36MB
a9edb18cadd1: Downloading  22.78MB/25.35MB
589b7251471a: Waiting
186b1aaa4aa6: Waiting
b4df32aa5a72: Waiting
Pull in progress, snapshot 3: downloading the container image
[root@localhost ~]# docker run -d nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
a2abf6c4d29d: Pull complete
a9edb18cadd1: Pull complete
589b7251471a: Pull complete
186b1aaa4aa6: Pull complete
b4df32aa5a72: Waiting

1.1.2 Observe the running container

# docker run -d nginx:latest
9834c8c18a7c7c89ab0ea4119d11bafe9c18313c8006bc02ce57ff54d9a1cc0c
Command explanation
docker run: start a container
-d: run the command from the container image in the background, as a daemon
nginx: the name of the application container image, usually indicating the software it packages
latest: the image tag (version); latest means the most recent version, and users can define their own tags such as v1 or v2
# docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED          STATUS        PORTS     NAMES
9834c8c18a7c   nginx:latest "/docker-entrypoint.…"   24 seconds ago   Up 23 seconds 80/tcp condescending_pare
Command explanation
docker ps: similar to the Linux ps command; it lists running containers. To also see containers that are not running, append --all (or -a).


1.2 Access the Nginx service running in the container

1.2.1 Determine the container's IP address

In real-world work this step is usually unnecessary.

 # docker inspect 9834

 "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",    ## container IP address
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "d3de2fdbc30ee36a55c1431ef3ae4578392e552009f00b2019b4720735fe5a60",
                    "EndpointID": "d91f47c9f756ff22dc599a207164f2e9366bd0c530882ce0f08ae2278fb3d50c",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",   容器IP地址
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]

Command explanation
docker inspect: shows a container's detailed, low-level information
9834: the first 4 characters of the container ID generated above; the full ID is long, so any prefix long enough to be unambiguous is sufficient.
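If you only need the IP address, a Go-template filter saves scrolling through the full JSON (a usage sketch with the same container ID):

# docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 9834
172.17.0.2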

1.2.2 Access with curl

# curl http://172.17.0.2

The following response shows that the access succeeded:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

2. Docker Commands

2.1 Getting help for Docker commands

# docker -h
Flag shorthand -h has been deprecated, please use --help

Usage:  docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Options:
      --config string      Location of client config files (default "/root/.docker")
  -c, --context string     Name of the context to use to connect to the daemon (overrides
                           DOCKER_HOST env var and default context set with "docker context use")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal")
                           (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/root/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/root/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/root/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:
  app*        Docker App (Docker Inc., v0.9.1-beta3)
  builder     Manage builds
  buildx*     Docker Buildx (Docker Inc., v0.7.1-docker)
  config      Manage Docker configs
  container   Manage containers
  context     Manage contexts
  image       Manage images
  manifest    Manage Docker image manifests and manifest lists
  network     Manage networks
  node        Manage Swarm nodes
  plugin      Manage plugins
  scan*       Docker Scan (Docker Inc., v0.12.0)
  secret      Manage Docker secrets
  service     Manage services
  stack       Manage Docker stacks
  swarm       Manage Swarm
  system      Manage Docker
  trust       Manage trust on Docker images
  volume      Manage volumes

Commands:
  attach      Attach local standard input, output, and error streams to a running container
  build       Build an image from a Dockerfile
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes to files or directories on a container's filesystem
  events      Get real time events from the server
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  history     Show the history of an image
  images      List images
  import      Import the contents from a tarball to create a filesystem image
  info        Display system-wide information
  inspect     Return low-level information on Docker objects
  kill        Kill one or more running containers
  load        Load an image from a tar archive or STDIN
  login       Log in to a Docker registry
  logout      Log out from a Docker registry
  logs        Fetch the logs of a container
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  ps          List containers
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  rmi         Remove one or more images
  run         Run a command in a new container
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  search      Search the Docker Hub for images
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  version     Show the Docker version information
  wait        Block until one or more containers stop, then print their exit codes

2.2 Command reference on the official Docker site

Link: https://docs.docker.com/reference/


2.3 Using docker commands

2.3.1 docker run

# docker run -i -t --name c1 centos:latest bash
[root@948f234e22a1 /]#
Command explanation
docker run: run a command in a container; the command is the container's main process, and without one the container exits
-i: interactive
-t: allocate a terminal
--name c1: name the container c1
centos:latest: use the latest centos container image
bash: the command to run in the container

# -i -t   drops you straight into the container
#  Press Ctrl, then p and then q, to detach from the interactive container; it keeps running.
Check the hostname (it appears in the prompt)
[root@948f234e22a1 /]#
Check the network configuration
[root@948f234e22a1 /]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
Check the processes
[root@948f234e22a1 /]# ps aux
USER        PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root          1  0.0  0.1  12036  2172 pts/0    Ss   09:58   0:00 bash
root         16  0.0  0.0  44652  1784 pts/0    R+   10:02   0:00 ps aux
Check the users
[root@948f234e22a1 /]# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
systemd-coredump:x:999:997:systemd Core Dumper:/:/sbin/nologin
systemd-resolve:x:193:193:systemd Resolver:/:/sbin/nologin
Check the working directory
[root@948f234e22a1 /]# pwd
/
[root@948f234e22a1 /]# ls
bin  etc   lib    lost+found  mnt  proc  run   srv  tmp  var
dev  home  lib64  media       opt  root  sbin  sys  usr
Exit the command and observe what happens to the container
[root@948f234e22a1 /]# exit
exit
[root@localhost ~]#

2.3.2 docker ps

# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
Command explanation
docker ps: lists running containers. In this case no command is left running in the container, so the container has stopped and nothing is listed.
# docker ps -a
# docker ps --all
CONTAINER ID   IMAGE           COMMAND     CREATED             STATUS                         PORTS     NAMES
948f234e22a1   centos:latest   "bash"    10 minutes ago      Exited (0) 2 minutes ago                    c1
Command explanation
docker ps --all (or -a): lists both running and stopped containers

2.3.3 docker inspect

View a container's internal information in detail.

# docker run -it --name c2 centos:latest bash
[root@9f2eea16da4c /]# 
Note
At the prompt above, press Ctrl, then p and then q, to detach from the interactive container; it keeps running.
# docker ps
CONTAINER ID   IMAGE           COMMAND   CREATED          STATUS          PORTS     NAMES
9f2eea16da4c   centos:latest   "bash"    37 seconds ago   Up 35 seconds             c2
Explanation
The container is shown in the running state.
# docker inspect c2

"Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "d3de2fdbc30ee36a55c1431ef3ae4578392e552009f00b2019b4720735fe5a60",
                    "EndpointID": "d1a2b7609f2f73a6cac67229a4395eef293f695c0ac4fd6c9c9e6913c9c85c1c",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]

Command explanation
docker inspect: shows a container's detailed information

2.3.4 docker exec

# docker exec -it c2 ls /root
anaconda-ks.cfg  anaconda-post.log  original-ks.cfg
Command explanation
docker exec: run a command in a running container, from outside it
-it: interactive, with a terminal
c2: the name of the running container
ls /root: the command to run inside the running container
The command below has the same effect as the one above
# docker exec c2 ls /root
anaconda-ks.cfg
anaconda-post.log
original-ks.cfg

2.3.5 docker attach

List the running containers
# docker ps
CONTAINER ID   IMAGE           COMMAND   CREATED          STATUS          PORTS     NAMES
9f2eea16da4c   centos:latest   "bash"    13 minutes ago   Up 13 minutes             c2
[root@localhost ~]# docker attach c2
[root@9f2eea16da4c /]#
Command explanation
docker attach: similar in feel to ssh; it attaches your terminal to the container
c2: the name of the running container
Note
When leaving an attached container: if you no longer need it running, simply type exit; if you want it to keep running, detach with Ctrl+p followed by Ctrl+q.
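If you only want a shell and do not want to risk stopping the container when you leave, docker exec starts a separate process instead of attaching to the container's main process, so exiting that shell leaves the container running (a usage sketch):

# docker exec -it c2 bash
[root@9f2eea16da4c /]# exit
exit
# docker ps
The c2 container is still listed as running.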

2.3.6 docker stop

# docker ps
CONTAINER ID   IMAGE           COMMAND   CREATED          STATUS          PORTS     NAMES
9f2eea16da4c   centos:latest   "bash"    22 minutes ago   Up 22 minutes             c2
# docker stop 9f2eea
9f2eea
# docker ps --all
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS                       PORTS     NAMES
9f2eea16da4c   centos:latest   "bash"                   22 minutes ago   Exited (137) 4 seconds ago             c2

2.3.7 docker start

# docker ps --all
CONTAINER ID   IMAGE           COMMAND     CREATED          STATUS                       PORTS     NAMES
9f2eea16da4c   centos:latest   "bash"      22 minutes ago   Exited (137) 4 seconds ago              c2
# docker start 9f2eea
9f2eea
# docker ps
CONTAINER ID   IMAGE           COMMAND   CREATED          STATUS          PORTS     NAMES
9f2eea16da4c   centos:latest   "bash"    24 minutes ago   Up 16 seconds             c2

2.3.8 docker top

View, from the Docker Host, the processes running inside a container.

# docker top c2
UID    PID     PPID      C      STIME        TTY              TIME                CMD
root  69040   69020      0      18:37       pts/0           00:00:00              bash
Command explanation
docker top: view the processes inside a container, as seen from the Docker Host; this differs from running docker exec -it c2 ps -ef inside the container.
Output fields
UID   user of the command running in the container
PID   PID of the command, as seen on the Docker Host
PPID  parent PID of the command, i.e. the container's own process on the Docker Host
C     CPU usage percentage
STIME start time
TTY   terminal the command is attached to
TIME  running time
CMD   the command being executed

2.3.9 docker rm

If a container has already stopped, this command removes it directly; if it is still running, stop it first and then remove it. The steps below show stopping a running container and then removing it.

2.3.9.1 Remove a specific container
# docker ps
CONTAINER ID   IMAGE           COMMAND   CREATED      STATUS         PORTS     NAMES
9f2eea16da4c   centos:latest   "bash"    2 days ago   Up 3 seconds             c2
# docker stop c2        (equivalently: docker stop 9f2eea16da4c)
# docker rm c2          (equivalently: docker rm 9f2eea16da4c)
2.3.9.2 Remove containers in bulk
# docker ps --all
CONTAINER ID   IMAGE           COMMAND          CREATED      STATUS                  PORTS    NAMES
948f234e22a1   centos:latest   "bash"           2 days ago   Exited (0) 2 days ago            c1
01cb3e01273c   centos:latest   "bash"           2 days ago   Exited (0) 2 days ago            systemimage1
46d950fdfb33   nginx:latest    "/docker-ent..." 2 days ago   Exited (0) 2 days ago            upbeat_goldberg
# docker ps --all | awk '{if (NR>=2){print $1}}' | xargs docker rm
#  Remove all docker containers

awk '{if (NR>=2){print $1}}' skips the header line (it keeps everything from line 2 onward)
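Two equivalent shortcuts built into the docker CLI (shown as a sketch; both only remove stopped containers unless forced):

# docker rm $(docker ps -aq)
docker ps -aq prints only the container IDs; running containers refuse to be removed unless -f is added.
# docker container prune
Removes all stopped containers after a confirmation prompt.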

Docker Container Images

1. Working with Docker Container Images

1.1 View local container images

1.1.1 View with the docker images command

# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
bash         latest    5557e073f11c   2 weeks ago    13MB
nginx        latest    605c77e624dd   3 weeks ago    141MB
centos       latest    5d0da3dc9764   4 months ago   231MB

1.1.2 View with the docker image command

# docker image list
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
bash         latest    5557e073f11c   2 weeks ago    13MB
nginx        latest    605c77e624dd   3 weeks ago    141MB
centos       latest    5d0da3dc9764   4 months ago   231MB

1.1.3 Local storage location of container images

Because container images consume local storage, consider mounting a separate storage system locally so that images do not take up large amounts of local disk (see the sketch below).

# ls /var/lib/docker
buildkit  containers  image  network  overlay2  plugins  runtimes  swarm  tmp  trust  volumes
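One way to keep this data off the root filesystem is to point the daemon at a different data root. A minimal sketch, assuming the extra storage is already mounted at /data/docker:

# cat /etc/docker/daemon.json
{
        "data-root": "/data/docker"
}
# systemctl restart docker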

1.2 Search Docker Hub for container images

1.2.1 Search from the command line

# docker search centos
Output
NAME                              DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
centos                            The official build of CentOS.                   6987      [OK]
ansible/centos7-ansible           Ansible on Centos7                              135                  [OK]
consol/centos-xfce-vnc            Centos container with "headless" VNC session…   135                  [OK]
jdeathe/centos-ssh                OpenSSH / Supervisor / EPEL/IUS/SCL Repos - …   121                  [OK]

1.2.2 Search in the Docker Hub web UI


1.3 Pull container images

# docker pull centos

1.4 Delete container images

# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
bash         latest    5557e073f11c   2 weeks ago    13MB
nginx        latest    605c77e624dd   3 weeks ago    141MB
centos       latest    5d0da3dc9764   4 months ago   231MB
# docker rmi centos
Untagged: centos:latest
Untagged: centos@sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177
Deleted: sha256:5d0da3dc976460b72c77d94c8a1ad043720b0416bfc16c52c45d4847e53fadb6
Deleted: sha256:74ddd0ec08fa43d09f32636ba91a0a3053b02cb4627c35051aff89f853606b59

An image can also be removed by its ID:
# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
centos       latest    5d0da3dc9764   4 months ago   231MB
# docker rmi 5d0da3dc9764

2. About Docker Container Images

2.1 Docker Image

  • A Docker image is a read-only container template and the foundation of a Docker container
  • It provides the container with a static filesystem runtime environment (rootfs)
  • An image is the container at rest
  • A container is the image in its running state

2.2 Union File System

2.2.1 Definition

  • Union filesystem
  • A union filesystem is a filesystem that implements union mounting
  • Union mounting allows several filesystems to be mounted at a single mount point, merging the mount point's original directory with the mounted contents, so that the final visible filesystem contains the files and directories of all merged layers (see the sketch below)
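The idea can be reproduced with a plain overlay mount, independent of Docker. A minimal sketch (the /tmp/ovl directories are made up for illustration):

# mkdir -p /tmp/ovl/lower /tmp/ovl/upper /tmp/ovl/work /tmp/ovl/merged
# echo "from lower" > /tmp/ovl/lower/a.txt
# mount -t overlay overlay -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work /tmp/ovl/merged
# cat /tmp/ovl/merged/a.txt
from lower
# echo "changed" > /tmp/ovl/merged/a.txt
The write lands in /tmp/ovl/upper/a.txt; /tmp/ovl/lower/a.txt is left untouched.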


2.3 Docker Overlay2

Container filesystems can be implemented with several storage drivers: aufs, devicemapper, overlay, overlay2, and so on. Here overlay2 is used as the example.

2.3.1 Concepts

  • registry/repository: a registry is a collection of repositories, and a repository is a collection of images.
  • image: an image stores image-related metadata, including the architecture, the default configuration, the container configuration, and so on. It is a "logical" concept; no single physical image file corresponds to it.
  • layer: layers (image layers) make up an image, and a single layer can be shared by multiple images.


2.3.2 Check the storage driver on the Docker Host

[root@k8s-master ~]# docker info
Client: Docker Engine - Community
 Version:    26.1.4
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.14.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.27.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 59
  Running: 29
  Paused: 0
  Stopped: 30
 Images: 22
 Server Version: 26.1.4
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d2d58213f83a351ca8f528a95fbd145f5654e957
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
 Kernel Version: 3.10.0-1127.19.1.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.375GiB
 Name: k8s-master
 ID: eb880da5-c88e-43a5-b155-8f5d488134d2
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false


# docker info | grep overlay
 Storage Driver: overlay2

2.3.3 Understanding image layers

# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
a2abf6c4d29d: Pull complete
a9edb18cadd1: Pull complete
589b7251471a: Pull complete
186b1aaa4aa6: Pull complete
b4df32aa5a72: Pull complete
a0bcbecc962e: Pull complete
Digest: sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

The image pulled above has 6 layers. How do we find where these 6 layers are stored on the Docker Host?

First, look at the nginx image:

# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
nginx        latest    605c77e624dd   3 weeks ago    141MB

Its Image ID, 605c77e624dd, leads us to the storage location.

# ls /var/lib/docker/image/overlay2/
distribution  imagedb  layerdb  repositories.json

This directory is the entry point for the search and is very important: it stores the metadata used for image management.

  • repositories.json records the mapping between repositories and image IDs
  • imagedb records the image architecture, the operating system, the ID and configuration of the container used to build the image, the rootfs, and so on
  • layerdb records the metadata of each image layer.

Search repositories.json with the short ID to find the long ID of the nginx image, then use the long ID to find the image's metadata in imagedb:

# cat /var/lib/docker/image/overlay2/repositories.json | grep 605c77e624dd
{"Repositories":{"nginx":{"nginx:latest":"sha256:605c77e624ddb75e6110f997c58876baa13f8754486b461117934b24a9dc3a85","nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31":"sha256:605c77e624ddb75e6110f997c58876baa13f8754486b461117934b24a9dc3a85"}}}
# cat /var/lib/docker/image/overlay2/imagedb/content/sha256/605c77e624ddb75e6110f997c58876baa13f8754486b461117934b24a9dc3a85
......
"os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:2edcec3590a4ec7f40cf0743c15d78fb39d8326bc029073b41ef9727da6c851f","sha256:e379e8aedd4d72bb4c529a4ca07a4e4d230b5a1d3f7a61bc80179e8f02421ad8","sha256:b8d6e692a25e11b0d32c5c3dd544b71b1085ddc1fddad08e68cbd7fda7f70221","sha256:f1db227348d0a5e0b99b15a096d930d1a69db7474a1847acbc31f05e4ef8df8c","sha256:32ce5f6a5106cc637d09a98289782edf47c32cb082dc475dd47cbf19a4f866da","sha256:d874fd2bc83bb3322b566df739681fbd2248c58d3369cb25908d68e7ed6040a6"]}}

Only the metadata we care about, rootfs, is shown here. In rootfs, layers contains 6 diff_ids, corresponding to the image's 6 layers; read from top to bottom they map to the container's bottom layer up to its top layer. Having found the 6 layers, the next question is: where are the file contents of each layer?

The layerdb metadata answers that. Using the bottom layer's diff_id, 2edcec3590a4ec7f40cf0743c15d78fb39d8326bc029073b41ef9727da6c851f, we look up the bottom layer's cache-id, and with the cache-id we can find the layer's file contents:

# ls /var/lib/docker/image/overlay2/layerdb/sha256/2edcec3590a4ec7f40cf0743c15d78fb39d8326bc029073b41ef9727da6c851f
cache-id  diff  size  tar-split.json.gz
# cat /var/lib/docker/image/overlay2/layerdb/sha256/2edcec3590a4ec7f40cf0743c15d78fb39d8326bc029073b41ef9727da6c851f/cache-id
85c4c5ecdac6c0d197f899dac227b9d493911a9a5820eac501bb5e9ae361f4c7
# cat /var/lib/docker/image/overlay2/layerdb/sha256/2edcec3590a4ec7f40cf0743c15d78fb39d8326bc029073b41ef9727da6c851f/diff
sha256:2edcec3590a4ec7f40cf0743c15d78fb39d8326bc029073b41ef9727da6c851f

Use the cache-id to find the file contents

# ls /var/lib/docker/overlay2/85c4c5ecdac6c0d197f899dac227b9d493911a9a5820eac501bb5e9ae361f4c7
committed  diff  link
# ls /var/lib/docker/overlay2/85c4c5ecdac6c0d197f899dac227b9d493911a9a5820eac501bb5e9ae361f4c7/diff
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr

As the example shows, image metadata and image layer contents are stored separately. With the cache-id we go to /var/lib/docker/overlay2 to view the layer contents, which live in the diff directory; link stores the layer's short ID, which will come in handy later.

Having found the bottom image layer, we look for a "middle" layer next, and find that there is no entry in the layerdb directory for diff_id e379e8aedd4d72bb4c529a4ca07a4e4d230b5a1d3f7a61bc80179e8f02421ad8:

# ls /var/lib/docker/image/overlay2/layerdb/sha256/e379e8aedd4d72bb4c529a4ca07a4e4d230b5a1d3f7a61bc80179e8f02421ad8
ls: cannot access /var/lib/docker/image/overlay2/layerdb/sha256/e379e8aedd4d72bb4c529a4ca07a4e4d230b5a1d3f7a61bc80179e8f02421ad8: No such file or directory

This is because Docker uses content addressing, which indexes images and image layers by their contents. Docker computes a content-addressed chainID from the diff_ids in rootfs, uses the chainID to look up the layer's metadata, and finally reaches the layer's file contents.

For the bottom layer, the diff_id is itself the chainID, which is why we could find its contents directly. For every other layer, the chainID is computed as chainID(n) = SHA256(chainID(n-1) + " " + diffID(n)). Compute the "middle" layer's chainID:

# echo -n "sha256:2edcec3590a4ec7f40cf0743c15d78fb39d8326bc029073b41ef9727da6c851f sha256:e379e8aedd4d72bb4c529a4ca07a4e4d230b5a1d3f7a61bc80179e8f02421ad8" | sha256sum -
780238f18c540007376dd5e904f583896a69fe620876cabc06977a3af4ba4fb5  -

Use the "middle" layer's chainID to find its file contents:

# ls /var/lib/docker/image/overlay2/layerdb/sha256/780238f18c540007376dd5e904f583896a69fe620876cabc06977a3af4ba4fb5
cache-id  diff  parent  size  tar-split.json.gz
# cat /var/lib/docker/image/overlay2/layerdb/sha256/780238f18c540007376dd5e904f583896a69fe620876cabc06977a3af4ba4fb5/cache-id
57e1f1b11e26f748161b7fccbf2ba6b24c2f98dc8a821729f0be215ad267498c
# cat /var/lib/docker/image/overlay2/layerdb/sha256/780238f18c540007376dd5e904f583896a69fe620876cabc06977a3af4ba4fb5/diff
sha256:e379e8aedd4d72bb4c529a4ca07a4e4d230b5a1d3f7a61bc80179e8f02421ad8
# cat /var/lib/docker/image/overlay2/layerdb/sha256/780238f18c540007376dd5e904f583896a69fe620876cabc06977a3af4ba4fb5/parent
sha256:2edcec3590a4ec7f40cf0743c15d78fb39d8326bc029073b41ef9727da6c851f
The image layer's file contents
# ls /var/lib/docker/overlay2/57e1f1b11e26f748161b7fccbf2ba6b24c2f98dc8a821729f0be215ad267498c
committed  diff  link  lower  work
# ls /var/lib/docker/overlay2/57e1f1b11e26f748161b7fccbf2ba6b24c2f98dc8a821729f0be215ad267498c/diff/
docker-entrypoint.d  etc  lib  tmp  usr  var
The layer's short ID
# cat /var/lib/docker/overlay2/57e1f1b11e26f748161b7fccbf2ba6b24c2f98dc8a821729f0be215ad267498c/link
24GM2IZVPTUROAG7AWJO5ZWE6B
The short ID of its "parent" layer
# cat /var/lib/docker/overlay2/57e1f1b11e26f748161b7fccbf2ba6b24c2f98dc8a821729f0be215ad267498c/lower
l/SICZO4QNVZEVOIJ4HDXVDKNYA2

Having found the contents of the bottom layer and of a "middle" layer, locating the top layer's contents is no longer difficult.

2.4 Docker containers and images

Start a container from the nginx image with docker run:

# docker run -d nginx:latest
3272831107a3499afe8160b0cd423e2ac4223522f1995b7be3504a1d3d272878
# docker ps | grep nginx
3272831107a3   nginx:latest   "/docker-entrypoint.…"   11 seconds ago   Up 9 seconds   80/tcp    angry_beaver
# mount | grep overlay
overlay on /var/lib/docker/overlay2/b3f5c8b42ac055c715216e376cfe44571f618a876f481533ec1434aa0bc4f8ed/merged type overlay (rw,relatime,seclabel,lowerdir=/var/lib/docker/overlay2/l/MS2X66BYF6UZ7EKUWMZJKCF4HO:/var/lib/docker/overlay2/l/ODJROQUGY3WQMOGQ3BLYZGIAG4:/var/lib/docker/overlay2/l/Q5LOBFJRH5M7M5CMSWW5L4VYOY:/var/lib/docker/overlay2/l/ZR35FN2E3WEARZV4HLRU373FT7:/var/lib/docker/overlay2/l/NSM2PTAT6TIT2H6G3HFNGZJH5N:/var/lib/docker/overlay2/l/24GM2IZVPTUROAG7AWJO5ZWE6B:/var/lib/docker/overlay2/l/SICZO4QNVZEVOIJ4HDXVDKNYA2,upperdir=/var/lib/docker/overlay2/b3f5c8b42ac055c715216e376cfe44571f618a876f481533ec1434aa0bc4f8ed/diff,workdir=/var/lib/docker/overlay2/b3f5c8b42ac055c715216e376cfe44571f618a876f481533ec1434aa0bc4f8ed/work)

As you can see, starting a container mounts an overlay union filesystem into the container. The mount is made up of the following parts:

  • lowerdir: the read-only layers, i.e. the image's layers.
  • upperdir: the read-write layer; reads and writes performed in the container are reflected in this layer.
  • workdir: an internal overlayfs directory used to implement copy_up operations from the read-only layers into the read-write layer.
  • merged: the directory that serves as the unified union mount point seen inside the container.

The part worth a closer look is the container's lowerdir, the read-only image layers. Look at their short IDs:

lowerdir=/var/lib/docker/overlay2/l/MS2X66BYF6UZ7EKUWMZJKCF4HO
/var/lib/docker/overlay2/l/ODJROQUGY3WQMOGQ3BLYZGIAG4
/var/lib/docker/overlay2/l/Q5LOBFJRH5M7M5CMSWW5L4VYOY
/var/lib/docker/overlay2/l/ZR35FN2E3WEARZV4HLRU373FT7
/var/lib/docker/overlay2/l/NSM2PTAT6TIT2H6G3HFNGZJH5N
/var/lib/docker/overlay2/l/24GM2IZVPTUROAG7AWJO5ZWE6B
/var/lib/docker/overlay2/l/SICZO4QNVZEVOIJ4HDXVDKNYA2

The image has only 6 layers, yet there are 7 short IDs here?
The /var/lib/docker/overlay2/l directory holds the answer:

# cd /var/lib/docker/overlay2/l
# pwd
/var/lib/docker/overlay2/l
# ls
24GM2IZVPTUROAG7AWJO5ZWE6B  LZEAXJGRW6HKBBGGB2N4CWMSVJ  R2XTGODAA67NQJM44MIKMDUF4W
5OI5WMJ2FP7QI7IFWDMHLBRDDN  MS2X66BYF6UZ7EKUWMZJKCF4HO  SICZO4QNVZEVOIJ4HDXVDKNYA2
644ISPHLTBSSC2KLP6BGHHHZPR  NSM2PTAT6TIT2H6G3HFNGZJH5N  ZR35FN2E3WEARZV4HLRU373FT7
6CQUILQSJNVTMFFV3ABCCOGOYG  ODJROQUGY3WQMOGQ3BLYZGIAG4
BQENAYC44O2ZCZFT5URMH5OADK  Q5LOBFJRH5M7M5CMSWW5L4VYOY
# ls -l MS2X66BYF6UZ7EKUWMZJKCF4HO/
总用量 0
drwxr-xr-x. 4 root root 43 1月  25 01:27 dev
drwxr-xr-x. 2 root root 66 1月  25 01:27 etc
[root@192 l]# ls -l ODJROQUGY3WQMOGQ3BLYZGIAG4/
总用量 0
drwxr-xr-x. 2 root root 41 12月 30 03:28 docker-entrypoint.d

[root@192 l]# ls -l Q5LOBFJRH5M7M5CMSWW5L4VYOY/
总用量 0
drwxr-xr-x. 2 root root 41 12月 30 03:28 docker-entrypoint.d
[root@192 l]# ls -l ZR35FN2E3WEARZV4HLRU373FT7/
总用量 0
drwxr-xr-x. 2 root root 45 12月 30 03:28 docker-entrypoint.d
[root@192 l]# ls -l NSM2PTAT6TIT2H6G3HFNGZJH5N/
总用量 4
-rwxrwxr-x. 1 root root 1202 12月 30 03:28 docker-entrypoint.sh
[root@192 l]# ls -l 24GM2IZVPTUROAG7AWJO5ZWE6B/
总用量 4
drwxr-xr-x.  2 root root    6 12月 30 03:28 docker-entrypoint.d
drwxr-xr-x. 18 root root 4096 12月 30 03:28 etc
drwxr-xr-x.  4 root root   45 12月 20 08:00 lib
drwxrwxrwt.  2 root root    6 12月 30 03:28 tmp
drwxr-xr-x.  7 root root   66 12月 20 08:00 usr
drwxr-xr-x.  5 root root   41 12月 20 08:00 var
[root@192 l]# ls -l SICZO4QNVZEVOIJ4HDXVDKNYA2/
总用量 12
drwxr-xr-x.  2 root root 4096 12月 20 08:00 bin
drwxr-xr-x.  2 root root    6 12月 12 01:25 boot
drwxr-xr-x.  2 root root    6 12月 20 08:00 dev
drwxr-xr-x. 30 root root 4096 12月 20 08:00 etc
drwxr-xr-x.  2 root root    6 12月 12 01:25 home
drwxr-xr-x.  8 root root   96 12月 20 08:00 lib
drwxr-xr-x.  2 root root   34 12月 20 08:00 lib64
drwxr-xr-x.  2 root root    6 12月 20 08:00 media
drwxr-xr-x.  2 root root    6 12月 20 08:00 mnt
drwxr-xr-x.  2 root root    6 12月 20 08:00 opt
drwxr-xr-x.  2 root root    6 12月 12 01:25 proc
drwx------.  2 root root   37 12月 20 08:00 root
drwxr-xr-x.  3 root root   30 12月 20 08:00 run
drwxr-xr-x.  2 root root 4096 12月 20 08:00 sbin
drwxr-xr-x.  2 root root    6 12月 20 08:00 srv
drwxr-xr-x.  2 root root    6 12月 12 01:25 sys
drwxrwxrwt.  2 root root    6 12月 20 08:00 tmp
drwxr-xr-x. 11 root root  120 12月 20 08:00 usr
drwxr-xr-x. 11 root root  139 12月 20 08:00 var

The layers ODJROQUGY3WQMOGQ3BLYZGIAG4, Q5LOBFJRH5M7M5CMSWW5L4VYOY, ZR35FN2E3WEARZV4HLRU373FT7, NSM2PTAT6TIT2H6G3HFNGZJH5N, 24GM2IZVPTUROAG7AWJO5ZWE6B, and SICZO4QNVZEVOIJ4HDXVDKNYA2 correspond to the file contents of the image's 6 layers; each maps to that layer's diff directory. MS2X66BYF6UZ7EKUWMZJKCF4HO maps to the container's init layer, whose contents are container-configuration-related files; it too is read-only.

With the container started, Docker has mounted the image's contents into the container. So what happens to the image if we write a file inside the container?

2.5 Writing files inside a container

As you would expect, image layers are read-only; writing a file in the container actually writes it into the overlay read-write layer.

There are a few cases worth testing:

  • the file exists in the read-only layers but not in the read-write layer;
  • the file exists in the read-write layer but not in the read-only layers;
  • the file exists in neither the read-write layer nor the read-only layers.

Here we construct the simplest scenario, where the file exists in neither layer:

# docker run -it centos:latest bash
[root@355e99982248 /]# touch yuyang.txt
[root@355e99982248 /]# ls
bin  etc   lib    lost+found  mnt      opt   root  sbin  sys  usr
dev  home  lib64  media       yuyang.txt  proc  run   srv   tmp  var

Check whether the read-write layer now contains the file:

Check whether the image has changed
# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
ubuntu       latest    d13c942271d6   2 weeks ago    72.8MB
bash         latest    5557e073f11c   2 weeks ago    13MB
nginx        latest    605c77e624dd   3 weeks ago    141MB
centos       latest    5d0da3dc9764   4 months ago   231MB

[root@localhost ~]# cat /var/lib/docker/image/overlay2/repositories.json | grep 5d0da3dc9764
{"Repositories":{"centos":{"centos:latest":"sha256:5d0da3dc976460b72c77d94c8a1ad043720b0416bfc16c52c45d4847e53fadb6","centos@sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177":"sha256:5d0da3dc976460b72c77d94c8a1ad043720b0416bfc16c52c45d4847e53fadb6"}}}



[root@localhost ~]# cat /var/lib/docker/image/overlay2/imagedb/content/sha256/5d0da3dc976460b72c77d94c8a1ad043720b0416bfc16c52c45d4847e53fadb6
{"os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:74ddd0ec08fa43d09f32636ba91a0a3053b02cb4627c35051aff89f853606b59"]}}


[root@localhost ~]# ls /var/lib/docker/image/overlay2/layerdb/sha256/74ddd0ec08fa43d09f32636ba91a0a3053b02cb4627c35051aff89f853606b59
cache-id  diff  size  tar-split.json.gz
[root@localhost ~]# cat /var/lib/docker/image/overlay2/layerdb/sha256/74ddd0ec08fa43d09f32636ba91a0a3053b02cb4627c35051aff89f853606b59/cache-id
b17bc5c5103514923a30983c48f909e06f366b7aa1e85f112b67abb3ef5cd0cb

[root@localhost ~]# cat /var/lib/docker/image/overlay2/layerdb/sha256/74ddd0ec08fa43d09f32636ba91a0a3053b02cb4627c35051aff89f853606b59/diff
sha256:74ddd0ec08fa43d09f32636ba91a0a3053b02cb4627c35051aff89f853606b59


[root@localhost ~]# ls /var/lib/docker/overlay2/b17bc5c5103514923a30983c48f909e06f366b7aa1e85f112b67abb3ef5cd0cb
committed  diff  link
[root@localhost ~]# ls /var/lib/docker/overlay2/b17bc5c5103514923a30983c48f909e06f366b7aa1e85f112b67abb3ef5cd0cb/diff/
bin  etc   lib    lost+found  mnt  proc  run   srv  tmp  var
dev  home  lib64  media       opt  root  sbin  sys  usr


Check whether the container has changed
[root@localhost ~]# mount | grep overlay
overlay on /var/lib/docker/overlay2/7f0b54c748171872ce564305e394547555cb1182abf802c2262384be3dc78a8f/merged type overlay (rw,relatime,seclabel,lowerdir=/var/lib/docker/overlay2/l/R2W2LEMDPRIUFYDVSLIQSCYTGX:/var/lib/docker/overlay2/l/R2XTGODAA67NQJM44MIKMDUF4W,upperdir=/var/lib/docker/overlay2/7f0b54c748171872ce564305e394547555cb1182abf802c2262384be3dc78a8f/diff,workdir=/var/lib/docker/overlay2/7f0b54c748171872ce564305e394547555cb1182abf802c2262384be3dc78a8f/work)


[root@localhost ~]# ls -l /var/lib/docker/overlay2/l/
总用量 0

lrwxrwxrwx. 1 root root 77 1月  25 01:41 R2W2LEMDPRIUFYDVSLIQSCYTGX -> ../7f0b54c748171872ce564305e394547555cb1182abf802c2262384be3dc78a8f-init/diff
lrwxrwxrwx. 1 root root 72 1月  25 00:29 R2XTGODAA67NQJM44MIKMDUF4W -> ../b17bc5c5103514923a30983c48f909e06f366b7aa1e85f112b67abb3ef5cd0cb/diff


[root@localhost ~]# ls /var/lib/docker/overlay2/7f0b54c748171872ce564305e394547555cb1182abf802c2262384be3dc78a8f/diff
yuyang.txt


[root@localhost ~]# ls /var/lib/docker/overlay2/7f0b54c748171872ce564305e394547555cb1182abf802c2262384be3dc78a8f/merged/
bin  etc   lib    lost+found  mnt         opt   root  sbin  sys  usr
dev  home  lib64  media       yuyang.txt  proc  run   srv   tmp  var

3. Container Image Commands

3.1 docker commit

The previous section showed that files written inside a container land in the overlay read-write layer. Can the contents of the read-write layer be turned into an image?

Yes. Docker builds images through the commit and build operations: commit turns a container into an image, while build constructs an image on top of an existing one (a build sketch follows at the end of this section).

Committing a container to an image persists the container's state.

Use commit to turn the container from the previous section into an image:

[root@355e99982248 /]#   (press Ctrl+p, then Ctrl+q, to detach)
# docker ps
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS     NAMES
355e99982248   centos:latest   "bash"                   21 minutes ago   Up 21 minutes             fervent_perlman
# docker commit 355e99982248
sha256:8965dcf23201ed42d4904e2f10854d301ad93b34bea73f384440692e006943de
# docker images
REPOSITORY   TAG       IMAGE ID       CREATED              SIZE
<none>       <none>    8965dcf23201   About a minute ago   231MB

The image with short ID 8965dcf23201 is the one committed from the container. Look at its imagedb metadata:

# cat  /var/lib/docker/image/overlay2/imagedb/content/sha256/8965dcf23201ed42d4904e2f10854d301ad93b34bea73f384440692e006943de
......
"os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:74ddd0ec08fa43d09f32636ba91a0a3053b02cb4627c35051aff89f853606b59","sha256:551c3089b186b4027e949910981ff1ba54114610f2aab9359d28694c18b0203b"]}}

The first diff_id is identical to the centos image's layer diff_id, which shows that a single image layer can be shared by multiple images. The extra layer contains exactly what we wrote in the previous section:

# echo -n "sha256:74ddd0ec08fa43d09f32636ba91a0a3053b02cb4627c35051aff89f853606b59 sha256:551c3089b186b4027e949910981ff1ba54114610f2aab9359d28694c18b0203b" | sha256sum -
92f7208b1cc0b5cc8fe214a4b0178aa4962b58af8ec535ee7211f335b1e0ed3b  -
# cd /var/lib/docker/image/overlay2/layerdb/sha256/92f7208b1cc0b5cc8fe214a4b0178aa4962b58af8ec535ee7211f335b1e0ed3b
[root@192 92f7208b1cc0b5cc8fe214a4b0178aa4962b58af8ec535ee7211f335b1e0ed3b]# ls
cache-id  diff  parent  size  tar-split.json.gz



[root@192 92f7208b1cc0b5cc8fe214a4b0178aa4962b58af8ec535ee7211f335b1e0ed3b]# cat cache-id
250dc0b4f2c5f27952241a55cd4c286bfaaf8af4b77c9d0a38976df4c147cb95


[root@192 92f7208b1cc0b5cc8fe214a4b0178aa4962b58af8ec535ee7211f335b1e0ed3b]# ls /var/lib/docker/overlay2/250dc0b4f2c5f27952241a55cd4c286bfaaf8af4b77c9d0a38976df4c147cb95
diff  link  lower  work


[root@192 92f7208b1cc0b5cc8fe214a4b0178aa4962b58af8ec535ee7211f335b1e0ed3b]# ls /var/lib/docker/overlay2/250dc0b4f2c5f27952241a55cd4c286bfaaf8af4b77c9d0a38976df4c147cb95/diff
yuyang.txt
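For comparison, the build path produces an equivalent layer from a Dockerfile rather than from a running container. A minimal sketch (the Dockerfile and the centos-yuyang:v1 tag are made up for illustration):

# cat Dockerfile
FROM centos:latest
RUN touch /yuyang.txt
# docker build -t centos-yuyang:v1 .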

3.2 docker save

Export a container image so it can be shared.
Exporting the image file persists the image contents.
Used together with docker load.

# docker save -o centos.tar centos:latest  
# ls

centos.tar  

3.3 docker load

Load a container image that someone else has shared into the local image store; this is one of the common ways of distributing images.

# docker load -i centos.tar

3.4 docker export

Export a running container.
Exporting the container's filesystem persists the container's contents.

# docker ps
CONTAINER ID   IMAGE           COMMAND                  CREATED       STATUS       PORTS     NAMES
355e99982248   centos:latest   "bash"                   7 hours ago   Up 7 hours             fervent_perlman
# docker export -o centos7.tar 355e99982248
# ls
centos7.tar

3.5 docker import

Import a container exported with docker export as a local container image.
Used together with docker export.

# ls
centos7.tar 
# docker import centos7.tar centos7:v1
# docker images
REPOSITORY   TAG       IMAGE ID       CREATED              SIZE
centos7      v1        3639f9a13231   17 seconds ago       231MB
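A quick way to see how import differs from load is docker history: an imported image is flattened into a single layer, whereas an image restored with docker load keeps its original layer history (sketch):

# docker history centos7:v1
Only a single layer is listed for the imported image.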

Image Registry Mirrors (Accelerators) and a Local Image Registry

1. Container Image Registry Mirrors

Because access from mainland China to overseas image registries is slow, Chinese providers run registry mirrors (accelerators) so that domestic users can pull container images more conveniently.

1.1 Get the Alibaba Cloud registry mirror address


1.2 Configure the Docker daemon to use the mirror

Add the daemon.json configuration file
# vim /etc/docker/daemon.json
# cat /etc/docker/daemon.json
{
        "registry-mirrors": ["https://s27w6kze.mirror.aliyuncs.com"]
}
Restart Docker
# systemctl daemon-reload
# systemctl restart docker
Try pulling a container image
# docker pull centos
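To confirm that the mirror is actually in effect, docker info lists it under Registry Mirrors (shown for the mirror configured above):

# docker info | grep -A 1 "Registry Mirrors"
 Registry Mirrors:
  https://s27w6kze.mirror.aliyuncs.com/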

2. Container Image Registries

2.1 Docker Hub

2.1.1 Register

Have an email address and a Docker ID ready.


2.1.2 Log in


2.1.3 Create an image repository


2.1.4 Log in to Docker Hub from the local host

By default the Docker Hub registry address does not need to be specified.
# docker login 
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: dockersmartyuyang
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
Log out
# docker logout
Removing login credentials for https://index.docker.io/v1/

2.1.5 Push a container image

From the host that is logged in to Docker Hub, push a container image to share it with users worldwide.

Re-tag the container image

The original image
# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
centos       latest    5d0da3dc9764   4 months ago   231MB

Re-tag the image
# docker tag centos:latest dockersmartyuyang/centos:v1

The image after re-tagging
# docker images
REPOSITORY              TAG       IMAGE ID       CREATED        SIZE
dockersmartyuyang/centos   v1        5d0da3dc9764   4 months ago   231MB
centos                  latest    5d0da3dc9764   4 months ago   231MB
Push the image to Docker Hub
# docker push dockersmartyuyang/centos:v1
The push refers to repository [docker.io/dockersmartyuyang/centos]
74ddd0ec08fa: Mounted from library/centos
v1: digest: sha256:a1801b843b1bfaf77c501e7a6d3f709401a1e0c83863037fa3aab063a7fdb9dc size: 529


2.1.6 Pull the container image

Pull the image on another host.

Pull
# docker pull dockersmartyuyang/centos:v1
v1: Pulling from dockersmartyuyang/centos
a1d0c7532777: Pull complete
Digest: sha256:a1801b843b1bfaf77c501e7a6d3f709401a1e0c83863037fa3aab063a7fdb9dc
Status: Downloaded newer image for dockersmartyuyang/centos:v1
docker.io/dockersmartyuyang/centos:v1


View the pulled image
# docker images
REPOSITORY                 TAG       IMAGE ID       CREATED        SIZE
dockersmartyuyang/centos   v1        5d0da3dc9764   4 months ago   231MB

2.2 Harbor

2.2.1 Get the docker-compose binary

Download the docker-compose binary
# wget https://github.com/docker/compose/releases/download/1.25.0/docker-compose-Linux-x86_64
View the downloaded binary
# ls
docker-compose-Linux-x86_64
Find a system binary directory (on PATH) to move it into
# echo $PATH


Move the binary into /usr/local/sbin, /usr/local/bin, or another directory on PATH (such as /usr/bin), and rename it docker-compose
# mv docker-compose-Linux-x86_64 /usr/bin/docker-compose
Make the binary executable
# chmod +x /usr/bin/docker-compose
After installation, check the docker-compose version
# docker-compose version
docker-compose version 1.25.0, build 0a186604
docker-py version: 4.1.0
CPython version: 3.7.4
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019

2.2.2 Get the Harbor installation files


Download the Harbor offline installer
# wget https://github.com/goharbor/harbor/releases/download/v2.4.1/harbor-offline-installer-v2.4.1.tgz
View the downloaded offline installer
# ls
harbor-offline-installer-v2.4.1.tgz

2.2.3 Get the TLS files

View the prepared certificate
# ls
kubeyuyang.com_nginx.zip
Unzip the certificate archive
# unzip kubeyuyang.com_nginx.zip
Archive:  kubeyuyang.com_nginx.zip
Aliyun Certificate Download
  inflating: 6864844_kubeyuyang.com.pem
  inflating: 6864844_kubeyuyang.com.key
View the extracted files
# ls
6864844_kubeyuyang.com.key
6864844_kubeyuyang.com.pem

2.2.4 Modify the configuration file

Extract the Harbor offline installer
# tar xf harbor-offline-installer-v2.4.1.tgz
View the extracted directory
# ls
harbor 
Move the certificate files into the harbor directory
# mv 6864844_kubeyuyang.com.* harbor

View the harbor directory
# ls harbor
6864844_kubeyuyang.com.key  6864844_kubeyuyang.com.pem  common.sh  harbor.v2.4.1.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
Create the configuration file
# cd harbor/
# mv harbor.yml.tmpl harbor.yml
Edit the configuration file

# vim harbor.yml

# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: www.kubeyuyang.com #set this to the domain name, and it must be the domain the certificate was issued for

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /root/harbor/6864844_kubeyuyang.com.pem  #certificate
  private_key: /root/harbor/6864844_kubeyuyang.com.key  #private key

# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: 12345 #password for web UI access
......

2.2.5 Run the prepare script

# ./prepare
Output
prepare base dir is set to /root/harbor
Clearing the configuration file: /config/portal/nginx.conf
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

2.2.6 Run the installation script

# ./install.sh
Output
[Step 0]: checking if docker is installed ...

Note: docker version: 20.10.12

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.25.0

[Step 2]: loading Harbor images ...

[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /root/harbor

[Step 5]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-db     ... done
Creating registry      ... done
Creating registryctl   ... done
Creating redis         ... done
Creating harbor-portal ... done
Creating harbor-core   ... done
Creating harbor-jobservice ... done
Creating nginx             ... done
✔ ----Harbor has been installed and started successfully.----

2.2.7 Verify that Harbor is running

# docker ps
CONTAINER ID   IMAGE                                COMMAND                  CREATED              STATUS                        PORTS                                                                            NAMES
71c0db683e4a   goharbor/nginx-photon:v2.4.1         "nginx -g 'daemon of…"   About a minute ago   Up About a minute (healthy)   0.0.0.0:80->8080/tcp, :::80->8080/tcp, 0.0.0.0:443->8443/tcp, :::443->8443/tcp   nginx
4e3b53a86f01   goharbor/harbor-jobservice:v2.4.1    "/harbor/entrypoint.…"   About a minute ago   Up About a minute (healthy)                                                                                    harbor-jobservice
df76e1eabbf7   goharbor/harbor-core:v2.4.1          "/harbor/entrypoint.…"   About a minute ago   Up About a minute (healthy)                                                                                    harbor-core
eeb4d224dfc4   goharbor/harbor-portal:v2.4.1        "nginx -g 'daemon of…"   About a minute ago   Up About a minute (healthy)                                                                                    harbor-portal
70e162c38b59   goharbor/redis-photon:v2.4.1         "redis-server /etc/r…"   About a minute ago   Up About a minute (healthy)                                                                                    redis
8bcc0e9b06ec   goharbor/harbor-registryctl:v2.4.1   "/home/harbor/start.…"   About a minute ago   Up About a minute (healthy)                                                                                    registryctl
d88196398df7   goharbor/registry-photon:v2.4.1      "/home/harbor/entryp…"   About a minute ago   Up About a minute (healthy)                                                                                    registry
ed5ba2ba9c82   goharbor/harbor-db:v2.4.1            "/docker-entrypoint.…"   About a minute ago   Up About a minute (healthy)                                                                                    harbor-db
dcb4b57c7542   goharbor/harbor-log:v2.4.1           "/bin/sh -c /usr/loc…"   About a minute ago   Up About a minute (healthy)   127.0.0.1:1514->10514/tcp                                                        harbor-log

2.2.8 Access the Harbor web UI

2.2.8.1 From a browser on the physical host


2.2.8.2 From the Docker Host, via the domain name
Add a hosts entry for name resolution
# vim /etc/hosts
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.155 www.kubeyuyang.com
#  IP address   domain name


3. Pushing Images to Harbor and Pulling Them Back

3.1 Configure the Docker daemon to use Harbor

Add the /etc/docker/daemon.json file; it does not exist by default and has to be created manually
# vim /etc/docker/daemon.json
# cat /etc/docker/daemon.json
{
        "insecure-registries": ["www.kube.com"]
}
Reload the systemd configuration
# systemctl daemon-reload
Restart Docker
# systemctl restart docker

3.2 docker tag

View the existing container images
# docker images
REPOSITORY                      TAG       IMAGE ID       CREATED        SIZE
centos                          latest    5d0da3dc9764   4 months ago   231MB
Add a new tag to the existing image
# docker tag centos:latest www.kube.com/library/centos:v1
View the local images again
# docker images
REPOSITORY                       TAG       IMAGE ID       CREATED        SIZE
centos                           latest    5d0da3dc9764   4 months ago   231MB
www.kube.com/library/centos   v1        5d0da3dc9764   4 months ago   231MB

3.3 docker push

# docker login www.kube.com
Username: admin
Password:        (the admin password 12345 set in harbor.yml)
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
Push the local image to the Harbor registry
# docker push www.kube.com/library/centos:v1


3.4 docker pull

Pull and use images from the Harbor registry on other hosts.

Add a local hosts entry for name resolution
# vim /etc/hosts
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.155 www.kube.com
Add /etc/docker/daemon.json locally, listing the registry this host will access
# vim /etc/docker/daemon.json
# cat /etc/docker/daemon.json
{
        "insecure-registries": ["www.kube.com"]
}
# systemctl daemon-reload
# systemctl restart docker
Pull the container image
# docker pull www.kube.com/library/centos:v1
v1: Pulling from library/centos
Digest: sha256:a1801b843b1bfaf77c501e7a6d3f709401a1e0c83863037fa3aab063a7fdb9dc
Status: Downloaded newer image for www.kube.com/library/centos:v1
www.kube.com/library/centos:v1
View the pulled container image
# docker images
REPOSITORY                       TAG       IMAGE ID       CREATED        SIZE
www.kube.com/library/centos   v1        5d0da3dc9764   4 months ago   231MB

Deploying Enterprise Application Clusters with Docker Containers

1. Containerized Deployment of Enterprise Applications

1.1 Why deploy enterprise applications in Docker containers

  • Enables fast deployment of enterprise applications
  • Enables fast recovery of enterprise applications

1.2 Reference material


2. Deploying Nginx with Docker Containers

2.1 Get the reference material


2.2 Run the Nginx application container

Without publishing a port on the Docker host

# docker run -d --name nginx-server -v /opt/nginx-server:/usr/share/nginx/html:ro nginx
664cd1bbda4ad2a71cbd09f0c6baa9b34db80db2d69496670a960be07b9521cb

# /opt/nginx-server is created automatically on the host if it does not exist
# /usr/share/nginx/html is the directory inside the container onto which the host directory is mounted
# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                                                  NAMES
664cd1bbda4a   nginx       "/docker-entrypoint.…"   4 seconds ago    Up 3 seconds    80/tcp                                                 nginx-server
# docker inspect 664 | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.3",
                    "IPAddress": "172.17.0.3",
# curl http://172.17.0.3
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.21.6</center>
</body>
</html>
# ls /opt
nginx-server
# echo "nginx is working" > /opt/nginx-server/index.html
# curl http://172.17.0.3
nginx is working

2.3 Run the Nginx application container with a published port

Publish port 80 on the Docker host

# docker run -d -p 80:80 --name nginx-server-port -v /opt/nginx-server-port:/usr/share/nginx/html:ro nginx
# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED             STATUS             PORTS                                                  NAMES
74dddf51983d   nginx       "/docker-entrypoint.…"   3 seconds ago       Up 2 seconds       0.0.0.0:80->80/tcp, :::80->80/tcp                      nginx-server-port
# ls /opt
nginx-server  nginx-server-port
# echo "nginx is running" > /opt/nginx-server-port/index.html

Access from the host machine


# docker top nginx-server-port
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                22195               22163               0                   15:08               ?                   00:00:00            nginx: master process nginx -g daemon off;
101                 22387               22195               0                   15:08               ?                   00:00:00            nginx: worker process

2.4 Run the Nginx application container with a mounted configuration file

To mount a configuration file, first create an nginx container, copy the configuration file out of it, modify it, and then use it (the full round trip is sketched below).

# docker cp <container name>:/etc/nginx/nginx.conf /opt/nginxcon/
After editing, it is ready to use
# ls /opt/nginxcon/nginx.conf
/opt/nginxcon/nginx.conf
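Spelled out end to end, the copy-out step might look like this (a sketch; the temporary container name nginx-tmp is arbitrary):

# docker run -d --name nginx-tmp nginx:latest
# mkdir -p /opt/nginxcon
# docker cp nginx-tmp:/etc/nginx/nginx.conf /opt/nginxcon/
# docker rm -f nginx-tmp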
# docker run -d \
-p 82:80 --name nginx-server-conf \
-v /opt/nginx-server-conf:/usr/share/nginx/html:ro \
-v /opt/nginxcon/nginx.conf:/etc/nginx/nginx.conf:ro \
nginx:latest
76251ec44e5049445399303944fc96eb8161ccb49e27b673b99cb2492009523c


# docker run -v <host path>:<container path> <other options> <image>
# -p <host port>:<container port>
# docker top nginx-server-conf
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                25005               24972               0                   15:38               ?                   00:00:00            nginx: master process nginx -g daemon off;
101                 25178               25005               0                   15:38               ?                   00:00:00            nginx: worker process
101                 25179               25005               0                   15:38               ?                   00:00:00            nginx: worker process

How do you deploy an Nginx application served over HTTPS in a Docker container?

1. Prepare the Application Directories

Directory for configuration files
# mkdir -p nginxdir/nginx/conf.d
Directory for certificate files
# mkdir -p nginxdir/nginx/certs
Directory for website files
# mkdir -p nginxdir/app
# mkdir -p nginxdir/app

2. Prepare the Files

2.1 Prepare the certificate files

# ls /root/nginxdir/nginx/certs/
www.kubeyuyang.com.key  www.kubeyuyang.com.pem

2.2 Prepare the website files

# echo "ssl test" > /root/nginxdir/app/index.html
# ls /root/nginxdir/app/
index.html

2.3 Prepare the configuration file

# vim /root/nginxdir/nginx/conf.d/default.conf
# cat /root/nginxdir/nginx/conf.d/default.conf
server {
    listen       80;
    listen       443 ssl;
    listen  [::]:443;
    server_name  www.kubeyuyang.com;

    #access_log  /var/log/nginx/host.access.log  main;

    ssl_certificate /etc/nginx/certs/www.kubeyuyang.com.pem;
    ssl_certificate_key /etc/nginx/certs/www.kubeyuyang.com.key;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
进一步优化配置：让80端口的HTTP请求301重定向到HTTPS，再次编辑配置文件
# vim /root/nginxdir/nginx/conf.d/default.conf
# cat /root/nginxdir/nginx/conf.d/default.conf
server {
   listen       80;
   server_name  www.kubeyuyang.com;
   return 301 https://$host$request_uri;
}
server {
    listen      443 ssl;
    server_name  www.kubeyuyang.com;

    #access_log  /var/log/nginx/host.access.log  main;
    ssl_certificate /etc/nginx/certs/www.kubeyuyang.com.pem;
    ssl_certificate_key /etc/nginx/certs/www.kubeyuyang.com.key;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

三、使用docker run运行应用

# docker run -d --name my-nginx \
    -p 80:80 -p 443:443 \
    -v /root/nginxdir/nginx/conf.d:/etc/nginx/conf.d \
    -v /root/nginxdir/nginx/certs:/etc/nginx/certs \
    -v /root/nginxdir/app:/usr/share/nginx/html/ \
    --restart always \
    nginx:latest
# docker ps
CONTAINER ID   IMAGE          COMMAND                   CREATED          STATUS          PORTS                                                                      NAMES
ff203e7bbba8   nginx:latest   "/docker-entrypoint.…"   12 minutes ago   Up 12 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   my-nginx

四、访问应用

# vim /etc/hosts
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.161 www.kubeyuyang.com
# curl -L http://www.kubeyuyang.com
ssl test

# curl https://www.kubeyuyang.com
ssl test

三、使用Docker容器实现Tomcat部署

3.1 获取参考资料

在这里插入图片描述
在这里插入图片描述
在这里插入图片描述

3.2 运行tomcat应用容器

3.2.1 不暴露端口运行

# docker run -d --rm tomcat:9.0
# docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED             STATUS             PORTS                                                  NAMES
c20a0e781246   tomcat:9.0   "catalina.sh run"        27 seconds ago      Up 25 seconds      8080/tcp                                               heuristic_cori

3.2.2 暴露端口运行

# docker run -d -p 8080:8080 --rm tomcat:9.0
2fcf5762314373c824928490b871138a01a94abedd7e6814ad5f361d09fbe1de
# docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED             STATUS             PORTS                                                  NAMES
2fcf57623143   tomcat:9.0   "catalina.sh run"        3 seconds ago       Up 1 second        0.0.0.0:8080->8080/tcp, :::8080->8080/tcp              eloquent_chatelet

在宿主机访问

在这里插入图片描述

# docker exec 2fc ls /usr/local/tomcat/webapps
里面为空,所以可以添加网站文件。

3.2.3 暴露端口及添加网站文件

# docker run -d -p 8081:8080 -v /opt/tomcat-server:/usr/local/tomcat/webapps/ROOT tomcat:9.0
f456e705d48fc603b7243a435f0edd6284558c194e105d87befff2dccddc0b63
# docker ps
CONTAINER ID   IMAGE        COMMAND             CREATED         STATUS         PORTS                                       NAMES
f456e705d48f   tomcat:9.0   "catalina.sh run"   3 seconds ago   Up 2 seconds   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   cool_germain
# echo "tomcat running" > /opt/tomcat-server/index.html

在宿主机访问

在这里插入图片描述

四、使用Docker容器实现MySQL部署

4.1 单节点MySQL部署

在这里插入图片描述
在这里插入图片描述
在这里插入图片描述

# docker run -p 3306:3306 \
 --name mysql \
 -v /opt/mysql/log:/var/log/mysql \
 -v /opt/mysql/data:/var/lib/mysql \
 -v /opt/mysql/conf:/etc/mysql \
 -e MYSQL_ROOT_PASSWORD=root \
 -d \
 mysql:5.7
# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                                                  NAMES
6d16ca21cf31   mysql:5.7   "docker-entrypoint.s…"   32 seconds ago   Up 30 seconds   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp   mysql
通过容器中客户端访问
# docker exec -it mysql mysql -uroot -proot
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.37 MySQL Community Server (GPL)

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
在docker host上访问
# yum -y install mariadb

# mysql -h 192.168.255.157 -uroot -proot -P 3306
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.7.37 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

4.2 MySQL主从复制集群部署

4.2.1 MySQL主节点部署

# docker run -p 3306:3306 \
 --name mysql-master \
 -v /opt/mysql-master/log:/var/log/mysql \
 -v /opt/mysql-master/data:/var/lib/mysql \
 -v /opt/mysql-master/conf:/etc/mysql \
 -e MYSQL_ROOT_PASSWORD=root \
 -d mysql:5.7
# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                                                  NAMES
2dbbed8e35c7   mysql:5.7   "docker-entrypoint.s…"   58 seconds ago   Up 57 seconds   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp   mysql-master

4.2.2 MySQL主节点配置

# vim /opt/mysql-master/conf/my.cnf
# cat /opt/mysql-master/conf/my.cnf
[client]
default-character-set=utf8

[mysql]
default-character-set=utf8

[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve

server_id=1
log-bin=mysql-bin
read-only=0
binlog-do-db=kube_test

replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema
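
注意：my.cnf是在容器启动后才写入挂载目录的，需要重启容器才会生效（见后文4.2.5），可以用如下命令确认配置是否已加载（命令仅为示例）：
# docker exec mysql-master mysql -uroot -proot -e "show variables like 'server_id';"
server_id返回1即说明主节点配置已生效。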

4.2.3 MySQL从节点部署

# docker run -p 3307:3306 \
 --name mysql-slave \
 -v /opt/mysql-slave/log:/var/log/mysql \
 -v /opt/mysql-slave/data:/var/lib/mysql \
 -v /opt/mysql-slave/conf:/etc/mysql \
 -e MYSQL_ROOT_PASSWORD=root \
 -d \
 --link mysql-master:mysql-master \
 mysql:5.7
# docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED         STATUS         PORTS                                                  NAMES
caf7bf3fc68f   mysql:5.7   "docker-entrypoint.s…"   8 seconds ago   Up 6 seconds   33060/tcp, 0.0.0.0:3307->3306/tcp, :::3307->3306/tcp   mysql-slave

4.2.4 MySQL从节点配置

# vim /opt/mysql-slave/conf/my.cnf
# cat /opt/mysql-slave/conf/my.cnf
[client]
default-character-set=utf8

[mysql]
default-character-set=utf8

[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve

server_id=2
log-bin=mysql-bin
read-only=1
binlog-do-db=kube_test

replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema

4.2.5 master节点配置

# mysql -h 192.168.255.157 -uroot -proot -P 3306
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.37 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]>
授权
MySQL [(none)]> grant replication slave on *.* to 'backup'@'%' identified by '123456';
重启容器，使配置生效
# docker restart mysql-master
查看状态
MySQL [(none)]> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 154
     Binlog_Do_DB: kube_test
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)

4.2.6 slave节点配置

# docker restart mysql-slave
# mysql -h 192.168.255.157 -uroot -proot -P 3307
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.37 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]>
MySQL [(none)]> change master to master_host='mysql-master', master_user='backup', master_password='123456', master_log_file='mysql-bin.000001', master_log_pos=154, master_port=3306;
MySQL [(none)]> start slave;
MySQL [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql-master
                  Master_User: backup
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 154
               Relay_Log_File: e0872f94c377-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB: mysql,sys,information_schema,performance_schema
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 154
              Relay_Log_Space: 534
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 0130b415-8b21-11ec-8982-0242ac110002
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.00 sec)

4.2.7 验证MySQL集群可用性

在MySQL Master节点添加kube_test数据库
# mysql -h 192.168.255.157 -uroot -proot -P3306

MySQL [(none)]> create database kube_test;
Query OK, 1 row affected (0.00 sec)

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kube_test          |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)
在MySQL Slave节点查看同步情况
# mysql -h 192.168.255.157 -uroot -proot -P3307

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kube_test          |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)
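还可以进一步验证表级别的数据同步（以下SQL仅为演示示例）：
在主节点建表并写入数据
# mysql -h 192.168.255.157 -uroot -proot -P3306 -e "create table kube_test.t1(id int); insert into kube_test.t1 values(1);"
在从节点查询验证
# mysql -h 192.168.255.157 -uroot -proot -P3307 -e "select * from kube_test.t1;"
若能查询到id为1的记录，说明主从同步正常。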

五、使用Docker容器实现Oracle部署

5.1 获取参考资料

在这里插入图片描述
在这里插入图片描述

5.2 运行oracle容器

# docker pull oracleinanutshell/oracle-xe-11g
# docker run -h oracle --name oracle -d -p 49160:22 -p 49161:1521 -p 49162:8080 oracleinanutshell/oracle-xe-11g
237db949020abf2cee12e3193fa8a34d9dfadaafd9d5604564668d4472abe0b2
# docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED         STATUS         PORTS                                                                                                                               NAMES
237db949020a   oracleinanutshell/oracle-xe-11g   "/bin/sh -c '/usr/sb…"   7 seconds ago   Up 4 seconds   0.0.0.0:49160->22/tcp, :::49160->22/tcp, 0.0.0.0:49161->1521/tcp, :::49161->1521/tcp, 0.0.0.0:49162->8080/tcp, :::49162->8080/tcp   oracle
说明:
49160 为ssh端口
49161 为sqlplus端口
49162 为oem端口
oracle数据库连接信息
port:49161
sid:xe
username:system
password:oracle

SYS用户密码为:oracle

5.3 下载客户端连接工具

下载链接地址:https://www.oracle.com/tools/downloads/sqldev-downloads.html

在这里插入图片描述
在这里插入图片描述
在这里插入图片描述
在这里插入图片描述
在这里插入图片描述
在这里插入图片描述
在这里插入图片描述
在这里插入图片描述
在这里插入图片描述

在这里插入图片描述

六、使用Docker容器实现ElasticSearch+Kibana部署

6.1 获取参考资料

6.1.1 ES部署参考资料

在这里插入图片描述
在这里插入图片描述
在这里插入图片描述
在这里插入图片描述

6.1.2 Kibana部署参考资料

在这里插入图片描述
在这里插入图片描述
在这里插入图片描述
在这里插入图片描述

6.2 ES部署

# docker pull elasticsearch:7.17.0
# mkdir -p /opt/es/config
# mkdir -p /opt/es/data
# mkdir -p /opt/es/plugins
# echo "http.host: 0.0.0.0" >> /opt/es/config/elasticsearch.yml
# chmod -R 777 /opt/es/
# docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms64m -Xmx512m" \
-v /opt/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /opt/es/data:/usr/share/elasticsearch/data \
-v /opt/es/plugins:/usr/share/elasticsearch/plugins \
-d elasticsearch:7.17.0
# docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                                                                                  NAMES
e1c306e6e5a3   elasticsearch:7.17.0   "/bin/tini -- /usr/l…"   22 seconds ago   Up 20 seconds   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp   elasticsearch
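也可以在宿主机上用curl做一个简单验证（IP为本文实验环境地址，按实际环境替换）：
# curl http://192.168.255.157:9200
如果返回包含集群名称与版本信息的JSON，说明ES已正常启动。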

在这里插入图片描述

6.3 Kibana部署

# docker pull kibana:7.17.0
# docker run --name kibana -e ELASTICSEARCH_HOSTS=http://192.168.255.157:9200 -p 5601:5601 \
-d kibana:7.17.0
# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                                                                                  NAMES
fb60e73f9cd5   kibana:7.17.0          "/bin/tini -- /usr/l…"   2 minutes ago   Up 2 minutes   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                                              kibana

在这里插入图片描述

七、使用Docker容器实现Redis部署

7.1 获取参考资料

在这里插入图片描述

在这里插入图片描述
在这里插入图片描述
在这里插入图片描述

7.2 运行Redis容器

# mkdir -p /opt/redis/conf
# touch /opt/redis/conf/redis.conf
# docker run -p 6379:6379 --name redis -v /opt/redis/data:/data \
-v /opt/redis/conf:/etc/redis \
-d redis redis-server /etc/redis/redis.conf
# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                                                                                  NAMES
9bd2b39cd92a   redis                  "docker-entrypoint.s…"   44 seconds ago   Up 42 seconds   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp                                              redis

7.3 验证

# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum -y install redis
# redis-cli -h 192.168.255.157 -p 6379

192.168.255.157:6379> set test1 a
OK
192.168.255.157:6379> get test1
"a"

7.4 Redis集群

部署redis-cluster：采用3主3从方式，从节点用于同步备份数据，主节点负责slot数据分片。

编辑运行多个redis容器脚本文件
# vim redis-cluster.sh
# cat redis-cluster.sh
for port in $(seq 8001 8006); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port ${port}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 192.168.255.157
cluster-announce-port ${port}
cluster-announce-bus-port 1${port}
appendonly yes
EOF
docker run -p ${port}:${port} -p 1${port}:1${port} --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d redis:5.0.7 redis-server /etc/redis/redis.conf; \
done
执行脚本
# sh redis-cluster.sh
查看已运行容器
# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED              STATUS              PORTS                                                                                                NAMES
8d53864a98ce   redis:5.0.7   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:8006->8006/tcp, :::8006->8006/tcp, 6379/tcp, 0.0.0.0:18006->18006/tcp, :::18006->18006/tcp   redis-8006
e2b5da0f0605   redis:5.0.7   "docker-entrypoint.s…"   2 minutes ago        Up About a minute   0.0.0.0:8005->8005/tcp, :::8005->8005/tcp, 6379/tcp, 0.0.0.0:18005->18005/tcp, :::18005->18005/tcp   redis-8005
70e8e8f15aea   redis:5.0.7   "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes        0.0.0.0:8004->8004/tcp, :::8004->8004/tcp, 6379/tcp, 0.0.0.0:18004->18004/tcp, :::18004->18004/tcp   redis-8004
dff8e4bf02b4   redis:5.0.7   "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes        0.0.0.0:8003->8003/tcp, :::8003->8003/tcp, 6379/tcp, 0.0.0.0:18003->18003/tcp, :::18003->18003/tcp   redis-8003
c34dc4c423ef   redis:5.0.7   "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes        0.0.0.0:8002->8002/tcp, :::8002->8002/tcp, 6379/tcp, 0.0.0.0:18002->18002/tcp, :::18002->18002/tcp   redis-8002
b8cb5feffb43   redis:5.0.7   "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes        0.0.0.0:8001->8001/tcp, :::8001->8001/tcp, 6379/tcp, 0.0.0.0:18001->18001/tcp, :::18001->18001/tcp   redis-8001
登录redis容器
# docker exec -it redis-8001 bash
root@b8cb5feffb43:/data#
创建redis-cluster
root@b8cb5feffb43:/data# redis-cli --cluster create 192.168.255.157:8001 192.168.255.157:8002 192.168.255.157:8003 192.168.255.157:8004 192.168.255.157:8005 192.168.255.157:8006 --cluster-replicas 1
输出:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.255.157:8005 to 192.168.255.157:8001
Adding replica 192.168.255.157:8006 to 192.168.255.157:8002
Adding replica 192.168.255.157:8004 to 192.168.255.157:8003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: abd07f1a2679fe77558bad3ff4b7ab70ec41efa5 192.168.255.157:8001
   slots:[0-5460] (5461 slots) master
M: 40e69202bb3eab13a8157c33da6240bb31f2fd6f 192.168.255.157:8002
   slots:[5461-10922] (5462 slots) master
M: 9a927abf3c2982ba9ffdb29176fc8ffa77a2cf03 192.168.255.157:8003
   slots:[10923-16383] (5461 slots) master
S: 81d0a4056328830a555fcd75cf523d4c9d52205c 192.168.255.157:8004
   replicates 9a927abf3c2982ba9ffdb29176fc8ffa77a2cf03
S: 8121a28519e5b52e4817913aa3969d9431bb68af 192.168.255.157:8005
   replicates abd07f1a2679fe77558bad3ff4b7ab70ec41efa5
S: 3a8dd5343c0b8f5580bc44f6b3bb5b4371d4dde5 192.168.255.157:8006
   replicates 40e69202bb3eab13a8157c33da6240bb31f2fd6f
Can I set the above configuration? (type 'yes' to accept): yes 输入yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 192.168.255.157:8001)
M: abd07f1a2679fe77558bad3ff4b7ab70ec41efa5 192.168.255.157:8001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 81d0a4056328830a555fcd75cf523d4c9d52205c 192.168.255.157:8004
   slots: (0 slots) slave
   replicates 9a927abf3c2982ba9ffdb29176fc8ffa77a2cf03
M: 40e69202bb3eab13a8157c33da6240bb31f2fd6f 192.168.255.157:8002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 8121a28519e5b52e4817913aa3969d9431bb68af 192.168.255.157:8005
   slots: (0 slots) slave
   replicates abd07f1a2679fe77558bad3ff4b7ab70ec41efa5
M: 9a927abf3c2982ba9ffdb29176fc8ffa77a2cf03 192.168.255.157:8003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 3a8dd5343c0b8f5580bc44f6b3bb5b4371d4dde5 192.168.255.157:8006
   slots: (0 slots) slave
   replicates 40e69202bb3eab13a8157c33da6240bb31f2fd6f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
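
集群创建完成后，可以在容器内做简单验证（以下命令与键名仅为示例）：
root@b8cb5feffb43:/data# redis-cli --cluster check 192.168.255.157:8001
root@b8cb5feffb43:/data# redis-cli -c -h 192.168.255.157 -p 8001
192.168.255.157:8001> set k1 v1
192.168.255.157:8001> get k1
使用-c参数以集群模式连接，写入的键会被自动重定向到负责对应slot的主节点，能正常set/get即说明集群工作正常。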

八、使用Docker容器实现RabbitMQ部署

8.1 获取参考资料

在这里插入图片描述
在这里插入图片描述
在这里插入图片描述
在这里插入图片描述

8.2 部署RabbitMQ

部署带管理控制台的RabbitMQ

# docker run -d --name rabbitmq -p 5671:5671 -p 5672:5672 -p 4369:4369 -p 25672:25672 -p 15671:15671 -p 15672:15672 -v /opt/rabbitmq:/var/lib/rabbitmq rabbitmq:management
# docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS         PORTS                                                                                                                                                                                                                                             NAMES
97d28093faa4   rabbitmq:management   "docker-entrypoint.s…"   11 seconds ago   Up 6 seconds   0.0.0.0:4369->4369/tcp, :::4369->4369/tcp, 0.0.0.0:5671-5672->5671-5672/tcp, :::5671-5672->5671-5672/tcp, 0.0.0.0:15671-15672->15671-15672/tcp, :::15671-15672->15671-15672/tcp, 0.0.0.0:25672->25672/tcp, :::25672->25672/tcp, 15691-15692/tcp   rabbitmq
端口说明:
4369, 25672 (Erlang发现&集群端口)
5672, 5671 (AMQP端口)
15672 (web管理后台端口)
61613, 61614 (STOMP协议端口)
1883, 8883 (MQTT协议端口)

默认用户名和密码均为：guest
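
默认的guest用户通常只允许从localhost登录；如果需要从其它主机访问管理后台，可以新建一个管理用户（以下用户名admin与密码123456仅为示例）：
# docker exec rabbitmq rabbitmqctl add_user admin 123456
# docker exec rabbitmq rabbitmqctl set_user_tags admin administrator
# docker exec rabbitmq rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"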

在这里插入图片描述

在这里插入图片描述
在这里插入图片描述

Dockerfile精讲及新型容器镜像构建技术

一、容器与容器镜像之间的关系

说到Docker管理的容器不得不说容器镜像,主要因为容器镜像是容器模板,通过容器镜像我们才能快速创建容器。

如下图所示:
在这里插入图片描述

Docker Daemon通过容器镜像创建容器。

二、容器镜像分类

  • 操作系统类
    • CentOS
    • Ubuntu
    • 在dockerhub下载或自行制作
  • 应用类
    • Tomcat
    • Nginx
    • MySQL
    • Redis

三、容器镜像获取的方法

主要有以下几种:

1、在DockerHub直接下载

2、把操作系统中文件系统打包为容器镜像

3、把正在运行的容器打包为容器镜像,即docker commit

4、通过Dockerfile实现容器镜像的自定义及生成

四、容器镜像获取方法演示

4.1 在DockerHub直接下载

# docker pull centos:latest
# docker pull nginx:latest

4.2 把操作系统中文件系统打包为容器镜像

4.2.1 安装一个最小化的操作系统

在这里插入图片描述

4.2.2 把操作系统中文件系统进行打包

# tar --numeric-owner --exclude=/proc --exclude=/sys -cvf centos7u6.tar /

4.2.3 把打包后文件加载至本地文件系统生成本地容器镜像

# ls
centos7u6.tar
# docker import centos7u6.tar centos7u6:v1
# docker images
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
centos7u6    v1        130cb005b2dc   7 seconds ago   1.09GB
# docker run -it centos7u6:v1 bash
[root@50f24f688b4d /]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

4.3 把正在运行的容器打包为容器镜像

4.3.1 运行一个容器

# docker run -it centos7u6:v1 bash

4.3.2 在容器中安装应用

[root@064aace45718 /]# yum -y install httpd

4.3.3 把正在运行的容器打包为容器镜像

[root@064aace45718 /]# 按 Ctrl + p，再按 Ctrl + q（退出容器但保持其继续运行）
# docker commit 064aace45718 centos7u6-httpd:v1
# docker images
REPOSITORY        TAG       IMAGE ID       CREATED         SIZE
centos7u6-httpd   v1        30ec9d728880   6 seconds ago   1.29GB
# docker run -it centos7u6-httpd:v1 bash
[root@01a1373b4a3f /]# rpm -qa | grep httpd
httpd-tools-2.4.6-97.el7.centos.4.x86_64
httpd-2.4.6-97.el7.centos.4.x86_64

4.4 通过Dockerfile实现容器镜像的自定义及生成

4.4.1 Dockerfile介绍

Dockerfile是一种能够被Docker程序解释的剧本。Dockerfile由一条一条的指令组成,并且有自己的书写格式和支持的命令。当我们需要在容器镜像中指定自己额外的需求时,只需在Dockerfile上添加或修改指令,然后通过docker build生成我们自定义的容器镜像(image)。

在这里插入图片描述

4.4.2 Dockerfile指令

  • 构建类指令

    • 用于构建image
    • 其指定的操作不会在运行image的容器上执行(FROM、MAINTAINER、RUN、ENV、ADD、COPY)
  • 设置类指令

    • 用于设置image的属性
    • 其指定的操作将在运行image的容器中执行(CMD、ENTRYPOINT、USER 、EXPOSE、VOLUME、WORKDIR、ONBUILD)
  • 指令说明

指令          描述
FROM          构建新镜像基于的基础镜像
LABEL         标签
RUN           构建镜像时运行的Shell命令
COPY          拷贝文件或目录到镜像中
ADD           解压压缩包并拷贝
ENV           设置环境变量
USER          为RUN、CMD和ENTRYPOINT执行命令指定运行用户
EXPOSE        声明容器运行的服务端口
WORKDIR       为RUN、CMD、ENTRYPOINT、COPY和ADD设置工作目录
CMD           运行容器时默认执行的命令，如果有多个CMD指令，最后一个生效
  • 指令详细解释

通过man dockerfile可以查看到详细的说明,这里简单的翻译并列出常用的指令

1, FROM

FROM指令用于指定其后构建新镜像所使用的基础镜像。

FROM指令必是Dockerfile文件中的首条命令。

FROM指令指定的基础image可以是官方远程仓库中的,也可以位于本地仓库,优先本地仓库。

格式:FROM <image>:<tag>
例:FROM centos:latest

2, RUN

RUN指令用于在构建镜像中执行命令,有以下两种格式:

  • shell格式
格式:RUN <命令>
例:RUN echo 'kube' > /var/www/html/index.html
  • exec格式
格式:RUN ["可执行文件", "参数1", "参数2"]
例:RUN ["/bin/bash", "-c", "echo kube > /var/www/html/index.html"]

注意：从优化的角度来讲，当有多条要执行的命令时，不要使用多条RUN，尽量使用&&符号与\符号连接成一行，因为多条RUN命令会让镜像建立多层，镜像也会因此变得臃肿。

RUN yum install httpd httpd-devel -y
RUN echo test > /var/www/html/index.html
可以改成
RUN yum install httpd httpd-devel -y && echo test > /var/www/html/index.html
或者改成
RUN yum install httpd httpd-devel -y  \
    && echo test > /var/www/html/index.html

3, CMD

CMD不同于RUN,CMD用于指定在容器启动时所要执行的命令,而RUN用于指定镜像构建时所要执行的命令。

格式有三种:
CMD ["executable","param1","param2"]
CMD ["param1","param2"]
CMD command param1 param2

每个Dockerfile只能有一条CMD命令。如果指定了多条命令,只有最后一条会被执行。

如果用户启动容器时候指定了运行的命令,则会覆盖掉CMD指定的命令。

什么是启动容器时指定运行的命令?
# docker run -d -p 80:80 镜像名 运行的命令

4, EXPOSE

EXPOSE指令用于指定容器在运行时监听的端口

格式:EXPOSE <port> [<port>...]
例:EXPOSE 80 3306 8080

EXPOSE声明的端口还需要在docker run运行容器时通过-p参数映射到宿主机端口，容器才能被外部访问。
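
除了-p逐个映射，也可以在运行容器时使用-P（大写）让Docker把所有EXPOSE声明的端口随机映射到宿主机的高位端口（以下命令仅为示例）：
# docker run -d -P 镜像名
# docker port 容器ID
docker port可以查看实际分配到的宿主机端口。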

5, ENV

ENV指令用于指定一个环境变量.

格式:ENV <key> <value> 或者 ENV <key>=<value>
例:ENV JAVA_HOME /usr/local/jdkxxxx/

6, ADD

ADD指令用于把宿主机上的文件拷贝到镜像中

格式:ADD <src> <dest>
<src>可以是一个本地文件或本地压缩文件,还可以是一个url,
如果把<src>写成一个url,那么ADD就类似于wget命令
<dest>路径的填写可以是容器内的绝对路径,也可以是相对于工作目录的相对路径

7, COPY

COPY指令与ADD指令类似,但COPY的源文件只能是本地文件

格式:COPY <src> <dest>

8, ENTRYPOINT

ENTRYPOINT与CMD非常类似

相同点:
一个Dockerfile只写一条,如果写了多条,那么只有最后一条生效
都是容器启动时才运行

不同点:
如果用户启动容器时候指定了运行的命令,ENTRYPOINT不会被运行的命令覆盖,而CMD则会被覆盖

格式有两种:
ENTRYPOINT ["executable", "param1", "param2"]
ENTRYPOINT command param1 param2
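
下面用一个极简的演示镜像直观对比两者的差异（目录entrydemo与镜像名entry-demo:v1均为假设的演示用名称）：

# mkdir entrydemo && cd entrydemo
# vim Dockerfile
# cat Dockerfile
FROM centos:centos7
ENTRYPOINT ["echo", "entrypoint:"]
CMD ["default-arg"]
# docker build -t entry-demo:v1 .
# docker run --rm entry-demo:v1
# docker run --rm entry-demo:v1 hello
前者输出 entrypoint: default-arg，后者输出 entrypoint: hello，可见docker run后面追加的参数只覆盖了CMD，并没有覆盖ENTRYPOINT。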

9, VOLUME

VOLUME指令用于把宿主机里的目录与容器里的目录映射.

VOLUME只指定容器内的挂载点，宿主机上与之映射的目录由docker自动生成。

格式:VOLUME ["<mountpoint>"]
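
容器运行后，可以用如下命令查看VOLUME在宿主机上自动生成的目录（一般位于/var/lib/docker/volumes/下，命令仅为查看示例）：
# docker inspect -f '{{ json .Mounts }}' 容器名或ID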

10, USER

USER指令设置启动容器的用户(像hadoop需要hadoop用户操作,oracle需要oracle用户操作),可以是用户名或UID

USER daemon
USER 1001

注意:如果设置了容器以daemon用户去运行,那么RUN,CMD和ENTRYPOINT都会以这个用户去运行
镜像构建完成后,通过docker run运行容器时,可以通过-u参数来覆盖所指定的用户

11, WORKDIR

WORKDIR指令设置工作目录，类似于cd命令。不建议使用RUN cd /root，建议使用WORKDIR。

WORKDIR /root

4.4.3 Dockerfile基本构成

  • 基础镜像信息

  • 维护者信息

  • 镜像操作指令

  • 容器启动时执行指令

4.4.4 Dockerfile生成容器镜像方法

在这里插入图片描述

4.4.5 Dockerfile生成容器镜像案例

4.4.5.0 使用Dockerfile生成容器镜像步骤
第一步:创建一个文件夹(目录)

第二步:在文件夹(目录)中创建Dockerfile文件(并编写)及其它文件

第三步:使用`docker build`命令构建镜像

第四步:使用构建的镜像启动容器
4.4.5.1 使用Dockerfile生成Nginx容器镜像
[root@localhost ~]# mkdir nginxroot
[root@localhost ~]# cd nginxroot
[root@localhost nginxroot]#
[root@localhost nginxroot]# echo "nginx's running" >> index.html
[root@localhost nginxroot]# ls
index.html
[root@localhost nginxroot]# cat index.html
nginx's running
[root@localhost nginxroot]# vim Dockerfile
[root@localhost nginxroot]# cat Dockerfile
FROM centos:centos7

MAINTAINER "www.kube.com"

RUN yum -y install wget

RUN wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

RUN yum -y install nginx

ADD index.html /usr/share/nginx/html/

RUN echo "daemon off;" >> /etc/nginx/nginx.conf

EXPOSE 80

CMD /usr/sbin/nginx
[root@localhost nginxroot]# docker build -t centos7-nginx:v1 .
输出:
Sending build context to Docker daemon  3.072kB
第一步:下载基础镜像
Step 1/9 : FROM centos:centos7
 ---> eeb6ee3f44bd
第二步:维护者信息
Step 2/9 : MAINTAINER "www.kube.com"
 ---> Using cache
 ---> f978e524772c
 
第三步:安装wget
Step 3/9 : RUN yum -y install wget
 ---> Running in 4e0fc3854088
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
 * base: mirrors.huaweicloud.com
 * extras: mirrors.tuna.tsinghua.edu.cn
 * updates: mirrors.tuna.tsinghua.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package wget.x86_64 0:1.14-18.el7_6.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch             Version                   Repository      Size
================================================================================
Installing:
 wget           x86_64           1.14-18.el7_6.1           base           547 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 547 k
Installed size: 2.0 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/base/packages/wget-1.14-18.el7_6.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for wget-1.14-18.el7_6.1.x86_64.rpm is not installed
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-9.2009.0.el7.centos.x86_64 (@CentOS)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : wget-1.14-18.el7_6.1.x86_64                                  1/1
install-info: No such file or directory for /usr/share/info/wget.info.gz
  Verifying  : wget-1.14-18.el7_6.1.x86_64                                  1/1

Installed:
  wget.x86_64 0:1.14-18.el7_6.1

Complete!
Removing intermediate container 4e0fc3854088
 ---> 369e33a2152a
 
第四步:使用wget下载YUM源
Step 4/9 : RUN wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
 ---> Running in 4bdfc0a1c844
--2022-02-10 06:18:07--  http://mirrors.aliyun.com/repo/epel-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 221.195.209.65, 221.195.209.64, 221.195.209.70, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|221.195.209.65|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 664 [application/octet-stream]
Saving to: '/etc/yum.repos.d/epel.repo'

     0K                                                       100%  158M=0s

2022-02-10 06:18:07 (158 MB/s) - '/etc/yum.repos.d/epel.repo' saved [664/664]

Removing intermediate container 4bdfc0a1c844
 ---> 1d73faa62447
 
第五步:安装Nginx
Step 5/9 : RUN yum -y install nginx
 ---> Running in 51b50c2ce841
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * extras: mirrors.tuna.tsinghua.edu.cn
 * updates: mirrors.tuna.tsinghua.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 1:1.20.1-9.el7 will be installed
--> Processing Dependency: nginx-filesystem = 1:1.20.1-9.el7 for package: 1:nginx-1.20.1-9.el7.x86_64
--> Processing Dependency: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit) for package: 1:nginx-1.20.1-9.el7.x86_64
--> Processing Dependency: libssl.so.1.1(OPENSSL_1_1_0)(64bit) for package: 1:nginx-1.20.1-9.el7.x86_64
--> Processing Dependency: libssl.so.1.1(OPENSSL_1_1_1)(64bit) for package: 1:nginx-1.20.1-9.el7.x86_64
--> Processing Dependency: nginx-filesystem for package: 1:nginx-1.20.1-9.el7.x86_64
--> Processing Dependency: openssl for package: 1:nginx-1.20.1-9.el7.x86_64
--> Processing Dependency: redhat-indexhtml for package: 1:nginx-1.20.1-9.el7.x86_64
--> Processing Dependency: system-logos for package: 1:nginx-1.20.1-9.el7.x86_64
--> Processing Dependency: libcrypto.so.1.1()(64bit) for package: 1:nginx-1.20.1-9.el7.x86_64
--> Processing Dependency: libprofiler.so.0()(64bit) for package: 1:nginx-1.20.1-9.el7.x86_64
--> Processing Dependency: libssl.so.1.1()(64bit) for package: 1:nginx-1.20.1-9.el7.x86_64
--> Running transaction check
---> Package centos-indexhtml.noarch 0:7-9.el7.centos will be installed
---> Package centos-logos.noarch 0:70.0.6-3.el7.centos will be installed
---> Package gperftools-libs.x86_64 0:2.6.1-1.el7 will be installed
---> Package nginx-filesystem.noarch 1:1.20.1-9.el7 will be installed
---> Package openssl.x86_64 1:1.0.2k-24.el7_9 will be installed
--> Processing Dependency: openssl-libs(x86-64) = 1:1.0.2k-24.el7_9 for package: 1:openssl-1.0.2k-24.el7_9.x86_64
--> Processing Dependency: make for package: 1:openssl-1.0.2k-24.el7_9.x86_64
---> Package openssl11-libs.x86_64 1:1.1.1k-2.el7 will be installed
--> Running transaction check
---> Package make.x86_64 1:3.82-24.el7 will be installed
---> Package openssl-libs.x86_64 1:1.0.2k-19.el7 will be updated
---> Package openssl-libs.x86_64 1:1.0.2k-24.el7_9 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package               Arch        Version                   Repository    Size
================================================================================
Installing:
 nginx                 x86_64      1:1.20.1-9.el7            epel         587 k
Installing for dependencies:
 centos-indexhtml      noarch      7-9.el7.centos            base          92 k
 centos-logos          noarch      70.0.6-3.el7.centos       base          21 M
 gperftools-libs       x86_64      2.6.1-1.el7               base         272 k
 make                  x86_64      1:3.82-24.el7             base         421 k
 nginx-filesystem      noarch      1:1.20.1-9.el7            epel          24 k
 openssl               x86_64      1:1.0.2k-24.el7_9         updates      494 k
 openssl11-libs        x86_64      1:1.1.1k-2.el7            epel         1.5 M
Updating for dependencies:
 openssl-libs          x86_64      1:1.0.2k-24.el7_9         updates      1.2 M

Transaction Summary
================================================================================
Install  1 Package  (+7 Dependent packages)
Upgrade             ( 1 Dependent package)

Total download size: 26 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
--------------------------------------------------------------------------------
Total                                              3.1 MB/s |  26 MB  00:08
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : centos-logos-70.0.6-3.el7.centos.noarch                     1/10
  Installing : centos-indexhtml-7-9.el7.centos.noarch                      2/10
  Installing : 1:make-3.82-24.el7.x86_64                                   3/10
  Installing : gperftools-libs-2.6.1-1.el7.x86_64                          4/10
  Installing : 1:openssl11-libs-1.1.1k-2.el7.x86_64                        5/10
  Updating   : 1:openssl-libs-1.0.2k-24.el7_9.x86_64                       6/10
  Installing : 1:openssl-1.0.2k-24.el7_9.x86_64                            7/10
  Installing : 1:nginx-filesystem-1.20.1-9.el7.noarch                      8/10
  Installing : 1:nginx-1.20.1-9.el7.x86_64                                 9/10
  Cleanup    : 1:openssl-libs-1.0.2k-19.el7.x86_64                        10/10
  Verifying  : 1:nginx-filesystem-1.20.1-9.el7.noarch                      1/10
  Verifying  : 1:nginx-1.20.1-9.el7.x86_64                                 2/10
  Verifying  : 1:openssl-libs-1.0.2k-24.el7_9.x86_64                       3/10
  Verifying  : 1:openssl11-libs-1.1.1k-2.el7.x86_64                        4/10
  Verifying  : gperftools-libs-2.6.1-1.el7.x86_64                          5/10
  Verifying  : 1:make-3.82-24.el7.x86_64                                   6/10
  Verifying  : 1:openssl-1.0.2k-24.el7_9.x86_64                            7/10
  Verifying  : centos-indexhtml-7-9.el7.centos.noarch                      8/10
  Verifying  : centos-logos-70.0.6-3.el7.centos.noarch                     9/10
  Verifying  : 1:openssl-libs-1.0.2k-19.el7.x86_64                        10/10

Installed:
  nginx.x86_64 1:1.20.1-9.el7

Dependency Installed:
  centos-indexhtml.noarch 0:7-9.el7.centos
  centos-logos.noarch 0:70.0.6-3.el7.centos
  gperftools-libs.x86_64 0:2.6.1-1.el7
  make.x86_64 1:3.82-24.el7
  nginx-filesystem.noarch 1:1.20.1-9.el7
  openssl.x86_64 1:1.0.2k-24.el7_9
  openssl11-libs.x86_64 1:1.1.1k-2.el7

Dependency Updated:
  openssl-libs.x86_64 1:1.0.2k-24.el7_9

Complete!
Removing intermediate container 51b50c2ce841
 ---> 88a7d7a2c522
 
第六步:添加文件至容器
Step 6/9 : ADD index.html /usr/share/nginx/html/
 ---> a2226a4d6720
第七步:设置nginx服务运行方式
Step 7/9 : RUN echo "daemon off;" >> /etc/nginx/nginx.conf
 ---> Running in 01d623937807
Removing intermediate container 01d623937807
 ---> 53fddea5b491
 
第八步:暴露端口
Step 8/9 : EXPOSE 80
 ---> Running in 9b73fcf7ee1b
Removing intermediate container 9b73fcf7ee1b
 ---> 903377216b23
第九步:运行命令,执行nginx二进制文件
Step 9/9 : CMD /usr/sbin/nginx
 ---> Running in 58037652952c
Removing intermediate container 58037652952c
 ---> 944d27b80f1f
 
生成镜像,并为镜像打标记:
Successfully built 944d27b80f1f
Successfully tagged centos7-nginx:v1
[root@localhost nginxroot]# docker images
REPOSITORY        TAG       IMAGE ID       CREATED             SIZE
centos7-nginx     v1        944d27b80f1f   3 minutes ago       587MB
[root@localhost ~]# docker run -d -p 8081:80 centos7-nginx:v1
[root@localhost ~]# curl http://localhost:8081
nginx's running
4.4.5.2 使用Dockerfile生成Tomcat容器镜像
[root@localhost ~]# mkdir tomcatdir
[root@localhost ~]# cd tomcatdir/
[root@localhost tomcatdir]#
[root@localhost tomcatdir]# echo "tomcat is running" >> index.html
[root@localhost tomcatdir]# ls
Dockerfile  jdk index.html
jdk为目录
index.html 网站首页
[root@localhost tomcatdir]# vim Dockerfile
[root@localhost tomcatdir]# cat Dockerfile
FROM centos:centos7

MAINTAINER "www.kube.com"

ENV VERSION=8.5.75
ENV JAVA_HOME=/usr/local/jdk
ENV TOMCAT_HOME=/usr/local/tomcat

RUN yum -y install wget

RUN wget https://dlcdn.apache.org/tomcat/tomcat-8/v${VERSION}/bin/apache-tomcat-${VERSION}.tar.gz

RUN tar xf apache-tomcat-${VERSION}.tar.gz

RUN mv apache-tomcat-${VERSION} /usr/local/tomcat

RUN rm -rf apache-tomcat-${VERSION}.tar.gz /usr/local/tomcat/webapps/*

RUN mkdir /usr/local/tomcat/webapps/ROOT

ADD ./index.html /usr/local/tomcat/webapps/ROOT/

ADD ./jdk /usr/local/jdk


RUN echo "export TOMCAT_HOME=/usr/local/tomcat" >> /etc/profile

RUN echo "export JAVA_HOME=/usr/local/jdk" >> /etc/profile

RUN echo "export PATH=${TOMCAT_HOME}/bin:${JAVA_HOME}/bin:$PATH" >> /etc/profile

RUN echo "export CLASSPATH=.:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar" >> /etc/profile


RUN source /etc/profile

EXPOSE 8080

CMD ["/usr/local/tomcat/bin/catalina.sh","run"]
[root@localhost tomcatdir]# docker build -t centos-tomcat:v1 .                           

Sending build context to Docker daemon  398.9MB
Step 1/20 : FROM centos:centos7
 ---> eeb6ee3f44bd
Step 2/20 : MAINTAINER "www.kube.com"
 ---> Using cache
 ---> f978e524772c
Step 3/20 : ENV VERSION=8.5.75
 ---> Using cache
 ---> 792767bbdb22
Step 4/20 : ENV JAVA_HOME=/usr/local/jdk
 ---> Using cache
 ---> 6eb3855650f0
Step 5/20 : ENV TOMCAT_HOME=/usr/local/tomcat
 ---> Using cache
 ---> e38bdbbfd19d
Step 6/20 : RUN yum -y install wget
 ---> Using cache
 ---> 4c6aafa6d8ba
Step 7/20 : RUN wget http://dlcdn.apache.org/tomcat/tomcat-8/v${VERSION}/bin/apache-tomcat-${VERSION}.tar.gz
 ---> Using cache
 ---> 9bdb6f636a5f
Step 8/20 : RUN tar xf apache-tomcat-${VERSION}.tar.gz
 ---> Using cache
 ---> 6abe5cb0ef26
Step 9/20 : RUN mv apache-tomcat-${VERSION} /usr/local/tomcat
 ---> Using cache
 ---> b3907af15c22
Step 10/20 : RUN rm -rf apache-tomcat-${VERSION}.tar.gz /usr/local/tomcat/webapps/*
 ---> Using cache
 ---> b775439344e3
Step 11/20 : RUN mkdir /usr/local/tomcat/webapps/ROOT
 ---> Using cache
 ---> 149ad46776eb
Step 12/20 : ADD ./index.html /usr/local/tomcat/webapps/ROOT/
 ---> 064579c39a46
Step 13/20 : ADD ./jdk /usr/local/jdk
 ---> 477fd38dfbcf
Step 14/20 : RUN echo "export TOMCAT_HOME=/usr/local/tomcat" >> /etc/profile
 ---> Running in 3fc9bc5e8ba5
Removing intermediate container 3fc9bc5e8ba5
 ---> 3c43bccd5779
Step 15/20 : RUN echo "export JAVA_HOME=/usr/local/jdk" >> /etc/profile
 ---> Running in 80f8150f0e80
Removing intermediate container 80f8150f0e80
 ---> e01307ccb02a
Step 16/20 : RUN echo "export PATH=${TOMCAT_HOME}/bin:${JAVA_HOME}/bin:$PATH" >> /etc/profile
 ---> Running in 92a6a4fd1cbc
Removing intermediate container 92a6a4fd1cbc
 ---> 1d26f53b7095
Step 17/20 : RUN echo "export CLASSPATH=.:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar" >> /etc/profile
 ---> Running in fb5ee1710c36
Removing intermediate container fb5ee1710c36
 ---> d2eaff35dce3
Step 18/20 : RUN source /etc/profile
 ---> Running in 0422af810b35
Removing intermediate container 0422af810b35
 ---> fc6d285288ca
Step 19/20 : EXPOSE 8080
 ---> Running in eeb64d4f9e94
Removing intermediate container eeb64d4f9e94
 ---> 05ec1c6d06cf
Step 20/20 : CMD ["/usr/local/tomcat/bin/catalina.sh","run"]
 ---> Running in 66b7851e2772
Removing intermediate container 66b7851e2772
 ---> ad338289055c
Successfully built ad338289055c
Successfully tagged centos-tomcat:v1
# docker images
REPOSITORY        TAG       IMAGE ID       CREATED          SIZE
centos-tomcat     v1        ad338289055c   6 minutes ago    797MB
# docker run -d -p 8082:8080 centos-tomcat:v1
# curl http://localhost:8082
tomcat is running

4.4.6 使用Dockerfile生成容器镜像优化

4.4.6.1 减少镜像分层

Dockerfile中包含多种指令,如果涉及到部署最多使用的算是RUN命令了,使用RUN命令时,不建议每次安装都使用一条单独的RUN命令,可以把能够合并安装指令合并为一条,这样就可以减少镜像分层。

FROM centos:latest
MAINTAINER www.kube.com
RUN yum install epel-release -y 
RUN yum install -y gcc gcc-c++ make -y
RUN wget http://docs.php.net/distributions/php-5.6.36.tar.gz
RUN tar zxf php-5.6.36.tar.gz
RUN cd php-5.6.36
RUN ./configure --prefix=/usr/local/php 
RUN make -j 4 
RUN make install
EXPOSE 9000
CMD ["php-fpm"]

优化内容如下:

FROM centos:latest
MAINTAINER www.kube.com
RUN yum install epel-release -y && \
    yum install -y gcc gcc-c++ make

RUN wget http://docs.php.net/distributions/php-5.6.36.tar.gz && \
    tar zxf php-5.6.36.tar.gz && \
    cd php-5.6.36 && \
    ./configure --prefix=/usr/local/php && \
    make -j 4 && make install
EXPOSE 9000
CMD ["php-fpm"]
4.4.6.2 清理无用数据
  • 一次RUN形成新的一层,如果没有在同一层删除,无论文件是否最后删除,都会带到下一层,所以要在每一层清理对应的残留数据,减小镜像大小。
  • 把生成容器镜像过程中部署的应用软件包做删除处理
FROM centos:latest
MAINTAINER www.kube.com
RUN yum install epel-release -y && \
    yum install -y gcc gcc-c++ make gd-devel libxml2-devel \
    libcurl-devel libjpeg-devel libpng-devel openssl-devel \
    libmcrypt-devel libxslt-devel libtidy-devel autoconf \
    iproute net-tools telnet wget curl && \
    yum clean all && \
    rm -rf /var/cache/yum/*

RUN wget http://docs.php.net/distributions/php-5.6.36.tar.gz && \
    tar zxf php-5.6.36.tar.gz && \
    cd php-5.6.36 && \
    ./configure --prefix=/usr/local/php && \
    make -j 4 && make install && \
    cd / && rm -rf php*
4.4.6.3 多阶段构建镜像

项目容器镜像有两种：一种直接把项目代码复制到容器镜像中，下次使用容器镜像时即可直接启动；另一种需要先对项目源码进行编译，再把编译产物复制到容器镜像中使用。

不论哪种方法都会让镜像制作过程变得复杂一些，也会让容器镜像体积偏大，建议采用多阶段构建镜像的方法实现。

$ git clone https://github.com/kube/tomcat-java-demo
$ cd tomcat-java-demo
$ vi Dockerfile
FROM maven AS build
ADD ./pom.xml pom.xml
ADD ./src src/
RUN mvn clean package

FROM kube/tomcat
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY --from=build target/*.war /usr/local/tomcat/webapps/ROOT.war

$ docker build -t demo:v1 .
$ docker container run -d demo:v1
第一个 FROM 后边多了个 AS 关键字，可以给这个阶段起个名字
第二个 FROM 使用 Tomcat 基础镜像，COPY 指令增加了 --from 参数，用于拷贝某个阶段的文件到当前阶段。

Docker容器网络与通信原理深度解析

一、Docker容器默认网络模型

1.1 原理图

在这里插入图片描述

1.2 名词解释

  • docker0
    • 是一个二层网络设备,即网桥
    • 通过网桥可以将Linux支持的不同的端口连接起来
    • 实现类似交换机的多对多通信
  • veth pair
    • 虚拟以太网(Ethernet)设备
    • 成对出现,用于解决网络命名空间之间的隔离
    • 一端连接Container network namespace,另一端连接host network namespace
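
可以在docker host上直接查看这些设备来印证上述模型（以下命令仅用于查看，输出因环境而异）：
# ip addr show docker0
# ip link show type veth
每运行一个使用默认桥接网络的容器，宿主机上就会多出一个veth设备，其对端位于容器的网络命名空间中。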

二、Docker容器默认网络模型工作原理

2.1 容器访问外网

在这里插入图片描述

# docker run -d --name web1 -p 8081:80 nginx:latest
# iptables -t nat -vnL POSTROUTING
输出:
Chain POSTROUTING (policy ACCEPT 7 packets, 766 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:80

2.2 外网访问容器

在这里插入图片描述

# iptables -t nat -vnL DOCKER
输出:
Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8081 to:172.17.0.2:80

三、Docker容器四种网络模型

在这里插入图片描述

  • bridge（桥接式网络，Bridge container）
    使用方法：--network bridge
    说明：桥接容器，除了有一块本地回环接口（Loopback interface）外，还有一块私有接口（Private interface），通过容器虚拟接口（Container virtual interface）连接到桥接虚拟接口（Docker bridge virtual interface），之后通过逻辑主机接口（Logical host interface）连接到主机物理网络（Physical network interface）。桥接网卡默认会分配172.17.0.0/16地址段的IP。如果我们在创建容器时没有指定网络模型，默认就是（NAT）桥接网络，这也就是为什么登录到一个容器后，发现IP地址都在172.17.0.0/16网段的原因。
  • host（开放式网络，Open container）
    使用方法：--network host
    说明：比联盟式网络更开放。联盟式网络是多个容器共享网络名称空间（Net），而开放式容器（Open container）直接共享了宿主机的名称空间，因此宿主机有多少块物理网卡，该容器就能看到多少网卡信息，可以说Open container是联盟式容器的衍生。
  • none（封闭式网络，Closed container）
    使用方法：--network none
    说明：封闭式容器，只有本地回环接口（Loopback interface，和服务器上看到的lo接口类似），无法与外界进行通信。
  • container（联盟式网络，Joined container）
    使用方法：--network container:c1（c1为容器名称或容器ID）
    说明：每个容器各有一部分名称空间（Mount、PID、User），另外一部分名称空间是共享的（UTS、Net、IPC）。由于网络是共享的，各个容器可以通过本地回环接口（Loopback interface）进行通信。除了共享同一组本地回环接口外，还有一块私有接口（Private interface）通过联合容器虚拟接口（Joined container virtual interface）连接到桥接虚拟接口（Docker bridge virtual interface），之后通过逻辑主机接口（Logical host interface）连接到主机物理网络（Physical network interface）。

四、Docker容器四种网络模型应用案例

4.1 查看已有的网络模型

查看已有的网络模型
# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a26c79961d8c   bridge    bridge    local
d04ce0d0e6ca   host      host      local
a369d8e58a41   none      null      local
查看已有网络模型详细信息
# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "a26c79961d8c3a5f66a7de782b773291e4902badc60d0614745e01b18f506907",
        "Created": "2022-02-08T11:45:25.607195911+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "dbac5dd601b960c91bee8fafcabc0a6e6091bff14d5fccfa80ca2c74df8891ad": {
                "Name": "web1",
                "EndpointID": "2c1d8c66f7f46d6d76e5c384b1729a90441e1398496b3112124ba65d255432a1",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
查看docker支持的网络模型
# docker info | grep Network
  Network: bridge host ipvlan macvlan null overlay

4.2 创建指定类型的网络模型

4.2.1 bridge

查看创建网络模型的帮助方法
# docker network create --help
创建一个名称为mybr0的网络
# docker network create -d bridge --subnet "192.168.100.0/24" --gateway "192.168.100.1" -o com.docker.network.bridge.name=docker1 mybr0
查看已创建网络
# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
......
a6a1ad36c3c0   mybr0     bridge    local
......
在docker host主机上可以看到多了一个网桥docker1
# ifconfig
docker1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.100.1  netmask 255.255.255.0  broadcast 192.168.100.255
        ether 02:42:14:aa:f5:04  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 1598 (1.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
启动一个容器并连接到已创建mybr0网络
# docker run -it --network mybr0 --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:65:02
          inet addr:192.168.100.2  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2185 (2.1 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # exit

4.2.2 host

查看host类型的网络模型
# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
......
d04ce0d0e6ca   host      host      local
......
查看host网络模型的详细信息
# docker network inspect host
[
    {
        "Name": "host",
        "Id": "d04ce0d0e6ca8e6226937f19033ef2c3f05b47ed63e06492d5c3071904fbb80b",
        "Created": "2022-01-21T16:12:05.30970114+08:00",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
创建容器使用host网络模型,并查看其网络信息
# docker run -it --network host --rm busybox
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:11:B8:9A:C5
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:11ff:feb8:9ac5/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:53 errors:0 dropped:0 overruns:0 frame:0
          TX packets:94 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6924 (6.7 KiB)  TX bytes:7868 (7.6 KiB)

docker1   Link encap:Ethernet  HWaddr 02:42:14:AA:F5:04
          inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


ens33     Link encap:Ethernet  HWaddr 00:0C:29:AF:89:0B
          inet addr:192.168.255.161  Bcast:192.168.255.255  Mask:255.255.255.0
          inet6 addr: fe80::44fc:2662:bfab:2b93/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:157763 errors:0 dropped:0 overruns:0 frame:0
          TX packets:50865 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:205504721 (195.9 MiB)  TX bytes:3626119 (3.4 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:88 errors:0 dropped:0 overruns:0 frame:0
          TX packets:88 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8196 (8.0 KiB)  TX bytes:8196 (8.0 KiB)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:EB:01:E5
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # exit

运行Nginx服务

创建用于运行nginx应用的容器,使用host网络模型
# docker run -d --network host nginx:latest
查看容器运行状态
# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES
f6677b213271   nginx:latest   "/docker-entrypoint.…"   7 seconds ago   Up 6 seconds             youthful_shtern
查看docker host 80端口状态
# ss -anput | grep ":80"
tcp    LISTEN     0      511       *:80                    *:*                   users:(("nginx",pid=42866,fd=7),("nginx",pid=42826,fd=7))
tcp    LISTEN     0      511      :::80                   :::*                   users:(("nginx",pid=42866,fd=8),("nginx",pid=42826,fd=8))
使用curl命令访问docker host主机IP地址,验证是否可以对nginx进行访问,如可访问,则说明容器与docker host共享网络命名空间
# curl http://192.168.255.161
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

4.2.3 none

查看none类型的网络模型
# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
......
a369d8e58a41   none      null      local
查看none网络模型详细信息
# docker network inspect none
[
    {
        "Name": "none",
        "Id": "a369d8e58a41ce2e3c25f2273b059e984dd561bfa7e79077a0cce9b3a925b9c9",
        "Created": "2022-01-21T16:12:05.217801814+08:00",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
创建容器使用none网络模型,并查看其网络状态
# docker run -it --network none --rm busybox:latest
/ # ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # exit

4.2.4 联盟网络

创建c1容器,使用默认网络模型
# docker run -it --name c1 --rm busybox:latest
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1916 (1.8 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
查看c1容器状态
# docker ps
CONTAINER ID   IMAGE            COMMAND   CREATED          STATUS          PORTS     NAMES
0905bc8ebfb6   busybox:latest   "sh"      13 seconds ago   Up 11 seconds             c1
创建c2容器,与c1容器共享网络命名空间
# docker run -it --name c2 --network container:c1 --rm busybox:latest
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:22 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2574 (2.5 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
在c2容器中创建文件并开启httpd服务
/ # echo "hello world" >> /tmp/index.html
/ # ls /tmp
index.html
/ # httpd -h /tmp

验证80端口是否打开
/ # netstat -npl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 :::80                   :::*                    LISTEN      10/httpd
在c1容器中进行访问验证
# docker exec c1 wget -O - -q 127.0.0.1
hello world
查看c1容器/tmp目录,发现没有在c2容器中创建的文件,说明c1与c2仅共享了网络命名空间,没有共享文件系统
# docker exec c1 ls /tmp

五、跨Docker Host容器间通信实现

5.1 跨Docker Host容器间通信必要性

  • 由于Docker容器运行的环境类似于局域网,容器无法直接被外界访问;如果全部依靠在Docker Host上做端口映射的方式对外提供服务,会导致宿主机端口被大量消耗。
  • 因此需要一种方案,让一台Docker Host上的容器能够方便地访问其它Docker Host之上容器所提供的服务。

5.2 跨Docker Host容器间通信实现方案

5.2.1 Docker原生方案

  • overlay
    • 基于VXLAN封装实现Docker原生overlay网络
  • macvlan
    • 将Docker主机网卡在逻辑上划分为多个子接口,每个子接口标识一个VLAN,容器接口直接连接Docker Host网卡接口,并通过路由策略转发到另一台Docker Host(创建示例见本节末尾)
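
以macvlan为例,下面给出一个参考的创建与使用命令(仅为示意:父接口ens33、网段192.168.10.0/24与容器IP均为假设值,需按实际环境调整,且各Docker Host上分配的容器IP不能冲突):

基于宿主机物理网卡ens33创建macvlan网络
# docker network create -d macvlan --subnet 192.168.10.0/24 --gateway 192.168.10.2 -o parent=ens33 macvlan-net
使用该网络运行容器,并指定一个未被占用的IP地址
# docker run -it --rm --network macvlan-net --ip 192.168.10.100 busybox:latest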

5.2.2 第三方方案

5.2.2.1 隧道方案
  • Flannel
    • 支持UDP和VXLAN封装传输方式
  • Weave
    • 支持UDP和VXLAN
  • OpenvSwitch
    • 支持VXLAN和GRE协议
5.2.2.2 路由方案
  • Calico
    • 支持BGP协议和IPIP隧道
    • 每台宿主机作为虚拟路由,通过BGP协议实现不同主机容器间通信。

5.3 Flannel

5.3.1 overlay network介绍

Overlay网络是指在不改变现有网络基础设施的前提下,通过某种约定的通信协议,把二层报文封装在IP报文之上的新的数据格式。这样不但能够充分利用成熟的IP路由协议进行数据分发;而且在Overlay技术中采用扩展的隔离标识位数,能够突破VLAN约4000个的数量限制,支持高达16M的用户,并在必要时可将广播流量转化为组播流量,避免广播数据泛滥。

因此,Overlay网络实际上是目前最主流的容器跨节点数据传输和路由方案。

5.3.2 Flannel介绍

Flannel是 CoreOS 团队针对 Kubernetes 设计的一个覆盖网络(Overlay Network)工具,其目的在于帮助每一个使用 Kubernetes 的 CoreOS 主机拥有一个完整的子网。 Flannel通过给每台宿主机分配一个子网的方式为容器提供虚拟网络,它基于Linux TUN/TAP,使用UDP封装IP包来创建overlay网络,并借助etcd维护网络的分配情况。 Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes.

5.3.3 Flannel工作原理

Flannel是CoreOS团队针对Kubernetes设计的一个网络规划服务,简单来说,它的功能是让集群中不同节点主机上创建的Docker容器都具有全集群唯一的虚拟IP地址。在默认的Docker配置中,每个Node的Docker服务会各自负责所在节点容器的IP分配,Node内部的容器之间可以相互访问,但跨主机(Node)的容器之间默认是不能通信的。Flannel的设计目的就是为集群中所有节点重新规划IP地址的使用规则,从而使不同节点上的容器能够获得“同属一个内网”且“不重复”的IP地址,并让不同节点上的容器能够直接通过内网IP通信。 Flannel使用etcd存储配置数据和子网分配信息:flannel启动之后,后台进程首先检索配置和正在使用的子网列表,选择一个可用的子网并尝试注册;etcd中同时也保存了每个主机对应的子网信息。flannel使用etcd的watch机制监视/coreos.com/network/subnets下面所有元素的变化,并据此维护一个路由表。为了提高性能,flannel优化了Universal TAP/TUN设备,对TUN和UDP之间的IP分片做了代理。 如下原理图:

在这里插入图片描述

1、数据从源容器中发出后,经由所在主机的docker0虚拟网卡转发到flannel0虚拟网卡,这是个P2P的虚拟网卡,flanneld服务监听在网卡的另外一端。
2、Flannel通过Etcd服务维护了一张节点间的路由表,该张表里保存了各个节点主机的子网网段信息。
3、源主机的flanneld服务将原本的数据内容UDP封装后根据自己的路由表投递给目的节点的flanneld服务,数据到达以后被解包,然后直接进入目的节点的flannel0虚拟网卡,然后被转发到目的主机的docker0虚拟网卡,最后就像本机容器通信一样的由docker0路由到达目标容器。
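
上述转发路径在完成后文5.6、5.7节的部署后,可以直接在宿主机路由表中观察到。下面是一个示意(假设本机分配到的子网为172.21.31.0/24、集群网段为172.21.0.0/16,具体取值以实际环境为准):

# ip route | grep -E "flannel0|docker0"
172.21.0.0/16 dev flannel0
172.21.31.0/24 dev docker0 proto kernel scope link src 172.21.31.1

即发往本机子网的流量直接走docker0网桥,发往集群内其它子网的流量交给flannel0,由flanneld封装后送往目的节点。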

5.4 ETCD

etcd是CoreOS团队于2013年6月发起的开源项目,它的目标是构建一个高可用的分布式键值(key-value)数据库。etcd内部采用raft协议作为一致性算法,etcd基于Go语言实现。

etcd作为服务发现系统,特点:

  • 简单:安装配置简单,而且提供了HTTP API进行交互,使用也很简单
  • 安全:支持SSL证书验证
  • 快速:根据官方提供的benchmark数据,单实例支持每秒2k+读操作
  • 可靠:采用raft算法,实现分布式系统数据的可用性和一致性

5.5 ETCD部署

主机防火墙及SELINUX均关闭。

5.5.1 主机名称配置

# hostnamectl set-hostname node1
# hostnamectl set-hostname node2

5.5.2 主机IP地址配置

# vim /etc/sysconfig/network-scripts/ifcfg-ens33
# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="6c020cf7-4c6e-4276-9aa6-0661670da705"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.255.154"
PREFIX="24"
GATEWAY="192.168.255.2"
DNS1="119.29.29.29"
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="6c020cf7-4c6e-4276-9aa6-0661670da705"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.255.155"
PREFIX="24"
GATEWAY="192.168.255.2"
DNS1="119.29.29.29"

5.5.3 主机名与IP地址解析

# vim /etc/hosts
[root@node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.255.154 node1
192.168.255.155 node2
# vim /etc/hosts
[root@node2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.255.154 node1
192.168.255.155 node2

5.5.4 开启内核转发

所有Docker Host

# vim /etc/sysctl.conf
[root@node1 ~]# cat /etc/sysctl.conf
......
net.ipv4.ip_forward=1
# sysctl -p
# vim /etc/sysctl.conf
[root@node2 ~]# cat /etc/sysctl.conf
......
net.ipv4.ip_forward=1
# sysctl -p
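
可进一步确认两台主机上的转发参数均已生效:
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1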

5.5.5 etcd安装

etcd集群

[root@node1 ~]# yum -y install etcd
[root@node2 ~]# yum -y install etcd

5.5.6 etcd配置

# vim /etc/etcd/etcd.conf
[root@node1 ~]# cat /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/node1.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="node1"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.255.154:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.255.154:2379,http://192.168.255.154:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="node1=http://192.168.255.154:2380,node2=http://192.168.255.155:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]

# vim /etc/etcd/etcd.conf
[root@node2 ~]# cat /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/node2.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="node2"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.255.155:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.255.155:2379,http://192.168.255.155:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="node1=http://192.168.255.154:2380,node2=http://192.168.255.155:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]

5.5.7 启动etcd服务

[root@node1 ~]# systemctl enable etcd

[root@node1 ~]# systemctl start etcd
[root@node2 ~]# systemctl enable etcd

[root@node2 ~]# systemctl start etcd

5.5.8 检查端口状态

# netstat -tnlp | grep -E  "4001|2380"
输出结果:
tcp6       0      0 :::2380                 :::*                    LISTEN      65318/etcd
tcp6       0      0 :::4001                 :::*                    LISTEN      65318/etcd

5.5.9 检查etcd集群是否健康

# etcdctl -C http://192.168.255.154:2379 cluster-health
输出:
member 5be09658727c5574 is healthy: got healthy result from http://192.168.255.154:2379
member c48e6c7a65e5ca43 is healthy: got healthy result from http://192.168.255.155:2379
cluster is healthy
# etcdctl member list
输出:
5be09658727c5574: name=node1 peerURLs=http://192.168.255.154:2380 clientURLs=http://192.168.255.154:2379,http://192.168.255.154:4001 isLeader=true
c48e6c7a65e5ca43: name=node2 peerURLs=http://192.168.255.155:2380 clientURLs=http://192.168.255.155:2379,http://192.168.255.155:4001 isLeader=false
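
也可以写入一个测试键,进一步验证集群的跨节点读写是否正常(键名/testkey为示例,验证后即可删除):
[root@node1 ~]# etcdctl set /testkey "hello"
hello
[root@node2 ~]# etcdctl get /testkey
hello
[root@node1 ~]# etcdctl rm /testkey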

5.6 Flannel部署

5.6.1 Flannel安装

[root@node1 ~]# yum -y install flannel
[root@node2 ~]# yum -y install flannel

5.6.2 修改Flannel配置文件

[root@node1 ~]# vim /etc/sysconfig/flanneld
[root@node1 ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.255.154:2379,http://192.168.255.155:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/ --etcd endpoints=http://192.168.255.154:2379,http://192.168.255.155:2379 --iface=ens33"
[root@node2 ~]# vim /etc/sysconfig/flanneld
[root@node2 ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.255.154:2379,http://192.168.255.155:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/ --etcd-endpoints=http://192.168.255.154:2379,http://192.168.255.155:2379 --iface=ens33"

5.6.3 配置etcd中关于flannel的key

Flannel使用etcd保存配置,以保证多个Flannel实例之间配置的一致性,所以需要在etcd上进行如下配置('/atomic.io/network/config'这个key与上面/etc/sysconfig/flanneld中的配置项FLANNEL_ETCD_PREFIX相对应,两者不一致的话flanneld启动时就会报错)。

该IP网段可以按需任意设定,容器的IP就是从这个网段中自动分配的;IP分配后,容器一般即可对外联网(网桥模式下,只要Docker Host能上网即可)。

使用mk创建该key(若key已存在,mk会报错)
[root@node1 ~]# etcdctl mk /atomic.io/network/config '{"Network":"172.21.0.0/16"}'
{"Network":"172.21.0.0/16"}

也可以使用set设置该key(key已存在时直接覆盖其内容)
[root@node1 ~]# etcdctl set /atomic.io/network/config '{"Network":"172.21.0.0/16"}'
{"Network":"172.21.0.0/16"}
确认key内容
[root@node1 ~]# etcdctl get /atomic.io/network/config
{"Network":"172.21.0.0/16"}

5.6.4 启动Flannel服务

[root@node1 ~]# systemctl enable flanneld;systemctl start flanneld
[root@node2 ~]# systemctl enable flanneld;systemctl start flanneld

5.6.5 查看各node中flannel产生的配置信息

[root@node1 ~]# ls /run/flannel/
docker  subnet.env
[root@node1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.21.0.0/16
FLANNEL_SUBNET=172.21.31.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
[root@node1 ~]# ip a s
......
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:63:d1:9e:0b brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 172.21.31.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::edfa:d8b0:3351:4126/64 scope link flags 800
       valid_lft forever preferred_lft forever
[root@node2 ~]# ls /run/flannel/
docker  subnet.env
[root@node2 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.21.0.0/16
FLANNEL_SUBNET=172.21.55.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
[root@node2 ~]# ip a s
......
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:e1:16:68:de brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 172.21.55.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::f895:9b5a:92b1:78aa/64 scope link flags 800
       valid_lft forever preferred_lft forever
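
flannel为各节点分配的子网也可以直接在etcd中确认(子网取值以实际分配结果为准):
[root@node1 ~]# etcdctl ls /atomic.io/network/subnets
/atomic.io/network/subnets/172.21.31.0-24
/atomic.io/network/subnets/172.21.55.0-24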

5.7 Docker网络配置

将 --bip=172.21.31.1/24 --ip-masq=true --mtu=1472 这组参数追加到dockerd启动命令之后(各节点使用自己在/run/flannel/subnet.env中分配到的子网,例如node1为172.21.31.1/24,node2为172.21.55.1/24)。
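
每个节点应填写的具体取值,可以直接从flannel生成的/run/flannel/subnet.env中读取,例如在node1上:
[root@node1 ~]# source /run/flannel/subnet.env
[root@node1 ~]# echo "--bip=${FLANNEL_SUBNET} --ip-masq=true --mtu=${FLANNEL_MTU}"
--bip=172.21.31.1/24 --ip-masq=true --mtu=1472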

[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
[root@node1 ~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=172.21.31.1/24 --ip-masq=true --mtu=1472
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
[root@node2 ~]# vim /usr/lib/systemd/system/docker.service
[root@node2 ~]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=172.21.55.1/24 --ip-masq=true --mtu=1472
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# ip a s
......
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:63:d1:9e:0b brd ff:ff:ff:ff:ff:ff
    inet 172.21.31.1/24 brd 172.21.31.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 172.21.31.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::edfa:d8b0:3351:4126/64 scope link flags 800
       valid_lft forever preferred_lft forever
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl restart docker
[root@node2 ~]# ip a s
......
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:e1:16:68:de brd ff:ff:ff:ff:ff:ff
    inet 172.21.55.1/24 brd 172.21.55.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 172.21.55.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::f895:9b5a:92b1:78aa/64 scope link flags 800
       valid_lft forever preferred_lft forever

5.8 跨Docker Host容器间通信验证

[root@node1 ~]# docker run -it --rm busybox:latest

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:15:1F:02
          inet addr:172.21.31.2  Bcast:172.21.31.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2424 (2.3 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


/ # ping 172.21.55.2
PING 172.21.55.2 (172.21.55.2): 56 data bytes
64 bytes from 172.21.55.2: seq=0 ttl=60 time=2.141 ms
64 bytes from 172.21.55.2: seq=1 ttl=60 time=1.219 ms
64 bytes from 172.21.55.2: seq=2 ttl=60 time=0.730 ms
^C
--- 172.21.55.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.730/1.363/2.141 ms
[root@node2 ~]# docker run -it --rm busybox:latest

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:15:37:02
          inet addr:172.21.55.2  Bcast:172.21.55.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1472  Metric:1
          RX packets:19 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2246 (2.1 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


/ # ping 172.21.31.2
PING 172.21.31.2 (172.21.31.2): 56 data bytes
64 bytes from 172.21.31.2: seq=0 ttl=60 time=1.286 ms
64 bytes from 172.21.31.2: seq=1 ttl=60 time=0.552 ms
^C
--- 172.21.31.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.552/0.919/1.286 ms

Docker容器数据持久化存储机制

一、Docker容器数据持久化存储介绍

  • 物理机或虚拟机数据持久化存储

    • 由于物理机或虚拟机本身就拥有大容量的磁盘,所以可以直接把数据存储在物理机或虚拟机本地文件系统中,亦或者也可以通过使用额外的存储系统(NFS、GlusterFS、Ceph等)来完成数据持久化存储。
  • Docker容器数据持久化存储

    • 由于Docker容器是由容器镜像生成的,所以一般容器镜像中包含什么文件或目录,在容器启动后,我们依旧可以看到相同的文件或目录。
    • 由于Docker容器属于“用后即焚”型计算资源,容器删除后其可读写层中的数据也会随之丢失,因此容器本身并不适合直接做数据持久化存储,需要把数据保存到容器之外

二、Docker容器数据持久化存储方式

Docker提供三种方式将数据从宿主机挂载到容器中:

  • docker run -v
    • 运行容器时,直接挂载本地目录至容器中
  • volumes
    • Docker管理宿主机文件系统的一部分(/var/lib/docker/volumes)
    • 是Docker默认存储数据方式
  • bind mounts
    • 将宿主机上的任意位置文件或目录挂载到容器中
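
三种方式典型的命令形式大致如下,具体演示见下文第三部分(容器名、目录、卷名均为示例):
-v直接挂载本地目录
# docker run -d --name web-a -v /opt/wwwroot:/usr/share/nginx/html nginx:latest
volumes方式挂载数据卷
# docker run -d --name web-b -v nginx-vol:/usr/share/nginx/html nginx:latest
bind mounts方式挂载宿主机任意目录
# docker run -d --name web-c --mount type=bind,src=/opt/wwwroot,dst=/usr/share/nginx/html nginx:latest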

三、Docker容器数据持久化存储方式应用案例演示

3.1 docker run -v

3.1.1 未挂载本地目录

运行一个容器,未挂载本地目录
# docker run -d --name web1 nginx:latest
# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
c4ad9f2c15fa   nginx:latest   "/docker-entrypoint.…"   46 seconds ago   Up 44 seconds   80/tcp    web1
使用curl命令访问容器
# curl http://172.17.0.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
查看容器中/usr/share/nginx/html目录中目录或子目录
# docker exec web1 ls /usr/share/nginx/html
50x.html
index.html

3.1.2 挂载本地目录

创建本地目录
# mkdir /opt/wwwroot
向本地目录中添加index.html文件
# echo 'kube' > /opt/wwwroot/index.html
运行web2容器,把/opt/wwwroot目录挂载到/usr/share/nginx/html目录中
# docker run -d --name web2 -v /opt/wwwroot/:/usr/share/nginx/html/ nginx:latest
查看容器IP地址
# docker inspect web2

......
 "IPAddress": "172.17.0.3",
 ......
使用curl命令访问容器
# curl http://172.17.0.3
kube

3.1.3 未创建本地目录

运行web3容器,挂载未创建的本地目录,启动容器时将自动创建本地目录
# docker run -d --name web3 -v /opt/web3root/:/usr/share/nginx/html/ nginx:latest
往自动创建的目录中添加一个index.html文件
# echo "kube web3" > /opt/web3root/index.html
在容器中执行查看文件命令
# docker exec web3 cat /usr/share/nginx/html/index.html
kube web3

3.2 volumes

3.2.1 创建数据卷

创建一个名称为nginx-vol的数据卷
# docker volume create nginx-vol
nginx-vol
确认数据卷创建后的位置
# ls /var/lib/docker/volumes/
backingFsBlockDev  metadata.db  nginx-vol
查看已经创建数据卷
# docker volume ls
DRIVER    VOLUME NAME
local     nginx-vol
查看数据卷详细信息
# docker volume inspect nginx-vol
[
    {
        "CreatedAt": "2022-02-08T14:36:16+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/nginx-vol/_data",
        "Name": "nginx-vol",
        "Options": {},
        "Scope": "local"
    }
]

3.2.2 使用数据卷

运行web4容器,使用--mount选项实现数据卷挂载(与下面的-v写法等价,两者择一执行即可)
# docker run -d --name web4 --mount src=nginx-vol,dst=/usr/share/nginx/html nginx:latest

运行web4容器,使用-v选项,实现数据卷挂载
# docker run -d --name web4 -v nginx-vol:/usr/share/nginx/html/ nginx:latest
查看容器运行后数据卷中文件或子目录
# ls /var/lib/docker/volumes/nginx-vol/_data/
50x.html  index.html
使用curl命令访问容器
# curl http://172.17.0.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
修改index.html文件内容
# echo "web4" > /var/lib/docker/volumes/nginx-vol/_data/index.html
再次使用curl命令访问容器
# curl http://172.17.0.2
web4

3.3 bind mounts

创建用于容器挂载的目录web5root
# mkdir /opt/web5root
运行web5容器并使用bind mount方法实现本地任意目录挂载
# docker run -d --name web5 --mount type=bind,src=/opt/web5root,dst=/usr/share/nginx/html nginx:latest
查看已挂载目录,里面没有任何数据
# ls /opt/web5root/
添加内容至/opt/web5root/index.html中
# echo "web5" > /opt/web5root/index.html
使用curl命令访问容器
# curl http://172.17.0.3
web5

Docker容器服务编排利器 Docker Compose应用实战

一、使用Docker Compose必要性及定义

用容器运行一个服务,需要使用docker run命令。但如果我要运行多个服务呢?

假设我要运行一个web服务,还要运行一个db服务,那么是用一个容器运行,还是用多个容器运行呢?

一个容器运行多个服务会造成镜像的复杂度提高,docker倾向于一个容器运行一个应用

那么复杂的架构就会需要很多的容器,并且需要它们之间有关联(容器之间的依赖和连接)就更复杂了。

这个复杂的问题需要解决,这就涉及到了“容器编排”的问题。

  • Compose
    • 编排
      • 是对多个容器进行启动和管理的方法
      • 例如:LNMT,先启动MySQL,再启动Tomcat,最后启动Nginx
  • 服务架构的演进
    • 单体服务架构
    • 分布式服务架构
    • 微服务架构
    • 超微服务架构
  • 容器编排工具
    • docker machine
      • 在虚拟机中部署docker容器引擎的工具
    • docker compose
      • 是一个用于定义和运行多容器Docker的应用程序工具
    • docker swarm
      • 是Docker Host主机批量管理及资源调度管理工具
    • mesos+marathon
      • mesos 对计算机计算资源进行管理和调度
      • marathon 服务发现及负载均衡的功能
    • kubernetes
      • google开源的容器编排工具

二、Docker Compose应用参考资料

  • 网址
    • https://docs.docker.com/compose/

在这里插入图片描述

  • yaml格式
    • https://yaml.org/

三、Docker Compose应用最佳实践步骤

3.1 概念

  • 工程(project)
  • 服务 (Service)
  • 容器 (Container)

3.2 步骤

1.定义应用的Dockerfile文件,以便在任意环境(anywhere)中都能进行构建。

2.使用docker-compose.yaml定义一套服务,这套服务可以一起在一个隔离环境中运行。

3.使用docker-compose up就可以启动整套服务。

四、Docker Compose安装

在这里插入图片描述
在这里插入图片描述
在这里插入图片描述
在这里插入图片描述

# wget https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64
# mv docker-compose-linux-x86_64 /usr/bin/docker-compose
# chmod +x /usr/bin/docker-compose
# docker-compose version
Docker Compose version v2.2.3

五、Docker Compose应用案例

运行Python语言开发的网站

5.1 网站文件准备

# mkdir flaskproject
[root@localhost ~]# cd flaskproject/
[root@localhost flaskproject]#
[root@localhost flaskproject]# vim app.py
[root@localhost flaskproject]# cat app.py
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)


def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)


@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)
[root@localhost flaskproject]# vim requirements.txt
[root@localhost flaskproject]# cat requirements.txt
flask
redis

5.2 Dockerfile文件准备

[root@localhost flaskproject]# vim Dockerfile
[root@localhost flaskproject]# cat Dockerfile
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]

5.3 Compose文件准备

[root@localhost flaskproject]# vim docker-compose.yaml
[root@localhost flaskproject]# cat docker-compose.yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"

5.4 使用docker-compose up启动容器

[root@localhost flaskproject]# ls
app.py  docker-compose.yaml  Dockerfile  requirements.txt
[root@localhost flaskproject]# docker-compose up
输出:
[+] Running 7/7
 ⠿ redis Pulled                                                                         15.8s
   ⠿ 59bf1c3509f3 Pull complete                                                          2.9s
   ⠿ 719adce26c52 Pull complete                                                          3.0s
   ⠿ b8f35e378c31 Pull complete                                                          5.8s
   ⠿ d034517f789c Pull complete                                                          6.5s
   ⠿ 3772d4d76753 Pull complete                                                          6.6s
   ⠿ 211a7f52febb Pull complete                                                          6.8s
Sending build context to Docker daemon     714B
Step 1/9 : FROM python:3.7-alpine
3.7-alpine: Pulling from library/python
59bf1c3509f3: Already exists
07a400e93df3: Already exists
bdabb07397e1: Already exists
cd0af01c7b70: Already exists
d0f18e022200: Already exists
Digest: sha256:5a776e3b5336827faf7a1c3a191b73b5b2eef4cdcfe8b94f59b79cb749a2b5d8
Status: Downloaded newer image for python:3.7-alpine
 ---> e72b511ad78e
Step 2/9 : WORKDIR /code
 ---> Running in 2b9d07bef719
Removing intermediate container 2b9d07bef719
 ---> 7d39e96fadf1
Step 3/9 : ENV FLASK_APP app.py
 ---> Running in 9bcb28bd632a
Removing intermediate container 9bcb28bd632a
 ---> 79f656a616d5
Step 4/9 : ENV FLASK_RUN_HOST 0.0.0.0
 ---> Running in 8470c2dbd6c2
Removing intermediate container 8470c2dbd6c2
 ---> e212ba688fcd
Step 5/9 : RUN apk add --no-cache gcc musl-dev linux-headers
 ---> Running in 6e9ca0766bc8
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
(1/13) Installing libgcc (10.3.1_git20211027-r0)
(2/13) Installing libstdc++ (10.3.1_git20211027-r0)
(3/13) Installing binutils (2.37-r3)
(4/13) Installing libgomp (10.3.1_git20211027-r0)
(5/13) Installing libatomic (10.3.1_git20211027-r0)
(6/13) Installing libgphobos (10.3.1_git20211027-r0)
(7/13) Installing gmp (6.2.1-r1)
(8/13) Installing isl22 (0.22-r0)
(9/13) Installing mpfr4 (4.1.0-r0)
(10/13) Installing mpc1 (1.2.1-r0)
(11/13) Installing gcc (10.3.1_git20211027-r0)
(12/13) Installing linux-headers (5.10.41-r0)
(13/13) Installing musl-dev (1.2.2-r7)
Executing busybox-1.34.1-r3.trigger
OK: 143 MiB in 49 packages
Removing intermediate container 6e9ca0766bc8
 ---> 273d4f04dfbc
Step 6/9 : COPY requirements.txt requirements.txt
 ---> daf51c54e8ba
Step 7/9 : RUN pip install -r requirements.txt
 ---> Running in 2aa2d30c5311
Collecting flask
  Downloading Flask-2.0.3-py3-none-any.whl (95 kB)
Collecting redis
  Downloading redis-4.1.3-py3-none-any.whl (173 kB)
Collecting Jinja2>=3.0
  Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting itsdangerous>=2.0
  Downloading itsdangerous-2.0.1-py3-none-any.whl (18 kB)
Collecting click>=7.1.2
  Downloading click-8.0.3-py3-none-any.whl (97 kB)
Collecting Werkzeug>=2.0
  Downloading Werkzeug-2.0.3-py3-none-any.whl (289 kB)
Collecting deprecated>=1.2.3
  Downloading Deprecated-1.2.13-py2.py3-none-any.whl (9.6 kB)
Collecting packaging>=20.4
  Downloading packaging-21.3-py3-none-any.whl (40 kB)
Collecting importlib-metadata>=1.0
  Downloading importlib_metadata-4.11.1-py3-none-any.whl (17 kB)
Collecting wrapt<2,>=1.10
  Downloading wrapt-1.13.3-cp37-cp37m-musllinux_1_1_x86_64.whl (78 kB)
Collecting typing-extensions>=3.6.4
  Downloading typing_extensions-4.1.1-py3-none-any.whl (26 kB)
Collecting zipp>=0.5
  Downloading zipp-3.7.0-py3-none-any.whl (5.3 kB)
Collecting MarkupSafe>=2.0
  Downloading MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl (30 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
  Downloading pyparsing-3.0.7-py3-none-any.whl (98 kB)
Installing collected packages: zipp, typing-extensions, wrapt, pyparsing, MarkupSafe, importlib-metadata, Werkzeug, packaging, Jinja2, itsdangerous, deprecated, click, redis, flask
Successfully installed Jinja2-3.0.3 MarkupSafe-2.0.1 Werkzeug-2.0.3 click-8.0.3 deprecated-1.2.13 flask-2.0.3 importlib-metadata-4.11.1 itsdangerous-2.0.1 packaging-21.3 pyparsing-3.0.7 redis-4.1.3 typing-extensions-4.1.1 wrapt-1.13.3 zipp-3.7.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 21.2.4; however, version 22.0.3 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
Removing intermediate container 2aa2d30c5311
 ---> dd8f52b132f8
Step 8/9 : COPY . .
 ---> b36938a26cf5
Step 9/9 : CMD ["flask", "run"]
 ---> Running in 260cbfa02959
Removing intermediate container 260cbfa02959
 ---> fa04dfec6ff2
Successfully built fa04dfec6ff2
Successfully tagged flaskproject_web:latest

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
[+] Running 3/3
 ⠿ Network flaskproject_default    Created                                               0.1s
 ⠿ Container flaskproject-redis-1  Created                                               0.1s
 ⠿ Container flaskproject-web-1    Created                                               0.1s
Attaching to flaskproject-redis-1, flaskproject-web-1
flaskproject-redis-1  | 1:C 15 Feb 2022 14:14:21.696 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
flaskproject-redis-1  | 1:C 15 Feb 2022 14:14:21.696 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
flaskproject-redis-1  | 1:C 15 Feb 2022 14:14:21.696 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
flaskproject-redis-1  | 1:M 15 Feb 2022 14:14:21.697 * monotonic clock: POSIX clock_gettime
flaskproject-redis-1  | 1:M 15 Feb 2022 14:14:21.698 * Running mode=standalone, port=6379.
flaskproject-redis-1  | 1:M 15 Feb 2022 14:14:21.698 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
flaskproject-redis-1  | 1:M 15 Feb 2022 14:14:21.698 # Server initialized
flaskproject-redis-1  | 1:M 15 Feb 2022 14:14:21.698 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
flaskproject-redis-1  | 1:M 15 Feb 2022 14:14:21.698 * Ready to accept connections
flaskproject-web-1    |  * Serving Flask app 'app.py' (lazy loading)
flaskproject-web-1    |  * Environment: production
flaskproject-web-1    |    WARNING: This is a development server. Do not use it in a production deployment.
flaskproject-web-1    |    Use a production WSGI server instead.
flaskproject-web-1    |  * Debug mode: off
flaskproject-web-1    |  * Running on all addresses.
flaskproject-web-1    |    WARNING: This is a development server. Do not use it in a production deployment.
flaskproject-web-1    |  * Running on http://172.18.0.2:5000/ (Press CTRL+C to quit)
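
前台方式便于观察日志;按Ctrl+C停止后,一般改用后台方式运行整套服务,常用管理命令如下(访问输出为示意):
后台启动整套服务
[root@localhost flaskproject]# docker-compose up -d
查看服务状态
[root@localhost flaskproject]# docker-compose ps
访问验证
[root@localhost flaskproject]# curl http://127.0.0.1:5000
Hello World! I have been seen 1 times.
停止并清理整套服务
[root@localhost flaskproject]# docker-compose down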

5.5 访问

在这里插入图片描述

Docker主机集群化方案 Docker Swarm

一、docker swarm介绍

Docker Swarm是Docker官方提供的一款集群管理工具,其主要作用是把若干台Docker主机抽象为一个整体,并且通过一个入口统一管理这些Docker主机上的各种Docker资源。Swarm和Kubernetes比较类似,但是更加轻量,具备的功能也比Kubernetes少一些。

  • 是docker host集群管理工具
  • docker官方提供的
  • docker 1.12版本以后
  • 用来统一集群管理的,把整个集群资源做统一调度
  • 比kubernetes要轻量化
  • 实现scaling 规模扩大或缩小
  • 实现rolling update 滚动更新或版本回退
  • 实现service discovery 服务发现
  • 实现load balance 负载均衡
  • 实现route mesh 路由网格,服务治理

二、docker swarm概念与架构

参考网址:https://docs.docker.com/swarm/overview/

2.1 架构

在这里插入图片描述
在这里插入图片描述

2.2 概念

节点(node): 就是运行了docker engine的一台docker host。节点分为两类:

  • 管理节点(manager node) 负责管理集群中的节点并向工作节点分配任务
  • 工作节点(worker node) 接收管理节点分配的任务,运行任务
# docker node ls

服务(services): 在工作节点运行的,由多个任务共同组成

# docker service ls

任务(task): 运行在工作节点上的容器或容器中包含的应用,是集群中调度的最小管理单元
在这里插入图片描述

三、docker swarm集群部署

部署3主2从节点集群,另需提前准备1台本地容器镜像仓库服务器(Harbor)

3.1 容器镜像仓库 Harbor准备

在这里插入图片描述

3.2 主机准备

3.2.1 主机名

# hostnamectl set-hostname xxx
说明:
sm1 管理节点1
sm2 管理节点2
sm3 管理节点3
sw1 工作节点1
sw2 工作节点2

3.2.2 IP地址

编辑网卡配置文件
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none" 修改为静态
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"

DEVICE="ens33"
ONBOOT="yes"

添加如下内容:
IPADDR="192.168.10.xxx"
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"

说明:
sm1 管理节点1 192.168.10.10
sm2 管理节点2 192.168.10.11
sm3 管理节点3 192.168.10.12
sw1 工作节点1 192.168.10.13
sw2 工作节点2 192.168.10.14

3.2.3 主机名与IP地址解析

编辑主机/etc/hosts文件,添加主机名解析
# vim /etc/hosts
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.10 sm1
192.168.10.11 sm2
192.168.10.12 sm3
192.168.10.13 sw1
192.168.10.14 sw2

3.2.4 主机时间同步

添加计划任务,实现时间同步,同步服务器为time1.aliyun.com
# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab

查看添加后计划任务
# crontab -l
0 */1 * * * ntpdate time1.aliyun.com
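
计划任务中使用的ntpdate命令若尚未安装,需先安装并手动同步一次进行验证(假设主机可正常访问YUM源):
# yum -y install ntpdate
# ntpdate time1.aliyun.com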

3.2.5 主机安全设置

关闭防火墙并查看其运行状态
# systemctl stop firewalld;systemctl disable firewalld
# firewall-cmd --state
not running
使用非交互式修改selinux配置文件
# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

重启所有的主机系统
# reboot

重启后验证selinux是否关闭
# sestatus
SELinux status:                 disabled

3.3 docker安装

3.3.1 docker安装

下载YUM源
# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
安装docker-ce
# yum -y install docker-ce
启动docker服务并设置为开机自启动
# systemctl enable docker;systemctl start docker

3.3.2 配置docker daemon使用harbor

添加daemon.json文件,配置docker daemon使用harbor
# vim /etc/docker/daemon.json
# cat /etc/docker/daemon.json
{
        "insecure-registries": ["http://192.168.10.15"]
}
重启docker服务
# systemctl restart docker
登录harbor,验证凭据是否可用
# docker login 192.168.10.15
Username: admin
Password: 12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

3.4 docker swarm集群初始化

3.4.1 获取docker swarm命令帮助

获取docker swarm命令使用帮助
# docker swarm --help

Usage:  docker swarm COMMAND

Manage Swarm

Commands:
  ca          Display and rotate the root CA
  init        Initialize a swarm                    初始化
  join        Join a swarm as a node and/or manager 加入集群
  join-token  Manage join tokens                    集群加入时token管理
  leave       Leave the swarm                       离开集群
  unlock      Unlock swarm
  unlock-key  Manage the unlock key
  update      Update the swarm                      更新集群

3.4.2 在管理节点初始化

本次在sm1上初始化

初始化集群
# docker swarm init --advertise-addr 192.168.10.10 --listen-addr 192.168.10.10:2377
Swarm initialized: current node (j42cwubrr70pwxdpmesn1cuo6) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-6pddlyiq5f1i35w8d7q4bl1co 192.168.10.10:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
说明:
--advertise-addr 当主机有多块网卡时,用于选择其中一块作为集群广播地址,供其它节点连接管理节点使用
--listen-addr    监听地址,用于承载集群流量

3.4.3 添加工作节点到集群

使用初始化过程中生成的token加入集群
[root@sw1 ~]# docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-6pddlyiq5f1i35w8d7q4bl1co 192.168.10.10:2377
This node joined a swarm as a worker.
查看已加入的集群
# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
j42cwubrr70pwxdpmesn1cuo6 *   sm1        Ready     Active         Leader           20.10.12
4yb34kuma6i9g5hf30vkxm9yc     sw1        Ready     Active                          20.10.12

如果使用的token已过期,可以再次生成新的加入集群的方法,如下命令所示。

重新生成用于添加工作点的token
[root@sm1 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-6pddlyiq5f1i35w8d7q4bl1co 192.168.10.10:2377
加入至集群
[root@sw2 ~]# docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-6pddlyiq5f1i35w8d7q4bl1co 192.168.10.10:2377
This node joined a swarm as a worker.
查看node状态
# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
j42cwubrr70pwxdpmesn1cuo6 *   sm1        Ready     Active         Leader           20.10.12
4yb34kuma6i9g5hf30vkxm9yc     sw1        Ready     Active                          20.10.12
mekitdu1xbpcttgupwuoiwg91     sw2        Ready     Active                          20.10.12

3.4.4 添加管理节点到集群

生成用于添加管理节点加入集群所使用的token
[root@sm1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-7g85apo82mwz8ttmgdr7onfhu 192.168.10.10:2377
加入集群
[root@sm2 ~]# docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-7g85apo82mwz8ttmgdr7onfhu 192.168.10.10:2377
This node joined a swarm as a manager.
加入集群
[root@sm3 ~]# docker swarm join --token SWMTKN-1-297iry1n2jeh30oopsjecvsco1uuvl15t2jz6jxabdpf0xkry4-7g85apo82mwz8ttmgdr7onfhu 192.168.10.10:2377
This node joined a swarm as a manager.
查看节点状态
# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
j42cwubrr70pwxdpmesn1cuo6 *   sm1        Ready     Active         Leader           20.10.12
nzpmehm8n87b9a17or2el10lc     sm2        Ready     Active         Reachable        20.10.12
xc2x9z1b33rwdfxc5sdpobf0i     sm3        Ready     Active         Reachable        20.10.12
4yb34kuma6i9g5hf30vkxm9yc     sw1        Ready     Active                          20.10.12
mekitdu1xbpcttgupwuoiwg91     sw2        Ready     Active                          20.10.12
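
节点角色在集群建立后也可以随时调整,例如把工作节点提升为管理节点,或把管理节点降级为工作节点(示例,节点名按实际情况替换):
[root@sm1 ~]# docker node promote sw1
Node sw1 promoted to a manager in the swarm.
[root@sm1 ~]# docker node demote sw1
Manager sw1 demoted in the swarm.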

3.4.5 模拟管理节点出现故障

3.4.5.1 停止docker服务并查看结果
停止docker服务
[root@sm1 ~]# systemctl stop docker
查看node状态,发现sm1不可达,状态为未知,集群重新选举出了新的Leader
[root@sm2 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
j42cwubrr70pwxdpmesn1cuo6     sm1        Unknown   Active         Unreachable      20.10.12
nzpmehm8n87b9a17or2el10lc *   sm2        Ready     Active         Leader           20.10.12
xc2x9z1b33rwdfxc5sdpobf0i     sm3        Ready     Active         Reachable        20.10.12
4yb34kuma6i9g5hf30vkxm9yc     sw1        Ready     Active                          20.10.12
mekitdu1xbpcttgupwuoiwg91     sw2        Ready     Active                          20.10.12
3.4.5.2 启动docker服务并查看结果
再次启动docker
[root@sm1 ~]# systemctl start docker
观察可以得知sm1是可达状态,但并不是Leader
[root@sm1 ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
j42cwubrr70pwxdpmesn1cuo6 *   sm1        Ready     Active         Reachable        20.10.12
nzpmehm8n87b9a17or2el10lc     sm2        Ready     Active         Leader           20.10.12
xc2x9z1b33rwdfxc5sdpobf0i     sm3        Ready     Active         Reachable        20.10.12
4yb34kuma6i9g5hf30vkxm9yc     sw1        Ready     Active                          20.10.12
mekitdu1xbpcttgupwuoiwg91     sw2        Ready     Active                          20.10.12

四、docker swarm集群应用

4.1 容器镜像准备

准备多个版本的容器镜像,以便于后期使用测试。

4.1.1 v1版本

生成网站文件v1版
[root@harbor nginximg]# vim index.html
[root@harbor nginximg]# cat index.html
v1
编写Dockerfile文件,用于构建容器镜像
[root@harbor nginximg]# vim Dockerfile
[root@harbor nginximg]# cat Dockerfile
FROM nginx:latest

MAINTAINER  'tom<tom@kubemsb.com>'

ADD index.html /usr/share/nginx/html

RUN echo "daemon off;" >> /etc/nginx/nginx.conf

EXPOSE 80

CMD /usr/sbin/nginx
使用docker build构建容器镜像
[root@harbor nginximg]# docker build -t 192.168.10.15/library/nginx:v1 .
登录harbor
# docker login 192.168.10.15
Username: admin
Password: 12345
推送容器镜像至harbor
# docker push 192.168.10.15/library/nginx:v1

4.1.2 v2版本

生成网站文件v2版
[root@harbor nginximg]# vim index.html
[root@harbor nginximg]# cat index.html
v2
编写Dockerfile文件,用于构建容器镜像
[root@harbor nginximg]# vim Dockerfile
[root@harbor nginximg]# cat Dockerfile
FROM nginx:latest

MAINTAINER  'tom<tom@kubemsb.com>'

ADD index.html /usr/share/nginx/html

RUN echo "daemon off;" >> /etc/nginx/nginx.conf

EXPOSE 80

CMD /usr/sbin/nginx
使用docker build构建容器镜像
[root@harbor nginximg]# docker build -t 192.168.10.15/library/nginx:v2 .
推送镜像至Harbor
[root@harbor nginximg]# docker push 192.168.10.15/library/nginx:v2

在这里插入图片描述

4.2 发布服务

在docker swarm中,对外暴露的是服务(service),而不是容器。

为了保持高可用架构,它允许同时启动多个容器共同支撑一个服务;如果某个容器挂了,它会自动调度新的容器进行替换

4.2.1 使用docker service ls查看服务

在管理节点(manager node)上操作

[root@sm1 ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS

4.2.2 发布服务

[root@sm1 ~]# docker service create --name nginx-svc-1 --replicas 1 --publish 80:80  192.168.10.15/library/nginx:v1
ucif0ibkjqrd7meal6vqwnduz
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
说明
* 创建一个服务,名为nginx-svc-1
* --replicas 1 指定1个副本
* --publish 80:80  将服务内部的80端口发布到外部网络的80端口
* 使用的镜像为`192.168.10.15/library/nginx:v1`

4.2.3 查看已发布服务

[root@sm1 ~]# docker service ls
ID             NAME          MODE         REPLICAS   IMAGE                            PORTS
ucif0ibkjqrd   nginx-svc-1   replicated   1/1        192.168.10.15/library/nginx:v1   *:80->80/tcp

4.2.4 查看已发布服务容器

[root@sm1 ~]# docker service ps  nginx-svc-1
ID             NAME            IMAGE                            NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
47t0s0egf6xf   nginx-svc-1.1   192.168.10.15/library/nginx:v1   sw1       Running         Running 48 minutes ago

[root@sw1 ~]# docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS          PORTS     NAMES
1bdf8981f511   192.168.10.15/library/nginx:v1   "/docker-entrypoint.…"   53 minutes ago   Up 53 minutes   80/tcp    nginx-svc-1.1.47t0s0egf6xf1n8m0c0jez3q0

4.2.5 访问已发布的服务

[root@sm1 ~]# curl http://192.168.10.10
v1
[root@sm1 ~]# curl http://192.168.10.11
v1
[root@sm1 ~]# curl http://192.168.10.12
v1
[root@sm1 ~]# curl http://192.168.10.13
v1
[root@sm1 ~]# curl http://192.168.10.14
v1

在集群之外的主机访问

在这里插入图片描述

4.3 服务扩展

使用scale指定副本数来扩展

[root@sm1 ~]# docker service scale nginx-svc-1=2
nginx-svc-1 scaled to 2
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
[root@sm1 ~]# docker service ls
ID             NAME          MODE         REPLICAS   IMAGE                            PORTS
ucif0ibkjqrd   nginx-svc-1   replicated   2/2        192.168.10.15/library/nginx:v1   *:80->80/tcp
[root@sm1 ~]# docker service ps nginx-svc-1
ID             NAME            IMAGE                            NODE      DESIRED STATE   CURRENT STATE               ERROR     PORTS
47t0s0egf6xf   nginx-svc-1.1   192.168.10.15/library/nginx:v1   sw1       Running         Running about an hour ago
oy16nuh5udn0   nginx-svc-1.2   192.168.10.15/library/nginx:v1   sw2       Running         Running 57 seconds ago

[root@sw1 ~]# docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED             STATUS             PORTS     NAMES
1bdf8981f511   192.168.10.15/library/nginx:v1   "/docker-entrypoint.…"   About an hour ago   Up About an hour   80/tcp    nginx-svc-1.1.47t0s0egf6xf1n8m0c0jez3q0
[root@sw2 ~]# docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED              STATUS              PORTS     NAMES
0923c0d10223   192.168.10.15/library/nginx:v1   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    nginx-svc-1.2.oy16nuh5udn0s1hda5bcpr9hd

问题:现在仅扩展为2个副本,如果把服务扩展到3个副本,集群会如何分配主机呢?

[root@sm1 ~]# docker service scale nginx-svc-1=3
nginx-svc-1 scaled to 3
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
[root@sm1 ~]# docker service ps nginx-svc-1
ID             NAME            IMAGE                            NODE      DESIRED STATE   CURRENT STATE               ERROR     PORTS
47t0s0egf6xf   nginx-svc-1.1   192.168.10.15/library/nginx:v1   sw1       Running         Running about an hour ago
oy16nuh5udn0   nginx-svc-1.2   192.168.10.15/library/nginx:v1   sw2       Running         Running 12 minutes ago
mn9fwxqbc9d1   nginx-svc-1.3   192.168.10.15/library/nginx:v1   sm1       Running         Running 9 minutes ago
说明:
当把服务扩展到一定数量时,管理节点也会参与到负载运行中来。

4.4 服务裁减

[root@sm1 ~]# docker service scale nginx-svc-1=2
nginx-svc-1 scaled to 2
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
[root@sm1 ~]# docker service ls
ID             NAME          MODE         REPLICAS   IMAGE                            PORTS
ucif0ibkjqrd   nginx-svc-1   replicated   2/2        192.168.10.15/library/nginx:v1   *:80->80/tcp
[root@sm1 ~]# docker service ps nginx-svc-1
ID             NAME            IMAGE                            NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
47t0s0egf6xf   nginx-svc-1.1   192.168.10.15/library/nginx:v1   sw1       Running         Running 2 hours ago
oy16nuh5udn0   nginx-svc-1.2   192.168.10.15/library/nginx:v1   sw2       Running         Running 29 minutes ago

4.5 负载均衡

服务中包含多个容器时,每次访问将以轮询的方式访问到每个容器

修改sw1主机中容器网页文件
[root@sw1 ~]# docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED             STATUS             PORTS     NAMES
1bdf8981f511   192.168.10.15/library/nginx:v1   "/docker-entrypoint.…"   About an hour ago   Up About an hour   80/tcp    nginx-svc-1.1.47t0s0egf6xf1n8m0c0jez3q0
[root@sw1 ~]# docker exec -it 1bdf bash
root@1bdf8981f511:/# echo "sw1 web" > /usr/share/nginx/html/index.html
root@1bdf8981f511:/# exit
修改sw2主机中容器网页文件
[root@sw2 ~]# docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS          PORTS     NAMES
0923c0d10223   192.168.10.15/library/nginx:v1   "/docker-entrypoint.…"   42 minutes ago   Up 42 minutes   80/tcp    nginx-svc-1.2.oy16nuh5udn0s1hda5bcpr9hd
[root@sw2 ~]# docker exec -it 0923 bash
root@0923c0d10223:/# echo "sw2 web" > /usr/share/nginx/html/index.html
root@0923c0d10223:/# exit
[root@sm1 ~]# curl http://192.168.10.10
sw1 web
[root@sm1 ~]# curl http://192.168.10.10
sw2 web
[root@sm1 ~]# curl http://192.168.10.10
sw1 web
[root@sm1 ~]# curl http://192.168.10.10
sw2 web

4.6 删除服务

[root@sm1 ~]# docker service ls
ID             NAME          MODE         REPLICAS   IMAGE                            PORTS
ucif0ibkjqrd   nginx-svc-1   replicated   2/2        192.168.10.15/library/nginx:v1   *:80->80/tcp
[root@sm1 ~]# docker service rm nginx-svc-1
nginx-svc-1
[root@sm1 ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS

4.7 服务版本更新

[root@sm1 ~]# docker service create --name nginx-svc --replicas=1 --publish 80:80 192.168.10.15/library/nginx:v1
yz3wq6f1cgf10vtq5ne4qfwjz
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
[root@sm1 ~]# curl http://192.168.10.10
v1
[root@sm1 ~]# docker service update nginx-svc --image 192.168.10.15/library/nginx:v2
nginx-svc
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
[root@sm1 ~]# curl http://192.168.10.10
v2

4.8 服务版本回退

[root@sm1 ~]# docker service update nginx-svc --image 192.168.10.15/library/nginx:v1
nginx-svc
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
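
除了重新指定旧版本镜像外,也可以使用--rollback参数直接回退到上一次更新前的状态(示例):
[root@sm1 ~]# docker service update --rollback nginx-svc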

4.9 服务版本滚动间隔更新

# docker service create --name nginx-svc --replicas 60 --publish 80:80 192.168.10.15/library/nginx:v1
pqrt561dckg2wfpect3vf9ll0
overall progress: 60 out of 60 tasks
verify: Service converged
[root@sm1 ~]# docker service update --replicas 60 --image 192.168.10.15/library/nginx:v2 --update-parallelism 5 --update-delay 30s nginx-svc
nginx-svc
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
说明
* --update-parallelism 5 指定并行更新数量
* --update-delay 30s 指定更新间隔时间
docker swarm滚动更新会在节点上留下处于Exited状态的旧容器,可以考虑清除
命令如下:
[root@sw1 ~]# docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
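
服务的更新策略(并行更新数、更新间隔)与当前更新状态,可以通过如下命令查看(示例):
[root@sm1 ~]# docker service inspect --pretty nginx-svc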

4.10 副本控制器

副本控制器

[root@sm1 ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE                            PORTS
yz3wq6f1cgf1   nginx-svc   replicated   3/3        192.168.10.15/library/nginx:v2   *:80->80/tcp
[root@sm1 ~]# docker service ps nginx-svc
ID             NAME              IMAGE                            NODE      DESIRED STATE   CURRENT STATE          ERROR     PORTS
x78l0santsbb   nginx-svc.1       192.168.10.15/library/nginx:v2   sw2       Running         Running 3 hours ago
ura9isskfxku    \_ nginx-svc.1   192.168.10.15/library/nginx:v1   sm1       Shutdown        Shutdown 3 hours ago
z738gvgazish    \_ nginx-svc.1   192.168.10.15/library/nginx:v2   sw1       Shutdown        Shutdown 3 hours ago
3qsrkkxn32bl    \_ nginx-svc.1   192.168.10.15/library/nginx:v1   sm3       Shutdown        Shutdown 3 hours ago
psbi0mxu3amy   nginx-svc.2       192.168.10.15/library/nginx:v2   sw1       Running         Running 3 hours ago
zpjw39bwhd78   nginx-svc.3       192.168.10.15/library/nginx:v2   sm1       Running         Running 3 hours ago
[root@sm1 ~]# docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED       STATUS       PORTS     NAMES
81fffd9132d8   192.168.10.15/library/nginx:v2   "/docker-entrypoint.…"   3 hours ago   Up 3 hours   80/tcp    nginx-svc.3.zpjw39bwhd78pw49svpy4q8zd
[root@sm1 ~]# docker stop 81fffd9132d8;docker rm 81fffd9132d8
81fffd9132d8
81fffd9132d8
[root@sm1 ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE                            PORTS
yz3wq6f1cgf1   nginx-svc   replicated   3/3        192.168.10.15/library/nginx:v2   *:80->80/tcp
[root@sm1 ~]# docker service ps nginx-svc
ID             NAME              IMAGE                            NODE      DESIRED STATE   CURRENT STATE            ERROR                         PORTS
x78l0santsbb   nginx-svc.1       192.168.10.15/library/nginx:v2   sw2       Running         Running 3 hours ago
ura9isskfxku    \_ nginx-svc.1   192.168.10.15/library/nginx:v1   sm1       Shutdown        Shutdown 3 hours ago
z738gvgazish    \_ nginx-svc.1   192.168.10.15/library/nginx:v2   sw1       Shutdown        Shutdown 3 hours ago
3qsrkkxn32bl    \_ nginx-svc.1   192.168.10.15/library/nginx:v1   sm3       Shutdown        Shutdown 3 hours ago
psbi0mxu3amy   nginx-svc.2       192.168.10.15/library/nginx:v2   sw1       Running         Running 3 hours ago
qv6ya3crz1fj   nginx-svc.3       192.168.10.15/library/nginx:v2   sm1       Running         Running 13 seconds ago
zpjw39bwhd78    \_ nginx-svc.3   192.168.10.15/library/nginx:v2   sm1       Shutdown        Failed 19 seconds ago    "task: non-zero exit (137)"

4.11 在指定网络中发布服务

[root@sm1 ~]# docker network create -d overlay tomcat-net
mrkgccdfddy8zg92ja6fpox7p
[root@sm1 ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
5ba369c13795   bridge            bridge    local
54568abb541a   docker_gwbridge   bridge    local
4edcb5c4a324   host              host      local
l6xmfxiiseqk   ingress           overlay   swarm
5d06d748c9c7   none              null      local
mrkgccdfddy8   tomcat-net        overlay   swarm
[root@sm1 ~]# docker network inspect tomcat-net
[
    {
        "Name": "tomcat-net",
        "Id": "mrkgccdfddy8zg92ja6fpox7p",
        "Created": "2022-02-16T13:56:52.338589006Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]
说明:
创建名为tomcat-net的覆盖网络(Overlay Network),这是个二层网络,处于该网络下的docker容器,即使宿主机不一样,也能相互访问
# docker service create --name tomcat \
--network tomcat-net \
-p 8080:8080 \
--replicas 2 \
tomcat:7.0.96-jdk8-openjdk
说明:
创建名为tomcat的服务,使用了刚才创建的覆盖网络
[root@sm1 ~]# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE                        PORTS
wgqkz8vymxkr   tomcat    replicated   2/2        tomcat:7.0.96-jdk8-openjdk   *:8080->8080/tcp
[root@sm1 ~]# docker service ps tomcat
ID             NAME       IMAGE                        NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
fsx1fnssbmtg   tomcat.1   tomcat:7.0.96-jdk8-openjdk   sm3       Running         Running 49 seconds ago
gq0ogycj7orb   tomcat.2   tomcat:7.0.96-jdk8-openjdk   sm2       Running         Running 58 seconds ago

在这里插入图片描述

4.12 服务网络模式

  • 服务模式一共有两种:Ingress和Host,如果不指定,则默认的是Ingress;

    • Ingress模式下,到达Swarm集群中任意节点8080端口的流量,都会被转发到某个服务副本内部的8080端口,即使该节点上没有运行tomcat服务副本也会转发;
# docker service create --name tomcat \
--network tomcat-net \
-p 8080:8080 \
--replicas 2 \
tomcat:7.0.96-jdk8-openjdk
[root@sm1 ~]# docker service ps tomcat
ID             NAME       IMAGE                        NODE      DESIRED STATE   CURRENT STATE           ERROR     PORTS
fsx1fnssbmtg   tomcat.1   tomcat:7.0.96-jdk8-openjdk   sm3       Running         Running 8 minutes ago
gq0ogycj7orb   tomcat.2   tomcat:7.0.96-jdk8-openjdk   sm2       Running         Running 8 minutes ago
[root@sm2 ~]# docker ps
CONTAINER ID   IMAGE                        COMMAND             CREATED         STATUS         PORTS      NAMES
f650498c8e71   tomcat:7.0.96-jdk8-openjdk   "catalina.sh run"   9 minutes ago   Up 9 minutes   8080/tcp   tomcat.2.gq0ogycj7orbu4ua1dwk140as

[root@sm2 ~]# docker inspect f650498c8e71 | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "10.0.0.24", ingress IP地址
                    "IPAddress": "10.0.1.9",  容器IP地址

[root@sm3 ~]# docker ps
CONTAINER ID   IMAGE                        COMMAND             CREATED         STATUS         PORTS      NAMES
9d0c412717d7   tomcat:7.0.96-jdk8-openjdk   "catalina.sh run"   9 minutes ago   Up 9 minutes   8080/tcp   tomcat.1.fsx1fnssbmtgv3qh84fgqknlh

[root@sm3 ~]# docker inspect 9d0c412717d7 | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "10.0.0.23",
                    "IPAddress": "10.0.1.8",

[root@sm1 ~]# ss -anput | grep ":8080"
tcp    LISTEN     0      128    [::]:8080               [::]:*                   users:(("dockerd",pid=2727,fd=54))
[root@sm2 ~]# ss -anput | grep ":8080"
tcp    LISTEN     0      128    [::]:8080               [::]:*                   users:(("dockerd",pid=1229,fd=26))
[root@sm3 ~]# ss -anput | grep ":8080"
tcp    LISTEN     0      128    [::]:8080               [::]:*                   users:(("dockerd",pid=1226,fd=22))
[root@sw1 ~]# ss -anput | grep ":8080"
tcp    LISTEN     0      128    [::]:8080               [::]:*                   users:(("dockerd",pid=1229,fd=39))
[root@sw2 ~]# ss -anput | grep ":8080"
tcp    LISTEN     0      128    [::]:8080               [::]:*                   users:(("dockerd",pid=1229,fd=22))
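The routing mesh can be checked with a quick request (a sketch; it assumes sm1 is 192.168.10.10, the address used in the curl tests elsewhere in this document). sm1 runs no tomcat replica, yet its published port still answers because the ingress network forwards the request to a replica on another node:
# curl -I http://192.168.10.10:8080
A tomcat welcome-page response here confirms the Ingress behaviour described above.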
  • Host模式下,仅在运行有容器副本的机器上开放端口,使用Host模式的命令如下:
# docker service create --name tomcat \
--network tomcat-net \
--publish published=8080,target=8080,mode=host \
--replicas 3 \
tomcat:7.0.96-jdk8-openjdk
[root@sm1 ~]# docker service ps tomcat
ID             NAME       IMAGE                        NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
x6022h0oungs   tomcat.1   tomcat:7.0.96-jdk8-openjdk   sw1       Running         Running 19 seconds ago             *:8080->8080/tcp,*:8080->8080/tcp
jmnthwqi6ubf   tomcat.2   tomcat:7.0.96-jdk8-openjdk   sm1       Running         Running 18 seconds ago             *:8080->8080/tcp,*:8080->8080/tcp
nvcbijnfy2es   tomcat.3   tomcat:7.0.96-jdk8-openjdk   sw2       Running         Running 19 seconds ago             *:8080->8080/tcp,*:8080->8080/tcp

[root@sm1 ~]# ss -anput | grep ":8080"
tcp    LISTEN     0      128       *:8080                  *:*                   users:(("docker-proxy",pid=20963,fd=4))
tcp    LISTEN     0      128    [::]:8080               [::]:*                   users:(("docker-proxy",pid=20967,fd=4))
[root@sw1 ~]# ss -anput | grep ":8080"
tcp    LISTEN     0      128       *:8080                  *:*                   users:(("docker-proxy",pid=20459,fd=4))
tcp    LISTEN     0      128    [::]:8080               [::]:*                   users:(("docker-proxy",pid=20463,fd=4))
[root@sw2 ~]# ss -anput | grep ":8080"
tcp    LISTEN     0      128       *:8080                  *:*                   users:(("docker-proxy",pid=19938,fd=4))
tcp    LISTEN     0      128    [::]:8080               [::]:*                   users:(("docker-proxy",pid=19942,fd=4))
On the nodes that run no tomcat replica (sm2 and sm3 here), nothing is published on port 8080 at all (a quick curl check is sketched after these listings):
[root@sm2 ~]# ss -anput | grep ":8080"

[root@sm3 ~]# ss -anput | grep ":8080"
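The difference from Ingress mode can be confirmed with direct requests (a sketch; it assumes sm1 is 192.168.10.10 and sm2 is 192.168.10.11, following the node numbering used elsewhere in this document).
sm1 runs a replica, so this request should be answered:
# curl -I http://192.168.10.10:8080
sm2 runs no replica and, in Host mode, nothing listens on 8080 there, so this request should fail with connection refused:
# curl -I http://192.168.10.11:8080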

4.13 服务数据持久化存储

4.13.1 本地存储

4.13.1.1 在集群所有主机上创建本地目录
# mkdir -p /data/nginxdata
4.13.1.2 发布服务时挂载本地目录到容器中
[root@sm1 ~]# docker service create --name nginx-svc --replicas 3 --mount "type=bind,source=/data/nginxdata,target=/usr/share/nginx/html" --publish 80:80  192.168.10.15/library/nginx:v1

s31z75rniv4p53ycbqch3xbqm
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
4.13.1.3 验证是否使用本地目录
[root@sm1 ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE                            PORTS
s31z75rniv4p   nginx-svc   replicated   3/3        192.168.10.15/library/nginx:v1   *:80->80/tcp

[root@sm1 ~]# docker service ps nginx-svc
ID             NAME          IMAGE                            NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
vgfhk4lksbtp   nginx-svc.1   192.168.10.15/library/nginx:v1   sm2       Running         Running 54 seconds ago
v2bs9araxeuc   nginx-svc.2   192.168.10.15/library/nginx:v1   sw2       Running         Running 59 seconds ago
1m7fobr3cscz   nginx-svc.3   192.168.10.15/library/nginx:v1   sm3       Running         Running 59 seconds ago
[root@sm2 ~]# ls /data/nginxdata/
[root@sm2 ~]# echo "sm2 web" > /data/nginxdata/index.html
[root@sm3 ~]# ls /data/nginxdata/
[root@sm3 ~]# echo "sm3 web" > /data/nginxdata/index.html
[root@sw2 ~]# ls /data/nginxdata
[root@sw2 ~]# echo "sw2 web" > /data/nginxdata/index.html
[root@sm1 ~]# curl http://192.168.10.10
sm2 web
[root@sm1 ~]# curl http://192.168.10.10
sm3 web
[root@sm1 ~]# curl http://192.168.10.10
sw2 web

Because each node mounts its own local directory, the three replicas return different content: bind-mounting local directories has a data-consistency problem.

4.13.2 网络存储

  • 网络存储卷可以实现跨docker宿主机的数据共享,数据持久保存到网络存储卷中
  • 在创建service时添加卷的挂载参数,网络存储卷可以帮助自动挂载(但需要集群节点都创建该网络存储卷)
4.13.2.1 部署NFS存储

本案例以NFS提供远程存储为例

在192.168.10.15服务器上部署NFS服务,共享目录为docker swarm集群主机使用。

[root@harbor ~]# mkdir /opt/dockervolume

[root@harbor ~]# yum -y install nfs-utils
[root@harbor ~]# vim /etc/exports
[root@harbor ~]# cat /etc/exports
/opt/dockervolume       *(rw,sync,no_root_squash)
[root@harbor ~]# systemctl enable nfs-server

[root@harbor ~]# systemctl start nfs-server
[root@harbor ~]# showmount -e
Export list for harbor:
/opt/dockervolume *
4.13.2.2 为集群所有主机安装nfs-utils软件
# yum -y install nfs-utils
# showmount -e 192.168.10.15
Export list for 192.168.10.15:
/opt/dockervolume *
4.13.2.3 创建存储卷

集群中所有节点

# docker volume create  --driver local --opt type=nfs --opt o=addr=192.168.10.15,rw --opt device=:/opt/dockervolume nginx_volume
nginx_volume

# docker volume ls
DRIVER    VOLUME NAME
local     nginx_volume
# docker volume inspect nginx_volume
[
    {
        "CreatedAt": "2022-02-16T23:29:11+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/nginx_volume/_data",
        "Name": "nginx_volume",
        "Options": {
            "device": ":/opt/dockervolume",
            "o": "addr=192.168.10.15,rw",
            "type": "nfs"
        },
        "Scope": "local"
    }
]
4.13.2.4 发布服务
[root@sm1 ~]# docker service create  --name nginx-svc --replicas 3 --publish 80:80 --mount "type=volume,source=nginx_volume,target=/usr/share/nginx/html"  192.168.10.15/library/nginx:v1
uh6k84b87n8vciuirln4zqb4v
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
4.13.2.5 验证
[root@sm1 ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE                            PORTS
uh6k84b87n8v   nginx-svc   replicated   3/3        192.168.10.15/library/nginx:v1   *:80->80/tcp
[root@sm1 ~]# docker service ps nginx-svc
ID             NAME          IMAGE                            NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
k2vxpav5oadf   nginx-svc.1   192.168.10.15/library/nginx:v1   sw2       Running         Running 43 seconds ago
v8fh0r89wt5i   nginx-svc.2   192.168.10.15/library/nginx:v1   sw1       Running         Running 43 seconds ago
xb0nyft8ou4d   nginx-svc.3   192.168.10.15/library/nginx:v1   sm1       Running         Running 43 seconds ago
[root@sm1 ~]# df -Th | grep nfs
:/opt/dockervolume      nfs        50G  8.9G   42G   18% /var/lib/docker/volumes/nginx_volume/_data
[root@sw1 ~]# df -Th | grep nfs
:/opt/dockervolume      nfs        50G  8.9G   42G   18% /var/lib/docker/volumes/nginx_volume/_data
[root@sw2 ~]# df -Th | grep nfs
:/opt/dockervolume      nfs        50G  8.9G   42G   18% /var/lib/docker/volumes/nginx_volume/_data
[root@harbor ~]# echo "nfs test" > /opt/dockervolume/index.html
[root@sm1 ~]# curl http://192.168.10.10
nfs test
[root@sm1 ~]# curl http://192.168.10.11
nfs test
[root@sm1 ~]# curl http://192.168.10.12
nfs test
[root@sm1 ~]# curl http://192.168.10.13
nfs test
[root@sm1 ~]# curl http://192.168.10.14
nfs test

4.14 服务互联与服务发现

如果一个nginx服务与一个mysql服务之间需要连接,在docker swarm如何实现呢?

方法1:

把mysql服务也使用 --publish参数发布到外网,但这样做的缺点是:mysql这种服务发布到外网不安全

方法2:

Run the mysql service (and others like it) on an internal network only; all that is needed is for the nginx service to be able to reach mysql. In Docker Swarm this can be done with an overlay network.

But one problem remains: when the number of replicas changes or the container IPs change, we still want the service to be reachable; this is what **service discovery** provides.

With service discovery, consumers of a service do not need to know where it runs, what its IP is, or how many replicas it has in order to talk to it.

The ingress network shown by docker network ls below is an overlay-type network, but it does not support service discovery:

[root@sm1 ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
5ba369c13795   bridge            bridge    local
54568abb541a   docker_gwbridge   bridge    local
4edcb5c4a324   host              host      local
l6xmfxiiseqk   ingress           overlay   swarm 此处
5d06d748c9c7   none              null      local
mrkgccdfddy8   tomcat-net        overlay   swarm

我们需要自建一个overlay网络来实现服务发现, 需要相互通信的service也必须属于同一个overlay网络

[root@sm1 ~]# docker network create --driver overlay --subnet 192.168.100.0/24 self-network
ejpf8zzig5rjsgefqucopcsdt

说明:

  • --driver overlay selects the overlay driver
  • --subnet assigns the subnet for the network
  • self-network is the name of the custom network
[root@sm1 ~]# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
5ba369c13795   bridge            bridge    local
54568abb541a   docker_gwbridge   bridge    local
4edcb5c4a324   host              host      local
l6xmfxiiseqk   ingress           overlay   swarm
5d06d748c9c7   none              null      local
ejpf8zzig5rj   self-network      overlay   swarm 此处
mrkgccdfddy8   tomcat-net        overlay   swarm

验证自动发现

1, Publish the nginx-svc service, placing it on the self-created overlay network

[root@sm1 ~]# docker service create --name nginx-svc --replicas 3 --network self-network --publish 80:80  192.168.10.15/library/nginx:v1
rr21tvm1xpi6vk3ic83tfy9e5
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged

2, 发布一个busybox服务,也指定在自建的overlay网络

[root@sm1 ~]# docker service create --name test --network self-network  busybox sleep 100000
w14lzhhzdyqwt18lrby4euw98
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

说明:

  • The service is named test

  • busybox is an image that bundles common Linux commands; it makes it easy to test connectivity to the nginx-svc service

  • No replica count is specified, so the default of one replica is used

  • busybox is not a long-running daemon, so by itself it would exit immediately; sleep 100000 gives it a long enough run time for our tests

3, 查出test服务在哪个节点运行的容器

[root@sm1 ~]# docker service ps test
ID             NAME      IMAGE            NODE      DESIRED STATE   CURRENT STATE                ERROR     PORTS
x8nkifpdtyw5   test.1    busybox:latest   sm2       Running         Running about a minute ago

4, 去运行test服务的容器节点查找容器的名称

[root@sm2 ~]# docker ps
CONTAINER ID   IMAGE            COMMAND          CREATED              STATUS              PORTS     NAMES
8df13819bd5c   busybox:latest   "sleep 100000"   About a minute ago   Up About a minute             test.1.x8nkifpdtyw5177zhr0r1lxad

5, 使用查找出来的容器名称,执行命令测试

[root@sm2 ~]# docker exec -it test.1.x8nkifpdtyw5177zhr0r1lxad ping -c 2 nginx-svc
PING nginx-svc (192.168.100.2): 56 data bytes
64 bytes from 192.168.100.2: seq=0 ttl=64 time=0.093 ms
64 bytes from 192.168.100.2: seq=1 ttl=64 time=0.162 ms

--- nginx-svc ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.093/0.127/0.162 ms

Test result: the test service can ping the nginx-svc service, and the address returned is an IP on the self-created network (192.168.100.2)

[root@sm1 ~]# docker service inspect nginx-svc
[
  ......
            "VirtualIPs": [
                {
                    "NetworkID": "l6xmfxiiseqkl57wnsm4cykps",
                    "Addr": "10.0.0.36/24"
                },
                {
                    "NetworkID": "ejpf8zzig5rjsgefqucopcsdt",
                    "Addr": "192.168.100.2/24" 与此处IP地址保持一致。
                }
            ]
        }
    }
]

6, Inspect the individual nginx-svc containers (3 replicas) on each node; their IPs all differ from the address answered in the ping above, which is the service VIP

[root@sm1 ~]# docker inspect nginx-svc.1.6nxixaw3tn2ld3vklfjldnpl5 | grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "10.0.0.37",
                    "IPAddress": "192.168.100.3",
[root@sw1 ~]# docker inspect nginx-svc.3.steywkaxfboynglx4bsji6jd1 | grep -i ipaddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "10.0.0.39",
                    "IPAddress": "192.168.100.5",
[root@sw2 ~]# docker inspect nginx-svc.2.rz1iifb9eg0tos7r59cbesucd | grep -i ipaddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "10.0.0.38",
                    "IPAddress": "192.168.100.4",

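Swarm's DNS-based service discovery can also be inspected directly (a sketch reusing the busybox task container name from the transcript above; container names will differ in other environments). Looking up the service name should return the VIP, while the special name tasks.nginx-svc should return the individual task IPs:
[root@sm2 ~]# docker exec -it test.1.x8nkifpdtyw5177zhr0r1lxad nslookup nginx-svc
[root@sm2 ~]# docker exec -it test.1.x8nkifpdtyw5177zhr0r1lxad nslookup tasks.nginx-svc
The first lookup should resolve to 192.168.100.2 (the VIP shown by docker service inspect), the second to the three container addresses listed above.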
7, Follow-up tests: scaling the nginx-svc service out or in, updating it, and rolling it back do not affect the test service's access to nginx-svc.

结论: 在自建的overlay网络内,通过服务发现可以实现服务之间通过服务名(不用知道对方的IP)互联,而且不会受服务内副本个数和容器内IP变化等的影响。

4.15 docker swarm网络

在 Swarm Service 中有三个重要的网络概念:

  • Overlay networks 管理 Swarm 中 Docker 守护进程间的通信。你可以将服务附加到一个或多个已存在的 overlay 网络上,使得服务与服务之间能够通信。
  • ingress network 是一个特殊的 overlay 网络,用于服务节点间的负载均衡。当任何 Swarm 节点在发布的端口上接收到请求时,它将该请求交给一个名为 IPVS 的模块。IPVS 跟踪参与该服务的所有IP地址,选择其中的一个,并通过 ingress 网络将请求路由到它。
    初始化或加入 Swarm 集群时会自动创建 ingress 网络,大多数情况下,用户不需要自定义配置,但是 docker 17.05 和更高版本允许你自定义。
  • docker_gwbridge是一种桥接网络,将 overlay 网络(包括 ingress 网络)连接到一个单独的 Docker 守护进程的物理网络。默认情况下,服务正在运行的每个容器都连接到本地 Docker 守护进程主机的 docker_gwbridge 网络。
    docker_gwbridge 网络在初始化或加入 Swarm 时自动创建。大多数情况下,用户不需要自定义配置,但是 Docker 允许自定义。
| 名称 | 类型 | 注释 |
| --- | --- | --- |
| docker_gwbridge | bridge | none |
| ingress | overlay | none |
| custom-network | overlay | none |
  • docker_gwbridge and ingress are created automatically by swarm once the user runs docker swarm init or docker swarm join.

  • docker_gwbridge is a bridge-type network responsible for the connection between the containers on a host and that host itself.

  • ingress is responsible for routing service traffic to containers across multiple hosts.

  • custom-network is an overlay network created by the user; in most cases we need to create our own network and attach services to it.

在这里插入图片描述
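The networks described above can be examined on any node with docker network inspect; a minimal sketch using Go-template formatting to print just the address ranges:
# docker network inspect ingress --format '{{json .IPAM.Config}}'
# docker network inspect docker_gwbridge --format '{{json .IPAM.Config}}'
The first command prints the ingress overlay subnet (10.0.0.0/24 in the outputs shown earlier), the second the local docker_gwbridge subnet on that host.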

五、docker stack

5.1 docker stack介绍

In the early approach, docker service publishes one service at a time.

A docker-compose YAML file can describe multiple services, but docker-compose can only deploy them on a single host.

A stack is a group of related services that can be orchestrated, published, and managed together.

5.2 docker stack与docker compose区别

  • docker stack ignores the "build" instruction and cannot build new images with stack commands; images must be built beforehand, so docker-compose is better suited to development scenarios;
  • Docker Compose is a Python project that uses the Docker API internally to operate containers, so docker-compose must be installed separately to be used alongside Docker;
  • the docker stack functionality is included in the Docker engine; no extra package is needed, as stacks are simply part of swarm mode;
  • docker stack does not support docker-compose.yml files written against version 2 of the format, i.e. the version must be at least 3, whereas Docker Compose can still handle both version 2 and version 3 files;
  • docker stack covers essentially everything docker compose does, so it is likely to dominate; for most users switching to docker stack is neither difficult nor costly. If you are new to Docker, or choosing the technology for a new project, use docker stack.

5.3 docker stack常用命令

| 命令 | 描述 |
| --- | --- |
| docker stack deploy | 部署新的堆栈或更新现有堆栈 |
| docker stack ls | 列出现有堆栈 |
| docker stack ps | 列出堆栈中的任务 |
| docker stack rm | 删除一个或多个堆栈 |
| docker stack services | 列出堆栈中的服务 |

5.4 部署wordpress案例

1, 编写YAML文件

[root@sm1 ~]# vim stack1.yaml
[root@sm1 ~]# cat stack1.yaml
version: '3'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    deploy:
      replicas: 1

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8010:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]

说明:

  • placement的constraints限制此容器在manager节点

2, 使用docker stack发布

[root@sm1 ~]# docker stack deploy -c stack1.yaml stack1
Creating network stack1_default						创建自建的overlay网络
Creating service stack1_db							创建stack1_db服务
Creating service stack1_wordpress					创建stack1_wordpress服务

If it fails, remove it with docker stack rm stack1, fix the problem, then deploy again.

[root@sm1 ~]# docker stack ls
NAME      SERVICES   ORCHESTRATOR
stack1    2          Swarm
[root@sm1 ~]# docker service ls
ID             NAME               MODE         REPLICAS   IMAGE              PORTS
tw1a8rnde2yr   stack1_db          replicated   1/1        mysql:5.7
zf1h2r4m12li   stack1_wordpress   replicated   1/1        wordpress:latest   *:8010->80/tcp

3, 验证
在这里插入图片描述
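Besides the screenshot, a quick command-line probe also works (a sketch; it assumes the manager sm1 is 192.168.10.10 as in the earlier curl tests). A freshly deployed WordPress normally answers on the published port and redirects to its install page:
# curl -I http://192.168.10.10:8010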

5.5 部署nginx与web管理服务案例

1, 编写YAML文件

[root@sm1 ~]# vim stack2.yaml
[root@sm1 ~]# cat stack2.yaml
version: "3"
services:
  nginx:
    image: 192.168.10.15/library/nginx:v1
    ports:
      - 80:80
    deploy:
      mode: replicated
      replicas: 3

  visualizer:
    image: dockersamples/visualizer
    ports:
      - "9001:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]

  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]

说明: stack中共有3个service

  • nginx服务,3个副本
  • visualizer服务: 图形查看docker swarm集群
  • portainer服务: 图形管理docker swarm集群

2,使用docker stack发布

[root@sm1 ~]# docker stack deploy -c stack2.yaml stack2
Creating network stack2_default
Creating service stack2_portainer
Creating service stack2_nginx
Creating service stack2_visualizer

If it fails, remove it with docker stack rm stack2, fix the problem, then deploy again.
[root@sm1 ~]# docker stack ps stack2
ID             NAME                  IMAGE                             NODE      DESIRED STATE   CURRENT STATE             ERROR     PORTS
zpkt1780g2nr   stack2_nginx.1        192.168.10.15/library/nginx:v1    sm2       Running         Running 54 seconds ago
9iqdgw2fxk0s   stack2_nginx.2        192.168.10.15/library/nginx:v1    sm3       Running         Running 54 seconds ago
4h0ob7b4ho2a   stack2_nginx.3        192.168.10.15/library/nginx:v1    sw2       Running         Running 54 seconds ago
jpp7h6qheh4j   stack2_portainer.1    portainer/portainer:latest        sm1       Running         Running 21 seconds ago
ty0mktx60typ   stack2_visualizer.1   dockersamples/visualizer:latest   sm1       Running         Starting 22 seconds ago

3,验证
在这里插入图片描述

在这里插入图片描述

5.6 nginx+haproxy+nfs案例

1,在docker swarm管理节点上准备配置文件

[root@sm1 ~]# mkdir -p /docker-stack/haproxy
[root@sm1 ~]# cd /docker-stack/haproxy/

[root@sm1 haproxy]# vim haproxy.cfg
global
  log 127.0.0.1 local0
  log 127.0.0.1 local1 notice

defaults
  log global
  mode http
  option httplog
  option dontlognull
  timeout connect 5000ms
  timeout client 50000ms
  timeout server 50000ms
  stats uri /status

frontend balancer
    bind *:8080
    mode http
    default_backend web_backends

backend web_backends
    mode http
    option forwardfor
    balance roundrobin
    server web1 nginx1:80 check
    server web2 nginx2:80 check
    server web3 nginx3:80 check
    option httpchk GET /
    http-check expect status 200

2, 编写YAML编排文件

[root@sm1 haproxy]# vim stack3.yaml
[root@sm1 haproxy]# cat stack3.yaml
version: "3"
services:
  nginx1:
    image: 192.168.10.15/library/nginx:v1
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
    - "nginx_vol:/usr/share/nginx/html"

  nginx2:
    image: 192.168.10.15/library/nginx:v1
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
    - "nginx_vol:/usr/share/nginx/html"

  nginx3:
    image: 192.168.10.15/library/nginx:v1
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
    - "nginx_vol:/usr/share/nginx/html"

  haproxy:
    image: haproxy:latest
    volumes:
      - "./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro"
    ports:
      - "8080:8080"
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]

volumes:
  nginx_vol:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=192.168.10.15,rw"
      device: ":/opt/dockervolume"

3, 发布

[root@sm1 haproxy]# docker stack deploy -c stack3.yaml stack3
Creating network stack3_default
Creating service stack3_nginx3
Creating service stack3_haproxy
Creating service stack3_nginx1
Creating service stack3_nginx2

4, 验证

在这里插入图片描述
在这里插入图片描述
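The same verification can be done from the command line (a sketch; it assumes the manager sm1 is 192.168.10.10 as in earlier tests). Repeated requests should be balanced across nginx1/nginx2/nginx3, all returning the same NFS-backed page, and the HAProxy statistics page should be reachable at the URI configured in haproxy.cfg:
# curl http://192.168.10.10:8080
# curl http://192.168.10.10:8080/status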

基于Docker容器DevOps应用方案 企业业务代码发布系统

一、企业业务代码发布方式

1.1 传统方式

  • 以物理机或虚拟机为颗粒度部署
  • 部署环境比较复杂,需要有先进的自动化运维手段
  • 出现问题后重新部署成本大,一般采用集群方式部署
  • 部署后以静态方式展现

1.2 容器化方式

  • 以容器为颗粒度部署
  • 部署方式简单,启动速度快
  • 一次构建可到处运行
  • 出现故障后,可随时恢复
  • 可同时部署多套环境(测试、预发布、生产环境等)

二、企业业务代码发布逻辑图

在这里插入图片描述

三、企业业务代码发布工具及流程图

3.1 工具

| 序号 | 工具 | 工具用途 |
| --- | --- | --- |
| 1 | git | 用于提交业务代码或克隆业务代码仓库 |
| 2 | gitlab | 用于存储业务代码 |
| 3 | jenkins | 用于利用插件完成业务代码编译、构建、推送至Harbor容器镜像仓库及项目部署 |
| 4 | tomcat | 用于运行JAVA业务代码 |
| 5 | maven | 用于编译业务代码 |
| 6 | harbor | 用于存储业务代码构建的容器镜像 |
| 7 | docker | 用于构建容器镜像,部署项目 |

3.2 流程图

本次部署Java代码包。

在这里插入图片描述

四、企业业务代码发布系统环境部署

4.1 主机规划

| 序号 | 主机名 | 主机IP | 主机功能 | 软件 |
| --- | --- | --- | --- | --- |
| 1 | dev | 192.168.10.20 | 开发者,项目代码solo | git |
| 2 | gitlab-server | 192.168.10.21 | 代码仓库 | gitlab-ce |
| 3 | jenkins-server | 192.168.10.22 | 编译代码、打包镜像、项目发布 | jenkins、docker、git |
| 4 | harbor-server | 192.168.10.23 | 存储容器镜像 | harbor、docker |
| 5 | web-server | 192.168.10.24 | 运行容器,项目上线 | docker |

4.2 主机准备

4.2.1 主机名配置

# hostnamectl set-hostname xxx

根据主机规划实施配置

4.2.2 主机IP地址配置

# vim /etc/sysconfig/network-scripts/ifcfg-ens33
# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none" 配置为静态IP
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ec87533a-8151-4aa0-9d0f-1e970affcdc6"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.2x"  把2x替换为对应的IP地址
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"

4.2.3 主机名与IP地址解析配置

# vim /etc/hosts
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.20 dev
192.168.10.21 gitlab-server
192.168.10.22 jenkins-server
192.168.10.23 harbor-server
192.168.10.24 web-server

4.2.4 主机安全设置

# systemctl stop firewalld;systemctl disable firewalld
# firewall-cmd --state
# sestatus

4.2.5 主机时间同步

# crontab -e

# crontab -l
0 */1 * * * ntpdate time1.aliyun.com

4.3 主机中工具安装

4.3.1 dev主机

下载项目及上传代码至代码仓库

# yum -y install git

4.3.2 gitlab-server主机

4.3.2.1 获取YUM源


# cat /etc/yum.repos.d/gitlab.repo
[gitlab]
name=gitlab-ce
baseurl=https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7
enabled=1
gpgcheck=0
4.3.2.2 gitlab-ce安装
# yum -y install gitlab-ce
4.3.2.3 gitlab-ce配置
# vim /etc/gitlab/gitlab.rb
32 external_url 'http://192.168.10.21'
4.3.2.4 启动gitlab-ce
# gitlab-ctl reconfigure
# gitlab-ctl status
4.3.2.5 访问gitlab-ce
# cat /etc/gitlab/initial_root_password
......

Password: znS4Bqlp0cfYUKg2dHzFiNCAN0GnhtnD4ENjEtEXMVE=


4.3.3 jenkins-server主机

4.3.3.1 jdk安装
# ls
 jdk-8u191-linux-x64.tar.gz
# tar xf jdk-8u191-linux-x64.tar.gz
# mv jdk1.8.0_191 /usr/local/jdk
# vim /etc/profile
# cat /etc/profile
......
export JAVA_HOME=/usr/local/jdk
export PATH=${JAVA_HOME}/bin:$PATH
# source /etc/profile
# java -version
java version "1.8.0_191"
Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)
4.3.3.2 jenkins安装
4.3.3.2.1 安装


#  wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
# rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key


# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

# yum -y install jenkins
4.3.3.2.2 jenkins配置
# vim /etc/init.d/jenkins
在81行下面添加如下内容:
 82 /usr/local/jdk/bin/java
# vim /etc/sysconfig/jenkins
在19行双引号中添加jdk中java命令路径
 19 JENKINS_JAVA_CMD="/usr/local/jdk/bin/java"
4.3.3.2.3 jenkins启动
# chkconfig --list

注:该输出结果只显示 SysV 服务,并不包含
原生 systemd 服务。SysV 配置数据
可能被原生 systemd 配置覆盖。

      要列出 systemd 服务,请执行 'systemctl list-unit-files'。
      查看在具体 target 启用的服务请执行
      'systemctl list-dependencies [target]'。

jenkins         0:关    1:关    2:开    3:开    4:开    5:开    6:关
netconsole      0:关    1:关    2:关    3:关    4:关    5:关    6:关
network         0:关    1:关    2:开    3:开    4:开    5:开    6:关
# chkconfig jenkins on
# systemctl start jenkins
4.3.3.2.4 jenkins访问
# cat /var/lib/jenkins/secrets/initialAdminPassword
3363d658a1a5481bbe51a1ece1eb08ab


4.3.3.2.5 jenkins初始化配置


4.3.3.3 git安装
# yum -y install git
4.3.3.4 maven安装
4.3.3.4.1 获取maven安装包


# wget https://dlcdn.apache.org/maven/maven-3/3.8.4/binaries/apache-maven-3.8.4-bin.tar.gz
4.3.3.4.2 maven安装
# ls
apache-maven-3.8.4-bin.tar.gz
# tar xf apache-maven-3.8.4-bin.tar.gz
# ls
apache-maven-3.8.4
# mv apache-maven-3.8.4 /usr/local/mvn
# vim /etc/profile
......
export JAVA_HOME=/usr/local/jdk
export MAVEN_HOME=/usr/local/mvn
export PATH=${JAVA_HOME}/bin:${MAVEN_HOME}/bin:$PATH
# source /etc/profile
# mvn -v
Apache Maven 3.8.4 (9b656c72d54e5bacbed989b64718c159fe39b537)
Maven home: /usr/local/mvn
Java version: 1.8.0_191, vendor: Oracle Corporation, runtime: /usr/local/jdk/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1160.49.1.el7.x86_64", arch: "amd64", family: "unix"
4.3.3.5 docker安装
# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum -y install docker-ce
# systemctl enable docker
# systemctl start docker

4.3.4 harbor-server主机

4.3.4.1 docker安装
# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum -y install docker-ce
# systemctl enable docker
# systemctl start docker
4.3.4.2 docker-compose安装
4.3.4.2.1 获取docker-compose文件


# wget https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64
4.3.4.2.2 docker-compose安装及测试
# ls
docker-compose-linux-x86_64
# chmod +x docker-compose-linux-x86_64
# mv docker-compose-linux-x86_64 /usr/bin/docker-compose
# docker-compose version
Docker Compose version v2.2.3
4.3.4.3 harbor部署
4.3.4.3.1 harbor部署文件获取


# wget https://github.com/goharbor/harbor/releases/download/v2.4.1/harbor-offline-installer-v2.4.1.tgz
4.3.4.3.2 harbor部署
# ls
harbor-offline-installer-v2.4.1.tgz
# tar xf harbor-offline-installer-v2.4.1.tgz -C /home
# cd /home
# ls
harbor
[root@harbor-server home]# cd harbor/
[root@harbor-server harbor]# ls
common.sh  harbor.v2.4.1.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
# mv harbor.yml.tmpl harbor.yml
[root@harbor-server harbor]# vim harbor.yml
[root@harbor-server harbor]# cat harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.10.23 修改

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
#https: 注释
  # https port for harbor, default is 443
#  port: 443 注释
  # The path of cert and key files for nginx
#  certificate: /your/certificate/path 注释
#  private_key: /your/private/key/path 注释

[root@harbor-server harbor]# ./prepare
[root@harbor-server harbor]# ./install.sh
[root@harbor-server harbor]# docker ps
CONTAINER ID   IMAGE                                COMMAND                  CREATED              STATUS                        PORTS                                   NAMES
12605eae32bb   goharbor/harbor-jobservice:v2.4.1    "/harbor/entrypoint.…"   About a minute ago   Up About a minute (healthy)                                           harbor-jobservice
85849b46d56d   goharbor/nginx-photon:v2.4.1         "nginx -g 'daemon of…"   About a minute ago   Up About a minute (healthy)   0.0.0.0:80->8080/tcp, :::80->8080/tcp   nginx
6a18e370354f   goharbor/harbor-core:v2.4.1          "/harbor/entrypoint.…"   About a minute ago   Up About a minute (healthy)                                           harbor-core
d115229ef49d   goharbor/harbor-portal:v2.4.1        "nginx -g 'daemon of…"   About a minute ago   Up About a minute (healthy)                                           harbor-portal
f5436556dd32   goharbor/harbor-db:v2.4.1            "/docker-entrypoint.…"   About a minute ago   Up About a minute (healthy)                                           harbor-db
7fb8c4945abe   goharbor/harbor-registryctl:v2.4.1   "/home/harbor/start.…"   About a minute ago   Up About a minute (healthy)                                           registryctl
d073e5da1399   goharbor/redis-photon:v2.4.1         "redis-server /etc/r…"   About a minute ago   Up About a minute (healthy)                                           redis
7c09362c986b   goharbor/registry-photon:v2.4.1      "/home/harbor/entryp…"   About a minute ago   Up About a minute (healthy)                                           registry
55d7f39909e3   goharbor/harbor-log:v2.4.1           "/bin/sh -c /usr/loc…"   About a minute ago   Up About a minute (healthy)   127.0.0.1:1514->10514/tcp               harbor-log

4.3.5 web-server

docker安装

# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum -y install docker-ce
# systemctl enable docker
# systemctl start docker

4.4 工具集成配置

4.4.1 配置docker主机使用harbor

4.4.1.1 jenkins-server
[root@jenkins-server ~]# vim /etc/docker/daemon.json
[root@jenkins-server ~]# cat /etc/docker/daemon.json
{
        "insecure-registries": ["http://192.168.10.23"]
}
[root@jenkins-server ~]# systemctl restart docker
[root@jenkins-server ~]# docker login 192.168.10.23
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
4.4.1.2 harbor-server
[root@harbor-server harbor]# vim /etc/docker/daemon.json
[root@harbor-server harbor]# cat /etc/docker/daemon.json
{
        "insecure-registries": ["http://192.168.10.23"]
}
[root@harbor-server harbor]# docker-compose down
[root@harbor-server harbor]# systemctl restart docker
[root@harbor-server harbor]# docker-compose up -d
4.4.1.3 web-server
[root@web-server ~]# vim /etc/docker/daemon.json
[root@web-server ~]# cat /etc/docker/daemon.json
{
        "insecure-registries": ["http://192.168.10.23"]
}
[root@web-server ~]# systemctl restart docker
[root@web-server ~]# docker login 192.168.10.23
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

4.4.2 配置jenkins使用docker

在jenkins-server主机上配置

验证系统中是否有jenkins用户
[root@jenkins-server ~]# grep jenkins /etc/passwd
jenkins:x:997:995:Jenkins Automation Server:/var/lib/jenkins:/bin/false
验证系统中是否有docker用户及用户组
[root@jenkins-server ~]# grep docker /etc/group
docker:x:993:
添加jenkins用户到docker用户组
[root@jenkins-server ~]# usermod -G docker jenkins
[root@jenkins-server ~]# grep docker /etc/group
docker:x:993:jenkins
重启jenkins服务
[root@jenkins-server ~]# systemctl restart jenkins
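As a quick sanity check (a sketch), confirm that the jenkins user can now talk to the Docker daemon; sudo executes the command directly, so the user's /bin/false login shell does not get in the way:
[root@jenkins-server ~]# sudo -u jenkins docker ps
If a container table (even an empty one) is printed without a permission error, the group change has taken effect.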

4.4.3 密钥配置

4.4.3.1 dev主机至gitlab-ce
4.4.3.1.1 dev主机生成密钥对
[root@dev ~]# ssh-keygen
4.4.3.1.2 添加公钥至gitlab-ce
[root@dev ~]# cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCy2PdvT9qX55CLZzzaaEf06x8gl3yHGfdJSmAp9L1Fdtcbd3yz3U0lgdOwWpB8fQ/A3HoUUTWCb1iC5WJBOvqkoD8rJ2xC3HJ62zjOjmqcn2fEs09CzJj3bCfahuqPzaPkIOoH42/Y2QdImQ7xZOqqjS7aIc5T2FjDLG3bMhaYFyvx18b1qiPACuh67iniPQnL667MFZ/0QGGVnQKwxop+SezhP9QqV1bvPk94eTdkERIBiY1CNcNmVryk6PzSKY8gfW++3TGN9F+knhMXcswFOu6FzqxcA3G+hYg+Io2HJaDrsfHGZ6CP5T9QiOlIWlNxz05BOK3OFQ5BPeomA+jv root@dev

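After the public key has been added in the GitLab UI, the trust can be verified from the dev host (a sketch; 192.168.10.21 is the gitlab-server address from the host plan). GitLab answers an SSH probe with a short welcome message instead of a shell:
[root@dev ~]# ssh -T git@192.168.10.21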

4.4.3.2 jenkins-server主机至gitlab-ce
4.4.3.2.1 在jenkins-server生成密钥对
[root@jenkins-server ~]# ssh-keygen
4.4.3.2.2 添加公钥至gitlab-ce
[root@jenkins-server ~]# cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCyg3WaEm5yH9yva8Jm5wfTPwN3ROGMNPpAex8zYj+M1GesoMtE6gkiKHWydAJBiLuu/1fBx6HlgzzxghVj9oK4DmTRZQh2IZY4+zZIGBRaDBuBO1f7+SdVE/jZoLd1a+yZ3FQmy37AlXUcIKxbrDBtefvJ31faziWyZKvT4BGFJCznRU6AOxOg1pe4bWbWI+dGnMIIq7IhtK+6tY/w3OlF7xcWmrJP1oucpq33BYOrnRCL9EO5Zp2jcejDeG5UvXONG7CggT7FDhjwcCRZvX+AutDGAtgBckNXZjV9SDKWgDifCSDtDfV4Be4zb8b3hxtSMsbEY8YHxsThsmHrUkbz root@jenkins-server


4.4.3.3 配置jenkins-server主机的私钥到凭据列表
[root@jenkins-server ~]# cat /root/.ssh/id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAsoN1mhJuch/cr2vCZucH0z8Dd0ThjDT6QHsfM2I/jNRnrKDL
ROoJIih1snQCQYi7rv9Xwceh5YM88YIVY/aCuA5k0WUIdiGWOPs2SBgUWgwbgTtX
+/knVRP42aC3dWvsmdxUJst+wJV1HCCsW6wwbXn7yd9X2s4lsmSr0+ARhSQs50VO
gDsToNaXuG1m1iPnRpzCCKuyIbSvurWP8NzpRe8XFpqyT9aLnKat9wWDq50Qi/RD
uWado3How3huVL1zjRuwoIE+xQ4Y8HAkWb1/gLrQxgLYAXJDV2Y1fUgyloA4nwkg
7Q31eAXuM2/G94cbUjLGxGPGB8bE4bJh61JG8wIDAQABAoIBAEOwy6BPyuelo1Y1
g3LnujTlWRgZ23kCAb7/sPYYFEb/qAxysIGCSVJVi0PO76gQBDM4iftmCsLv/+UI
UbolGK5Ybuxj5lB9LeyPfaba0qTOoINhkFxwvvRo7V0Ar3BsKzywqoxHb9nxEoZG
8XSVl4t7zPlgonzK3MqHmAxwk9QrIB/rnjolHGN6HvfK2Cwq5WN1crspwQ+XDbRS
J5qoAtv6PJzrU6QhJl/zSMCb0MytlIhZi+V+1yY/QhAYrWJgWypEwGlAXlVC90r4
twX1W/sl63xzFF396WjM1478yqpttvID06dKTC9T3y/k8lLmRNXwqmTCIm7C/jxP
9wjXJUECgYEA4r1N4AML7JpvE7hBRkreSdIIwoppRkBwl6gt/AJj8yt2ydQ8PY4P
X3s5ZkCfPzHX2XsVUUcQpsBFcU2PyG29+qzt3XOmlGYgJG11xPQbwi95/u9VSd5u
AuaNNa2YPw2teuM0hKVAl5knfy0+YHcOCdU14gHCCWsD4uOz5Zg9jVMCgYEAyYzv
SBvCqbZ4d5agTn+ZiOkmgKVT4UVnmZPivFXnCWiIbX2fi3ok7jU1hZAs6lf6VlTU
EPV8T1LwjO9yhFmicepzSl9lJCMbMXHt20OsqN0oUQFpoTQ07pbBE2K8c1IuQUEi
B2SoLHqv7Ym9jHQqvT3DVhTiC+H2LwsgVRvvi+ECgYAxaID0xJUvnMOBr5ABykTA
H1WrVs/z8AzY71v942N2VM1Q07/AxhkRfF+YqZJKCgl4KbsOeAbn31QCiZ1AVrGk
U1SOAiqVgd+VMIkOPwdhfEkARZT3QNIGLcktnkNj0g4wjhwen4gAwO37Z5eFG8xi
ViSkuC9ZMAmrwmSsLk2TYwKBgHQh0tYXuMiVLUCq99+DQnJS9S53FKfel900Cxc9
4AvZwZJlKgLx9EmVOyukcVzuKH6KDk9fQ6tpPNXYOoHsK9+7mYanBN4XpFmPLeCD
U/9QvyQ9ziFmtYEsOD/1SmSgW6qZ3wOnigdnAeu6zA8b+GxmJCF7kuwJ3RIqNQ0V
NafBAoGAXyynoTT2zugFq8jYRubxkMk7NdnTRAnGh+mlyrGGMsNLmPvfAw+6yKph
1fVHKXHtSrgtK0CVOIcmaH3r+LfG4Mfrjlq+8qiKcepBFvO9cZLNKn11vqQtzs7m
y+ydl4xTcCPoAMDsVeamJ3fv+9nyXe5KqYtw+BJMjpP+PnNN2YQ=
-----END RSA PRIVATE KEY-----


4.5 jenkins插件安装

4.5.1 maven integration

用于编译JAVA项目


4.5.2 git parameter

用于基于git版本提交进行参数构建项目


4.5.3 gitlab

用于jenkins-server拉取项目


4.5.4 Generic Webhook Trigger

用于项目自动化构建
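With this plugin enabled on a job, GitLab can trigger builds by calling the plugin's invoke endpoint (a sketch; the token value is hypothetical and the Jenkins address/port assume a default installation on jenkins-server). The URL is configured as a GitLab push webhook:
http://192.168.10.22:8080/generic-webhook-trigger/invoke?token=<your-token>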
