Docker 11: One-Command Launch of a Highly Available Web Site with Docker Compose

11: Starting a Web Site with Docker Compose

Use Docker Compose, Docker's single-host orchestration tool, to orchestrate a highly available web site:

  • HAProxy in front as the load balancer;
  • 2 Nginx containers, serving static resources and proxying dynamic requests to the backends;
  • 4 Tomcat containers running the Java application to answer dynamic requests;
  • a Redis container storing session data, providing session persistence for users;
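
Request flow through the stack, summarized:

client → HAProxy (:80) → nginx1 / nginx2 → tomcat1…tomcat4 → redis (shared sessions)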

11.1: Lab Environment Preparation

11.1.1: Lab Topology

  • Docker-Server: 192.168.1.201, the host the containers run on;
  • Harbor-Server: 192.168.1.121, the image registry;
  • Images-Server: 192.168.1.111, the image build server, where Dockerfiles are built into images and pushed to Harbor;

(Figure: lab topology)

11.1.2: Preparing Docker-Server

Install and start Docker

Create the Docker installation script; it installs version 18.09.9:

root@Docker-Server:~# vim apt_install_docker.sh 
#!/bin/bash
# apt install docker.
sudo apt-get update;
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common;
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -;
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable";
sudo apt-get -y update;
sudo apt-get -y install docker-ce=5:18.09.9~3-0~ubuntu-bionic docker-ce-cli=5:18.09.9~3-0~ubuntu-bionic;

Run the script to install Docker:

root@Docker-Server:~# bash apt_install_docker.sh 
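
A quick sanity check of the installed version:

root@Docker-Server:~# docker --version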

Add an insecure-registry entry:

Add the Harbor address;

root@Docker-Server:~# vim /lib/systemd/system/docker.service 
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry 192.168.1.121

root@Docker-Server:~# systemctl daemon-reload
root@Docker-Server:~# systemctl restart docker && systemctl enable docker
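
Confirm that the registry exception took effect; docker info should now list 192.168.1.121 under Insecure Registries:

root@Docker-Server:~# docker info | grep -A 2 'Insecure Registries'
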
Install Docker Compose

Use the standalone binary:

root@Docker-Server:~# file docker-compose-Linux-x86_64 
docker-compose-Linux-x86_64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/l, for GNU/Linux 2.6.32, BuildID[sha1]=294d1f19a085a730da19a6c55788ec08c2187039, stripped

root@Docker-Server:~# cp docker-compose-Linux-x86_64 /usr/bin/docker-compose

root@Docker-Server:~# docker-compose version
docker-compose version 1.25.4, build 8d51620a
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019

11.1.3: Preparing Images-Server

A working Docker environment and the Dockerfile directory hierarchy are all that is needed;

see the chapter: Building Docker Images;

11.1.4: Preparing Harbor-Server

Install a standalone Harbor;

see the chapter: Deploying Single-Node Harbor (1.7.5)

11.2: Preparing the Compose File

11.2.1: Container Orchestration Plan

Definition of services, containers, and images:

| Service | Container | Image (tag) | links |
| --- | --- | --- | --- |
| haproxy | testapp_haproxy | 192.168.1.121/haproxy/centos-haproxy:1.8.20 | nginx1, nginx2 |
| nginx1 | testapp_nginx1 | 192.168.1.121/nginx/centos-nginx:1.16.1 | tomcat1, tomcat2 |
| nginx2 | testapp_nginx2 | 192.168.1.121/nginx/centos-nginx:1.16.1 | tomcat3, tomcat4 |
| tomcat1 | testapp_tomcat1 | 192.168.1.121/testapp/testapp:v1 | redis |
| tomcat2 | testapp_tomcat2 | 192.168.1.121/testapp/testapp:v1 | redis |
| tomcat3 | testapp_tomcat3 | 192.168.1.121/testapp/testapp:v1 | redis |
| tomcat4 | testapp_tomcat4 | 192.168.1.121/testapp/testapp:v1 | redis |
| redis | testapp_redis | 192.168.1.121/redis/centos-redis:4.0.14 | — |

Because the Nginx→Tomcat load-balancing relationships must also be defined up front, the tomcat containers cannot be started by dynamic scaling, i.e. by:

  • defining just a single testapp_tomcat service in docker-compose.yml;
  • scaling the backend tomcat containers out with docker-compose scale;

Instead, multiple tomcat services must be defined in docker-compose.yml, so the mapping of each Nginx upstream to specific Tomcat containers can be pinned down;

How the containers are started and managed
  • The HAProxy container uses the image's default configuration; the config file is mapped in via volumes, which simplifies scaling Nginx out later;
    only HAProxy publishes ports externally; all other containers talk to each other over the container network;
  • The Nginx containers also use the image's default configuration, with config files mapped in via volumes to simplify scaling Tomcat out later;
  • The testapp image is fully baked; version upgrades are done by rebuilding the image, so releases are driven by container images;
  • The Redis image is fully baked; its working directory is mapped via volumes to a dedicated data directory on the host, persisting the container's data;
  • In both the Nginx and testapp containers, the nginx and tomcat processes run as user www, keeping file-system permissions consistent;

11.2.2: Create the Docker Compose Working Directory

Create a ComposeFile directory with one subdirectory per project, which keeps multiple projects manageable;

Create the working directory for the testapp project:

root@Docker-Server:~# mkdir /ComposeFile/testapp -p

11.2.3: Writing docker-compose.yml

Note:

  • when mapping host paths in volumes, use absolute paths;
    if you do use a relative path, prefix it with ./ to indicate the current directory; e.g., to reference conf/ under the current directory, write ./conf/
  • the services below are declared at the top level with no version: key, i.e., the legacy v1 Compose file format;
root@Docker-Server:~# cd /ComposeFile/testapp/
root@Docker-Server:/ComposeFile/testapp# vim docker-compose.yml
haproxy:
  image: 192.168.1.121/haproxy/centos-haproxy:1.8.20
  volumes:
    - ./conf/haproxy.cfg:/etc/haproxy/haproxy.cfg
  expose:
    - 80
    - 443
    - 9999
  ports:
    - "80:80"
    - "443:443"
    - "9999:9999"
  links:
    - nginx1
    - nginx2

nginx1:
  image: 192.168.1.121/nginx/centos-nginx:1.16.1
  volumes:
    - ./conf/nginx1.conf:/apps/nginx/conf/nginx.conf
  expose:
    - 80
  links:
    - tomcat1
    - tomcat2

nginx2:
  image: 192.168.1.121/nginx/centos-nginx:1.16.1
  volumes:
    - ./conf/nginx2.conf:/apps/nginx/conf/nginx.conf
  expose:
    - 80
  links:
    - tomcat3
    - tomcat4

tomcat1:
  image: 192.168.1.121/testapp/testapp:v1
  expose:
    - 8080
  links:
    - redis

tomcat2:
  image: 192.168.1.121/testapp/testapp:v1
  expose:
    - 8080
  links:
    - redis

tomcat3:
  image: 192.168.1.121/testapp/testapp:v1
  expose:
    - 8080
  links:
    - redis

tomcat4:
  image: 192.168.1.121/testapp/testapp:v1
  expose:
    - 8080
  links:
    - redis

redis:
  image: 192.168.1.121/redis/centos-redis:4.0.14
  expose:
    - 6379
  volumes:
    - /data/redis:/apps/redis/data
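
docker-compose config can be used to validate the file before anything is started; it parses docker-compose.yml and prints the fully resolved configuration, or an error if the syntax is wrong:

root@Docker-Server:/ComposeFile/testapp# docker-compose config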

11.3: Preparing the Docker Images

Image preparation is done on the Images-Server build machine;

11.3.1: HAProxy Image

Use the previously built 192.168.1.121/haproxy/centos-haproxy:1.8.20 image as-is;

its configuration file is mapped in via volumes in docker-compose.yml;

11.3.2: Nginx Image

Use the previously built 192.168.1.121/nginx/centos-nginx:1.16.1 image as-is;

its configuration file is mapped in via volumes in docker-compose.yml;

make sure nginx in the image runs as user www (UID 2000), matching the tomcat user in the testapp image;

11.3.3: The testapp Image

Session persistence is implemented with MSM (memcached-session-manager) + Redis as the session server;

Prepare the base image

The base image is the previously built centos-tomcat:8.5.60 image;

make sure tomcat in the image runs as user www (UID 2000), matching the nginx user in the Nginx containers;
centos-tomcat:8.5.60 already starts tomcat as www, so no change is needed (this can be verified in the centos-tomcat:8.5.60 Dockerfile);

Push centos-tomcat:8.5.60 to Harbor as well, so it can be pulled conveniently later:

root@Images-Server:~# docker tag centos-tomcat:8.5.60 192.168.1.121/tomcat/centos-tomcat:8.5.60
root@Images-Server:~# docker push 192.168.1.121/tomcat/centos-tomcat:8.5.60
Write the Dockerfile

A testapp directory is left over from earlier testing; back it up and recreate it:

root@Images-Server:~# cd /Dockerfile/Apps/
root@Images-Server:/Dockerfile/Apps# mv testapp/ testapp1
root@Images-Server:/Dockerfile/Apps# mkdir testapp

Change into the testapp Dockerfile working directory and write the Dockerfile:

root@Images-Server:~# cd /Dockerfile/Apps/testapp
root@Images-Server:/Dockerfile/Apps/testapp# vim Dockerfile
# testapp Dockerfile
#
FROM centos-tomcat:8.5.60
LABEL maintainer="yqc<20251839@qq.com>"

ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD testapp:v1/* /apps/tomcat/webapps/testapp/
ADD msm/* /apps/tomcat/lib/
ADD context.xml /apps/tomcat/conf/context.xml

RUN chown -R www:www /apps

EXPOSE 8080 8009

CMD ["/apps/tomcat/bin/run_tomcat.sh"]
Prepare the supporting files
The tomcat startup script

Give the script execute permission;

root@Images-Server:/Dockerfile/Apps/testapp# vim run_tomcat.sh
#!/bin/bash
# Start tomcat as the www user, then tail a file in the foreground
# so PID 1 stays alive and the container keeps running.
su - www -c "/apps/tomcat/bin/catalina.sh start"
su - www -c "tail -f /etc/hosts"

root@Images-Server:/Dockerfile/Apps/testapp# chmod +x run_tomcat.sh 
The testapp application

This JSP page reports which backend served the request, the client IP, the session ID, and more; it is used to verify that load balancing and session persistence actually work;

root@Images-Server:/Dockerfile/Apps/testapp# vim testapp:v1/index.jsp
<%@page import="java.util.Enumeration"%>
<br />
host: <%try{out.println(""+java.net.InetAddress.getLocalHost().getHostName());}catch(Exception e){}%>
<br />
remoteAddr: <%=request.getRemoteAddr()%>
<br />
remoteHost: <%=request.getRemoteHost()%>
<br />
sessionId: <%=request.getSession().getId()%>
<br />
serverName:<%=request.getServerName()%>
<br />
scheme:<%=request.getScheme()%>
<br />
<%request.getSession().setAttribute("t1","t2");%>
<%
        Enumeration en = request.getHeaderNames();
        while(en.hasMoreElements()){
        String hd = en.nextElement().toString();
                out.println(hd+" : "+request.getHeader(hd));
                out.println("<br />");
        }
%>
The MSM-related jar files
root@Images-Server:/Dockerfile/Apps/testapp# mkdir msm
root@Images-Server:/Dockerfile/Apps/testapp/msm# rz

root@Images-Server:/Dockerfile/Apps/testapp/msm# ll
total 1396
drwxr-xr-x 2 root root   4096 Feb  2 14:26 ./
drwxr-xr-x 4 root root   4096 Feb  2 14:25 ../
-rw-r--r-- 1 root root  53259 May 31  2020 asm-5.2.jar
-rw-r--r-- 1 root root 586620 May 31  2020 jedis-3.0.0.jar
-rw-r--r-- 1 root root 285211 May 31  2020 kryo-3.0.3.jar
-rw-r--r-- 1 root root 126366 May 31  2020 kryo-serializers-0.45.jar
-rw-r--r-- 1 root root 167294 May 31  2020 memcached-session-manager-2.3.2.jar
-rw-r--r-- 1 root root  10826 May 31  2020 memcached-session-manager-tc8-2.3.2.jar
-rw-r--r-- 1 root root   5923 May 31  2020 minlog-1.3.1.jar
-rw-r--r-- 1 root root  38372 May 31  2020 msm-kryo-serializer-2.3.2.jar
-rw-r--r-- 1 root root  55684 May 31  2020 objenesis-2.6.jar
-rw-r--r-- 1 root root  72265 May 31  2020 reflectasm-1.11.9.jar
context.xml
root@Images-Server:/Dockerfile/Apps/testapp# vim context.xml 
<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    <WatchedResource>${catalina.base}/conf/web.xml</WatchedResource>
    <Manager pathname="" />
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
      memcachedNodes="redis://redis:6379"
      sticky="false"
      sessionBackupAsync="false"
      lockingMode="uriPattern:/path1|/path2"
      requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
      transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
    />
</Context>
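
Note that memcachedNodes="redis://redis:6379" addresses Redis by its Compose service name: the links: - redis entries in docker-compose.yml make the hostname redis resolvable inside each tomcat container.
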
Build and push the image

Write the image build-and-push script:

root@Images-Server:/Dockerfile/Apps/testapp# vim docker_build.sh
#!/bin/bash
docker build -t 192.168.1.121/testapp/testapp:v1 .
docker push 192.168.1.121/testapp/testapp:v1

Log in to Harbor and run the build script:

root@Images-Server:/Dockerfile/Apps/testapp# docker login 192.168.1.121

root@Images-Server:/Dockerfile/Apps/testapp# bash docker_build.sh 
Verify the image

Start a testapp container and verify access:

root@Images-Server:~# docker run -itd --name testapp1 -p 8080:8080 192.168.1.121/testapp/testapp:v1
5ef92a40144649cf1a1dcf8614357f0253b8461e35bd8c2a8c2b303e2626126b

root@Images-Server:~# ss -tnl | grep 8080
LISTEN   0         20480                     *:8080                   *:*       

root@Images-Server:~# docker ps -f name=testapp1
CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS              PORTS                              NAMES
5ef92a401446        192.168.1.121/testapp/testapp:v1   "/apps/tomcat/bin/ru…"   14 seconds ago      Up 13 seconds       8009/tcp, 0.0.0.0:8080->8080/tcp   testapp1

Browse to http://192.168.1.111:8080/testapp/

(Screenshot: testapp page)
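
The same check can be scripted (assuming curl is available on Images-Server):

root@Images-Server:~# curl -s http://192.168.1.111:8080/testapp/ | grep sessionId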

11.3.4: Redis Image

Modify the previously built 192.168.1.121/redis/centos-redis:4.0.14 image by commenting out the redis password:

#requirepass 123456

This is necessary because when Redis serves as tomcat's session server, context.xml offers no option for a Redis connection password;

root@Images-Server:~# cd /Dockerfile/Services/Redis/
root@Images-Server:/Dockerfile/Services/Redis# vim redis.conf
root@Images-Server:/Dockerfile/Services/Redis# cat redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
#supervised systemd
#pidfile /apps/redis/run/redis_6379.pid
loglevel notice
logfile "/apps/redis/logs/redis_6379.log"
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump_6379.rdb
dir /apps/redis/data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync yes
repl-diskless-sync-delay 0
repl-disable-tcp-nodelay no
repl-backlog-size 100mb
slave-priority 100
#requirepass 123456
maxmemory 536870912
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly yes
appendfilename "appendonly_6379.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
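
For the commented-out password to take effect, rebuild and push the image; a minimal sketch, assuming the existing Dockerfile in this directory copies redis.conf into the image and needs no other changes:

root@Images-Server:/Dockerfile/Services/Redis# docker build -t 192.168.1.121/redis/centos-redis:4.0.14 .
root@Images-Server:/Dockerfile/Services/Redis# docker push 192.168.1.121/redis/centos-redis:4.0.14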

docker-compose.yml maps Redis's working directory /apps/redis/data to /data/redis on the host when the Redis container starts, persisting the container's data;

11.4: Preparing Directories and Files

11.4.1: Create the Container Data Directory

Create the Redis data directory:

root@Docker-Server:~# mkdir -pv /data/redis

11.4.2: Prepare the Configuration Files

The HAProxy container's configuration file
root@Docker-Server:~# cd /ComposeFile/testapp/
root@Docker-Server:/ComposeFile/testapp# mkdir conf

root@Docker-Server:/ComposeFile/testapp# vim conf/haproxy.cfg
global
        maxconn 100000
        uid 99
        gid 99
        #daemon
        nbproc 1
        #pidfile /run/haproxy.pid
        log 127.0.0.1 local3 info
        chroot /usr/local/haproxy
        stats socket /var/lib/haproxy/haproxy.socket mode 600 level admin

defaults
        option redispatch
        option abortonclose
        option http-keep-alive
        option forwardfor
        maxconn 100000
        mode http
        timeout connect 10s
        timeout client 20s
        timeout server 30s
        timeout check 5s

listen stats
        bind :9999
        stats enable
        stats hide-version
        stats uri /haproxy-status
        stats realm HAProxy\ Stats\ Page
        stats auth haadmin:123456
        stats auth admin:123456
        stats refresh 30s
        stats admin if TRUE

listen nginx
        bind :80
        mode tcp
        server testapp_nginx1 nginx1:80 check inter 3s fall 3 rise 5
        server testapp_nginx2 nginx2:80 check inter 3s fall 3 rise 5
The Nginx containers' configuration files
nginx1.conf

Backend Tomcats: tomcat1 and tomcat2;

root@Docker-Server:/ComposeFile/testapp# egrep -v '(^$|^[[:space:]]*#)' conf/nginx1.conf 
user  www;
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream testapp {
        server tomcat1:8080 weight=1 fail_timeout=5s max_fails=3;
        server tomcat2:8080 weight=1 fail_timeout=5s max_fails=3;
    }
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   /data/nginx/html;
            index  index.html index.htm;
        }
        location /testapp {
            proxy_pass http://testapp;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
nginx2.conf

Backend Tomcats: tomcat3 and tomcat4;

root@Docker-Server:/ComposeFile/testapp# egrep -v '(^$|^[[:space:]]*#)' conf/nginx2.conf  
user  www;
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream testapp {
        server tomcat3:8080 weight=1 fail_timeout=5s max_fails=3;
        server tomcat4:8080 weight=1 fail_timeout=5s max_fails=3;
    }
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   /data/nginx/html;
            index  index.html index.htm;
        }
        location /testapp {
            proxy_pass http://testapp;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

11.5: Start and Verify the Web Site

11.5.1: Start the Container Services

Bring up the services defined in docker-compose.yml in detached mode:

root@Docker-Server:~# cd /ComposeFile/testapp/

root@Docker-Server:/ComposeFile/testapp# docker-compose up -d
Creating testapp_redis_1 ... done
Creating testapp_tomcat3_1 ... done
Creating testapp_tomcat2_1 ... done
Creating testapp_tomcat1_1 ... done
Creating testapp_tomcat4_1 ... done
Creating testapp_nginx2_1  ... done
Creating testapp_nginx1_1  ... done
Creating testapp_haproxy_1 ... done
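
docker-compose ps lists each container in the project with its state and published ports, confirming everything came up:

root@Docker-Server:/ComposeFile/testapp# docker-compose ps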

If the container images change later, run docker-compose pull to fetch the updated images and then docker-compose up -d again; this ensures the containers are recreated from the latest images;

11.5.2: Verify the Web Site

Access testapp

http://192.168.1.201/testapp/

(Screenshot: testapp page)

Access the HAProxy status page

http://192.168.1.201:9999/haproxy-status

(Screenshot: HAProxy status page)

Verify session persistence

The first request is scheduled to container fc1d80a4189b (nginx1 → tomcat2):

root@Docker-Server:~# docker ps -f id=fc1d80a4189b
CONTAINER ID   IMAGE                              COMMAND                  CREATED         STATUS         PORTS                NAMES
fc1d80a4189b   192.168.1.121/testapp/testapp:v1   "/apps/tomcat/bin/ru…"   7 minutes ago   Up 7 minutes   8009/tcp, 8080/tcp   testapp_tomcat2_1

(Screenshot: testapp response from tomcat2)

The next request is scheduled to container 5332b2617aa6 (nginx1 → tomcat1):

root@Docker-Server:~# docker ps -f id=5332b2617aa6
CONTAINER ID   IMAGE                              COMMAND                  CREATED         STATUS         PORTS                NAMES
5332b2617aa6   192.168.1.121/testapp/testapp:v1   "/apps/tomcat/bin/ru…"   8 minutes ago   Up 8 minutes   8009/tcp, 8080/tcp   testapp_tomcat1_1

(Screenshot: testapp response from tomcat1)

A few more requests are scheduled to container e7bed7ce676b (nginx2 → tomcat3):

root@Docker-Server:~# docker ps -f id=e7bed7ce676b
CONTAINER ID   IMAGE                              COMMAND                  CREATED          STATUS          PORTS                NAMES
e7bed7ce676b   192.168.1.121/testapp/testapp:v1   "/apps/tomcat/bin/ru…"   10 minutes ago   Up 10 minutes   8009/tcp, 8080/tcp   testapp_tomcat3_1

(Screenshot: testapp response from tomcat3)

And then to container 4d498824074e (nginx2 → tomcat4):

root@Docker-Server:~# docker ps -f id=4d498824074e
CONTAINER ID   IMAGE                              COMMAND                  CREATED          STATUS          PORTS                NAMES
4d498824074e   192.168.1.121/testapp/testapp:v1   "/apps/tomcat/bin/ru…"   11 minutes ago   Up 11 minutes   8009/tcp, 8080/tcp   testapp_tomcat4_1

(Screenshot: testapp response from tomcat4)

Throughout, the sessionId never changes: 557B623666B60263E93FC474672B50C0
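
The session can also be inspected in Redis itself; assuming redis-cli is on the PATH inside the custom centos-redis image, the MSM session keys are visible with:

root@Docker-Server:/ComposeFile/testapp# docker-compose exec redis redis-cli KEYS '*'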

11.5.3: Verify Redis Data Persistence

In the host's Redis data directory, check the RDB and AOF files:

root@Docker-Server:~# ll /data/redis/
total 16
drwxr-xr-x 2 root root 4096 Feb  2 14:49 ./
drwxr-xr-x 3 root root 4096 Feb  2 10:29 ../
-rw-r--r-- 1 root root 3229 Feb  2 14:51 appendonly_6379.aof
-rw-r--r-- 1 root root  298 Feb  2 14:49 dump_6379.rdb

11.6: Migrating the Web Site to Another Host

Prepare another Docker server as the host, copy the testapp orchestration directory /ComposeFile/testapp to it, and verify that the web site can be started directly;

11.6.1: Prepare the New Host

  • 192.168.1.202, Docker-Server2

Install and start Docker

Copy the Docker installation script over and run it:

root@Docker-Server2:~# scp 192.168.1.201:/root/apt_install_docker.sh /root
root@Docker-Server2:~# bash apt_install_docker.sh 

Add 192.168.1.121 (Harbor) as an insecure-registry (note that this host also sets --bip=10.10.2.1/24, giving its docker0 bridge a subnet distinct from the default):

root@Docker-Server2:~# vim /lib/systemd/system/docker.service 
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=10.10.2.1/24 --insecure-registry 192.168.1.121

root@Docker-Server2:~# systemctl daemon-reload
root@Docker-Server2:~# systemctl restart docker && systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
Copy the Docker Compose binary

Copy the docker-compose binary from Docker-Server to Docker-Server2:

root@Docker-Server2:~# scp 192.168.1.201:/usr/bin/docker-compose /usr/bin/

Verify the docker-compose version:

root@Docker-Server2:~# docker-compose version
docker-compose version 1.25.4, build 8d51620a
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019

11.6.2: Copy the Orchestration Directory to the New Host

Create a tarball of the testapp orchestration directory:

root@Docker-Server:~# cd /ComposeFile/testapp/
root@Docker-Server:/ComposeFile/testapp# tar zcvf testapp.tar.gz ./*

Copy it to Docker-Server2:

root@Docker-Server:/ComposeFile/testapp# scp testapp.tar.gz 192.168.1.202:/root

11.6.3: Start the Web Site on the New Host

Create the Redis data directory on the host:

root@Docker-Server2:~# mkdir -p /data/redis

Unpack the testapp orchestration directory:

root@Docker-Server2:~# mkdir testapp
root@Docker-Server2:~# cd testapp/
root@Docker-Server2:~/testapp# tar xf /root/testapp.tar.gz

root@Docker-Server2:~/testapp# tree ./
./
├── conf
│   ├── haproxy.cfg
│   ├── nginx1.conf
│   └── nginx2.conf
└── docker-compose.yml

Start the container services:

root@Docker-Server2:~/testapp# docker-compose up -d

11.6.4: Verify the Web Site on the New Host

Access testapp:

http://192.168.1.202/testapp/

(Screenshot: testapp page)

Check the HAProxy status page:

http://192.168.1.202:9999/haproxy-status

(Screenshot: HAProxy status page)

Verify session persistence:

Requests are scheduled to different hosts, yet the sessionId is always 8A277C6D0AD4A70EDC4F8924B9DFAF91;

(Screenshots: testapp responses from different backends, all showing the same sessionId)

Redis data persistence:

root@Docker-Server2:~# ll /data/redis/
total 20
drwxr-xr-x 2 root root 4096 Feb  2 15:13 ./
drwxr-xr-x 3 root root 4096 Feb  2 15:11 ../
-rw-r--r-- 1 root root 9004 Feb  2 15:16 appendonly_6379.aof

root@Docker-Server2:/data/redis# tail appendonly_6379.aof 
validity:8A277C6D0AD4A70EDC4F8924B9DFAF91
$20
wa