Learning Docker from Scratch (The Essentials)

Notes

  • Thanks to Bilibili uploader 狂神 (Kuangshen) for the free learning materials. I learned a lot; this post is just my own notes taken from the videos he published, so I have somewhere to look things up when I forget.
  • Kuangshen on Bilibili
  • The material comes from that video

Container Data Volumes

What is it

The Docker philosophy

Package the application and its environment into a single image!

The problem

Right now all data lives inside the container; if the container is deleted, the data is lost!

The requirement

Data must be able to persist.

We need a way for containers to share data: data produced inside a Docker container gets synchronized to the local host. This is volume technology.

In practice it is directory mounting: a directory inside the container is mounted onto the Linux host!

In one sentence: persistence and synchronization of container data! Containers can also share data among themselves.

Using Data Volumes

Method 1: mount directly on the command line with -v

docker run -it -v <host dir>:<container dir>

# Test
docker run -it -v /home/ceshi:/home centos /bin/bash
# Once the container is up, docker inspect <container id> shows the Mounts information
docker inspect 2f30fb14d6dd

Benefit: from now on we only need to edit files on the host; the container picks the changes up automatically.

Hands-on: Installing MySQL

# Pull the image
docker pull mysql:5.7
# Run the container (with data mounts; MySQL needs a root password, see the image notes on Docker Hub)
# Official MySQL startup command: $ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
# -e sets environment variables; here it sets the root account's password

# Run the mysql image
docker run -d -p 3310:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7

# Connect with SQLyog to <host ip>:3310 to check that the connection works

# Even if the container is deleted, the data mounted to the local host is not lost: that is container data persistence
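
A quick way to verify that persistence claim (a sketch; it assumes mysql01 from above is still running, and the database name test01 is made up):

# Create a database inside the container, then delete the container completely
docker exec mysql01 mysql -uroot -p123456 -e "CREATE DATABASE test01;"
docker rm -f mysql01
# The bind-mounted data directory on the host is untouched and still contains test01
ls /home/mysql/data
# A new container started with the same -v flags picks the data right back up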

Anonymous Mounts

# Anonymous mount
-v <container path>
 
# Start nginx01
[root@VM-0-17-centos ~]# docker run -d -P --name nginx01 -v /etc/nginx nginx
8986206600ae42933d50b9bf31ec60d55caa129e68f1aa3ad39d6666ee53ec68
# List all volumes
[root@VM-0-17-centos ~]# docker volume ls
DRIVER              VOLUME NAME
local               36d6a959457edecd30988619cc6ae125c20919de07a1faeb7480356ebff8c508
# This is an anonymous mount: -v was given only the path inside the container, no path outside!

Named Mounts (recommended)

# Start nginx02
[root@VM-0-17-centos ~]# docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx nginx
cf2df683198830f083fc1d612d12077f56ab04197a8bee789df727d5447dc057
# List all volumes
[root@VM-0-17-centos ~]# docker volume ls
DRIVER              VOLUME NAME
local               juming-nginx

# -v <volume name>:<container path>     named mount

# Inspect this volume
[root@VM-0-17-centos ~]# docker volume inspect juming-nginx 
[
    {
        "CreatedAt": "2020-11-17T13:59:31+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/juming-nginx/_data",
        "Name": "juming-nginx",
        "Options": null,
        "Scope": "local"
    }
]

Unless a directory is specified, every volume in a Docker container lives under /var/lib/docker/volumes/xxx/_data

Named mounts make it easy to locate a volume, which is why they are what you will use most of the time.
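
Beyond ls and inspect, the docker volume subcommand covers the whole volume lifecycle (standard CLI; the volume name my-data is made up):

docker volume create my-data     # explicitly create a named volume
docker volume inspect my-data    # shows its Mountpoint under /var/lib/docker/volumes/
docker volume rm my-data         # remove a single volume
docker volume prune              # remove all volumes not used by any container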

Host-Path Mounts

-v /<host path>:<container path> # host-path mount

How to tell anonymous, named, and host-path mounts apart (important)

-v <container path>                 # anonymous mount
-v <volume name>:<container path>   # named mount
-v /<host path>:<container path>    # host-path mount

Extension

[root@VM-0-17-centos ~]# docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx:ro nginx
[root@VM-0-17-centos ~]# docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx:rw nginx

# Append :ro or :rw to -v <volume>:<container path> to change the mount's read/write permission
ro  readonly   read-only: the path can only be changed from the host; the container cannot write to it!
rw  readwrite  readable and writable
# Once a permission is set, it limits what the container can do with the mounted content!
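
A quick check of what :ro actually enforces (a sketch; the container name nginx-ro is made up):

# Mount the volume read-only and try to write from inside the container
docker run -d -P --name nginx-ro -v juming-nginx:/etc/nginx:ro nginx
docker exec nginx-ro touch /etc/nginx/test.conf
# touch: cannot touch '/etc/nginx/test.conf': Read-only file system
# Writing from the host side still works
touch /var/lib/docker/volumes/juming-nginx/_data/test.conf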

First Look at Dockerfile

A Dockerfile is the build file used to construct Docker images!

It is a command script: running it produces an image. Images are made of layers, and the script is a sequence of commands.

Method 2:

# 1. Create a dockerfile. The name can be anything, though Dockerfile is recommended; I created mine under /home
# 2. File contents (instructions are uppercase)
FROM centos

VOLUME ["volume01","volume02"]

CMD echo "-------------end-------------"

CMD /bin/bash
# Each command here becomes one layer of the image

# 3. Build the image; don't forget the trailing `.`
docker build -f /home/Dockerfile -t yang-centos:1.0 .
# 4. Run the image we just built
[root@VM-0-17-centos home]# docker run -it yang-centos:1.0
[root@2c481d474dc4 /]# ls -l
total 56
lrwxrwxrwx   1 root root    7 May 11  2019 bin -> usr/bin
drwxr-xr-x   5 root root  360 Nov 17 07:35 dev
drwxr-xr-x   1 root root 4096 Nov 17 07:35 etc
drwxr-xr-x   2 root root 4096 May 11  2019 home
lrwxrwxrwx   1 root root    7 May 11  2019 lib -> usr/lib
lrwxrwxrwx   1 root root    9 May 11  2019 lib64 -> usr/lib64
drwx------   2 root root 4096 Aug  9 21:40 lost+found
drwxr-xr-x   2 root root 4096 May 11  2019 media
drwxr-xr-x   2 root root 4096 May 11  2019 mnt
drwxr-xr-x   2 root root 4096 May 11  2019 opt
dr-xr-xr-x 117 root root    0 Nov 17 07:35 proc
dr-xr-x---   2 root root 4096 Aug  9 21:40 root
drwxr-xr-x  11 root root 4096 Aug  9 21:40 run
lrwxrwxrwx   1 root root    8 May 11  2019 sbin -> usr/sbin
drwxr-xr-x   2 root root 4096 May 11  2019 srv
dr-xr-xr-x  13 root root    0 Aug 16 13:47 sys
drwxrwxrwt   7 root root 4096 Aug  9 21:40 tmp
drwxr-xr-x  12 root root 4096 Aug  9 21:40 usr
drwxr-xr-x  20 root root 4096 Aug  9 21:40 var
drwxr-xr-x   2 root root 4096 Nov 17 07:35 volume01
drwxr-xr-x   2 root root 4096 Nov 17 07:35 volume02
# 5. Inspect the container
docker inspect 2c481d474dc4
# 6. Find the Mounts node and check the bound directories
"Mounts": [
            {
                "Type": "volume",
                "Name": "fdddcc08c3232e8949d41745cf113510d9aac6fb146ee0ef1e7ee86345bb8dd6",
                "Source": "/var/lib/docker/volumes/fdddcc08c3232e8949d41745cf113510d9aac6fb146ee0ef1e7ee86345bb8dd6/_data",
                "Destination": "volume01",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "volume",
                "Name": "4d180a68676a67fe3926baec260b317a437fecb46694a1897c72a9177f44ad50",
                "Source": "/var/lib/docker/volumes/4d180a68676a67fe3926baec260b317a437fecb46694a1897c72a9177f44ad50/_data",
                "Destination": "volume02",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],

# 7. Test: create a file in the bound host directory and check whether it syncs
# On the Linux host
[root@VM-0-17-centos ~]# cd /var/lib/docker/volumes/4d180a68676a67fe3926baec260b317a437fecb46694a1897c72a9177f44ad50/_data
[root@VM-0-17-centos _data]# ls
[root@VM-0-17-centos _data]# touch test.java
[root@VM-0-17-centos _data]# ls
test.java

# Inside the container
[root@2c481d474dc4 /]# ls volume02/
test.java

This approach will see heavy use in the future, because we usually build our own images!

If an image was built without volumes declared, you have to mount them by hand: -v <volume name>:<container path>!
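
For example (a minimal sketch; the volume name mydata is made up), mounting a named volume onto an image that declares no VOLUME:

docker run -it -v mydata:/data centos /bin/bash
# anything written to /data inside the container now lands in the mydata volume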

Data Volume Containers: --volumes-from

--volumes-from <container name>

# Start docker01
[root@VM-0-17-centos home]# docker run -d -P --name docker01 yang-centos:1.0
b9550def8bbd6aa86f5be2cf27dcb5e18bed9914eefb796b229fe4c2a26b4f9b
# Start docker02, mounting docker01's volumes
[root@VM-0-17-centos home]# docker run -it -P --name docker02 --volumes-from docker01 yang-centos:1.0
[root@e8293ecd7951 /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  volume01  volume02
[root@e8293ecd7951 /]# ls volume01/
[root@e8293ecd7951 /]# 

# Inspect docker01's volume info (only part of the output is shown)
[root@VM-0-17-centos ~]# docker inspect b9550def8bbd
"Mounts": [
{
"Type": "volume",
"Name": "67eefcae0c67d12fa3b77e00ad64278254ce7667cba59475be2a4329ad5d2d43",
"Source": "/var/lib/docker/volumes/67eefcae0c67d12fa3b77e00ad64278254ce7667cba59475be2a4329ad5d2d43/_data",
"Destination": "volume01",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
},
{
"Type": "volume",
"Name": "e6e21afaa5362b47618dbfb6175e58aa8d57462233e6d852d2b6fa96480d08ad",
"Source": "/var/lib/docker/volumes/e6e21afaa5362b47618dbfb6175e58aa8d57462233e6d852d2b6fa96480d08ad/_data",
"Destination": "volume02",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
# Test
[root@VM-0-17-centos ~]# ls /var/lib/docker/volumes/67eefcae0c67d12fa3b77e00ad64278254ce7667cba59475be2a4329ad5d2d43/_data
# docker02 sees nothing yet
[root@e8293ecd7951 /]# ls volume01/
# Add a test.java file to the data volume on the host
[root@VM-0-17-centos ~]# touch /var/lib/docker/volumes/67eefcae0c67d12fa3b77e00ad64278254ce7667cba59475be2a4329ad5d2d43/_data/test.java
[root@VM-0-17-centos ~]# ls /var/lib/docker/volumes/67eefcae0c67d12fa3b77e00ad64278254ce7667cba59475be2a4329ad5d2d43/_data/test.java
/var/lib/docker/volumes/67eefcae0c67d12fa3b77e00ad64278254ce7667cba59475be2a4329ad5d2d43/_data/test.java
# Check the data volume inside docker02
[root@e8293ecd7951 /]# ls volume01/
test.java

Syncing Data Between Two MySQL Containers

# Start the first mysql
docker run -d -p 3310:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7
# Start the second mysql, sharing the first one's volumes via --volumes-from
# (it needs its own host port; and in practice two mysqld processes should never write the same datadir at once)
docker run -d -p 3311:3306 -e MYSQL_ROOT_PASSWORD=123456 --volumes-from mysql01 --name mysql02 mysql:5.7
# At this point the two containers share the same data!

Conclusion

Data volume containers pass configuration between containers. A data volume's lifecycle lasts until no container uses it anymore. But once data has been persisted to the local host, the local copy is never deleted!
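
To see that lifecycle in action (a sketch reusing docker01 and docker02 from above):

# Remove the container the volumes were first created with
docker rm -f docker01
# docker02 still sees the shared data: the volume lives on as long as any container uses it
docker exec docker02 ls /volume01
# test.java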

Dockerfile

Introduction

The build file used to construct Docker images!

A command script: running it produces an image. Images are made of layers; the script is a sequence of commands.

Build steps

1. Write a Dockerfile
2. docker build it into an image
3. docker run the image
4. docker push the image (to Docker Hub or an Alibaba Cloud image registry)
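
The four steps end to end, as a minimal sketch (myname/myimage is a placeholder repository):

# 1. write a Dockerfile
cat > Dockerfile <<'EOF'
FROM centos
CMD echo "hello from my image"
EOF
# 2. build it into an image
docker build -t myname/myimage:1.0 .
# 3. run the image
docker run myname/myimage:1.0
# 4. publish it (after docker login)
docker push myname/myimage:1.0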

The Dockerfile Build Process

  • Basics
    • 1. Every keyword (instruction) must be uppercase
    • 2. Execution runs from top to bottom, in order
    • 3. # marks a comment
    • 4. Every instruction creates and commits a new image layer!

Dockerfiles are developer-facing: from now on, shipping a project means writing a Dockerfile to make its image, and the file format is very simple!

Docker images are becoming the enterprise standard, so this must be mastered!

The pipeline:

Dockerfile: the build file that defines every step; the source code

Docker images: built from the Dockerfile; the final product that gets published and run (the role the jar/war used to play)

Docker container: a running image, providing a service

Dockerfile Instructions

FROM          # the base image
MAINTAINER    # who wrote the image: name + email
RUN           # commands to run while building the image
ADD           # add content into the image
WORKDIR       # the image's working directory
VOLUME        # directories to mount
EXPOSE        # the port to expose, much like -p
CMD           # the command to run when the container starts; only the last CMD takes effect, and it can be overridden by run arguments
ENTRYPOINT    # the command to run when the container starts; run arguments are appended to it
ONBUILD       # runs when this image is used as the base of another `Dockerfile`; a trigger instruction
COPY          # like ADD: copy files into the image
ENV           # set environment variables at build time
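
The CMD-vs-ENTRYPOINT difference is worth a concrete test (a sketch; cmd-test and entrypoint-test are made-up image tags):

# Dockerfile built as cmd-test
FROM centos
CMD ["ls","-a"]
# docker run cmd-test        -> runs ls -a
# docker run cmd-test -l     -> error: -l REPLACES ls -a, and -l alone is not a command

# Dockerfile built as entrypoint-test
FROM centos
ENTRYPOINT ["ls","-a"]
# docker run entrypoint-test -l   -> runs ls -a -l: the argument is APPENDED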
Building a Tomcat Image with a Dockerfile

[root@VM-0-17-centos /]# ll /home/
total 150808
-rw-r--r-- 1 root root  11282879 Nov 18 14:06 apache-tomcat-9.0.39.tar.gz
-rw-r--r-- 1 root root 143142634 Nov 18 14:06 jdk-8u271-linux-x64.tar.gz
# Create a Dockerfile and write the following into it  ---start---
FROM centos
MAINTAINER yangzhengjava@foxmail.com

COPY readme.txt /usr/local/readme.txt

ADD jdk-8u271-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.39.tar.gz /usr/local/

RUN yum -y install vim

ENV MYPATH /usr/local
WORKDIR $MYPATH

# JDK environment configuration
ENV JAVA_HOME /usr/local/jdk1.8.0_271
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

# Tomcat environment configuration
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.39
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.39
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin

# Expose the port
EXPOSE 8080

CMD /usr/local/apache-tomcat-9.0.39/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.39/logs/catalina.out
# End of file contents  ---end---
# Build the image; since the file uses the official name Dockerfile, -f is not needed
docker build -t diytomcat .
# List the images
[root@VM-0-17-centos /]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
diytomcat           latest              da2e17cc1376        55 seconds ago      643MB
centos              latest              0d120b6ccaa8        3 months ago        215MB

# Start the image
[root@VM-0-17-centos /]# docker run -d -p 9090:8080 --name mytomcat -v /home/tomcat_dir:/usr/local/apache-tomcat-9.0.39/webapps/test -v /home/tomcat_logs/:/usr/local/apache-tomcat-9.0.39/logs diytomcat
8984f671af7f863369eecb5c6489a0c80652737b4f14ffb282b6eb1f38da2f26
# Test access
curl http://localhost:9090
# OK!

# In /home/tomcat_dir, create a WEB-INF directory and add a web.xml file with the following contents
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.4" 
    xmlns="http://java.sun.com/xml/ns/j2ee" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee 
        http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
</web-app>

# At the same level as WEB-INF, add an index.jsp file with the following contents
<%@ page language="java" contentType="text/html; charset=UTF-8"
    pageEncoding="UTF-8"%>
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>ceshi</title>
</head>
<body>
Hello World!
<%
System.out.println("hahahhahahahahhaahahhaha");
%>
</body>
</html>

# Request the test path
curl localhost:9090/test
# Output
Hello World!

# Check the logs
tail -f /home/tomcat_logs/catalina.out

18-Nov-2020 07:46:42.969 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/apache-tomcat-9.0.39/webapps/host-manager] has finished in [51] ms
18-Nov-2020 07:46:42.969 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/apache-tomcat-9.0.39/webapps/manager]
18-Nov-2020 07:46:43.003 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/apache-tomcat-9.0.39/webapps/manager] has finished in [34] ms
18-Nov-2020 07:46:43.003 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/apache-tomcat-9.0.39/webapps/ROOT]
18-Nov-2020 07:46:43.055 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/apache-tomcat-9.0.39/webapps/ROOT] has finished in [52] ms
18-Nov-2020 07:46:43.055 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/apache-tomcat-9.0.39/webapps/test]
18-Nov-2020 07:46:43.119 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/apache-tomcat-9.0.39/webapps/test] has finished in [64] ms
18-Nov-2020 07:46:43.126 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
18-Nov-2020 07:46:43.160 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [1188] milliseconds
hahahhahahahahhaahahhaha
hahahhahahahahhaahahhaha
hahahhahahahahhaahahhaha
hahahhahahahahhaahahhaha

Publishing Your Own Image

  1. Register an account at hub.docker.com

  2. Log in from the command line

    [root@VM-0-17-centos tomcat_logs]# docker login -u yangzhengdocker
    Password: 
    WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    
    Login Succeeded
    
    
  3. Tag the image with your Docker Hub username, then push it (a push of the bare local name diytomcat would be denied)

    docker tag diytomcat yangzhengdocker/diytomcat:1.0
    docker push yangzhengdocker/diytomcat:1.0
    

Docker Networking

Understanding docker0

Check the Linux host's IP addresses

(screenshot: `ip addr` output on the host)

# Question: how does Docker handle container network access?

# Start a tomcat container
[root@VM-0-17-centos ~]# docker run -d -P --name tomcat01 tomcat
# Check the container's internal network address
# Notice the container gets an eth0@if9 interface with an IP address on startup, assigned by Docker!
[root@VM-0-17-centos ~]# docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

# Question: can the Linux host ping inside the container?
[root@VM-0-17-centos ~]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.022 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.024 ms
# The Linux host can ping into the Docker container

Check the host's IP addresses again

(screenshot: `ip addr` on the host; a new veth interface has appeared)

Start another tomcat, tomcat02

# Start it
[root@VM-0-17-centos ~]# docker run -d -P --name tomcat02 tomcat
4f6dcf5ff54fcd45a3a44ef52b013e5d9f7aebeb48ddc59a6e3eed4a4f539bf7
# Check tomcat02's IP
[root@VM-0-17-centos ~]# docker exec -it tomcat02 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Check the host's IP addresses again

(screenshot: `ip addr` on the host; another veth interface has appeared)

Notice that the container NICs come in pairs

This is the veth-pair technique: pairs of virtual interfaces that always appear in twos, one end attached to the protocol stack and the other ends attached to each other. Thanks to this property, a veth pair acts as a bridge between virtual network devices. OpenStack, links between Docker containers, and OVS links all use veth pairs.

# Now let's test whether tomcat01 and tomcat02 can ping each other
# Check tomcat02's ip addr
[root@VM-0-17-centos ~]# docker exec -it tomcat02 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
# tomcat01 ping tomcat02
[root@VM-0-17-centos ~]# docker exec -it tomcat01 ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.066 ms

Conclusion: containers can ping one another!

(diagram: tomcat01 and tomcat02 both attached to the docker0 bridge)

Conclusion:

tomcat01 and tomcat02 share one "router": docker0. Every container that does not specify a network is routed through docker0, and Docker assigns it a default available IP.

All of Docker's network interfaces are virtual, and virtual interfaces forward efficiently (useful for passing files over the internal network!). As soon as a container is deleted, its veth pair disappears with it!

Docker uses Linux bridging; on the host, docker0 is the bridge for Docker containers.

(diagram: Docker bridge network topology)
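
You can look at docker0 from the host side with standard commands (a sketch; the .1 gateway address is the usual default):

ip addr show docker0            # the bridge itself, typically 172.17.0.1/16
docker network inspect bridge   # the default bridge network and the containers attached to it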

Linking Containers

--link (no longer recommended)

Consider a scenario: we write a microservice whose database URL is ip:port, and the database's IP changes while the project keeps running. We would like to handle that without a restart. Could we reach the container by name instead?

# Two containers are currently running
[root@VM-0-17-centos ~]# docker ps 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                     NAMES
4f6dcf5ff54f        tomcat              "catalina.sh run"   About an hour ago   Up About an hour    0.0.0.0:32769->8080/tcp   tomcat02
6035eb00e8eb        tomcat
# tomcat01 pings tomcat02 by name
[root@VM-0-17-centos ~]# docker exec -it tomcat01 ping tomcat02
ping: tomcat02: Name or service not known
[root@VM-0-17-centos ~]# 

# Start a tomcat03 with --link to tomcat02
[root@VM-0-17-centos ~]# docker run -d -P --name tomcat03 --link tomcat02 tomcat 
d0e3f78809bf80025c141d61f1016c9ac2ec96c7a4c154b0d5efe7418cf7c806
# tomcat03 pings tomcat02 by name, and it works
[root@VM-0-17-centos ~]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.066 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.054 ms

# Does the reverse ping work?  ----  No: tomcat02 was given no entry for tomcat03
[root@VM-0-17-centos ~]# docker exec -it tomcat02 ping tomcat03
ping: tomcat03: Name or service not known
# Check tomcat03's hosts file
[root@VM-0-17-centos ~]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      tomcat02 4f6dcf5ff54f
172.17.0.4      d0e3f78809bf

# The essence: --link simply adds a line like `172.17.0.3      tomcat02 4f6dcf5ff54f` to the container's hosts file

Custom Networks

List all Docker networks

[root@VM-0-17-centos ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4c5c9773d7c2        bridge              bridge              local
bb9bc2375667        host                host                local
d693cd58b6b9        none                null                local
Network modes

  • bridge: bridged networking (the default)
  • none: no network configured
  • host: share the host's network stack
  • container: join another container's network (rarely used; very limited)

Custom network

# Create a network
# --driver   the driver; bridge by default
# --subnet   the subnet, e.g. 192.168.0.0/16 (roughly 65,534 usable addresses)
# --gateway  the gateway address
[root@VM-0-17-centos ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
963b362091dfebe0f0315c6e372417fb0ed856b10e7b8fda80c7e855cd09c34d
[root@VM-0-17-centos ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4c5c9773d7c2        bridge              bridge              local
bb9bc2375667        host                host                local
963b362091df        mynet               bridge              local
d693cd58b6b9        none                null                local
# Inspect the network
[root@VM-0-17-centos ~]# docker network inspect mynet 
[
    {
        "Name": "mynet",
        "Id": "963b362091dfebe0f0315c6e372417fb0ed856b10e7b8fda80c7e855cd09c34d",
        "Created": "2020-11-19T11:06:40.373129529+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
# Our own network is now created

# Start a container on our network
[root@VM-0-17-centos ~]# docker run -d -P --name tomcat01 --network mynet tomcat
ba41fac798849d451cdbd5bc649a7d1754ff75d99a682b8f6ee9a0e3433f2aff
# Inspecting mynet now shows a tomcat01 entry under Containers
[root@VM-0-17-centos ~]# docker network inspect mynet
[root@VM-0-17-centos ~]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "963b362091dfebe0f0315c6e372417fb0ed856b10e7b8fda80c7e855cd09c34d",
        "Created": "2020-11-19T11:06:40.373129529+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ba41fac798849d451cdbd5bc649a7d1754ff75d99a682b8f6ee9a0e3433f2aff": {
                "Name": "tomcat01",
                "EndpointID": "f64825d64a50cbbc9326de1c7e4dbfb7df92b304e9a2e7779408feaaa5ea3315",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
# Start another container, tomcat02
[root@VM-0-17-centos ~]# docker run -d -P --name tomcat02 --network mynet tomcat
cf2d69e803e702e73856250db242cfdcf58ef14ba6ee61ca334fd3ce16f6146e
# Inspect again: a tomcat02 entry has been added under Containers
[root@VM-0-17-centos ~]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "963b362091dfebe0f0315c6e372417fb0ed856b10e7b8fda80c7e855cd09c34d",
        "Created": "2020-11-19T11:06:40.373129529+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ba41fac798849d451cdbd5bc649a7d1754ff75d99a682b8f6ee9a0e3433f2aff": {
                "Name": "tomcat01",
                "EndpointID": "f64825d64a50cbbc9326de1c7e4dbfb7df92b304e9a2e7779408feaaa5ea3315",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            },
            "cf2d69e803e702e73856250db242cfdcf58ef14ba6ee61ca334fd3ce16f6146e": {
                "Name": "tomcat02",
                "EndpointID": "2da197fafab599312a7285b66dffb88efad355f1ee5d20e8076c521d598a1e1f",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
# Try tomcat01 pinging tomcat02's IP   ----- it works
[root@VM-0-17-centos ~]# docker exec -it tomcat01 ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.075 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.058 ms
^C
--- 192.168.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 0.058/0.066/0.075/0.011 ms

# Try tomcat01 pinging tomcat02 by container name   ----- it works
[root@VM-0-17-centos ~]# docker exec -it tomcat01 ping tomcat02
PING tomcat02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=3 ttl=64 time=0.060 ms
^C
--- tomcat02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.031/0.044/0.060/0.013 ms

# Try tomcat02 pinging tomcat01 by container name   ----- it works
[root@VM-0-17-centos ~]# docker exec -it tomcat02 ping tomcat01
PING tomcat01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.049 ms
^C
--- tomcat01 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 0.047/0.048/0.049/0.001 ms

On a custom network, Docker maintains all these name-to-IP mappings for us. This is the recommended way to use networking!

Benefits
  • Different clusters can use different networks, keeping each cluster healthy and isolated (see the sketch below)
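
For example (a sketch; every name here is illustrative), a Redis cluster and a MySQL cluster can each get their own bridge network, and by default the two cannot reach each other at all:

docker network create redis-net
docker network create mysql-net
docker run -d --name redis-a --network redis-net redis
docker run -d --name mysql-a --network mysql-net -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7
# redis-a and mysql-a sit on separate bridges; neither can resolve or reach the other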

Connecting Networks

docker network connect [OPTIONS] NETWORK CONTAINER

Connects a container to a network

# Start two tomcats on the docker0 network: tomcat-docker0-1 and tomcat-docker0-2
[root@VM-0-17-centos ~]# docker run -d -P --name tomcat-docker0-1 tomcat
3764a0dd1b83c6dbf5c2201050379bda41b1d48a00c8c88fe446774dd3424972
[root@VM-0-17-centos ~]# docker run -d -P --name tomcat-docker0-2 tomcat
a116102fd494d826e175ae739a0511d05923f7750130ede622e9cd5567087c6e
# Together with the two tomcats on mynet (tomcat01 and tomcat02), four are running
[root@VM-0-17-centos ~]# docker ps 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                     NAMES
a116102fd494        tomcat              "catalina.sh run"   4 seconds ago       Up 3 seconds        0.0.0.0:32774->8080/tcp   tomcat-docker0-2
3764a0dd1b83        tomcat              "catalina.sh run"   10 seconds ago      Up 9 seconds        0.0.0.0:32773->8080/tcp   tomcat-docker0-1
cf2d69e803e7        tomcat              "catalina.sh run"   13 minutes ago      Up 13 minutes       0.0.0.0:32772->8080/tcp   tomcat02
ba41fac79884        tomcat              "catalina.sh run"   16 minutes ago      Up 16 minutes       0.0.0.0:32771->8080/tcp   tomcat01

# Test: connect tomcat-docker0-1 to mynet
[root@VM-0-17-centos ~]# docker network connect mynet tomcat-docker0-1
# Inspect mynet: tomcat-docker0-1 has been added under Containers
[root@VM-0-17-centos ~]# docker network inspect mynet 
[
    {
        "Name": "mynet",
        "Id": "963b362091dfebe0f0315c6e372417fb0ed856b10e7b8fda80c7e855cd09c34d",
        "Created": "2020-11-19T11:06:40.373129529+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3764a0dd1b83c6dbf5c2201050379bda41b1d48a00c8c88fe446774dd3424972": {
                "Name": "tomcat-docker0-1",
                "EndpointID": "a01c359a1a82b679533d24cb622d2acb54ef25f7e8a1d25f9c2e02c1ac0f61b7",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/16",
                "IPv6Address": ""
            },
            "ba41fac798849d451cdbd5bc649a7d1754ff75d99a682b8f6ee9a0e3433f2aff": {
                "Name": "tomcat01",
                "EndpointID": "f64825d64a50cbbc9326de1c7e4dbfb7df92b304e9a2e7779408feaaa5ea3315",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            },
            "cf2d69e803e702e73856250db242cfdcf58ef14ba6ee61ca334fd3ce16f6146e": {
                "Name": "tomcat02",
                "EndpointID": "2da197fafab599312a7285b66dffb88efad355f1ee5d20e8076c521d598a1e1f",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

# Test whether tomcat01 and tomcat-docker0-1 can ping each other  -----  they can
[root@VM-0-17-centos ~]# docker exec -it tomcat01 ping tomcat-docker0-1
PING tomcat-docker0-1 (192.168.0.4) 56(84) bytes of data.
64 bytes from tomcat-docker0-1.mynet (192.168.0.4): icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from tomcat-docker0-1.mynet (192.168.0.4): icmp_seq=2 ttl=64 time=0.053 ms

# tomcat-docker0-2 has not been connected, so tomcat01 still cannot ping it
[root@VM-0-17-centos ~]# docker exec -it tomcat01 ping tomcat-docker0-2
ping: tomcat-docker0-2: Name or service not known

Conclusion:

To reach containers on another network, connect to that network with docker network connect.
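
Under the hood, docker network connect just gives the container a second interface, one per network; a sketch of how to see it (the addresses follow the inspect output above):

docker exec -it tomcat-docker0-1 ip addr
# eth0: 172.17.0.x/16  -> the original docker0 address
# eth1: 192.168.0.4/16 -> the new address on mynet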

Hands-on: Redis Cluster

(diagram: three-master / three-replica Redis cluster layout)

# Create the network
docker network create redis --subnet 172.18.0.0/16

# Generate six redis configs with the following script  -----  start -----
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.18.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
# ------- end of the script ---------

# Start the redis containers, one per node
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.18.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
-v /mydata/redis/node-2/data:/data \
-v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.18.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
-v /mydata/redis/node-3/data:/data \
-v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.18.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
-v /mydata/redis/node-4/data:/data \
-v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.18.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
-v /mydata/redis/node-5/data:/data \
-v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.18.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.18.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

# All six are started
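
The six near-identical run commands can also be generated with a loop, in the same style as the config script above (a sketch):

for port in $(seq 1 6); do
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.18.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done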

# Check the running containers
[root@VM-0-17-centos ~]# docker ps 
CONTAINER ID        IMAGE                    COMMAND                  CREATED              STATUS              PORTS                                              NAMES
f6c6730b65f9        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   10 seconds ago       Up 9 seconds        0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp   redis-6
9ab40f67b485        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   38 seconds ago       Up 37 seconds       0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp   redis-5
7677868757da        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp   redis-4
8e2cf1cb279c        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6373->6379/tcp, 0.0.0.0:16373->16379/tcp   redis-3
62828b53de92        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes        0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp   redis-2
f14b2397c438        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   3 minutes ago        Up 3 minutes        0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp   redis-1

# Create the redis cluster
# Enter redis-1
[root@VM-0-17-centos ~]# docker exec -it redis-1 /bin/sh
# Create the cluster
/data # redis-cli --cluster create 172.18.0.11:6379 172.18.0.12:6379 172.18.0.13:6379 172.18.0.14:6379 172.18.0.15:6379 172.18.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.18.0.15:6379 to 172.18.0.11:6379
Adding replica 172.18.0.16:6379 to 172.18.0.12:6379
Adding replica 172.18.0.14:6379 to 172.18.0.13:6379
M: 01dde8b4feccbf7c2883b50ea705f5193ddec4b7 172.18.0.11:6379
   slots:[0-5460] (5461 slots) master
M: 7e22740de59bf2787643080afdcf2f9a84c125d6 172.18.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: 1f602721c695b78971db1a1147dc5f36dd950bc3 172.18.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 823a49c0adc7315b6da5555ef34b02b7a780624a 172.18.0.14:6379
   replicates 1f602721c695b78971db1a1147dc5f36dd950bc3
S: 16070825427afc8fe5c66ec35ef3544f0a603058 172.18.0.15:6379
   replicates 01dde8b4feccbf7c2883b50ea705f5193ddec4b7
S: e8e2ab1a23a7108a113c4170d389b1b86c2d589f 172.18.0.16:6379
   replicates 7e22740de59bf2787643080afdcf2f9a84c125d6
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.18.0.11:6379)
M: 01dde8b4feccbf7c2883b50ea705f5193ddec4b7 172.18.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 1f602721c695b78971db1a1147dc5f36dd950bc3 172.18.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 7e22740de59bf2787643080afdcf2f9a84c125d6 172.18.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 16070825427afc8fe5c66ec35ef3544f0a603058 172.18.0.15:6379
   slots: (0 slots) slave
   replicates 01dde8b4feccbf7c2883b50ea705f5193ddec4b7
S: e8e2ab1a23a7108a113c4170d389b1b86c2d589f 172.18.0.16:6379
   slots: (0 slots) slave
   replicates 7e22740de59bf2787643080afdcf2f9a84c125d6
S: 823a49c0adc7315b6da5555ef34b02b7a780624a 172.18.0.14:6379
   slots: (0 slots) slave
   replicates 1f602721c695b78971db1a1147dc5f36dd950bc3
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

# Open a cluster connection and inspect the cluster
/data # redis-cli -c
# Cluster info
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:258
cluster_stats_messages_pong_sent:266
cluster_stats_messages_sent:524
cluster_stats_messages_ping_received:261
cluster_stats_messages_pong_received:258
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:524
# Cluster nodes
127.0.0.1:6379> cluster nodes
01dde8b4feccbf7c2883b50ea705f5193ddec4b7 172.18.0.11:6379@16379 myself,master - 0 1605774253000 1 connected 0-5460
1f602721c695b78971db1a1147dc5f36dd950bc3 172.18.0.13:6379@16379 master - 0 1605774255825 3 connected 10923-16383
7e22740de59bf2787643080afdcf2f9a84c125d6 172.18.0.12:6379@16379 master - 0 1605774255525 2 connected 5461-10922
16070825427afc8fe5c66ec35ef3544f0a603058 172.18.0.15:6379@16379 slave 01dde8b4feccbf7c2883b50ea705f5193ddec4b7 0 1605774255000 5 connected
e8e2ab1a23a7108a113c4170d389b1b86c2d589f 172.18.0.16:6379@16379 slave 7e22740de59bf2787643080afdcf2f9a84c125d6 0 1605774255000 6 connected
823a49c0adc7315b6da5555ef34b02b7a780624a 172.18.0.14:6379@16379 slave 1f602721c695b78971db1a1147dc5f36dd950bc3 0 1605774254522 4 connected

# Add data; note it is the node at 172.18.0.13:6379 that stores it
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.18.0.13:6379
OK
172.18.0.13:6379> 

# Stop the redis-3 node and see whether the value of a can still be fetched
[root@VM-0-17-centos ~]# docker stop redis-3
redis-3

# The value of a is still there, only now served by the 172.18.0.14:6379 node (the replica took over)
[root@VM-0-17-centos ~]# docker exec -it redis-1 /bin/sh
/data # redis-cli -c
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.18.0.14:6379
"b"

# Then check the nodes: 172.18.0.13:6379@16379 is marked master,fail
172.18.0.14:6379> cluster nodes
823a49c0adc7315b6da5555ef34b02b7a780624a 172.18.0.14:6379@16379 myself,master - 0 1605774707000 7 connected 10923-16383
1f602721c695b78971db1a1147dc5f36dd950bc3 172.18.0.13:6379@16379 master,fail - 1605774513773 1605774513000 3 connected
7e22740de59bf2787643080afdcf2f9a84c125d6 172.18.0.12:6379@16379 master - 0 1605774708311 2 connected 5461-10922
e8e2ab1a23a7108a113c4170d389b1b86c2d589f 172.18.0.16:6379@16379 slave 7e22740de59bf2787643080afdcf2f9a84c125d6 0 1605774707310 6 connected
01dde8b4feccbf7c2883b50ea705f5193ddec4b7 172.18.0.11:6379@16379 master - 0 1605774708000 1 connected 0-5460
16070825427afc8fe5c66ec35ef3544f0a603058 172.18.0.15:6379@16379 slave 01dde8b4feccbf7c2883b50ea705f5193ddec4b7 0 1605774708512 5 connected
172.18.0.14:6379> 

# The cluster is up and working

To wrap up, a picture of the overall Docker workflow:

(diagram: the overall Docker workflow)
