Docker Study Notes 5: Docker Networking (Episodes 34-40)

Contents

Docker Networking

Understanding docker0

Question: how does Docker handle container network access?

How it works

Conclusion: tomcat01 and tomcat02 share the same virtual router, docker0

Summary

Problems and solutions

Custom networks: container interconnection

List all Docker networks

Network modes

Test

Network connectivity

Conclusion: to reach containers on another network, use docker network connect

Hands-on: deploying a Redis cluster

Integrating IDEA with Docker: packaging a Spring Boot microservice as a Docker image

Docker Networking

Understanding docker0

Clear out all existing containers and images first:

[root@k2 ~]# docker rm -f $(docker ps -aq)
3ef6f9de6ce9
a69d884b5f4b
[root@k2 ~]# docker rmi -f $(docker images -aq)
Untagged: nginx:latest
Untagged: nginx@sha256:2d17cc4981bf1e22a87ef3b3dd20fbb72c3868738e3f307662eb40e2630d4320
Deleted: sha256:de2543b9436b7b0e2f15919c0ad4eab06e421cecc730c9c20660c430d4e5bc47
Deleted: sha256:2deefd67059cad4693c3ec996e7db36c70b4b0aa77d6401ece78f7a6448a976c
Untagged: mysql:5.7
Untagged: mysql@sha256:16e159331007eccc069822f7b731272043ed572a79a196a05ffa2ea127caaf67
Deleted: sha256:2dfc45a2fa416c9a9d8e5eca5507872dce078db1512d7a88d97bfd77381b2e3c
[root@k2 ~]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
[root@k2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Test

Running ip addr on the host shows three networks: lo (loopback), ens192 (the host NIC), and docker0 (the Docker bridge).

Question: how does Docker handle container network access?

[root@k2 /]# docker run -d -P --name tomcat01 tomcat

# Check the container's internal network address with ip addr. When the container
# starts, it gets an eth0@ifN interface with an IP address assigned by Docker

[root@k2 /]# docker exec -it tomcat01  ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
  ..........
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 

If ip addr fails inside the container (the image ships without iproute2):

[root@k2 /]# docker exec -it tomcat01 ip addr
OCI runtime exec failed: exec failed: unable to start container process: exec: "ip": executable file not found in $PATH: unknown
[root@k2 /]# docker exec -it tomcat01 /bin/bash
root@c33985d3669a:/usr/local/tomcat# 
root@c33985d3669a:/usr/local/tomcat# apt update && apt install -y iproute2
root@c33985d3669a:/usr/local/tomcat# exit
exit


# Question: can the Linux host ping inside the container?
[root@k2 /]# ping  172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.029 ms

# Yes, the Linux host can ping inside the Docker container

How it works

1. Every time we start a Docker container, Docker assigns it an IP address. As long as Docker is installed, the host has a docker0 NIC. It works in bridge mode, using veth-pair technology.
Test ip addr again:

[root@k2 /]#  ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group 
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP 
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state 
15: vethbc038f8@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 

2. Start another container and test; another pair of NICs appears:

[root@k2 /]# docker run -d -P --name tomcat02 tomcat
e8e8c66666666666665e05c8e3bd6db57f3
[root@k2 /]#  ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group 
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP 
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state 
15: vethbc038f8@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
17: vetha5232c9@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc 
[root@k2 /]# 


* Notice that the NICs these containers bring always come in pairs
* A veth pair is a pair of virtual device interfaces; they always appear in pairs, with one end attached to the protocol stack and the other end attached to its peer
* Because of this property, a veth pair acts as a bridge, connecting all kinds of virtual network devices
* OpenStack, connections between Docker containers, and OVS connections all use veth-pair technology
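The pairing is visible in the interface indices above (14: eth0@if15 inside the container, 15: vethbc038f8@if14 on the host). A minimal sketch of how to inspect this from sysfs; the docker exec step is shown only as a comment, since it assumes the tomcat01 container from above:

```shell
# Each interface exposes its own index (ifindex) and its peer's index (iflink).
# For a veth end, iflink points at the peer; for an ordinary device like lo,
# iflink simply equals its own ifindex.
cat /sys/class/net/lo/ifindex
cat /sys/class/net/lo/iflink
# To find a container's host-side peer, read eth0's iflink inside the container
# and look for the host interface with that ifindex, e.g.:
#   docker exec tomcat01 cat /sys/class/net/eth0/iflink   # -> 15
#   ip -o link | grep '^15:'                              # -> vethbc038f8@if14
```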

3. Test whether tomcat01 and tomcat02 can ping each other:

[root@k2 /]# docker exec -it tomcat02 ping  172.17.0.2

# Conclusion: containers can ping each other directly

If ping fails inside the container (ping is not installed):

[root@k2 /]# docker exec -it tomcat02 ping  172.17.0.2
OCI runtime exec failed: exec failed: unable to start container process: exec: "ping": executable file not found in $PATH: unknown

[root@k2 /]# docker exec -it tomcat02  /bin/bash
root@e8e8cfbbcd61:/usr/local/tomcat# apt-get update && apt-get install iputils-ping
Hit:1 http://deb.debian.org/debian bullseye InRelease                              
Hit:2 http://deb.debian.org/debian bullseye-updates InRelease                      
Hit:3 http://security.debian.org/debian-security bullseye-security InRelease
Reading package lists... Done                                
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  iputils-ping
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
Need to get 49.8 kB of archives.
After this operation, 116 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bullseye/main amd64 iputils-ping amd64 3:20210202-1 [49.8 kB]
Fetched 49.8 kB in 0s (157 kB/s)     
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package iputils-ping.
(Reading database ... 12909 files and directories currently installed.)
Preparing to unpack .../iputils-ping_3%3a20210202-1_amd64.deb ...
Unpacking iputils-ping (3:20210202-1) ...
Setting up iputils-ping (3:20210202-1) ...
root@e8e8cfbbcd61:/usr/local/tomcat# exit
exit
[root@k2 /]# docker exec -it tomcat02 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.072 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.042 ms
[root@k2 /]# 

Drawing a network model diagram:

Conclusion: tomcat01 and tomcat02 share the same virtual router, docker0

When no network is specified, all containers are routed through docker0, and Docker assigns each container a default available IP.

Summary

Docker uses Linux bridging; on the host, docker0 acts as a bridge for Docker containers.

All network interfaces in Docker are virtual, and virtual interfaces forward efficiently (useful for transferring files over the internal network). As soon as a container is deleted, its corresponding veth pair disappears.

Problems and solutions

Consider a scenario: we write a microservice with database url=ip. The database IP changes while the project keeps running. We want to handle this by accessing containers by name instead:

[root@k2 /]# docker exec -it tomcat02 ping tomcat01
ping: tomcat01: Name or service not known

# How can we solve this?
# --link solves the name-based connectivity problem
[root@k2 /]# docker run -d -P --name tomcat03 --link tomcat02 tomcat
cb77a58df319ac2410355eb8153e4aaea2ed3c45a9b30d5de39f314b4b427346
[root@k2 /]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.077 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.045 ms

# Can we ping in the reverse direction?
[root@k2 /]# docker exec -it tomcat02 ping tomcat03
ping: tomcat03: Name or service not known

In fact, --link just writes tomcat02's address into tomcat03's local configuration:

# Check the hosts configuration
[root@k2 /]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      tomcat02 e8e8cfbbcd61
172.17.0.4      cb77a58df319

Custom networks: container interconnection

List all Docker networks

[root@k2 /]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
29505cc77bbf   bridge    bridge    local
b7c447beef3a   host      host      local
09316664fa6e   none      null      local

Network modes

* bridge: bridged via docker0 (the default; custom networks also use the bridge driver)
* none: no network configuration
* host: share the network stack with the host
* container: share a network namespace with another, already-created container (rarely used)

Test

[root@k2 /]# docker rm -f $(docker ps -aq)
9evgtgtge4514
[root@k2 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc 
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP>   
3: docker0: <BROADCAST,MULTICAST,UP> mtu 1500 
[root@k2 /]# docker  run -d -P --name tomcat01 --net bridge tomcat
ad18666666666666e622241666666666666666
[root@k2 /]# 
# The plain docker run command defaults to --net bridge, which is our docker0
docker  run -d -P --name tomcat01  tomcat
docker  run -d -P --name tomcat01 --net bridge tomcat

# docker0 characteristics: it is the default; container names cannot be resolved on it; --link can patch a connection through

# Create a custom network
# --driver bridge          the bridge driver (the default)
# --subnet 192.168.0.0/16  the subnet
# --gateway 192.168.0.1    the gateway
[root@k2 /]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
74657fc8666666666666666abe68045561ed9
[root@k2 /]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
29505cc77bbf   bridge    bridge    local
b7c447beef3a   host      host      local
74657fc8f106   mynet     bridge    local
09316664fa6e   none      null      local

Our own network has been created:

[root@k2 ~]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "74657fc8f1666666666666661dabe68045561ed9",
        "Created": "2022-05-25T01:48:05.384514751+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16"
                    "gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]


[root@k2 ~]# docker run -d -P --name tomcat-net-01 --net mynet tomcat
fe44da6666666666666664063a4d730849e4
[root@k2 ~]# docker run -d -P --name tomcat-net-02 --net mynet tomcat
9343d5f57666666666666666917db6666cc7bb5
[root@k2 ~]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "7465766661dabe6666666666666661ed9",
        "Created": "2022-05-25T01:48:05.384514751+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16"
                     "gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9343d5f6666666666666cf3821ecc7bb5": {
                "Name": "tomcat-net-02",
                "EndpointID": "1384666666b80d6666669006b9268d9182571",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            },
            "fe44da666666666730849e4": {
                "Name": "tomcat-net-01",
                "EndpointID": "ffe7daf5666666666666ad80633e1",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Test the ping connection again:

[root@k2 ~]# docker exec -it tomcat-net-01 ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.081 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.056 ms

Without --link, we can still ping by name:

[root@k2 ~]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.066 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=3 ttl=64 time=0.057 ms

On a custom network, Docker maintains the name-to-IP mappings for us. This is the recommended way to connect containers.

Network connectivity

[root@k2 ~]# docker run -d -P --name tomcat03  tomcat
d7a8b6666666666c053ca6666668273
[root@k2 ~]# docker run -d -P --name tomcat04  tomcat
c166666666625b121a66666666092cdb
[root@k2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND             CREATED          STATUS          PORTS                                         NAMES
c1a107b87899   tomcat    "catalina.sh run"   7 seconds ago    Up 6 seconds    0.0.0.0:49161->8080/tcp, :::49161->8080/tcp   tomcat04
d7a8b82d1b95   tomcat    "catalina.sh run"   11 seconds ago   Up 11 seconds   0.0.0.0:49160->8080/tcp, :::49160->8080/tcp   tomcat03
9343d5f57456   tomcat    "catalina.sh run"   29 minutes ago   Up 29 minutes   0.0.0.0:49159->8080/tcp, :::49159->8080/tcp   tomcat-net-02
fe44da7e8adb   tomcat    "catalina.sh run"   30 minutes ago   Up 30 minutes   0.0.0.0:49158->8080/tcp, :::49158->8080/tcp   tomcat-net-01
[root@k2 ~]# docker exec -it tomcat03 ping tomcat-net-01
ping: tomcat-net-01: Name or service not known

# Test connecting tomcat03 to mynet
[root@k2 ~]# docker network connect mynet tomcat03

# After connecting, tomcat03 is placed directly on the mynet network

# One container, two IP addresses:
# e.g. an Alibaba Cloud server has one public IP and one private IP
[root@k2 ~]# docker network inspect mynet 
[
    {
        "Name": "mynet",
        "Id": "74657fc86666666666e68045561ed9",
        "Created": "2022-05-25T01:48:05.384514751+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "9343d5f57456695e566666667bb5": {
                "Name": "tomcat-net-02",
                "EndpointID": "1384f2435fb80d745066666666d9182571",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            },
            "d7a8b82d1b956666666b996f9da4349c053ca1dd84f8273": {
                "Name": "tomcat03",
                "EndpointID": "0583fe2ffc6502adef3b766666666ec28761a63b0c6",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/16",
                "IPv6Address": ""
            },
            "fe44da7e8adb9616666666668eba22a34063a4d730849e4": {
                "Name": "tomcat-net-01",
                "EndpointID": "ffe7daf5853cb2b66666666666f1ad80633e1",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
# tomcat03 is now connected OK
[root@k2 ~]# docker exec -it tomcat03 ping tomcat-net-01  
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.080 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.056 ms

# tomcat04 still cannot reach it
[root@k2 ~]# docker exec -it tomcat04 ping tomcat-net-01  
ping: tomcat-net-01: Name or service not known

Conclusion: to reach containers on another network, you must connect to it with docker network connect.

Hands-on: deploying a Redis cluster

Shell script

# Create the redis network
docker network create redis --subnet 172.38.0.0/16

# Generate six Redis config files via script
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
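The config-generation loop can be sanity-checked without Docker. A minimal sketch that writes into a scratch directory (using mktemp instead of /mydata, purely for illustration, and only a subset of the config keys):

```shell
# Generate the six node configs into a temporary directory and spot-check one.
base=$(mktemp -d)
for port in $(seq 1 6); do
  mkdir -p "${base}/node-${port}/conf"
  cat << EOF > "${base}/node-${port}/conf/redis.conf"
port 6379
cluster-enabled yes
cluster-announce-ip 172.38.0.1${port}
EOF
done
# node-3 should announce 172.38.0.13
grep cluster-announce-ip "${base}/node-3/conf/redis.conf"
```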

# Start each of the six nodes (one docker run per node):
for port in $(seq 1 6); do
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done

# For example, node 1 expands to:
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf; 

# ... and node 6:
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf; 
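The host-side port layout above follows a simple pattern: node N maps host port 637N to the data port 6379 and 1637N to the 16379 cluster bus. A quick loop makes the mapping explicit:

```shell
# Print the host->container port mapping for each node.
for port in $(seq 1 6); do
  echo "redis-${port}: 637${port}->6379 (data), 1637${port}->16379 (cluster bus)"
done
```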

# Create the cluster (run redis-cli from inside one of the nodes)
redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
[root@k2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND             CREATED             STATUS             PORTS                                         NAMES
c1a107b87899   tomcat    "catalina.sh run"   34 minutes ago      Up 34 minutes      0.0.0.0:49161->8080/tcp, :::49161->8080/tcp   tomcat04
d7a8b82d1b95   tomcat    "catalina.sh run"   34 minutes ago      Up 34 minutes      0.0.0.0:49160->8080/tcp, :::49160->8080/tcp   tomcat03
9343d5f57456   tomcat    "catalina.sh run"   About an hour ago   Up About an hour   0.0.0.0:49159->8080/tcp, :::49159->8080/tcp   tomcat-net-02
fe44da7e8adb   tomcat    "catalina.sh run"   About an hour ago   Up About an hour   0.0.0.0:49158->8080/tcp, :::49158->8080/tcp   tomcat-net-01
e06d18697809   tomcat    "catalina.sh run"   17 hours ago        Up 17 hours        0.0.0.0:49157->8080/tcp, :::49157->8080/tcp   tomcat01
[root@k2 ~]# docker rm -f $(docker ps -aq)
c1a107b87899
d7a8b82d1b95
9343d5f57456
fe44da7e8adb
e06d18697809
[root@k2 ~]# docker network create redis --subnet 172.38.0.0/16
134b86666666695f576666666666beed
[root@k2 ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
29505cc77bbf   bridge    bridge    local
b7c447beef3a   host      host      local
74657fc8f106   mynet     bridge    local
09316664fa6e   none      null      local
134b8cc63439   redis     bridge    local
[root@k2 ~]# docker network inspect redis
[
    {
        "Name": "redis",
        "Id": "134b8cc666666666666666d15c70d6beed",
        "Created": "2022-05-25T18:30:05.797037488+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.38.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
[root@k2 ~]# for port in $(seq 1 6); \
> do \
> mkdir -p /mydata/redis/node-${port}/conf
> touch /mydata/redis/node-${port}/conf/redis.conf
> cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
> port 6379
> bind 0.0.0.0
> cluster-enabled yes
> cluster-config-file nodes.conf
> cluster-node-timeout 5000
> cluster-announce-ip 172.38.0.1${port}
> cluster-announce-port 6379
> cluster-announce-bus-port 16379
> appendonly yes
> EOF
> done
[root@k2 ~]# cd /mydata/redis/
[root@k2 redis]# ls
node-1  node-2  node-3  node-4  node-5  node-6
[root@k2 redis]# cd node-1/conf/
[root@k2 conf]# cat redis.conf 
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.11
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
[root@k2 conf]# docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
> -v /mydata/redis/node-1/data:/data \
> -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
> -d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf; 
Unable to find image 'redis:5.0.9-alpine3.11' locally
5.0.9-alpine3.11: Pulling from library/redis
cbdbe7a5bc2a: Pull complete 
dc0373118a0d: Pull complete 
cfd369fe6256: Pull complete 
3e45770272d9: Pull complete 
558de8ea3153: Pull complete 
a2c652551612: Pull complete 
Digest: sha256:83a3af3666664ba86ad966666e35d0
Status: Downloaded newer image for redis:5.0.9-alpine3.11
693f88a5f666666666611ea8a669d2494666666a900f
[root@k2 conf]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS          PORTS                                                                                      NAMES
693f88a5f828   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   11 seconds ago   Up 10 seconds   0.0.0.0:6371->6379/tcp, :::6371->6379/tcp, 0.0.0.0:16371->16379/tcp, :::16371->16379/tcp   redis-1
.....
[root@k2 conf]# docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
> -v /mydata/redis/node-6/data:/data \
> -v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
> -d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf; 
51a543266666666666b3abf0
[root@k2 conf]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS              PORTS                                                                                      NAMES
51a54321333f   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   21 seconds ago       Up 20 seconds       0.0.0.0:6376->6379/tcp, :::6376->6379/tcp, 0.0.0.0:16376->16379/tcp, :::16376->16379/tcp   redis-6
d7a37b7e1e49   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up 59 seconds       0.0.0.0:6375->6379/tcp, :::6375->6379/tcp, 0.0.0.0:16375->16379/tcp, :::16375->16379/tcp   redis-5
359020af8325   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6374->6379/tcp, :::6374->6379/tcp, 0.0.0.0:16374->16379/tcp, :::16374->16379/tcp   redis-4
37e7191b1760   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6373->6379/tcp, :::6373->6379/tcp, 0.0.0.0:16373->16379/tcp, :::16373->16379/tcp   redis-3
6d4da93dbfc3   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   3 minutes ago        Up 3 minutes        0.0.0.0:6372->6379/tcp, :::6372->6379/tcp, 0.0.0.0:16372->16379/tcp, :::16372->16379/tcp   redis-2
693f88a5f828   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   8 minutes ago        Up 8 minutes        0.0.0.0:6371->6379/tcp, :::6371->6379/tcp, 0.0.0.0:16371->16379/tcp, :::16371->16379/tcp   redis-1
[root@k2 conf]# docker exec -it redis-1 /bin/sh
/data # ls
appendonly.aof  nodes.conf
/data # redis-cli -cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
Unrecognized option or bad number of args for: '-cluster'
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 7c7366666666641ae0dca 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: ef6666666666b4d5 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: d5d2666666dcbd39c 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: d966666666b18979c 172.38.0.14:6379
   replicates d5d23736666666c7e9666bd39c
S: 72588966666b44e 172.38.0.15:6379
   replicates 7c73d5e6666666666666666dca
S: 2a03a4d0db1aaa49c9e0003eb6b2831ec11b30ee 172.38.0.16:6379
   replicates ef9977666666666d5
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 7c7366666666666666666dca 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: ef66666666e266666666b4d5 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 2a03666666666666666630ee 172.38.0.16:6379
   slots: (0 slots) slave
   replicates ef997769c6666666666666664d5
S: 72666666662666666666664e 172.38.0.15:6379
   slots: (0 slots) slave
   replicates 7c73d6666666666666666666dca
S: d9e06666666666666666679c 172.38.0.14:6379
   slots: (0 slots) slave
   replicates d5d23666666666666666666669c
M: d5666666666666666666d39c 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Enter the cluster and insert data:

/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:908
cluster_stats_messages_pong_sent:894
cluster_stats_messages_sent:1802
cluster_stats_messages_ping_received:889
cluster_stats_messages_pong_received:908
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1802
127.0.0.1:6379> cluster nodes
ef996666666fee6b4d5 172.38.0.12:6379@16379 master - 0 1653477040000 2 connected 5461-10922
2a03a4d066666661b30ee 172.38.0.16:6379@16379 slave ef9666666e6b4d5 0 1653477041517 6 connected
7c73d5e6666666dd66dca 172.38.0.11:6379@16379 myself,master - 0 1653477041000 1 connected 0-5460
7256666666d92f664e 172.38.0.15:6379@16379 slave 7c66661666660dca 0 1653477040717 5 connected
d9e66666979c 172.38.0.14:6379@16379 slave d5d236666666d39c 0 1653477041000 4 connected
d5d2666666bd39c 172.38.0.13:6379@16379 master - 0 1653477041724 3 connected 10923-16383
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK

Open another window and stop redis-3:

[root@k2 ~]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS          PORTS                                                                                      NAMES
51a54321333f   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   16 minutes ago   Up 16 minutes   0.0.0.0:6376->6379/tcp, :::6376->6379/tcp, 0.0.0.0:16376->16379/tcp, :::16376->16379/tcp   redis-6
d7a37b7e1e49   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   16 minutes ago   Up 16 minutes   0.0.0.0:6375->6379/tcp, :::6375->6379/tcp, 0.0.0.0:16375->16379/tcp, :::16375->16379/tcp   redis-5
359020af8325   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   17 minutes ago   Up 17 minutes   0.0.0.0:6374->6379/tcp, :::6374->6379/tcp, 0.0.0.0:16374->16379/tcp, :::16374->16379/tcp   redis-4
37e7191b1760   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   17 minutes ago   Up 17 minutes   0.0.0.0:6373->6379/tcp, :::6373->6379/tcp, 0.0.0.0:16373->16379/tcp, :::16373->16379/tcp   redis-3
6d4da93dbfc3   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   18 minutes ago   Up 18 minutes   0.0.0.0:6372->6379/tcp, :::6372->6379/tcp, 0.0.0.0:16372->16379/tcp, :::16372->16379/tcp   redis-2
693f88a5f828   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   24 minutes ago   Up 24 minutes   0.0.0.0:6371->6379/tcp, :::6371->6379/tcp, 0.0.0.0:16371->16379/tcp, :::16371->16379/tcp   redis-1
[root@k2 ~]# docker stop redis-3
redis-3

The Docker-based Redis cluster is complete. Back in the cluster client, the data is still reachable: the replica on 172.38.0.14 has been promoted to master:

172.38.0.13:6379> get a
^Z[1]+  Stopped                    redis-cli -c
/data # redis-cli -c
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.14:6379
"b"
172.38.0.14:6379> cluster nodes
2a66664d066666661b30ee 172.38.0.16:6379@16379 slave ef666666632410bd666664d5 0 1653477295000 6 connected
725866666925666666666e 172.38.0.15:6379@16379 slave 7c73d66666666666666666ca 0 1653477295280 5 connected
ef99766666666666666666 172.38.0.12:6379@16379 master - 0 1653477295000 2 connected 5461-10922
7c666666666666666666ca 172.38.0.11:6379@16379 master - 0 1653477296284 1 connected 0-5460
d56666666666666666d39c 172.38.0.13:6379@16379 master,fail - 1653477160569 1653477159000 3 connected
d9e0666666666666666666 172.38.0.14:6379@16379 myself,master - 0 1653477296000 7 connected 10923-16383
172.38.0.14:6379> 

Once we use Docker, all of these technologies gradually become simpler to work with.

Integrating IDEA with Docker: packaging a Spring Boot microservice as a Docker image

1. Create the Spring Boot project
In IDEA: Spring Initializr > Next > Web (Spring Web) > Finish

Dependencies (pom.xml):

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.junit.vintage</groupId>
        <artifactId>junit-vintage-engine</artifactId>
    </dependency>
</dependencies>

 HelloController:

package com.example.demo.controller;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    @RequestMapping("/hello")
    public String hello() {
        return "hello,davina";
    }
}

Run it and open /hello in the browser to verify.

2. Package the application (mvn package produces demo-0.0.1-SNAPSHOT.jar)

 

3. Write the Dockerfile

# In the demo project: New > File > Dockerfile, with the following content
FROM java:8

# Copy the built jar into the image
COPY *.jar /app.jar

# CMD in exec form supplies default arguments that are appended to ENTRYPOINT,
# so the container runs: java -jar /app.jar --server.port=8080
CMD ["--server.port=8080"]

EXPOSE 8080

ENTRYPOINT ["java","-jar","/app.jar"]

# Upload the jar and Dockerfile to the Linux host
[root@k2 ~]# cd /home/
[root@k2 home]# ls
[root@k2 home]# mkdir idea
[root@k2 home]# ll
total 0
drwxr-xr-x. 2 root root 6 May 25 21:39 idea
[root@k2 home]# cd idea/
[root@k2 idea]# ll
total 18060
-rw-r--r--. 1 root root 18487524 May 25 21:43 demo-0.0.1-SNAPSHOT.jar
-rw-r--r--. 1 root root      122 May 25 21:43 Dockerfile

4. Build the image

[root@k2 idea]# docker build -t davina .
Sending build context to Docker daemon  18.49MB
Step 1/5 : FROM java:8
8: Pulling from library/java
5040bd298390: Pull complete 
fce5728aad85: Pull complete 
76610ec20bf5: Pull complete 
60170fec2151: Pull complete 
e98f73de8f0d: Pull complete 
11f7af24ed9c: Pull complete 
49e2d6393f32: Pull complete 
bb9cdec9c7f3: Pull complete 
Digest: sha256:c1ff613e8b666618156666666669d
Status: Downloaded newer image for java:8
 ---> d23bdf5b1b1b
Step 2/5 : COPY *.jar /app.jar
 ---> 814def847a05
Step 3/5 : CMD ["--server.port=8080"]
 ---> Running in c54e95b1fb61
Removing intermediate container c54e95b1fb61
 ---> c8b5a40794dc
Step 4/5 : EXPOSE 8080
 ---> Running in cdee05e19fea
Removing intermediate container cdee05e19fea
 ---> 9cce71680080
Step 5/5 : ENTRYPOINT ["java","-jar","/app.jar"]
 ---> Running in 945e2c80dfd5
Removing intermediate container 945e2c80dfd5
 ---> fe363d9a6ba4
Successfully built fe363d9a6ba4
Successfully tagged davina:latest
[root@k2 idea]# docker images
REPOSITORY   TAG                IMAGE ID       CREATED          SIZE
davina       latest             fe363d9a6ba4   18 seconds ago   662MB
tomcat       latest             5eb506608219   7 days ago       685MB
redis        5.0.9-alpine3.11   3661c84ee9d0   2 years ago      29.8MB
java         8                  d23bdf5b1b1b   5 years ago      643MB

5. Run and publish

[root@k2 idea]# docker run -d -P --name 66-springboot-web davina
10b66666666666666666666666666660b83
[root@k2 idea]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS                                         NAMES
10b62dbb98ac   davina    "java -jar /app.jar …"   13 seconds ago   Up 13 seconds   0.0.0.0:49163->8080/tcp, :::49163->8080/tcp   66-springboot-web
[root@k2 idea]# curl localhost:49163
{"timestamp":"2022-05-25T13:48:59.919+00:00","status":404,"error":"Not Found","path":"/"}[root@k2 idea]# curl localhost:49163/hello
hello,davina[root@k2 idea]# 

Once we use Docker, delivering the project to others is as simple as handing over an image.
