Publishing to Docker Hub 31
— 04:53
https://www.bilibili.com/video/BV1og4y1q7M4?p=31
Publish your own image to Docker Hub
- Register an account at https://hub.docker.com (gousheng8601/***)
- Push the image to Docker Hub:
- docker login -u <username>
- docker push
docker ps
docker push <username>/<image>:<tag>
docker tag <image id> weile/tomcat:1.0
[root@hell39 /]# docker login -u gousheng8601
Password:
....
....
Login Succeeded
Re-tagging an image
Retag hello-world:latest as leohelloworld:2.3
[root@hadoop1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/xcmg_swy/schwing_web 1.1 b0f2c8a42361 22 hours ago 355MB
mcr.microsoft.com/dotnet/core/aspnet 2.1 4dbedd4849bf 3 days ago 253MB
hello-world latest bf756fb1ae65 14 months ago 13.3kB
[root@hadoop1 ~]# docker tag --help
Usage: docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
[root@hadoop1 ~]# docker tag bf756fb1ae65 leohelloworld:2.3
[root@hadoop1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.cn-hangzhou.aliyuncs.com/xcmg_swy/schwing_web 1.1 b0f2c8a42361 23 hours ago 355MB
mcr.microsoft.com/dotnet/core/aspnet 2.1 4dbedd4849bf 3 days ago 253MB
hello-world latest bf756fb1ae65 14 months ago 13.3kB
leohelloworld 2.3 bf756fb1ae65 14 months ago 13.3kB
[root@hadoop1 ~]#
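Putting the steps above together: the push target must be named <username>/<repo>:<tag>. A minimal sketch of composing that reference (the account and image names are the ones from these notes; treat them as placeholders for your own):

```shell
# Compose the Docker Hub image reference <username>/<repo>:<tag>
# (gousheng8601 / leohelloworld / 2.3 are taken from the notes above)
USER=gousheng8601
REPO=leohelloworld
TAG=2.3
TARGET="${USER}/${REPO}:${TAG}"
echo "$TARGET"
# With a running Docker daemon, the actual publish would then be:
# docker tag bf756fb1ae65 "$TARGET"
# docker login -u "$USER"
# docker push "$TARGET"
```

The username prefix is what routes the push to your own repository on Docker Hub.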
Publishing your image to Alibaba Cloud 32
https://www.bilibili.com/video/BV1og4y1q7M4?p=32
- Log in to Alibaba Cloud
- Find the Container Registry service
- Create a namespace
- Create an image repository
- Browse it on Alibaba Cloud
Note: in step 4, choose the local repository option when creating the image repository
----- Note: the login differs from Docker Hub login; you must include the registry address
1. docker login --username=***8621ali registry.cn-hangzhou.aliyuncs.com
2. docker images
docker push ****
docker login --help
[root@hadoop1 ~]# docker login --help
Usage: docker login [OPTIONS] [SERVER]
Log in to a Docker registry.
If no server is specified, the default is defined by the daemon.
Options:
-p, --password string Password
--password-stdin Take the password from stdin
-u, --username string Username
[root@hadoop1 ~]#
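Unlike a Docker Hub name, an Aliyun image reference spells out the registry host and namespace explicitly. A sketch using the registry/namespace/repo names visible in the `docker images` output earlier (treat them as placeholders for your own):

```shell
# Aliyun reference layout: <registry-host>/<namespace>/<repo>:<tag>
REGISTRY=registry.cn-hangzhou.aliyuncs.com
NAMESPACE=xcmg_swy
REPO=schwing_web
TAG=1.1
REF="${REGISTRY}/${NAMESPACE}/${REPO}:${TAG}"
echo "$REF"
# With a daemon available, the publish flow would be:
# docker login --username=<aliyun account> "$REGISTRY"
# docker tag <image id> "$REF"
# docker push "$REF"
```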
Docker recap 33
Docker networking: docker0 — 34
https://www.bilibili.com/video/BV1og4y1q7M4?p=34
Every time a Docker container starts, Docker assigns it an IP address. Installing Docker also creates a docker0 interface on the host; it works in bridge mode, implemented with the veth-pair technique!
// remove all containers
docker rm -f $(docker ps -aq)
docker ps
// remove all images
docker rmi -f $(docker images -aq)
docker images
ip addr
docker run -d -P --name tomcat01 tomcat
step01 --------------- previously, we would exec straight into the container
[root@shell37 ~]# docker exec -it 3ffdd215139f /bin/bash
root@3ffdd215139f:/usr/local/tomcat#
root@3ffdd215139f:/usr/local/tomcat#
root@3ffdd215139f:/usr/local/tomcat# ll
bash: ll: command not found
root@3ffdd215139f:/usr/local/tomcat#
step02 ----- now check the container's internal IP: at startup it received an eth0@if7 interface with the Docker-assigned address 172.17.0.2
[root@shell37 ~]# docker exec -it 3ffdd215139f ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
step03 ------ check the host's interfaces and notice:
----------------------------1 docker0 holds 172.17.0.1
----------------------------2 a new interface, 7: veth690225f@if6:, has appeared; it pairs exactly with the container's 6: eth0@if7:, forming a 6/7 veth pair
[root@shell37 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:b0:37:c7 brd ff:ff:ff:ff:ff:ff
inet 192.168.121.37/24 brd 192.168.121.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
...
3...
4...
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:d7:3e:d8:2f brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:d7ff:fe3e:d82f/64 scope link
valid_lft forever preferred_lft forever
7: veth690225f@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 4a:6f:30:17:e7:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::486f:30ff:fe17:e7f9/64 scope link
valid_lft forever preferred_lft forever
step04 ------- the host can ping the container
[root@shell37 ~]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.040 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.044 ms
Host:     172.17.0.1
Tomcat01: 172.17.0.2
Tomcat02: 172.17.0.3
veth pair: a pair of virtual network interfaces that always come in twos; one end attaches to the protocol stack, and the two ends are wired to each other. veth pairs act as a bridge between virtual devices: OpenStack, Docker container-to-container links, and OVS links all use the veth-pair technique.
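The `@ifN` suffix in `ip addr` output names the peer end's interface index, which is exactly how the 6/7 pairing above can be spotted. A small sketch that pulls the peer index out of such a heading line (the sample line is copied from the transcript):

```shell
# Extract the peer interface index from an `ip addr` heading line.
# The container's "6: eth0@if7" pairs with the host's "7: veth690225f@if6".
line='6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500'
peer=$(printf '%s\n' "$line" | sed -n 's/^[0-9]*: [^@]*@if\([0-9]*\):.*/\1/p')
echo "$peer"
# (Creating such a pair by hand needs root: ip link add vethA type veth peer name vethB)
```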
[root@shell37 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89c282ef9d52 tomcat "catalina.sh run" About a minute ago Up About a minute 0.0.0.0:49154->8080/tcp tomcat02
3ffdd215139f tomcat "catalina.sh run" 43 minutes ago Up 43 minutes 0.0.0.0:49153->8080/tcp tomcat01
----- inspect tomcat02
[root@shell37 ~]# docker exec -it 89c282ef9d52 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
----- inspect tomcat01
[root@shell37 ~]# docker exec -it 3ffdd215139f ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
--------- tomcat02 can ping tomcat01
[root@shell37 ~]# docker exec -it 89c282ef9d52 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.162 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.126 ms
Why tomcat02 can ping tomcat01
[root@shell37 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:b0:37:c7 brd ff:ff:ff:ff:ff:ff
inet 192.168.121.37/24 brd 192.168.121.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::a2cd:d26a:151d:8896/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::5da6:1659:a42d:dc48/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::e755:619a:88f1:ef0d/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
3: ...
4: ...
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:d7:3e:d8:2f brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:d7ff:fe3e:d82f/64 scope link
valid_lft forever preferred_lft forever
7: veth690225f@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 4a:6f:30:17:e7:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::486f:30ff:fe17:e7f9/64 scope link
valid_lft forever preferred_lft forever
9: veth1a484dd@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether e2:76:97:e5:e3:99 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::e076:97ff:fee5:e399/64 scope link
valid_lft forever preferred_lft forever
Conclusion: tomcat01 and tomcat02 share one router: docker0.
Unless a network is specified, every container uses the docker0 bridge, and Docker assigns it a default IP.
// From tomcat02 (172.17.0.3), pinging tomcat01 (172.17.0.2) succeeds; the diagram above shows why
docker exec -it tomcat02 ping 172.17.0.2
// removing a container also removes its veth interface on the host
docker rm -f 991f923c27bb
ip addr
As things stand, a container's IP changes when it restarts. Could we reach containers by service name instead of by IP?
Container linking: --link 35
https://www.bilibili.com/video/BV1og4y1q7M4?p=35
After a service restarts its IP changes, so we need service names instead of IPs. But pinging a container name the ordinary way fails; the --link mechanism is what makes name-based access work.
docker run -d -P --name tomcat02 tomcat
docker exec -it tomcat01 ip addr
*** 172.17.0.2
docker exec -it tomcat02 ip addr
*** 172.17.0.3
docker exec -it tomcat02 ping 172.17.0.2
// the ping above succeeds
docker exec -it tomcat02 ping tomcat01
// the ping above fails
// to start a tomcat03 the old way:
// docker run -d -P --name tomcat03 tomcat
// instead, start tomcat03 linked to tomcat02:
docker run -d -P --name tomcat03 --link tomcat02 tomcat
docker exec -it tomcat03 ping tomcat02 // succeeds
docker exec -it tomcat02 ping tomcat03 // fails: the link is one-way
docker network ls
docker network inspect e752b44b4bef
docker ps
docker inspect **** (tomcat03's ID)
Since tomcat03 was started linked to tomcat02, the link shows up when inspecting tomcat03
docker run -d -P --name tomcat03 --link tomcat02 tomcat
docker exec -it tomcat03 cat /etc/hosts
// 172.17.0.3 tomcat02 ef********* // this line maps the name tomcat02 to 172.17.0.3, so pinging tomcat02 from tomcat03 amounts to pinging 172.17.0.3, which is why it succeeds
This approach is clumsy: every container that needs to reach another requires its own extra configuration. It is low-level, even though it does achieve access by service name.
[root@aliyunleo ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
673031365ef2 tomcat "catalina.sh run" 17 minutes ago Up 17 minutes 0.0.0.0:32771->8080/tcp tomcat03
e753d41b5fd5 tomcat "catalina.sh run" 31 minutes ago Up 31 minutes 0.0.0.0:32769->8080/tcp tomcat02
9734a781049c tomcat "catalina.sh run" 31 minutes ago Up 31 minutes 0.0.0.0:32768->8080/tcp tomcat01
08b96b95a4bc mysql:5.7 "docker-entrypoint.s…" 41 hours ago Up 41 hours 33060/tcp, 0.0.0.0:3310->3306/tcp mysql01
---tomcat01
[root@aliyunleo ~]# docker exec -it 9734a781049c cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 9734a781049c
---tomcat02
[root@aliyunleo ~]# docker exec -it e753d41b5fd5 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.4 e753d41b5fd5
---tomcat03
[root@aliyunleo ~]# docker exec -it 673031365ef2 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.4 tomcat02 e753d41b5fd5 ---------------- this is why tomcat03 can reach tomcat02: any request to tomcat02 is simply forwarded to 172.17.0.4
172.17.0.5 673031365ef2
The essence
tomcat03 was connected to tomcat02 with:
docker run -d -P --name tomcat03 --link tomcat02 tomcat
All the link actually does is write a redirect into tomcat03's /etc/hosts, so connecting to tomcat02 goes straight to the mapped address:
172.17.0.4 tomcat02 e753d41b5fd5
Note also the line 172.17.0.5 673031365ef2; that ID, 673031365ef2, stands for tomcat03 itself.
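Since --link is nothing more than an /etc/hosts entry, the lookup can be simulated with a scratch hosts file; no Docker needed (the addresses below mirror the transcript):

```shell
# Simulate the hosts-file lookup that makes `ping tomcat02` work inside tomcat03.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
127.0.0.1 localhost
172.17.0.4 tomcat02 e753d41b5fd5
172.17.0.5 673031365ef2
EOF
# find the IP whose aliases include "tomcat02", the way the resolver does
ip=$(awk '{ for (i = 2; i <= NF; i++) if ($i == "tomcat02") print $1 }' "$HOSTS")
echo "$ip"
rm -f "$HOSTS"
```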
Container networking: custom networks 36
2021-03-16, second pass
https://www.bilibili.com/video/BV1og4y1q7M4?p=36
Network modes:
bridge: bridged (the default)
none: no networking (rarely used)
host: share the host's network stack
container: share another container's network
[root@shell37 ~]# docker network ls
[root@shell37 ~]# docker rm -f $(docker ps -aq)
[root@shell37 ~]# ip addr
----- list all Docker networks
docker network ls
// the following two commands are equivalent
docker run -d -P --name tomcat01 tomcat
docker run -d -P --name tomcat01 --net bridge tomcat
--- docker0 traits: it is the default; container names do not resolve on it; --link can punch a connection through, but it is cumbersome.
docker network create --help
// parameters for creating a network:
// --driver bridge   driver type
// --subnet          subnet range
// --gateway         gateway address
[root@iZm5e72eis7guszkqcstrfZ ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
a07a74a23037f757b03fc98c9b776febf1b54d14950b80aa82e26b7dc7cf2731
[root@iZm5e72eis7guszkqcstrfZ ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
e752b44b4bef bridge bridge local
3cec39d00ee6 host host local
a07a74a23037 mynet bridge local
7bf20b998b84 none null local
mynet's initial settings
[root@iZm5e72eis7guszkqcstrfZ ~]# docker network inspect mynet
[
{
"Name": "mynet",
"Id": "a07a74a23037f757b03fc98c9b776febf1b54d14950b80aa82e26b7dc7cf2731",
"Created": "2020-10-16T00:24:01.345609226+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
Usage: docker run *** --name <name> --net <network> <image id>
docker run -d -P --name tomcat-net-02 --net mynet tomcat
// compared with the bridge/--link approach, the difference is --net <network> versus --link <container>
docker run -d -P --name tomcat03 --link tomcat02 tomcat
docker run -d -P --name tomcat-net-02 --net mynet tomcat
docker run -d -P --name tomcat-net-01 --net mynet tomcat
[root@iZm5e72eis7guszkqcstrfZ ~]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.099 ms
Once containers join, mynet's inspect output gains container entries
[root@iZm5e72eis7guszkqcstrfZ ~]# docker network inspect mynet
[
{
"Name": "mynet",
"Id": "a07a74a23037f757b03fc98c9b776febf1b54d14950b80aa82e26b7dc7cf2731",
"Created": "2020-10-16T00:24:01.345609226+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2aa302a78916fcc897b45c01112d3687810fd8569ef3e956ef30ec7a30fef806": {
"Name": "tomcat-net-02",
"EndpointID": "029c6f9f05444c61c0f8d0f7de75c0bb4b6a7fc84f97a7cf53a898a7eeb770e2",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
},
"faa620aace60fb35142db17d7275815700d748b79074edd10d1ef81115971ec5": {
"Name": "tomcat-net-01",
"EndpointID": "fd3b42d00d512edf3d994228262c2df30fa2e234ec451371e92e270eab860b9d",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Both of the following succeed; on a user-defined network, Docker resolves container names automatically, so no --link is needed:
docker exec -it tomcat-net-01 ping tomcat-net-02
docker exec -it tomcat-net-01 ping 192.168.0.2
Connecting networks 37
So far we connected devices within a single network, via either --link or --net.
Now we want devices on two different networks to communicate, as shown in the diagram below.
docker run -d -P --name tomcat01 tomcat
docker run -d -P --name tomcat02 tomcat
[root@iZm5e72eis7guszkqcstrfZ ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25cbc3875d1e tomcat "catalina.sh run" 5 seconds ago Up 4 seconds 0.0.0.0:32780->8080/tcp tomcat02
aea54da784a1 tomcat "catalina.sh run" 15 seconds ago Up 14 seconds 0.0.0.0:32779->8080/tcp tomcat01
faa620aace60 tomcat "catalina.sh run" 17 minutes ago Up 17 minutes 0.0.0.0:32778->8080/tcp tomcat-net-01
2aa302a78916 tomcat "catalina.sh run" 17 minutes ago Up 17 minutes 0.0.0.0:32777->8080/tcp tomcat-net-02
# docker network connect --help
[root@iZm5e72eis7guszkqcstrfZ ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
e752b44b4bef bridge bridge local
3cec39d00ee6 host host local
a07a74a23037 mynet bridge local
7bf20b998b84 none null local
// inspect the default bridge network
[root@iZm5e72eis7guszkqcstrfZ ~]# docker network inspect e752b44b4bef
"Containers": {
"25cbc3875d1ef41eee0e578587df04b6bcd0521edf6cd51aa7cc45c23b6f20b8": {
"Name": "tomcat02",
"EndpointID": "93766fd015228b172d0753f22f6b9ede9f3c8ae260fbec36009541a1a3982611",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"aea54da784a1bf94368606156131a54eb9e6ffdc49d20c840fc6a650e5a134be": {
"Name": "tomcat01",
"EndpointID": "2da487895e66cec98c04d7bf697a4f2c2a85bf6bc31b1764bf135a0b8adf07e1",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
// inspect the mynet network
[root@iZm5e72eis7guszkqcstrfZ ~]# docker network inspect a07a74a23037
"Containers": {
"2aa302a78916fcc897b45c01112d3687810fd8569ef3e956ef30ec7a30fef806": {
"Name": "tomcat-net-02",
"EndpointID": "029c6f9f05444c61c0f8d0f7de75c0bb4b6a7fc84f97a7cf53a898a7eeb770e2",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
},
"faa620aace60fb35142db17d7275815700d748b79074edd10d1ef81115971ec5": {
"Name": "tomcat-net-01",
"EndpointID": "fd3b42d00d512edf3d994228262c2df30fa2e234ec451371e92e270eab860b9d",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
}
},
Connect tomcat01 to mynet
docker network connect mynet tomcat01
On mynet, tomcat01's IP is 192.168.0.4
On the bridge network, tomcat01 is 172.17.0.2
That is, one container now has two IPs (much like an Alibaba Cloud server with a public IP and a private IP)
Conclusion: cross-network access requires docker network connect **** to attach the container to the other network
docker network connect mynet tomcat01
// the output below shows tomcat01 on mynet with IP 192.168.0.4
docker network inspect mynet
"Containers": {
"2aa302a78916fcc897b45c01112d3687810fd8569ef3e956ef30ec7a30fef806": {
"Name": "tomcat-net-02",
"EndpointID": "029c6f9f05444c61c0f8d0f7de75c0bb4b6a7fc84f97a7cf53a898a7eeb770e2",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
},
"aea54da784a1bf94368606156131a54eb9e6ffdc49d20c840fc6a650e5a134be": {
"Name": "tomcat01",
"EndpointID": "ae9363e68c5458bb6d69770f82dcb087c79fca1d4ef224e80da9f8d2c2361c6b",
"MacAddress": "02:42:c0:a8:00:04",
"IPv4Address": "192.168.0.4/16",
"IPv6Address": ""
},
"faa620aace60fb35142db17d7275815700d748b79074edd10d1ef81115971ec5": {
"Name": "tomcat-net-01",
"EndpointID": "fd3b42d00d512edf3d994228262c2df30fa2e234ec451371e92e270eab860b9d",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
}
},
docker network inspect bridge
// on the bridge network, tomcat01 is 172.17.0.2
"Containers": {
"25cbc3875d1ef41eee0e578587df04b6bcd0521edf6cd51aa7cc45c23b6f20b8": {
"Name": "tomcat02",
"EndpointID": "93766fd015228b172d0753f22f6b9ede9f3c8ae260fbec36009541a1a3982611",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"aea54da784a1bf94368606156131a54eb9e6ffdc49d20c840fc6a650e5a134be": {
"Name": "tomcat01",
"EndpointID": "2da487895e66cec98c04d7bf697a4f2c2a85bf6bc31b1764bf135a0b8adf07e1",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
Redis cluster deployment 38
https://www.bilibili.com/video/BV1og4y1q7M4?p=38
// list all Docker networks on this host
[root@iZm5e72eis7guszkqcstrfZ ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
e752b44b4bef bridge bridge local
3cec39d00ee6 host host local
a07a74a23037 mynet bridge local
7bf20b998b84 none null local
// inspect the basic info of one of them, mynet
[root@iZm5e72eis7guszkqcstrfZ ~]# docker network inspect mynet
[
{
"Name": "mynet",
"Id": "a07a74a23037f950b80aa82e26b7dc7cf2731",
"Created": "2020-10-16T00:24:01.345609226+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
}
]
// create a new network named redis, with addresses in 172.38.0.0/16
[root@iZm5e72eis7guszkqcstrfZ ~]# docker network create redis --subnet 172.38.0.0/16
e07eaf124bce9365532cabfbb7cb8099
Script that creates six config files under /mydata/redis/:
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
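The same loop can be dry-run into a scratch directory to check what it writes, without touching /mydata (the config body is abridged here to the line that varies per node):

```shell
# Dry-run of the config-generation loop above, writing into a temp dir.
BASE=$(mktemp -d)
for port in $(seq 1 6); do
  mkdir -p "$BASE/node-${port}/conf"
  cat > "$BASE/node-${port}/conf/redis.conf" <<EOF
port 6379
cluster-enabled yes
cluster-announce-ip 172.38.0.1${port}
EOF
done
# each node announces its own 172.38.0.1x address
announced=$(grep -h cluster-announce-ip "$BASE"/node-*/conf/redis.conf)
echo "$announced"
rm -rf "$BASE"
```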
Run and configure six Redis 5.0.9 containers
Option 1: create the containers one at a time, by hand
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
-v /mydata/redis/node-2/data:/data \
-v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
...3
...4
...5
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
Option 2: a script that creates all six Redis containers in one loop
for port in $(seq 1 6); \
do \
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done
// enter container 1
docker exec -it redis-1 /bin/sh
ls
// create the cluster
redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
redis-cli -c // connect to the cluster
cluster info // show the cluster's status
cluster nodes // list all nodes in the cluster
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:367
cluster_stats_messages_pong_sent:356
cluster_stats_messages_sent:723
cluster_stats_messages_ping_received:351
cluster_stats_messages_pong_received:367
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:723
127.0.0.1:6379> cluster nodes
d028c9f79f7829398eafe579efeb9e4af56221a5 172.38.0.12:6379@16379 master - 0 1602905915533 2 connected 5461-10922
3d52e935e75d0db8043c2246102a02a7c06c43be 172.38.0.15:6379@16379 slave 92799215d06b3061280b19d7c63089d5974e4416 0 1602905915031 5 connected
6c86dd410250deec822c0419bd60df1a0911c3b7 172.38.0.13:6379@16379 master - 0 1602905916635 3 connected 10923-16383
8b2c537bea077b4ddfa8b9d6f0579c10add26389 172.38.0.14:6379@16379 slave 6c86dd410250deec822c0419bd60df1a0911c3b7 0 1602905915634 4 connected
5acc6365199cafd802cf9fa4a19ad883ef65976f 172.38.0.16:6379@16379 slave d028c9f79f7829398eafe579efeb9e4af56221a5 0 1602905915000 6 connected
92799215d06b3061280b19d7c63089d5974e4416 172.38.0.11:6379@16379 myself,master - 0 1602905916000 1 connected 0-5460
127.0.0.1:6379>
redis-cli -c // connect to the cluster
set key value // write a value
get key // read it back
/data # redis-cli -c
127.0.0.1:6379> set a a2020
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
172.38.0.13:6379> get a
"a2020"
// second session, start
docker stop redis-3
// second session, end
172.38.0.13:6379> get a
Could not connect to Redis at 172.38.0.13:6379: Host is unreachable
(33.10s)
// leave the current redis-cli session
not connected> exit
// reconnect to the cluster
/data # redis-cli -c
// fetch a again: it was stored on .13, which is now down, so the data is served from its replica .14
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.14:6379
"a2020"
172.38.0.14:6379>
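The "Redirected to slot [15495]" lines come from Redis Cluster's key hashing: per the cluster spec, slot = CRC16(key) mod 16384, using the CRC16-CCITT (XMODEM) variant. A self-contained sketch that reproduces the slot for key a (hash tags in {} are ignored in this sketch):

```shell
# Compute a Redis Cluster hash slot: CRC16-XMODEM(key) mod 16384.
redis_slot() {
  key=$1
  crc=0
  i=0
  while [ $i -lt ${#key} ]; do
    # numeric value of the (i+1)-th character of the key
    c=$(printf '%d' "'$(printf '%s' "$key" | cut -c $((i + 1)))")
    crc=$(( (crc ^ (c << 8)) & 0xFFFF ))
    for _ in 1 2 3 4 5 6 7 8; do
      if [ $((crc & 0x8000)) -ne 0 ]; then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))   # polynomial 0x1021
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
    i=$((i + 1))
  done
  echo $((crc % 16384))
}
slot=$(redis_slot a)
echo "$slot"   # matches the slot shown in the redirect above
```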
Packaging a Spring Boot microservice as a Docker image 39
https://www.bilibili.com/video/BV1og4y1q7M4?p=39
Steps:
1. Scaffold the Spring Boot project
2. Package the application
3. Write the Dockerfile
4. Build the image
5. Publish and run
The pom.xml file
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.3.4.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.tiza.leo</groupId>
<artifactId>springboot4docker</artifactId>
<version>1.0-SNAPSHOT</version>
<name>demo</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
package com.example.demo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
/**
* @author leowei
* @date 2021/3/16 - 23:55
*/
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class,args);
}
}
package com.example.demo.controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
/**
* @author leowei
* @date 2021/3/16 - 23:56
*/
@RestController
public class helloController {
@RequestMapping("/hello")
public String hello(){
return "hello leo";
}
}
Before writing the Dockerfile, install the Docker plugin first so IDEA can syntax-highlight it
FROM java:8
COPY *.jar /app.jar
CMD ["--server.port=8080"]
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
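In this Dockerfile, ENTRYPOINT and CMD combine: with both in exec form, the CMD entries are appended to ENTRYPOINT as default arguments, so the container starts `java -jar /app.jar --server.port=8080`, and any arguments passed after the image name on `docker run` replace just the CMD part. A sketch of the concatenation:

```shell
# How exec-form ENTRYPOINT + CMD combine into the container's command line.
ENTRYPOINT='java -jar /app.jar'
CMD='--server.port=8080'     # defaults; overridden by `docker run <image> <args>`
argv="$ENTRYPOINT $CMD"
echo "$argv"
```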
In the directory /home/idea/, run the following to build and run the image
docker build -t leoweb666 .
docker images
docker run -d -P --name leoweb leoweb666
docker ps
curl localhost:32782
docker build -t leoweb666 .
.... ...
[root@iZm5e72eis7guszkqcstrfZ idea]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
leoweb666 latest bc4870dd6486 3 minutes ago 660MB
redis 5.0.9-alpine3.11 3661c84ee9d0 5 months ago 29.8MB
java 8 d23bdf5b1b1b 3 years ago 643MB
[root@iZm5e72eis7guszkqcstrfZ idea]# docker run -d -P --name leoweb leoweb666
9f5fcc04441c943a4748797e0c279a098c6847ca830acca7a56e31eed7154611
[root@iZm5e72eis7guszkqcstrfZ idea]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f5fcc04441c leoweb666 "java -jar /app.jar …" 2 minutes ago Up 2 minutes 0.0.0.0:32782->8080/tcp leoweb
a668d9c51a90 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp redis-5
d4da63a3d5e9 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp redis-2
d3d775e6ca59 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp redis-6
3333ae50cfaa redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp redis-1
[root@iZm5e72eis7guszkqcstrfZ idea]# curl localhost:32782
{"timestamp":"2020-10-17T04:54:20.981+00:00","status":404,"error":"Not Found","message":"","path":"/"}
[root@iZm5e72eis7guszkqcstrfZ idea]#
[root@iZm5e72eis7guszkqcstrfZ idea]# curl localhost:32782/hello
leo return
[root@iZm5e72eis7guszkqcstrfZ idea]#