Docker from Beginner to Practice (Part 2)
Dockerfile
A Dockerfile is a text file of instructions used to build a Docker image.
docker build: build an image from a Dockerfile
docker pull: pull an image from a registry
docker push: publish an image (to Docker Hub or an Alibaba Cloud image registry)
Dockerfile build process
Dockerfile instructions
FROM # the base image
MAINTAINER # the image author, name + email (deprecated; LABEL is preferred now)
RUN # commands to run while the image is being built
ADD # copy files into the image; local tar archives (e.g. a tomcat tarball) are auto-extracted
WORKDIR # the image's working directory
VOLUME # directories to expose as mount points
EXPOSE # declare the port(s) the container listens on
CMD # command to run when the container starts; only the last CMD takes effect, and it can be replaced at run time
ENTRYPOINT # command to run when the container starts; run-time arguments are appended to it
ONBUILD # trigger instruction, executed when this image is used as the base of another build
COPY # like ADD, copies files into the image (without extraction or URL support)
ENV # set environment variables at build time
Build your own CentOS image
The stock centos image ships without vim and other common tools, so let's build a CentOS image that includes vim.
vi dockerFile_centos
FROM centos
MAINTAINER xiaoming<xxx@qq.com>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
RUN yum -y install net-tools
EXPOSE 80
CMD echo $MYPATH
CMD echo "----end---"
CMD /bin/bash
# Build the image
# -f: path to the Dockerfile; -t: name and tag for the image; the trailing . is the build context
docker build -f dockerFile_centos -t xiaoming/centos:0.1 .
[root@localhost docker_file]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
xiaoming/centos 0.1 ea9b4e8a0d4d About a minute ago 302MB
Start a container from the new image and check that vim works
[root@localhost docker_file]# docker run -it xiaoming/centos:0.1
# vim now works inside the container
[root@13b8fbc6aed2 local]# vi xiaoming
[root@13b8fbc6aed2 local]# cat xiaoming
xiaomingji
View the image's build history
[root@localhost docker_file]# docker history 8366
IMAGE CREATED CREATED BY SIZE COMMENT
8366530eb6a6 5 minutes ago /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "/bin… 0B
acb8f8b7f8cc 5 minutes ago /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "echo… 0B
0a3275f00677 5 minutes ago /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "echo… 0B
6ad97e8a93c2 5 minutes ago /bin/sh -c #(nop) EXPOSE 80 0B
ea4df529e625 5 minutes ago /bin/sh -c yum -y install net-tools 27.7MB
1f3340b3050c 5 minutes ago /bin/sh -c yum -y install vim 65.2MB
b460e776a150 6 minutes ago /bin/sh -c #(nop) WORKDIR /usr/local 0B
c501f78a3a69 6 minutes ago /bin/sh -c #(nop) ENV MYPATH=/usr/local 0B
df03ba0a2faf 6 minutes ago /bin/sh -c #(nop) MAINTAINER xiaoming<xxx@q… 0B
300e315adb2f 7 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 7 months ago /bin/sh -c #(nop) LABEL org.label-schema.sc… 0B
<missing> 7 months ago /bin/sh -c #(nop) ADD file:bd7a2aed6ede423b7… 209MB
The difference between CMD and ENTRYPOINT
CMD `is replaced` by arguments given on the docker run command line
ENTRYPOINT `is not replaced`; run-time arguments are appended after it
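A minimal sketch of the difference, using two hypothetical throwaway images (the names cmd-demo and entry-demo are made up for illustration):

```dockerfile
# cmd-demo: CMD is only a default; arguments given to docker run replace it wholesale
FROM centos
CMD ["ls","-a"]
```

```dockerfile
# entry-demo: ENTRYPOINT is fixed; arguments given to docker run are appended to it
FROM centos
ENTRYPOINT ["ls","-a"]
```

Running `docker run cmd-demo -l` fails, because `-l` replaces `ls -a` entirely and is not a command on its own; running `docker run entry-demo -l` appends the flag, so the container effectively executes `ls -a -l`.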
Publishing a Docker image to Alibaba Cloud Container Registry
Log in to Alibaba Cloud
Search for Container Registry (容器镜像服务) and activate it
Create a personal-edition registry instance
Create an image repository
Push our centos image to the Alibaba Cloud registry
[root@localhost ~]# docker login --username=xxxx registry.cn-shanghai.aliyuncs.com
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@localhost ~]# docker tag 8366530eb6a6 registry.cn-shanghai.aliyuncs.com/xiaoming1998/centos:1.0
[root@localhost ~]# docker push registry.cn-shanghai.aliyuncs.com/xiaoming1998/centos:1.0
The push refers to repository [registry.cn-shanghai.aliyuncs.com/xiaoming1998/centos]
b4c78064adb6: Pushed
08a329822dcc: Pushed
2653d992f4ef: Pushing [===============================================> ] 199.2MB/209.3MB
Push succeeded.
Summary
The overall workflow is shown in the diagram below.
Docker networking
Understanding docker0
Test: start a tomcat container and see how the host's network information changes
[root@localhost ~]# docker ps -a
779cc476b4c5 tomcat:8.5 "catalina.sh run" 5 days ago Exited (143) 5 days ago tomcat1
[root@localhost ~]# docker start 779
779
[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b8:69:c5 brd ff:ff:ff:ff:ff:ff
inet 10.88.61.222/22 brd 10.88.63.255 scope global noprefixroute enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::b5aa:708d:ee30:9c9/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:9f:2e:63:b9 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:9fff:fe2e:63b9/64 scope link
valid_lft forever preferred_lft forever
5: veth7e1bf11@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 12:79:d4:da:d4:1c brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::1079:d4ff:feda:d41c/64 scope link
valid_lft forever preferred_lft forever
Summary
Every time we start a container, Docker assigns it an IP address. As soon as Docker is installed, the host gains a bridged network interface named docker0, implemented with the veth-pair technique.
- Start a second container, tomcat2, and another interface pair appears
- The interfaces a container brings with it always come in pairs
- A veth-pair is a pair of virtual network interfaces, created together and connected to each other
- veth-pairs act as the bridge linking the various virtual network devices
Test: can tomcat1 and tomcat2 ping each other?
[root@localhost ~]# docker exec -it tomcat1 ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.080 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.075 ms
Conclusion: containers can ping each other.
Container-to-container traffic flow
Conclusion: tomcat1 and tomcat2 share the same virtual router, the docker0 bridge.
When no network is specified, every container is routed through docker0, and Docker assigns it an available IP address by default.
Summary
Docker networking is built on Linux bridging: on the host, docker0 is the bridge for Docker containers.
All of Docker's network interfaces are virtual, because virtual interfaces forward traffic efficiently.
--link
How can containers reach each other without using IP addresses directly?
[root@localhost ~]# docker run -d --name tomcat3 -p 8003:8080 --link tomcat1 tomcat:8.5
807c8caeb2718565ed5c6a10c7593af1bfbc3d657922c32d844f9cea1f083bd7
[root@localhost ~]# docker exec -it 807 ping tomcat1
PING tomcat1 (172.17.0.2) 56(84) bytes of data.
64 bytes from tomcat1 (172.17.0.2): icmp_seq=1 ttl=64 time=0.141 ms
64 bytes from tomcat1 (172.17.0.2): icmp_seq=2 ttl=64 time=0.218 ms
64 bytes from tomcat1 (172.17.0.2): icmp_seq=3 ttl=64 time=0.069 ms
With --link we can ping another container directly by its name.
The principle is as follows:
[root@localhost ~]# docker exec -it 807 /bin/bash
root@807c8caeb271:/usr/local/tomcat# cat /etc/hosts
172.17.0.2 tomcat1 779cc476b4c5
172.17.0.3 807c8caeb271
Under the hood, it is still the IP address being accessed.
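What --link does can be sketched with a throwaway hosts-style file (the temp file and entries below are illustrative, not a container's real /etc/hosts):

```shell
# Simulate the line that --link tomcat1 writes into the new container's /etc/hosts
hosts=$(mktemp)
echo "172.17.0.2  tomcat1  779cc476b4c5" >> "$hosts"

# "Resolving" the name is then just a scan of this static file
awk '$2 == "tomcat1" {print $1}' "$hosts"   # prints 172.17.0.2
```

Because the entry is a static line in a file, nothing updates it if tomcat1 restarts with a new IP, which is exactly the staleness problem described next.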
This brings a serious problem: containers are supposed to be isolated from one another, yet --link connects them with no isolation, which is bad for security. Moreover, --link works by editing the /etc/hosts file; every use of --link modifies that file to achieve connectivity.
Its use is strongly discouraged nowadays.
Editing /etc/hosts has other drawbacks as well. In environments where containers start and stop frequently, race conditions can corrupt the file and cause lookup failures. And some applications, once they see a name resolved from /etc/hosts, assume the file is static and cache the result instead of querying again; after a container restarts with a new IP, they keep using the stale entry and can no longer reach the right container.
The problem with docker0: it does not support access by container name!
The answer is to create networks with docker network.
Custom networks
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
25463fc507cb bridge bridge local
77b421abcbd9 host host local
c2509facb374 none null local
The available network drivers are:
- bridge: a bridged network (the default; custom networks are usually bridged too)
- host: share the network stack with the host
- none: no network configured
- container: share another container's network namespace (rarely used, very limited)
When a container is created without network options, it uses the bridge network by default.
# Starting a container directly implies --net bridge
docker run -d -p 8080:8080 --name tomcat5 tomcat
docker run -d -p 8080:8080 --net bridge --name tomcat5 tomcat
# The two commands are equivalent
We can also define a custom network of our own:
[root@localhost ~]# docker network create --help
[root@localhost ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
b7ade70af5bc15f5948209eff784fe00302a7f83b86f27bd79ca0b9ef0d990a0
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
25463fc507cb bridge bridge local
77b421abcbd9 host host local
b7ade70af5bc mynet bridge local
c2509facb374 none null local
[root@localhost ~]# docker network inspect b7ade70af5bc
Here we can see the address range of our custom network.
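A side note on the `--subnet 192.168.0.0/16` flag used above: a /16 prefix leaves 16 host bits, so this single network can address tens of thousands of containers. The arithmetic, as a quick check:

```shell
# A 32-bit IPv4 address minus a 16-bit prefix leaves 16 host bits;
# subtract the network and broadcast addresses for the usable count
echo $(( (1 << (32 - 16)) - 2 ))   # prints 65534
```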
Next, create two containers on the custom network; even without --link they can ping each other by name.
[root@localhost ~]# docker run -d --net mynet -p 8004:8080 --name tomcat4 tomcat:8.5
5bd4352e4767b51cf8536647e994e7f184aff5cdaa157e9b8e8fac1dca44653b
[root@localhost ~]# docker run -d --net mynet -p 8005:8080 --name tomcat5 tomcat:8.5
34b8303064eee26fe3b5bbd8e09f95a820200b5554654a10c612f7e18c1b503c
[root@localhost ~]# docker exec -it tomcat5 ping tomcat4
PING tomcat4 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat4.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.069 ms
64 bytes from tomcat4.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.066 ms
64 bytes from tomcat4.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from tomcat4.mynet (192.168.0.2): icmp_seq=4 ttl=64 time=0.067 ms
On a custom network, Docker maintains the container-name-to-address mapping for us; this is the recommended way to wire up containers.
Benefits of custom networks:
- redis: run the cluster on its own network to keep it isolated, safe, and healthy
- mysql: likewise, run the cluster on its own network to keep it isolated, safe, and healthy
Different networks are isolated from each other, but they can also be connected.
Connecting networks
Create two tomcat containers: one on the default bridge (docker0), the other on the custom bridge network.
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
25463fc507cb bridge bridge local
77b421abcbd9 host host local
b7ade70af5bc mynet bridge local
c2509facb374 none null local
[root@localhost ~]# docker run -d -p 8001:8080 --name tomcat01 tomcat:8.5
7c915d34abb0c59631cd92418942dafa5255a9a79111038ecfd60264d5aecc47
[root@localhost ~]# docker run -d -p 8082:8080 --net mynet --name tomcat-mynet-02 tomcat:8.5
4e26621406c49fa81037710f18f0983a03ee23f65812716c6e15d3bd55eef61a
[root@localhost ~]# docker inspect tomcat01|grep "IPAddress"
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",
[root@localhost ~]# docker inspect tomcat-mynet-02|grep "IPAddress"
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "192.168.0.2",
[root@localhost ~]# docker exec -it tomcat1 ping tomcat-mynet-02
Error: No such container: tomcat1
我们发现他们两个容器 ip地址都在一个网段中肯定ping不同,那么怎么让这两个容器网络互连呢??
解决方案如下
容器之间将docker0 和 mynet 连通是不可能的。只能将容器连通到mynet (自定义网络)
使用 docker network
# 将mynet 与 tomcat01连通
[root@localhost ~]# docker network connect mynet tomcat01
[root@localhost ~]# docker network inspect mynet
[
{
"Name": "mynet",
"Id": "b7ade70af5bc15f5948209eff784fe00302a7f83b86f27bd79ca0b9ef0d990a0",
"Created": "2021-08-02T17:03:52.835502895+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"7c915d34abb0c59631cd92418942dafa5255a9a79111038ecfd60264d5aecc47": {
"Name": "tomcat01",
"EndpointID": "7c0d2ffa1f58632e5a7c2c9eec488a8807b7b55920a67fd75a2b70b8409c72be",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
},
"871beec4e8b4c81ae9d6b58e7e21f470a23d04efa1232caabbb1bded83a1afcf": {
"Name": "tomcat-mynet-02",
"EndpointID": "4d3949a846f64177e9b0a8f36a3c2b56f148a15e55aac19dca06afdb02ec2b14",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
tomcat01 is now attached to the mynet custom network, where tomcat-mynet-02 already lives, so the two containers can ping each other.
Connecting tomcat01 to mynet effectively places tomcat01 on the mynet network.
The container now has two IP addresses, one per network! Compare an Alibaba Cloud server: one public IP and one private IP.
Finally, ping between the containers to verify:
## tomcat01 can now reach tomcat-mynet-02
[root@localhost ~]# docker exec -it tomcat01 ping tomcat-mynet-02
PING tomcat-mynet-02 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-mynet-02.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from tomcat-mynet-02.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.065 ms
## tomcat02 sits on the default bridge and was never connected, so it still cannot ping by name
[root@localhost ~]# docker run -d -p 8002:8080 --name tomcat02 tomcat:8.5
df02c4c59f80ee45dcae007523577c1f8ec3f8376ce393d0e33b3b583525311e
[root@localhost ~]# docker exec -it tomcat02 ping tomcat-mynet-02
ping: tomcat-mynet-02: Name or service not known
Summary
Whenever a container needs to reach services on another network, connect it with docker network connect.
Hands-on: deploying a Redis cluster
Install and configure a Redis cluster with 3 masters and 3 replicas.
[root@localhost ~]# docker network create redis --subnet 172.38.0.0/16 --gateway 172.38.0.1
3f39b14d72a74e9d0279af315c21285017da974af91f3c8eecdbc9f4c0df9446
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
25463fc507cb bridge bridge local
77b421abcbd9 host host local
b7ade70af5bc mynet bridge local
c2509facb374 none null local
3f39b14d72a7 redis bridge local
## Script that generates the six redis config files
vi create-redis.sh
#!/bin/bash
for port in $(seq 1 6);
do
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
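The heredoc in the script above relies on shell expansion: `${port}` is substituted inside `cat << EOF`, so each node gets its own `cluster-announce-ip`. A minimal sketch of the same expansion, writing to a temporary directory instead of /mydata so it can be tried without touching the real layout:

```shell
# Generate miniature per-node configs under a temp dir (paths are illustrative)
base=$(mktemp -d)
for port in $(seq 1 6); do
  mkdir -p "${base}/node-${port}/conf"
  cat << EOF > "${base}/node-${port}/conf/redis.conf"
port 6379
cluster-announce-ip 172.38.0.1${port}
EOF
done

# Each node ends up announcing a distinct address, 172.38.0.11 through 172.38.0.16
grep -h cluster-announce-ip "${base}"/node-*/conf/redis.conf
```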
## Script that starts the six redis containers
vi start_redis-cluster.sh
#!/bin/bash
# docker run -d already prints the container ID, so no echo wrapper is needed
for port in $(seq 1 6);
do
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done
## The cluster-create command (run it later, from inside one of the containers)
redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
## Generate the six config files
[root@localhost create-redis]# sh create-redis.sh
## Start the six redis containers
[root@localhost create-redis]# sh start_redis-cluster.sh
[root@localhost create-redis]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4385a3fc4909 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 10 seconds ago Up 9 seconds 0.0.0.0:6376->6379/tcp, :::6376->6379/tcp, 0.0.0.0:16376->16379/tcp, :::16376->16379/tcp redis-6
b780c7f7454f redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 10 seconds ago Up 9 seconds 0.0.0.0:6375->6379/tcp, :::6375->6379/tcp, 0.0.0.0:16375->16379/tcp, :::16375->16379/tcp redis-5
e5ade5326647 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 11 seconds ago Up 9 seconds 0.0.0.0:6374->6379/tcp, :::6374->6379/tcp, 0.0.0.0:16374->16379/tcp, :::16374->16379/tcp redis-4
756e4522721e redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 11 seconds ago Up 10 seconds 0.0.0.0:6373->6379/tcp, :::6373->6379/tcp, 0.0.0.0:16373->16379/tcp, :::16373->16379/tcp redis-3
f6362ef49223 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 11 seconds ago Up 10 seconds 0.0.0.0:6372->6379/tcp, :::6372->6379/tcp, 0.0.0.0:16372->16379/tcp, :::16372->16379/tcp redis-2
93e50d44cd5e redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 12 seconds ago Up 11 seconds 0.0.0.0:6371->6379/tcp, :::6371->6379/tcp, 0.0.0.0:16371->16379/tcp, :::16371->16379/tcp redis-1
## Enter any one of the containers and create the cluster
[root@localhost create-redis]# docker exec -it redis-1 /bin/sh
data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: f5edbc9de24decd0e07fffb4ae74dfa79e760b8c 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
M: 7ae4523a49cdae08d6dfa0446597fa0254e87b8d 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
M: e940de19cc91472f871a93c943bf492fd6015420 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
S: 3c922c8869335c30db1b582ddda8d536cfbdbd97 172.38.0.14:6379
replicates e940de19cc91472f871a93c943bf492fd6015420
S: a87a4feb78c76b0a405babf2ae661e759c0151b6 172.38.0.15:6379
replicates f5edbc9de24decd0e07fffb4ae74dfa79e760b8c
S: 29f32bd03c0c45282fdba39bf8d00e5c9a4ce883 172.38.0.16:6379
replicates 7ae4523a49cdae08d6dfa0446597fa0254e87b8d
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: f5edbc9de24decd0e07fffb4ae74dfa79e760b8c 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: a87a4feb78c76b0a405babf2ae661e759c0151b6 172.38.0.15:6379
slots: (0 slots) slave
replicates f5edbc9de24decd0e07fffb4ae74dfa79e760b8c
M: e940de19cc91472f871a93c943bf492fd6015420 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 7ae4523a49cdae08d6dfa0446597fa0254e87b8d 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 3c922c8869335c30db1b582ddda8d536cfbdbd97 172.38.0.14:6379
slots: (0 slots) slave
replicates e940de19cc91472f871a93c943bf492fd6015420
S: 29f32bd03c0c45282fdba39bf8d00e5c9a4ce883 172.38.0.16:6379
slots: (0 slots) slave
replicates 7ae4523a49cdae08d6dfa0446597fa0254e87b8d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
## Inspect the cluster
/data # redis-cli -c
127.0.0.1:6379> cluster nodes
a87a4feb78c76b0a405babf2ae661e759c0151b6 172.38.0.15:6379@16379 slave f5edbc9de24decd0e07fffb4ae74dfa79e760b8c 0 1627901030093 5 connected
e940de19cc91472f871a93c943bf492fd6015420 172.38.0.13:6379@16379 master - 0 1627901028582 3 connected 10923-16383
7ae4523a49cdae08d6dfa0446597fa0254e87b8d 172.38.0.12:6379@16379 master - 0 1627901029000 2 connected 5461-10922
3c922c8869335c30db1b582ddda8d536cfbdbd97 172.38.0.14:6379@16379 slave e940de19cc91472f871a93c943bf492fd6015420 0 1627901030897 4 connected
29f32bd03c0c45282fdba39bf8d00e5c9a4ce883 172.38.0.16:6379@16379 slave 7ae4523a49cdae08d6dfa0446597fa0254e87b8d 0 1627901030000 6 connected
f5edbc9de24decd0e07fffb4ae74dfa79e760b8c 172.38.0.11:6379@16379 myself,master - 0 1627901029000 1 connected 0-5460
## Test a write
127.0.0.1:6379> set a 111
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
172.38.0.13:6379> get a
With that, our Redis cluster is up and running!
Packaging a Spring Boot microservice as a Docker image
- Create the project and start it
Write a controller
Start it and test:
hello,xiaoming
- Package the project into a jar
- Write the Dockerfile
FROM java:8
COPY *.jar /app.jar
CMD ["--server.port=8080"]
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
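One note on this Dockerfile: the order of instructions in the file is not the run-time execution order. EXPOSE is pure metadata, and because ENTRYPOINT is written in exec form, the CMD line, wherever it appears, only supplies default arguments that get appended to the ENTRYPOINT. The same file in a more conventional order (identical behavior; a sketch, not a required change):

```dockerfile
FROM java:8
COPY *.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
# CMD here is just the default argument list for the ENTRYPOINT,
# so the container runs: java -jar /app.jar --server.port=8080
CMD ["--server.port=8080"]
```

Those defaults can be overridden at run time; for example, `docker run -d -p 9090:9090 xiaoming --server.port=9090` would replace the CMD arguments while keeping the ENTRYPOINT.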
- Upload both files to the server, then build and run
[root@localhost idea]# ll
total 16932
-rw-r--r--. 1 root root 17332860 Aug 2 19:10 demo-0.0.1-SNAPSHOT.jar
-rw-r--r--. 1 root root 108 Aug 2 19:10 Dockerfile
[root@localhost idea]# docker build -t xiaoming .
[root@localhost idea]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
xiaoming latest 72e570a74418 13 minutes ago 661MB
[root@localhost idea]# docker run -d -p 8080:8080 xiaoming
// Test
[root@localhost idea]# curl localhost:8080/hello
hello,xiaoming
Done!
From getting started... to giving up!