Docker: Getting Started and Practice — Study Notes
Preface
In day-to-day work I regularly need to deliver services as Docker images, so I studied Docker systematically and recorded these notes for later review, hoping they also help anyone interested in the topic. The notes summarize a video course by the Bilibili uploader 遇见狂神说, to whom I am sincerely grateful; interested readers may want to follow him.
Course link
Overview
Docker is an open-source project written in Go.
It packs applications into boxes (containers), each isolated from the others.
Key idea: the isolation mechanism squeezes the most out of a server.
DevOps (development + operations)
Advantages:
- Faster application delivery and deployment
- Easy upgrades and scaling
- Simplified operations and maintenance
- Efficient use of compute resources
The architecture is shown below:
Image taken from the web; will be removed on request!
How Docker works under the hood
Client-server architecture:
The Docker daemon runs on the host and is accessed from the client over a socket.
When the server receives an instruction from the client, it executes that command.
The principle is illustrated below:
Installation
#Check the system kernel
uname -r
cat /etc/os-release
#Remove old versions
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
#Install the yum utilities needed for setup
sudo yum install -y yum-utils
#Configure the image repository
#Option 1: official (overseas) source
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
#Option 2: domestic mirror (recommended)
yum-config-manager --add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#Refresh the yum package index
yum makecache fast
#Install Docker (docker-ce = Community Edition, docker-ee = Enterprise Edition)
yum install -y docker-ce docker-ce-cli containerd.io
#Start the daemon
sudo systemctl start docker
#Verify the installation
sudo docker run hello-world
#List images
docker images
#Uninstalling Docker
#Remove the packages
sudo yum remove docker-ce docker-ce-cli containerd.io
#Delete Docker data
sudo rm -rf /var/lib/docker
#Aliyun registry mirror (console: https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors)
#Substitute your own mirror address below
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://xxx.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
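After the restart, it can be worth confirming that the daemon actually picked up the mirror; a minimal check (assuming the English-locale `docker info` output, where the section is titled "Registry Mirrors"):

```shell
# The section header printed by `docker info` when mirrors are configured
SECTION="Registry Mirrors"
# Verify the daemon picked up the mirror from /etc/docker/daemon.json
docker info | grep -A 1 "$SECTION"
```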
Basic commands
Listed here are a few help commands plus the official documentation, for quick reference.
docker version
docker info
docker --help
Image taken from the web; will be removed on request!
Core concepts
Image (image)
- Concept
  - A template from which container services can be created.
  - Images are read-only (the image layers); when a container starts, a new writable layer (the container layer) is loaded on top of the image.
- Composition
  - Built from stacked filesystem layers, i.e. UnionFS (Union File System): a layered, lightweight, high-performance filesystem.
  - bootfs: once the bootloader and kernel finish loading, the kernel lives in memory and bootfs is unmounted.
  - rootfs: contains the standard directory layout and files.
- Try it out
Commit a container as a new image:
- docker commit -m="commit message" -a="author" container-id target-image:[tag]
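A minimal sketch of the commit workflow (the container name, author, and image tag below are made up):

```shell
# Run a container and change something inside it (names are hypothetical)
docker run -itd --name mytomcat tomcat
docker exec mytomcat cp -r /usr/local/tomcat/webapps.dist/. /usr/local/tomcat/webapps

# Commit the modified container as a new image
TARGET_IMAGE="tomcat-with-webapps:1.0"
docker commit -m="copy default webapps" -a="yh" mytomcat "$TARGET_IMAGE"

# The commit shows up as a new layer on top of the base image
docker history "$TARGET_IMAGE"
docker images
```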
Container (container)
- Concept
  - One application, or a group of applications, running in isolation.
  - Can be thought of as a stripped-down Linux system.
- Diagram: example of container port exposure
- Exercises
#Exercise 1: deploy nginx
#Exercise 2: deploy tomcat
#Problems found: 1. many Linux commands are missing 2. webapps is empty
#Cause: the Aliyun image is a minimal image, so everything non-essential has been stripped out
#Fix => enter the container and copy the bundled defaults into webapps:
cp -r webapps.dist/* webapps
#Exercise 3: deploy elasticsearch
#es: exposes many ports, is memory-hungry, and its data should live in a safe directory (a mounted volume)
#--net somenetwork  attach to a network
#Start elasticsearch
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.8.11
#Result: the Linux host froze; checking memory (docker stats) showed the VM's memory almost exhausted
#Fix => limit the heap via startup parameters:
docker run -d --name elasticsearch02 -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx512m" elasticsearch:6.8.11
Repository (repository)
- Concept
  A place to store and fetch images
- Private registries
- Public registries
  - Docker Hub (hosted overseas by default), which is why a registry mirror is usually configured
Container data volumes (volume)
- Concept
  - Persistence and synchronization for containers; lets containers share data
  - Core idea: directory mounting
  - If no host directory is specified, the default mount path is /var/lib/docker/volumes
- The Docker filesystem
Image taken from the web; will be removed on request!
- Types
  - Named mount
    docker run -d -P --name nginx01 -v havename:/etc/nginx nginx
  - Anonymous mount
    docker run -d -P --name nginx02 -v /etc/nginx nginx
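The shape of the `-v` argument determines the mount type; a quick sketch to tell them apart (container names here are made up):

```shell
# -v volume-name:container-path => named mount
docker run -d -P --name nginx-named -v havename:/etc/nginx nginx
# -v container-path             => anonymous mount (volume gets a random hash name)
docker run -d -P --name nginx-anon -v /etc/nginx nginx
# -v /host/path:container-path  => bind mount (no volume object is created)
docker run -d -P --name nginx-bind -v /home/nginx/conf:/etc/nginx nginx

# Named volumes list with readable names, anonymous ones as long hashes
docker volume ls
# Locate a named volume on the host (default root: /var/lib/docker/volumes)
VOLUME_NAME="havename"
docker volume inspect "$VOLUME_NAME"
```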
- Approach 1
docker run -itd -v host-dir:container-dir image-name
-d run in the background
-p port mapping
-v volume mount
-e environment variables
--name container name
Example:
#Mount a MySQL container's config and data directories onto the host
docker run -itd -p 3310:3306 \
-v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=root --name mysql01 mysql:5.7
#Test from a local client; the IP is the server's address
mysql -h 192.168.2.20 -P 3310 -uroot -proot
# Inspect volumes
docker volume ls
docker inspect volume 卷名
- Approach 2
Mount volumes directly at image build time via a dockerfile
Example:
#Write the dockerfile
vim dockerfile1
FROM centos
VOLUME ["volume01","volume02"]
CMD echo "----end-----"
CMD /bin/bash
#Build the image
docker build -f dockerfile1 -t yh/centos .
#Run a container
docker run -itd --name mycentos yh/centos /bin/bash
- Sharing data volumes between containers
--volumes-from
Example:
docker run -itd --name parentcentos yh/centos /bin/bash
docker run -itd --name childcentos --volumes-from parentcentos yh/centos
dockerfile
The build file for a Docker image: a command script that defines every step of the build.
Build steps:
- Write a dockerfile
- docker build to build it into an image
- docker run to run the image
- docker push to publish the image (Docker Hub, Aliyun registry)
Basics:
- Instructions are written in uppercase
- Executed from top to bottom
- Each instruction creates and commits a new image layer
- A dockerfile is developer-facing
dockerfile instructions:
#Base image
FROM
#Image author: name + email
MAINTAINER
#Commands to run while building the image
RUN
#Add content (archives are auto-extracted)
ADD
#Working directory inside the image
WORKDIR
#Directories to mount as volumes
VOLUME
#Ports to expose
EXPOSE
#Command run when the container starts; only the last CMD takes effect, and it can be overridden
CMD
#Command run when the container starts; run-time arguments are appended rather than replacing it
ENTRYPOINT
#Runs when this image is used as the base of another build; a trigger instruction
ONBUILD
#Like ADD, copies files into the image (without extraction)
COPY
#Set environment variables at build time
ENV
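The CMD vs ENTRYPOINT difference above is easiest to see side by side; a sketch with made-up image tags (cmdtest, entrypointtest):

```shell
# Two one-line dockerfiles differing only in the final instruction
cat > dockerfile-cmd <<'EOF'
FROM centos
CMD ["ls","-a"]
EOF
cat > dockerfile-entrypoint <<'EOF'
FROM centos
ENTRYPOINT ["ls","-a"]
EOF

docker build -f dockerfile-cmd -t cmdtest .
docker build -f dockerfile-entrypoint -t entrypointtest .

# CMD: an argument on `docker run` REPLACES the whole command
docker run cmdtest -l          # error: executable "-l" not found
# ENTRYPOINT: the argument is APPENDED to the configured command
docker run entrypointtest -l   # effectively runs "ls -a -l"
```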
Example (building a tomcat image):
#Prepare the jdk and tomcat tarballs
#Write the dockerfile
FROM centos
MAINTAINER name<xxx710@qq.com>
COPY readme.md /usr/local/readme.md
ADD openjdk-14.0.2_linux-x64_bin.tar.gz /usr/local
ADD apache-tomcat-9.0.37.tar.gz /usr/local
RUN yum -y install vim
ENV MYPATH /usr/local
WORKDIR $MYPATH
ENV JAVA_HOME /usr/local/jdk-14.0.2
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.37
ENV CATALINA_BASH /usr/local/apache-tomcat-9.0.37
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
EXPOSE 8080
CMD /usr/local/apache-tomcat-9.0.37/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.37/logs/catalina.out
#Build the image
docker build -t diytomcat .
#Run a container
docker run -d -p 9090:8080 --name huantomcat -v /home/huan/build/tomcat/test:/usr/local/apache-tomcat-9.0.37/webapps/test \
-v /home/huan/build/tomcat/logs/:/usr/local/apache-tomcat-9.0.37/logs diytomcat
#Test access
#Publish a project (thanks to the volume mount, you can write the project locally in the test directory and it is published directly)
mkdir WEB-INF
cd WEB-INF/
vim web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
version="2.5">
</web-app>
#Create index.jsp in the test directory
<html>
<head><title>Hello World</title></head>
<body>
Hello World!<br/>
<%
out.println("lalalalala ");
%>
</body>
</html>
#Verify access at: http://host-address:9090/test/
Publishing your own images (images are pushed layer by layer, too)
- Option 1: publish to the official Docker Hub
- Register an account at https://hub.docker.com/
- Log in:
[root@localhost ~]# docker login --help
Usage: docker login [OPTIONS] [SERVER]
Log in to a Docker registry
Options:
-p, --password string Password
--password-stdin Take the password from stdin
-u, --username string Username
- Push the image from the server with docker push
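The push requires first re-tagging the image with your registry account name; a sketch (the username below is a placeholder):

```shell
# Docker Hub repositories are named <username>/<image>:<tag>
USERNAME="yourname"                 # placeholder account
TARGET="$USERNAME/diytomcat:1.0"

docker login -u "$USERNAME"
# Re-tag the local image with the account-qualified name
docker tag diytomcat "$TARGET"
# Push; layers are uploaded one by one
docker push "$TARGET"
```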
- Option 2: publish to Aliyun Container Registry
- Log in to Aliyun
- Open the Container Registry service
- Create a namespace
- Create an image repository
- Open the repository you created and follow the official Aliyun instructions there
Docker network management (network)
- How Docker networking works
- Observing and summarizing Docker's default networking
  - Experiment:
[root@localhost ~]# docker exec -it c8efc87b8bb9 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
32: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
#Remove all containers
docker rm -f $(docker ps -aq)
#Remove all images
docker rmi -f $(docker images -aq)
#After starting two containers we find: 1) the host can ping into the containers 2) containers can ping each other
- Summary:
1) Every time a Docker container starts, Docker assigns it an IP.
As long as Docker is installed on the server, there is a docker0 network interface.
It works in bridge mode, using veth-pair technology.
A veth pair is a pair of virtual devices that always appear in twos: one end plugs into the protocol stack, and the two ends are directly connected to each other.
Because of this property, veth pairs act as the bridges linking the various virtual network devices.
2) Containers all share one bridge, docker0.
If no network is specified, containers are routed via docker0, and Docker assigns each one a default available IP.
3) Docker uses Linux bridging; docker0 on the host is the bridge for Docker containers.
4) All Docker network interfaces are virtual, and virtual forwarding (e.g. moving files over the internal network) is efficient.
5) Once a container's network is deleted, the corresponding veth pair is gone too.
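The veth pairs can be observed directly: each running container adds one `veth…@if…` interface on the host, paired with `eth0@if…` inside the container (a sketch; interface indices vary, and `tomcat01` is assumed to be running):

```shell
BRIDGE="docker0"
# Host side: docker0 plus one vethXXXX@ifNN interface per running container
ip addr | grep -E "$BRIDGE|veth"
# Container side: eth0@ifNN, whose index NN pairs with a host-side veth
docker exec -it tomcat01 ip addr | grep eth0
# List the interfaces attached to the docker0 bridge
ip link show master "$BRIDGE"
```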
- Linking containers
- --link
Experiment:
docker run -d -P --name tomcat01 tomcat
docker run -d -P --name tomcat02 tomcat
#Test 1:
[root@localhost ~]# docker exec -it tomcat02 ping tomcat01
ping: tomcat01: Name or service not known
#Test 2 (after --link, the forward direction pings fine):
docker run -d -P --name tomcat03 --link tomcat02 tomcat
[root@localhost ~]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.095 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.135 ms
#Test 3 (the reverse direction does not resolve):
[root@localhost ~]# docker exec -it tomcat02 ping tomcat03
ping: tomcat03: Name or service not known
#Discovering how --link works
[root@localhost ~]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 tomcat02 c89d8dceed50
172.17.0.4 e4f57c39bc7b
- --link summary:
  --link merely adds an entry to the container's hosts file: 172.17.0.3 tomcat02 c89d8dceed50
  Using --link is no longer recommended.
  The docker0 limitation: it does not support access by container name.
- Custom networks
  docker network create
  Custom networks do support access by container name.
Experiment:
#Help for creating custom networks
docker network create --help
#Create a custom network
[root@localhost ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
a3c3032a878061116f40485cced3304012095b4a0a0a3a7bfb26675b0b95e18b
#List Docker networks
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
ca795b353a1b bridge bridge local
8f5d5ce7b4bd host host local
a3c3032a8780 mynet bridge local
c8b0c7f86db9 none null local
#Inspect a specific network
[root@localhost ~]# docker network inspect mynet
[
{
"Name": "mynet",
"Id": "a3c3032a878061116f40485cced3304012095b4a0a0a3a7bfb26675b0b95e18b",
"Created": "2020-08-23T17:06:00.767433079+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
#Start containers on the custom network
[root@localhost ~]# docker run -d -P --name tomcat-net-01 --net mynet tomcat
16f0577458359e0d7987f04bb93b3090bc871b1b1ee189a79fa735ec8be6be1b
[root@localhost ~]# docker run -d -P --name tomcat-net-02 --net mynet tomcat
28cc8f241a1740421f80c3b682364fdd155ff527700bfbaea80b3f867dd3b1ce
#Inspect the mynet network again
root@localhost ~]# docker network inspect mynet
"Containers": {
"16f0577458359e0d7987f04bb93b3090bc871b1b1ee189a79fa735ec8be6be1b": {
"Name": "tomcat-net-01",
"EndpointID": "6705f838d9ff0a16cb1cf93d0845318240276df9c068c0ec8823a894d18a33df",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
},
"28cc8f241a1740421f80c3b682364fdd155ff527700bfbaea80b3f867dd3b1ce": {
"Name": "tomcat-net-02",
"EndpointID": "0386a0e91947128b0db38a7cb10e5e561f8f08925f4850ba1dda925e2a815f52",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
}
},
#Result: they can ping each other, including by container name
[root@localhost ~]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.159 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.131 ms
- Connecting networks
docker network connect
Experiment:
#Help for docker network connect
docker network connect --help
#List the running containers
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
28cc8f241a17 tomcat "catalina.sh run" 9 minutes ago Up 9 minutes 0.0.0.0:32772->8080/tcp tomcat-net-02
16f057745835 tomcat "catalina.sh run" 9 minutes ago Up 9 minutes 0.0.0.0:32771->8080/tcp tomcat-net-01
e4f57c39bc7b tomcat "catalina.sh run" 27 minutes ago Up 27 minutes 0.0.0.0:32770->8080/tcp tomcat03
c89d8dceed50 tomcat "catalina.sh run" 30 minutes ago Up 30 minutes 0.0.0.0:32769->8080/tcp tomcat02
9d92846c6354 tomcat "catalina.sh run" 30 minutes ago Up 30 minutes 0.0.0.0:32768->8080/tcp tomcat01
#Connect tomcat01 to mynet
[root@localhost ~]# docker network connect mynet tomcat01
#Inspect mynet
[root@localhost ~]# docker network inspect mynet
"Containers": {
"16f0577458359e0d7987f04bb93b3090bc871b1b1ee189a79fa735ec8be6be1b": {
"Name": "tomcat-net-01",
"EndpointID": "6705f838d9ff0a16cb1cf93d0845318240276df9c068c0ec8823a894d18a33df",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
},
"28cc8f241a1740421f80c3b682364fdd155ff527700bfbaea80b3f867dd3b1ce": {
"Name": "tomcat-net-02",
"EndpointID": "0386a0e91947128b0db38a7cb10e5e561f8f08925f4850ba1dda925e2a815f52",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
},
"9d92846c6354a3338228f8126963c91e667607bca7d9fa5a9925d3da8d746ac1": {
"Name": "tomcat01",
"EndpointID": "9331931127b9d8b84dbefd211bd89b1c3f27c3ae8e594510688ecbfc83c74d30",
"MacAddress": "02:42:c0:a8:00:04",
"IPv4Address": "192.168.0.4/16",
"IPv6Address": ""
}
}
#Ping tomcat-net-01 from tomcat01
[root@localhost ~]# docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.096 ms
Visualization
- portainer
A graphical web UI for managing Docker
docker run -d -p 8088:9000 \
--restart=always -v /var/run/docker.sock:/var/run/docker.sock --privileged=true portainer/portainer
- Rancher (CI/CD)
Hands-on practice
- Redis cluster (sharding + high availability + load balancing)
Example experiment:
#Create the redis network
[root@localhost ~]# docker network create redis --subnet 172.38.0.0/16
cd9ffbf27ae73e7c92e59f7533024d7b39b601fa5c65be1f390e192d82ba0cad
#List Docker networks
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
ca795b353a1b bridge bridge local
8f5d5ce7b4bd host host local
a3c3032a8780 mynet bridge local
c8b0c7f86db9 none null local
cd9ffbf27ae7 redis bridge local
#Script to generate the redis configs
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat <<EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
#Start the containers (an automated startup script could also be written)
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
-v /mydata/redis/node-2/data:/data \
-v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
-v /mydata/redis/node-3/data:/data \
-v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
-v /mydata/redis/node-4/data:/data \
-v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
-v /mydata/redis/node-5/data:/data \
-v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Enter one of the redis nodes
[root@localhost /]# docker exec -it redis-1 /bin/sh
/data #
#List the files in the current directory
/data # ls
appendonly.aof nodes.conf
#Create the cluster
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13
:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 74f72cc1a745a29967beba5795993956116f2fc6 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
M: 087b5ce51ec4e7146aa811548ae78be0fbe012a7 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
M: 3ba3971fa44307dcb311676a853a7e690eb028ee 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
S: 3c4f9464ec6f26b185a1fc72356e6e86ce463c27 172.38.0.14:6379
replicates 3ba3971fa44307dcb311676a853a7e690eb028ee
S: 30700df1e229d70a754a2d069c8d6b0380d867e5 172.38.0.15:6379
replicates 74f72cc1a745a29967beba5795993956116f2fc6
S: 3916ef7a5f9adc99b53ec0eecd87ff44ad925d62 172.38.0.16:6379
replicates 087b5ce51ec4e7146aa811548ae78be0fbe012a7
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 74f72cc1a745a29967beba5795993956116f2fc6 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 3916ef7a5f9adc99b53ec0eecd87ff44ad925d62 172.38.0.16:6379
slots: (0 slots) slave
replicates 087b5ce51ec4e7146aa811548ae78be0fbe012a7
S: 3c4f9464ec6f26b185a1fc72356e6e86ce463c27 172.38.0.14:6379
slots: (0 slots) slave
replicates 3ba3971fa44307dcb311676a853a7e690eb028ee
M: 087b5ce51ec4e7146aa811548ae78be0fbe012a7 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: 3ba3971fa44307dcb311676a853a7e690eb028ee 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 30700df1e229d70a754a2d069c8d6b0380d867e5 172.38.0.15:6379
slots: (0 slots) slave
replicates 74f72cc1a745a29967beba5795993956116f2fc6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
#Try it out
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:214
cluster_stats_messages_pong_sent:232
cluster_stats_messages_sent:446
cluster_stats_messages_ping_received:227
cluster_stats_messages_pong_received:214
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:446
127.0.0.1:6379>
#List the nodes
127.0.0.1:6379> cluster nodes
3916ef7a5f9adc99b53ec0eecd87ff44ad925d62 172.38.0.16:6379@16379 slave 087b5ce51ec4e7146aa811548ae78be0fbe012a7 0 1598177087953 6 connected
3c4f9464ec6f26b185a1fc72356e6e86ce463c27 172.38.0.14:6379@16379 slave 3ba3971fa44307dcb311676a853a7e690eb028ee 0 1598177087000 4 connected
74f72cc1a745a29967beba5795993956116f2fc6 172.38.0.11:6379@16379 myself,master - 0 1598177085000 1 connected 0-5460
087b5ce51ec4e7146aa811548ae78be0fbe012a7 172.38.0.12:6379@16379 master - 0 1598177086534 2 connected 5461-10922
3ba3971fa44307dcb311676a853a7e690eb028ee 172.38.0.13:6379@16379 master - 0 1598177087000 3 connected 10923-16383
30700df1e229d70a754a2d069c8d6b0380d867e5 172.38.0.15:6379@16379 slave 74f72cc1a745a29967beba5795993956116f2fc6 0 1598177087544 5 connected
#Set key a
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
#Stop node 03
[root@localhost /]# docker stop e6dd9b112df7
#Get a again
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.14:6379
"b"
#List the nodes again: failover has occurred
172.38.0.14:6379> cluster nodes
3916ef7a5f9adc99b53ec0eecd87ff44ad925d62 172.38.0.16:6379@16379 slave 087b5ce51ec4e7146aa811548ae78be0fbe012a7 0 1598177360000 6 connected
3c4f9464ec6f26b185a1fc72356e6e86ce463c27 172.38.0.14:6379@16379 myself,master - 0 1598177360000 7 connected 10923-16383
087b5ce51ec4e7146aa811548ae78be0fbe012a7 172.38.0.12:6379@16379 master - 0 1598177360507 2 connected 5461-10922
3ba3971fa44307dcb311676a853a7e690eb028ee 172.38.0.13:6379@16379 master,fail - 1598177191535 1598177189000 3 connected
74f72cc1a745a29967beba5795993956116f2fc6 172.38.0.11:6379@16379 master - 0 1598177361000 1 connected 0-5460
30700df1e229d70a754a2d069c8d6b0380d867e5 172.38.0.15:6379@16379 slave 74f72cc1a745a29967beba5795993956116f2fc6 0 1598177361212 5 connected
- Packaging a Spring Boot microservice as a Docker image
- Create the Spring Boot project
- Package the application
- Write the Dockerfile
- Build the image
- Publish and run
Example of the key steps:
#Example Dockerfile:
FROM java:8
COPY *.jar /app.jar
CMD ["--server.port=8080"]
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
#Copy the jar and the Dockerfile to the server
#Build the image
[root@localhost myproject]# docker build -t hello .
#List images
[root@localhost myproject]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest deb767e7c3b8 8 minutes ago 643MB
java 8 d23bdf5b1b1b 3 years ago 643MB
#Start a container
[root@localhost myproject]# docker run -d -p 8090:8080 --name myproject hello
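Finally, the mapped port can be checked from the host; the request path depends on the controllers your project actually defines, so `/hello` below is only a placeholder:

```shell
# The container's 8080 is mapped to host port 8090
PORT=8090
# Hit the app through the mapped port (path is hypothetical)
curl "http://localhost:${PORT}/hello"
```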