1. Introduction to Docker
1.1 Key Docker Concepts
Image: similar to a virtual machine image; a read-only template for the Docker engine that contains a file system. Every application needs an environment to run in, and an image provides that environment. For example, an Ubuntu image is a template containing an Ubuntu operating system environment; install Apache on top of it and the result can be called an Apache image.
Container: similar to a lightweight sandbox. It can be seen as a minimal Linux environment (root privileges, a process space, a user space, a network space, and so on) plus the application running inside it. The Docker engine uses containers to run and isolate applications. A container is a running instance created from an image; containers can be created, started, stopped, and deleted, and they are isolated from one another. Note: the image itself is read-only. When a container starts from an image, Docker creates a writable layer on top of the image, and the image itself does not change.
Repository: similar to a code repository, except that it holds images; it is where Docker stores image files centrally. Note the difference from a registry: a registry hosts repositories, usually many of them, while a repository stores one class of images distinguished by tag. For example, the Ubuntu repository holds Ubuntu images for multiple versions (12.04, 14.04, etc.).
1.2 Docker Architecture Diagram
1.3 Installing Docker
1. Install the required dependencies
sudo yum install -y yum-utils
2. Configure the image repository
yum-config-manager \
--add-repo \
https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
3. Refresh the yum package index
yum makecache
4. Install Docker
sudo yum install docker-ce docker-ce-cli containerd.io
5. Start Docker
sudo systemctl start docker
Uninstalling Docker
#Remove the packages
sudo yum remove docker-ce docker-ce-cli containerd.io
#Remove leftover resources
sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd
2. Docker Basics
2.1 Image Commands
docker images              list all images
docker search [name]       search for an image
docker pull [name]         pull an image (downloaded layer by layer)
# docker pull mysql:5.7    pull a specific version
docker rmi -f [imageID]    delete an image
# docker rmi -f $(docker images -aq)   delete all images
2.2 Container Commands
Using the centos image as an example.
Running a container
docker run [options] image
#Options
--name="name"   give the container a name
-d              run detached (in the background)
-it             run interactively, attaching a terminal inside the container
-p              publish a container port
(
-p ip:hostPort:containerPort   map a host port on a given IP to a container port
-p hostPort:containerPort
-p containerPort
)
-P              publish all exposed ports to random host ports (capital P)
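The three `-p` forms differ only in how many colon-separated fields they carry. As a hedged illustration (this mirrors the documented flag syntax, not Docker's actual parser; the helper name is our own), the form can be told apart by counting colons:

```shell
# describe_publish: classify a -p port specification by its number of
# ":"-separated fields. Illustrative only, not Docker's code.
describe_publish() {
  colons=$(printf '%s' "$1" | tr -cd ':' | wc -c)
  colons=$((colons + 0))                  # normalize any whitespace wc may emit
  case "$colons" in
    2) echo "ip:hostPort:containerPort" ;;
    1) echo "hostPort:containerPort" ;;
    0) echo "containerPort only (host port assigned by Docker)" ;;
  esac
}
describe_publish 127.0.0.1:3344:80   # ip:hostPort:containerPort
describe_publish 3344:80             # hostPort:containerPort
describe_publish 80                  # containerPort only (host port assigned by Docker)
```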
#Start a container and enter it
[root@hecs-131220 ~]# docker run -it centos /bin/bash
[root@cfa037db16f7 /]#
#The hostname in the prompt has become the container ID (cfa037db16f7)
#exit stops the container and exits (a container started in detached mode just detaches)
#Ctrl+P+Q exits without stopping the container
# !!!! A detached Docker container must have a foreground process, or it stops automatically
Listing running containers
#docker ps
# lists the containers currently running
-a    # list all containers, including exited ones
-n=?  # show the n most recently created containers
-q    # show only container IDs
[root@hecs-131220 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cfa037db16f7 centos "/bin/bash" 16 minutes ago Exited (127) About a minute ago agitated_sutherland
d98800725ab7 feb5d9fea6a5 "/hello" 42 minutes ago Exited (0) 42 minutes ago blissful_solomon
c447469378e9 feb5d9fea6a5 "/hello" 22 hours ago Exited (0) 22 hours ago zen_murdock
41b953228ad9 feb5d9fea6a5 "/hello" 3 days ago Exited (0) 3 days ago heuristic_hopper
Deleting containers
docker rm containerID   #remove the given container; a running container cannot be removed this way, force it with rm -f
docker rm -f $(docker ps -aq)   #remove all containers
docker ps -a -q|xargs docker rm #remove all containers
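The `docker ps -a -q | xargs docker rm` pipeline works because xargs turns every listed ID into an argument of the removal command. A minimal sketch of the same pattern using plain files instead of containers, so it runs without a Docker daemon:

```shell
# Stand-in demo for `docker ps -aq | xargs docker rm`: list items, then let
# xargs feed each one to a removal command.
workdir=$(mktemp -d)
touch "$workdir/c1" "$workdir/c2" "$workdir/c3"   # pretend these are container IDs
ls "$workdir" | xargs -I{} rm "$workdir/{}"       # remove every listed "container"
ls -A "$workdir" | wc -l                          # nothing left
```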
Starting and stopping containers
docker start containerID    #start a container
docker restart containerID  #restart a container
docker stop containerID     #stop the running container
docker kill containerID     #force-stop the container
2.3 Other Common Commands
Viewing logs
docker logs -tf containerID
docker logs --tail number containerID   #number is how many log lines to show
Viewing metadata
docker inspect containerID
Viewing processes
docker top containerID
Entering a container
#Containers usually run in the background, and sometimes we need to get inside one to change configuration
#Option 1:
docker exec -it containerID /bin/bash   #docker exec opens a new terminal inside the container to work in
#Option 2:
docker attach containerID   #docker attach attaches to the terminal the container is already running; no new process is started
Copying files
#Copy a file from the container to the host
docker cp containerID:pathInContainer hostPath
#Copy a file from the host into the container
docker cp hostPath containerID:pathInContainer
2.4 Command Summary
attach    Attach to a running container   #attach the current shell to a running container
build     Build an image from a Dockerfile   #build a custom image from a Dockerfile
commit    Create a new image from a container's changes   #commit the current container as a new image
cp        Copy files/folders from a container to a HOSTDIR or to STDOUT   #copy files or directories out of a container to the host
create    Create a new container   #create a new container (like run, but without starting it)
diff      Inspect changes on a container's filesystem   #show what changed in a container's filesystem
events    Get real time events from the server   #stream real-time container events from the Docker daemon
exec      Run a command in a running container   #run a command inside an existing container
export    Export a container's filesystem as a tar archive   #export a container's filesystem as a tar archive (counterpart of import)
history   Show the history of an image   #show how an image was built up
images    List images   #list the images on this system
import    Import the contents from a tarball to create a filesystem image   #create a filesystem image from a tarball (counterpart of export)
info      Display system-wide information   #show system-wide Docker information
inspect   Return low-level information on a container or image   #show detailed container or image information
kill      Kill a running container   #kill the given container
load      Load an image from a tar archive or STDIN   #load an image from a tar archive (counterpart of save)
login     Register or log in to a Docker registry   #register with or log in to a Docker registry
logout    Log out from a Docker registry   #log out of the current Docker registry
logs      Fetch the logs of a container   #print a container's log output
pause     Pause all processes within a container   #pause a container
port      List port mappings or a specific mapping for the CONTAINER   #show which container ports the mapped host ports correspond to
ps        List containers   #list containers
pull      Pull an image or a repository from a registry   #pull an image or repository from a registry
push      Push an image or a repository to a registry   #push an image or repository to a registry
rename    Rename a container   #rename a container
restart   Restart a running container   #restart a running container
rm        Remove one or more containers   #remove one or more containers
rmi       Remove one or more images   #remove one or more images (only possible when no container uses the image; otherwise remove those containers first, or force with -f)
run       Run a command in a new container   #create a new container and run a command in it
save      Save an image(s) to a tar archive   #save an image as a tar archive (counterpart of load)
search    Search the Docker Hub for images   #search Docker Hub for images
start     Start one or more stopped containers   #start stopped containers
stats     Display a live stream of container(s) resource usage statistics   #show live container resource usage
stop      Stop a running container   #stop a running container
tag       Tag an image into a repository   #tag an image in a repository
top       Display the running processes of a container   #show the processes running inside a container
unpause   Unpause all processes within a container   #unpause a container
version   Show the Docker version information   #show Docker version information
wait      Block until a container stops, then print its exit code   #block until a container stops, then print its exit code
2.5 Practice
nginx
#Start nginx
# -d       run detached
# --name   container name
# -p       hostPort:containerPort
[root@hecs-131220 ~]# docker run -d --name nginx01 -p 3344:80 nginx
b1c511f9942dfc0005591b9f35b83666007eb74ca3468f468cf77cfb07240115
[root@hecs-131220 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b1c511f9942d nginx "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 0.0.0.0:3344->80/tcp, :::3344->80/tcp nginx01
[root@hecs-131220 ~]# curl localhost:3344
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
#Enter the container
[root@hecs-131220 ~]# docker exec -it nginx01 /bin/bash
root@b1c511f9942d:/# whereis nginx
nginx: /usr/sbin/nginx /usr/lib/nginx /etc/nginx /usr/share/nginx
root@b1c511f9942d:/# cd /etc/nginx
root@b1c511f9942d:/etc/nginx# ls
conf.d fastcgi_params mime.types modules nginx.conf scgi_params uwsgi_params
Port exposure
Installing Tomcat
The official way
#Docker Hub
docker run -it --rm tomcat:9.0
# --rm removes the container as soon as it exits
#Download
docker pull tomcat
#Start
docker run -d -p 3355:8080 --name tomcat01 tomcat
Deploying ES + Kibana
#ES exposes many ports
#ES is very memory-hungry
#ES data normally needs to live in a safe directory, mounted as a volume
# --net somenetwork   network configuration
#Start ES
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.6.2
#This can exhaust memory
#Limit memory via -e environment configuration
docker run -d --name elasticsearch02 -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx512m" elasticsearch:7.6.2
#Test
[root@hecs-131220 ~]# curl localhost:9200
{
"name" : "2828ac396c87",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "H_dM2tzeRu2zM8Q5MC-vOA",
"version" : {
"number" : "7.6.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
"build_date" : "2020-03-26T06:34:37.794943Z",
"build_snapshot" : false,
"lucene_version" : "8.4.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Connecting Kibana to ES
2.6 Visualization
Portainer is a commonly used graphical management UI for Docker.
#Install Portainer
docker run -d -p 9000:9000 \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
--privileged=true \
portainer/portainer
3. Docker Images
3.1 Union File System (UnionFS)
A union file system is a layered, lightweight, high-performance file system. It supports committing modifications to the file system layer by layer and stacking those layers, and it can mount different directories under the same virtual file system. The union file system is the foundation of Docker images. Images can inherit through layering: starting from a base image (one without a parent), all kinds of concrete application images can be built.
**Characteristics:** multiple file systems are loaded at the same time, but from the outside only one file system is visible. Union mounting stacks the layers, so the final file system contains all the underlying files and directories.
3.2 How Images Are Loaded
A Docker image actually consists of layered file systems.
bootfs (boot file system) mainly contains the bootloader and the kernel. The bootloader loads the kernel; once that completes, the entire kernel is in memory, ownership of memory passes from bootfs to the kernel, and bootfs is unmounted. The bootfs can be shared by different Linux distributions.
rootfs (root file system) contains the standard directories and files of a typical Linux system: /dev, /proc, /bin, /etc, and so on. The rootfs is what makes the various distributions (Ubuntu, CentOS, etc.) different. Since the host's kernel is used directly underneath, a rootfs only needs to include the most basic commands, tools, and programs.
Understanding the layers
All Docker images start from a base image layer; modifying or adding content creates a new layer on top of the current one.
When a container starts, a read/write container layer (R/W) is created on top of the image's outermost layer. All operations happen in that container layer, while the image layers stay read-only (R/O).
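Union mounting can be imitated with plain directories to build intuition: copy each layer into a merged directory in order, so files in upper layers shadow same-named files below. This is a hedged simulation only; a real UnionFS presents a merged view without copying anything.

```shell
# Simulate layer stacking: later (upper) layers shadow earlier ones.
work=$(mktemp -d)
mkdir -p "$work/layer1" "$work/layer2" "$work/merged"
echo "from base image"  > "$work/layer1/app.conf"
echo "v1"               > "$work/layer1/version"
echo "from upper layer" > "$work/layer2/app.conf"   # shadows the base copy
for layer in layer1 layer2; do
  cp -r "$work/$layer/." "$work/merged/"            # stack layers in order
done
cat "$work/merged/app.conf"   # from upper layer
cat "$work/merged/version"    # v1
```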
3.3 Committing Images
#docker commit creates a new image from a container
docker commit -m="commit message" -a="author" containerID targetImageName:[TAG]
#Example
docker commit -a="zhaobin" -m="add webapps" 75e33d099389 tomcat01:1.0
[root@hecs-131220 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
tomcat01 1.0 d65fc856d255 15 seconds ago 684MB
wurstmeister/kafka latest 2dd91ce2efe1 2 weeks ago 508MB
nginx latest 605c77e624dd 2 weeks ago 141MB
tomcat 9.0 b8e65a4d736d 3 weeks ago 680MB
tomcat latest fb5657adc892 3 weeks ago 680MB
redis latest 7614ae9453d1 3 weeks ago 113MB
centos latest 5d0da3dc9764 4 months ago 231MB
portainer/portainer latest 580c0e4e98b0 10 months ago 79.1MB
elasticsearch 7.6.2 f29a1ee41030 21 months ago 791MB
4. 容器数据卷
容器数据卷:Docker容器产生的数据同步到本地,这样关闭容器的时候,数据是在本地的,不会影响数据的安全性。
Docker的容器卷技术也就是将容器内部目录和本地目录进行一个同步,即挂载。
容器的持久化和同步化操作,容器之间也是可以数据共享的
4.1使用数据卷
-方式一 直接使用命令来挂载 -v
docker run -it -v 主机目录:容器目录
#测试
[root@hecs-131220 home]# docker run -it -v /home/ceshi:/home centos /bin/bash
[root@7769f7d5d993 /]#
#Inspect the mount information
[root@hecs-131220 home]# docker inspect 7769f7d5d993
"Mounts": [
{
"Type": "bind",
"Source": "/home/ceshi", #path on the host
"Destination": "/home", #path inside the container
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
The contents of the two directories are bound and synced in both directions, and the sync continues even while the container is stopped.
**Benefit of volumes:** edits only need to be made locally; the container picks them up automatically.
4.2 Deploying MySQL
#Pull MySQL
[root@hecs-131220 home]# docker pull mysql:5.7
#Run the container with volume mounts
#-d   run detached
#-p   port mapping (-P maps to random ports)
#-v   volume mount
#-e   environment configuration
[root@hecs-131220 home]# docker run -d -p 3306:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=15294834575zha --name mysql01 mysql:5.7
#Test the connection with Navicat
#The data survives even after the container is deleted (persistence)
4.3 Named and Anonymous Mounts
#Anonymous mount (only the path inside the container is given)
docker run -d -P --name nginx01 -v /etc/nginx nginx
#List the volumes
[root@hecs-131220 /]# docker volume ls
DRIVER VOLUME NAME
local 9d36e2b91b2664367afc4ea3a11bcf416bcb09f4fc878e8a1031db77a2c76793
local e77674e49b48314bdbc338b909fae8eaf4c901b17619907acaaf7959f937262d
local edd51fff8e539210fd5756ff529ddac134ab683bb40cde6f85e57c749ef3d7dc
#Named mount (volumeName:containerPath) [recommended]
[root@hecs-131220 /]# docker run -d -P --name nginx03 -v juming-nginx:/etc/nginx nginx
eb62592e8fec8d788037225232e7cdeb99ed1f2a40351f818b890d4f7ab91f9d
[root@hecs-131220 /]# docker volume ls
DRIVER VOLUME NAME
local 9d36e2b91b2664367afc4ea3a11bcf416bcb09f4fc878e8a1031db77a2c76793
local e77674e49b48314bdbc338b909fae8eaf4c901b17619907acaaf7959f937262d
local edd51fff8e539210fd5756ff529ddac134ab683bb40cde6f85e57c749ef3d7dc
local juming-nginx
[root@hecs-131220 /]# docker volume inspect juming-nginx
[
{
"CreatedAt": "2022-01-14T20:35:55+08:00",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/juming-nginx/_data",
"Name": "juming-nginx",
"Options": null,
"Scope": "local"
}
]
#When no directory is specified, Docker volumes all live under /var/lib/docker/volumes/xxxx
- Telling named, anonymous, and host-path mounts apart
-v containerPath              #anonymous mount
-v volumeName:containerPath   #named mount
-v /hostPath:containerPath    #host-path mount
#Change read/write permissions with ro (readonly) or rw (read-write)
[root@hecs-131220 /]# docker run -d -P --name nginx03 -v juming-nginx:/etc/nginx:ro nginx
[root@hecs-131220 /]# docker run -d -P --name nginx03 -v juming-nginx:/etc/nginx:rw nginx
#ro means the volume can only be changed from the host; it cannot be modified from inside the container
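The three -v forms can be distinguished mechanically: no colon means an anonymous volume, and when there is a colon the first field is either an absolute host path or a volume name. A hedged sketch (our own helper, not Docker's parser; the :ro/:rw suffix is ignored here):

```shell
# classify_mount: tell apart the three -v argument forms listed above.
classify_mount() {
  case "$1" in
    *:*)
      case "${1%%:*}" in
        /*) echo "host-path mount" ;;   # -v /hostPath:containerPath
        *)  echo "named mount" ;;       # -v volumeName:containerPath
      esac ;;
    *) echo "anonymous mount" ;;        # -v containerPath
  esac
}
classify_mount /etc/nginx                # anonymous mount
classify_mount juming-nginx:/etc/nginx   # named mount
classify_mount /home/ceshi:/home         # host-path mount
```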
5. Dockerfile
A Dockerfile is the file used to build a Docker image.
5.1 The Dockerfile Build Process
Build steps:
1. Write a Dockerfile
2. docker build it into an image
3. docker run the image
4. docker push to publish the image
Dockerfile instructions:
#Every reserved keyword is uppercase
#Instructions execute in order, top to bottom
#Each instruction creates a new image layer and commits it!
(image lost in export: Dockerfile instruction reference table)
Hands-on: building a custom centos image
#1. Write the Dockerfile
#Without a version tag the latest centos would be pulled by default, and the
#2022 changes to the latest centos could make the yum repositories below fail
FROM centos:7
MAINTAINER abina<zbhaut@163.com>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
RUN yum -y install net-tools
EXPOSE 80
CMD echo $MYPATH
CMD echo "-----------end--------------"
CMD /bin/bash
#2. Build the image
# docker build -f dockerfilePath -t imageName:[tag]
[root@hecs-131220 dockerfile]# docker build -f mydockerfile -t mycentos:0.1 .
The difference between CMD and ENTRYPOINT
CMD         #the command run when the container starts; only the last CMD takes effect, and docker run arguments replace it
ENTRYPOINT  #the command run when the container starts; docker run arguments are appended to it
#CMD
#dockerfile
FROM centos:7
CMD ["ls","-a"]
#Build
[root@hecs-131220 dockerfile]# docker build -f dockerfile-cmd-test -t cmdtest .
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM centos:7
---> eeb6ee3f44bd
Step 2/2 : CMD ["ls","-a"]
---> Running in 4bbbc96fb5b2
Removing intermediate container 4bbbc96fb5b2
---> ffd28b766bbd
Successfully built ffd28b766bbd
Successfully tagged cmdtest:latest
[root@hecs-131220 dockerfile]# docker run -it ffd28b766bbd
. .dockerenv bin etc lib media opt root sbin sys usr
.. anaconda-post.log dev home lib64 mnt proc run srv tmp var
#With CMD, ["ls","-a"] is replaced by -l; but -l by itself is not a command, so it fails.
[root@hecs-131220 dockerfile]# docker run -it ffd28b766bbd -l
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-l": executable file not found in $PATH: unknown.
#ENTRYPOINT
[root@hecs-131220 dockerfile]# docker run -it 6322c8228297 -l
total 64
drwxr-xr-x 1 root root 4096 Feb 4 07:06 .
drwxr-xr-x 1 root root 4096 Feb 4 07:06 ..
-rwxr-xr-x 1 root root 0 Feb 4 07:06 .dockerenv
-rw-r--r-- 1 root root 12114 Nov 13 2020 anaconda-post.log
lrwxrwxrwx 1 root root 7 Nov 13 2020 bin -> usr/bin
drwxr-xr-x 5 root root 360 Feb 4 07:06 dev
drwxr-xr-x 1 root root 4096 Feb 4 07:06 etc
drwxr-xr-x 2 root root 4096 Apr 11 2018 home
lrwxrwxrwx 1 root root 7 Nov 13 2020 lib -> usr/lib
lrwxrwxrwx 1 root root 9 Nov 13 2020 lib64 -> usr/lib64
drwxr-xr-x 2 root root 4096 Apr 11 2018 media
drwxr-xr-x 2 root root 4096 Apr 11 2018 mnt
drwxr-xr-x 2 root root 4096 Apr 11 2018 opt
dr-xr-xr-x 96 root root 0 Feb 4 07:06 proc
dr-xr-x--- 2 root root 4096 Nov 13 2020 root
drwxr-xr-x 11 root root 4096 Nov 13 2020 run
lrwxrwxrwx 1 root root 8 Nov 13 2020 sbin -> usr/sbin
drwxr-xr-x 2 root root 4096 Apr 11 2018 srv
dr-xr-xr-x 13 root root 0 Feb 4 07:06 sys
drwxrwxrwt 7 root root 4096 Nov 13 2020 tmp
drwxr-xr-x 13 root root 4096 Nov 13 2020 usr
drwxr-xr-x 18 root root 4096 Nov 13 2020 var
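The behavior demonstrated above can be summed up in a small sketch: arguments passed to `docker run` replace CMD entirely, while ENTRYPOINT always stays in front and has the arguments appended. This is a simplified model using shell strings rather than real exec arrays, and the helper name is our own:

```shell
# final_cmdline ENTRYPOINT CMD RUNTIME_ARGS
# Runtime arguments replace CMD; ENTRYPOINT always stays in front.
final_cmdline() {
  entrypoint=$1; cmd=$2; runtime=$3
  if [ -n "$runtime" ]; then cmd=$runtime; fi
  set -- $entrypoint $cmd        # word-split into the final argv (sketch only)
  printf '%s\n' "$*"
}
final_cmdline ""      "ls -a" ""     # ls -a    (CMD image, no runtime args)
final_cmdline ""      "ls -a" "-l"   # -l       (CMD replaced; "-l" alone is not a command, hence the error above)
final_cmdline "ls -a" ""      "-l"   # ls -a -l (ENTRYPOINT image, args appended)
```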
5.2 Hands-On: Building a Tomcat Image
Prerequisites: the Tomcat archive and the JDK archive
#dockerfile
#Once the file is named Dockerfile, there is no need to pass -f <file> to docker build; it is found automatically
FROM centos:7
MAINTAINER zhaobin<zbhaut@163.com>
ADD jdk-8u202-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.58.tar.gz /usr/local/
RUN yum -y install vim
ENV MYPATH /usr/local
WORKDIR $MYPATH
ENV JAVA_HOME /usr/local/jdk1.8.0_202
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.58
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.58
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
EXPOSE 8080
CMD /usr/local/apache-tomcat-9.0.58/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.58/logs/catalina.out
##Build from the Dockerfile
[root@hecs-131220 zhaobin]# docker build -t mytomcat .
##Run and access tomcat
[root@hecs-131220 zhaobin]# docker run -it -p 9090:8080 --name mytomcat01 mytomcat
[root@hecs-131220 zhaobin]# curl localhost:9090
5.3 Publishing Images
#First log in to your Docker Hub account
docker login -u username -p password
#Publish with push
docker push imagename:tag
6. Docker Networking
6.1 Understanding docker0
Commonly needed install commands (inside a container)
apt-get install vim         #vim
apt-get install telnet      #telnet
apt-get install net-tools   #ifconfig
apt install iputils-ping    #ping
apt install iproute2        #ip addr
[root@hecs-131220 /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:20:26:b2 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.4/24 brd 192.168.0.255 scope global noprefixroute dynamic eth0
valid_lft 55645sec preferred_lft 55645sec
inet6 fe80::f816:3eff:fe20:26b2/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:c4:96:cf:3d brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:c4ff:fe96:cf3d/64 scope link
#1, 2 and 3 are the local loopback address, the Huawei Cloud private network address, and the address Docker generated automatically
How does Docker handle a container's network access?
#Start tomcat
[root@hecs-131220 /]# docker run -d -P --name tomcat01 tomcat
#Enter the container
[root@hecs-131220 /]# docker exec -it tomcat01 /bin/bash
#Install the needed tools
root@400f28c90308:/usr/local# apt update && apt install -y iproute2
#Check the network address inside the container
root@400f28c90308:/usr/local# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
58: eth0@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
#The Linux host can ping 172.17.0.2
[root@hecs-131220 ~]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.040 ms #note the extremely low latency
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.045 ms
Every time a container starts, Docker assigns it an IP address. Installing Docker creates a docker0 interface that bridges containers using the veth-pair technique (virtual device interfaces that always come in pairs).
Container-to-container networking works out of the box.
Docker network communication model
tomcat01 talks to tomcat02 through docker0, and tomcat01 talks to docker0 through a veth pair; docker0 acts as the router here. Containers that do not specify a network are all routed through docker0, which assigns each container an available IP.
6.2 Linking Containers with --link
#By default, containers cannot reach each other by container name
root@400f28c90308:/usr/local/tomcat# ping tomcat01
ping: tomcat01: Name or service not known
#A container started with --link can ping the linked container by name
root@77a983cbe683:/usr/local/tomcat# ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.067 ms
6.3 Custom Networks
#List all Docker networks
[root@hecs-131220 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
d7b4e8f645ba bridge bridge local
f2f200ef85dc host host local
dc2cac72b415 none null local
Network modes
- bridge: bridged networking (the default)
- none: no network configured
- host: share the host's network
- container: share another container's network
#Create a custom network
[root@hecs-131220 ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
a8f14024ebd687670a362e7bf5461d8296129cdaf68434020d8a5fe2fd18c6be
[root@hecs-131220 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
d7b4e8f645ba bridge bridge local
f2f200ef85dc host host local
a8f14024ebd6 mynet bridge local
dc2cac72b415 none null local
#Inspect the network configuration
[root@hecs-131220 ~]# docker network inspect mynet
[
{
"Name": "mynet",
"Id": "a8f14024ebd687670a362e7bf5461d8296129cdaf68434020d8a5fe2fd18c6be",
"Created": "2022-02-06T15:53:59.163289778+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
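The `--subnet 192.168.0.0/16` above reserves a /16 block, i.e. 2^(32-16) addresses. A quick hedged calculation of the usable host addresses (subtracting the network and broadcast addresses; a sketch, not how Docker allocates):

```shell
# Usable host addresses in a /16 subnet: 2^(32-prefix) minus the network
# and broadcast addresses.
prefix=16
echo $(( (1 << (32 - prefix)) - 2 ))   # 65534
```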
#Create a container on the custom network
[root@hecs-131220 ~]# docker run -d -P --name tomcat01 --net mynet tomcat
###################################################
### Containers created on a custom network can   ###
### reach each other by container name           ###
###################################################
6.4 Connecting Networks
#the connect command
docker network connect [OPTIONS] NETWORK CONTAINER
#Create two Tomcat containers on docker0
[root@hecs-131220 ~]# docker run -d -P --name tomcat01 tomcat
c9e24056996ce535caf83dd895c4fe522656230f6309906b87fafc2507d21fb0
[root@hecs-131220 ~]# docker run -d -P --name tomcat02 tomcat
359e3b2010b57fe4a522ce85b54820dab66997114460f525302d84910cda6309
#Create two Tomcat containers on the mynet network
[root@hecs-131220 ~]# docker run -d -P --name tomcat-net-01 --net mynet tomcat
6d9382afc6c36a2420a08df803239e6bdc92adbc3750d0a1a944f0e7b5b17195
[root@hecs-131220 ~]# docker run -d -P --name tomcat-net-02 --net mynet tomcat
45d9bdf0f4819eb2d222a2ba4f46697c740b5ccbd2aad70fce6f96255a62eea2
#Try to ping tomcat-net-01 directly from tomcat01
root@c9e24056996c:/usr/local/tomcat# ping tomcat-net-01
ping: tomcat-net-01: Name or service not known
#Use the connect command to join tomcat01 to mynet
[root@hecs-131220 ~]# docker network connect mynet tomcat01
#Ping tomcat-net-01 from tomcat01 again; now it works
[root@hecs-131220 ~]# docker exec -it tomcat01 /bin/bash
root@c9e24056996c:/usr/local/tomcat# ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.074 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.056 ms
#Inspect the mynet configuration again
[root@hecs-131220 ~]# docker network inspect mynet
"c9e24056996ce535caf83dd895c4fe522656230f6309906b87fafc2507d21fb0": {
"Name": "tomcat01",
"EndpointID": "88b7d2fc81982d5bdeae373360ae5c4765d5166ed9d034cf5c829c261f39203a",
"MacAddress": "02:42:c0:a8:00:04",
"IPv4Address": "192.168.0.4/16",
"IPv6Address": ""
}
#One container now holds two IP addresses
6.5 Hands-On
6.5.1 Deploying a Redis Cluster
#Create the redis network
[root@hecs-131220 ~]# docker network create redis --subnet 172.38.0.0/16
aa0613272ab5653b30ba853b70a12fcb5681274c266e1d2192f5efabcbfb5b8b
[root@hecs-131220 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
d7b4e8f645ba bridge bridge local
f2f200ef85dc host host local
a8f14024ebd6 mynet bridge local
dc2cac72b415 none null local
aa0613272ab5 redis bridge local
[root@hecs-131220 ~]# docker network inspect redis
[
{
"Name": "redis",
"Id": "aa0613272ab5653b30ba853b70a12fcb5681274c266e1d2192f5efabcbfb5b8b",
"Created": "2022-02-06T16:52:59.491110899+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.38.0.0/16"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
#Generate the redis node configs with a script
[root@hecs-131220 ~]# for port in $(seq 1 6); \
> do \
> mkdir -p /mydata/redis/node-${port}/conf
> touch /mydata/redis/node-${port}/conf/redis.conf
> cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
> port 6379
> bind 0.0.0.0
> cluster-enabled yes
> cluster-config-file nodes.conf
> cluster-node-timeout 5000
> cluster-announce-ip 172.38.0.1${port}
> cluster-announce-port 6379
> cluster-announce-bus-port 16379
> appendonly yes
> EOF
> done
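The loop above is reproduced below as a standalone script that writes into a temporary directory instead of /mydata, so it can be dry-run without root privileges or touching the real host paths:

```shell
# Generate the six redis.conf files (dry-run version of the loop above;
# $base stands in for /mydata/redis).
base=$(mktemp -d)
for port in $(seq 1 6); do
  mkdir -p "$base/node-$port/conf"
  cat > "$base/node-$port/conf/redis.conf" <<EOF
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1$port
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
grep cluster-announce-ip "$base/node-3/conf/redis.conf"   # cluster-announce-ip 172.38.0.13
```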
Now start six Redis containers, each with its matching data volume mounts.
#Redis container 1
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Redis container 2
docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
-v /mydata/redis/node-2/data:/data \
-v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Redis container 3
docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
-v /mydata/redis/node-3/data:/data \
-v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Redis container 4
docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
-v /mydata/redis/node-4/data:/data \
-v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Redis container 5
docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
-v /mydata/redis/node-5/data:/data \
-v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Redis container 6
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
#Or start all six Redis containers with one loop:
for port in $(seq 1 6); \
do
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf; \
done
Check that all six are running
[root@hecs-131220 conf]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
31d250534516 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 8 seconds ago Up 7 seconds 0.0.0.0:6376->6379/tcp, :::6376->6379/tcp, 0.0.0.0:16376->16379/tcp, :::16376->16379/tcp redis-6
644fe8bc6517 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 9 seconds ago Up 7 seconds 0.0.0.0:6375->6379/tcp, :::6375->6379/tcp, 0.0.0.0:16375->16379/tcp, :::16375->16379/tcp redis-5
ce0b9a405ab0 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 9 seconds ago Up 8 seconds 0.0.0.0:6374->6379/tcp, :::6374->6379/tcp, 0.0.0.0:16374->16379/tcp, :::16374->16379/tcp redis-4
b9aecc58311c redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 9 seconds ago Up 8 seconds 0.0.0.0:6373->6379/tcp, :::6373->6379/tcp, 0.0.0.0:16373->16379/tcp, :::16373->16379/tcp redis-3
d10e904595d7 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 10 seconds ago Up 8 seconds 0.0.0.0:6372->6379/tcp, :::6372->6379/tcp, 0.0.0.0:16372->16379/tcp, :::16372->16379/tcp redis-2
7b17606a68b9 redis:5.0.9-alpine3.11 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 0.0.0.0:6371->6379/tcp, :::6371->6379/tcp, 0.0.0.0:16371->16379/tcp, :::16371->16379/tcp redis-1
Creating the cluster
#Enter the redis-1 container
[root@hecs-131220 conf]# docker exec -it redis-1 /bin/sh
#Create the cluster
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 169bca2ee60b5dcaebbc1c776c3fda32dc84433f 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
M: 24965da26f1edf024d2f1df6627cbd565f94f097 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
M: 849d825f09551d1f69e77f3c1ce2e4e2d8c647a7 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
S: a5519948e206ec6f7a9295cb4fe13550ecb8df60 172.38.0.14:6379
replicates 849d825f09551d1f69e77f3c1ce2e4e2d8c647a7
S: f473358032cfd8b37f5b123beaf822bc06f642f6 172.38.0.15:6379
replicates 169bca2ee60b5dcaebbc1c776c3fda32dc84433f
S: e22e084f2c5cf0d3b84aaaf26b266e24e03f6e49 172.38.0.16:6379
replicates 24965da26f1edf024d2f1df6627cbd565f94f097
#Confirm the configuration
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 169bca2ee60b5dcaebbc1c776c3fda32dc84433f 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: e22e084f2c5cf0d3b84aaaf26b266e24e03f6e49 172.38.0.16:6379
slots: (0 slots) slave
replicates 24965da26f1edf024d2f1df6627cbd565f94f097
S: f473358032cfd8b37f5b123beaf822bc06f642f6 172.38.0.15:6379
slots: (0 slots) slave
replicates 169bca2ee60b5dcaebbc1c776c3fda32dc84433f
M: 849d825f09551d1f69e77f3c1ce2e4e2d8c647a7 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 24965da26f1edf024d2f1df6627cbd565f94f097 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: a5519948e206ec6f7a9295cb4fe13550ecb8df60 172.38.0.14:6379
slots: (0 slots) slave
replicates 849d825f09551d1f69e77f3c1ce2e4e2d8c647a7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
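The slot allocation printed above follows from dividing 16384 hash slots across 3 masters. A hedged sketch of an even split (our own arithmetic; redis-cli above happens to place its single leftover slot on the second master, giving 5461/5462/5461, while this sketch gives it to the first, but both cover all 16384 slots):

```shell
# Split 16384 hash slots across n masters as evenly as possible.
split_slots() {
  total=16384; n=$1; start=0
  for i in $(seq 1 "$n"); do
    size=$(( total / n + (i <= total % n ? 1 : 0) ))   # spread the remainder
    echo "Master[$((i - 1))] -> Slots $start - $((start + size - 1))"
    start=$(( start + size ))
  done
}
split_slots 3
```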
#Connect to the cluster
/data # redis-cli -c
#View cluster info
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:991
cluster_stats_messages_pong_sent:1004
cluster_stats_messages_sent:1995
cluster_stats_messages_ping_received:999
cluster_stats_messages_pong_received:991
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1995
#View node info
127.0.0.1:6379> cluster nodes
169bca2ee60b5dcaebbc1c776c3fda32dc84433f 172.38.0.11:6379@16379 myself,master - 0 1644388793000 1 connected 0-5460
e22e084f2c5cf0d3b84aaaf26b266e24e03f6e49 172.38.0.16:6379@16379 slave 24965da26f1edf024d2f1df6627cbd565f94f097 0 1644388794925 6 connected
f473358032cfd8b37f5b123beaf822bc06f642f6 172.38.0.15:6379@16379 slave 169bca2ee60b5dcaebbc1c776c3fda32dc84433f 0 1644388794524 5 connected
849d825f09551d1f69e77f3c1ce2e4e2d8c647a7 172.38.0.13:6379@16379 master - 0 1644388794524 3 connected 10923-16383
24965da26f1edf024d2f1df6627cbd565f94f097 172.38.0.12:6379@16379 master - 0 1644388794524 2 connected 5461-10922
a5519948e206ec6f7a9295cb4fe13550ecb8df60 172.38.0.14:6379@16379 slave 849d825f09551d1f69e77f3c1ce2e4e2d8c647a7 0 1644388794000 4 connected
Testing cluster high availability
#Write a value and note which node handled it
127.0.0.1:6379> set test map
-> Redirected to slot [6918] located at 172.38.0.12:6379
OK
#Stop the node that stored the value
[root@hecs-131220 ~]# docker stop redis-2
redis-2
#Reconnect to the cluster and fetch the value
/data # redis-cli -c
127.0.0.1:6379> get test
-> Redirected to slot [6918] located at 172.38.0.16:6379
"map" #fetched successfully
#Check the node list: the .12 node is marked as failed
172.38.0.16:6379> CLUSTER nodes
24965da26f1edf024d2f1df6627cbd565f94f097 172.38.0.12:6379@16379 master,fail - 1644389069085 1644389068581 2 connected
a5519948e206ec6f7a9295cb4fe13550ecb8df60 172.38.0.14:6379@16379 slave 849d825f09551d1f69e77f3c1ce2e4e2d8c647a7 0 1644389352477 4 connected
f473358032cfd8b37f5b123beaf822bc06f642f6 172.38.0.15:6379@16379 slave 169bca2ee60b5dcaebbc1c776c3fda32dc84433f 0 1644389352578 5 connected
849d825f09551d1f69e77f3c1ce2e4e2d8c647a7 172.38.0.13:6379@16379 master - 0 1644389351000 3 connected 10923-16383
e22e084f2c5cf0d3b84aaaf26b266e24e03f6e49 172.38.0.16:6379@16379 myself,master - 0 1644389351000 7 connected 5461-10922
169bca2ee60b5dcaebbc1c776c3fda32dc84433f 172.38.0.11:6379@16379 master - 0 1644389351476 1 connected 0-5460
6.5.2 Packaging a SpringBoot Microservice as a Docker Image
Build a SpringBoot project and package it with Maven (package goal)
to produce a jar
Write a Dockerfile
#Dockerfile
FROM java:8
COPY *.jar /app.jar
CMD ["--server.port=8080"]
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
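In this Dockerfile, ENTRYPOINT and CMD work together: CMD supplies a default argument that is appended to ENTRYPOINT, so the container starts as `java -jar /app.jar --server.port=8080`, and any `docker run` arguments would replace only the `--server.port=8080` part. A sketch of the assembled command line (illustration only, not Docker's code):

```shell
# The argv Docker assembles from ENTRYPOINT ["java","-jar","/app.jar"]
# and CMD ["--server.port=8080"].
entrypoint='java -jar /app.jar'
default_cmd='--server.port=8080'
echo "$entrypoint $default_cmd"   # java -jar /app.jar --server.port=8080
```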
Upload the Dockerfile and the generated jar together to the Linux server.
#Build the Docker image
[root@hecs-131220 idea]# docker build -t helloidea .
Sending build context to Docker daemon 17.62MB
Step 1/5 : FROM java:8
8: Pulling from library/java
5040bd298390: Pull complete
fce5728aad85: Pull complete
76610ec20bf5: Pull complete
60170fec2151: Pull complete
e98f73de8f0d: Pull complete
11f7af24ed9c: Pull complete
49e2d6393f32: Pull complete
bb9cdec9c7f3: Pull complete
Digest: sha256:c1ff613e8ba25833d2e1940da0940c3824f03f802c449f3d1815a66b7f8c0e9d
Status: Downloaded newer image for java:8
---> d23bdf5b1b1b
Step 2/5 : COPY *.jar /app.jar
---> 0c5c2e69d20b
Step 3/5 : CMD ["--server.port=8080"]
---> Running in b81e3ec4a7a0
Removing intermediate container b81e3ec4a7a0
---> 5a5d8092042d
Step 4/5 : EXPOSE 8080
---> Running in 8186295fbc7b
Removing intermediate container 8186295fbc7b
---> ccc7400df15c
Step 5/5 : ENTRYPOINT ["java","-jar","/app.jar"]
---> Running in 53a31b77af95
Removing intermediate container 53a31b77af95
---> 6a44b3dfd4b5
Successfully built 6a44b3dfd4b5
Successfully tagged helloidea:latest
[root@hecs-131220 idea]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
helloidea latest 6a44b3dfd4b5 10 seconds ago 661MB
tomcat latest 4ebba13e9156 5 days ago 680MB
redis 5.0.9-alpine3.11 3661c84ee9d0 21 months ago 29.8MB
java 8 d23bdf5b1b1b 5 years ago 643MB
#Start
[root@hecs-131220 idea]# docker run -d -p 8080:8080 helloidea
10bbad9cdd402f1217f494bbc32eb42804a39d46aab4d77773a15cb123bf0d56
#Test
[root@hecs-131220 idea]# curl localhost:8080
{"timestamp":"2022-02-09T08:57:25.119+0000","status":404,"error":"Not Found","message":"No message available","path":"/"}[root@hecs-131220 idea]# curl localhost:8080/hello
Hello, Docker![root@hecs-131220 idea]#
7. Docker Compose
Compose has two key concepts:
- Service: a single container/application (redis, mysql, ...).
- Project: a group of related containers.
7.1 Installing Compose
1. Download the Compose build for your machine
#GitHub (official download)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
#If the link above is too slow, this mirror can be used instead:
curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
2. Make the binary executable
sudo chmod +x /usr/local/bin/docker-compose
3. Verify the installation
[root@hecs-131220 bin]# docker compose --version
Docker version 20.10.12, build e91ed57
Compose Quick Start
- Create a working directory:
$ mkdir composetest
$ cd composetest
- In the working directory, create a file named app.py:
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)
- Create a file named requirements.txt:
flask
redis
- Create the Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
- Create the docker-compose.yml file:
version: "3.3"
services:
  web:
    build: .
    ports:
      - "8000:5000"
  redis:
    image: "redis:alpine"
- Build and run the project with Compose:
[root@hecs-131220 composetest]# docker-compose up
Creating composetest_redis_1 ... done
Creating composetest_web_1 ... done
Attaching to composetest_redis_1, composetest_web_1
redis_1 | 1:C 17 Feb 2022 03:33:37.003 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 17 Feb 2022 03:33:37.003 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 17 Feb 2022 03:33:37.003 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 17 Feb 2022 03:33:37.004 * monotonic clock: POSIX clock_gettime
redis_1 | 1:M 17 Feb 2022 03:33:37.004 * Running mode=standalone, port=6379.
redis_1 | 1:M 17 Feb 2022 03:33:37.004 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 17 Feb 2022 03:33:37.004 # Server initialized
redis_1 | 1:M 17 Feb 2022 03:33:37.004 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 17 Feb 2022 03:33:37.004 * Ready to accept connections
web_1 | * Serving Flask app 'app.py' (lazy loading)
web_1 | * Environment: production
web_1 |   WARNING: This is a development server. Do not use it in a production deployment.
web_1 |   Use a production WSGI server instead.
web_1 | * Debug mode: off
web_1 | * Running on all addresses.
web_1 |   WARNING: This is a development server. Do not use it in a production deployment.
web_1 | * Running on http://172.18.0.3:5000/ (Press CTRL+C to quit)
Reference: https://docs.docker.com/compose/gettingstarted/
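The retry loop in get_hit_count above is a reusable pattern: try the call, sleep briefly on a connection error, and re-raise once the retry budget is spent. A minimal sketch with the Redis call swapped for any callable (function names here are illustrative, and the built-in ConnectionError stands in for redis.exceptions.ConnectionError):

```python
import time

def call_with_retries(fn, retries=5, delay=0.5):
    # Mirrors get_hit_count(): retry on ConnectionError,
    # re-raise once the retry budget is exhausted.
    while True:
        try:
            return fn()
        except ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(delay)

# A callable that fails twice, then succeeds:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("redis not ready yet")
    return 42

print(call_with_retries(flaky, delay=0.01))  # → 42
```

This matters in Compose because the web container may come up before Redis is accepting connections; the retry absorbs that startup race.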
Network rules:
[root@hecs-131220 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
58948982621c bridge bridge local
b44176ed5bd4 composetest_default bridge local
a19f926bfd03 host host local
8f748582146f none null local
[root@hecs-131220 ~]# docker network inspect composetest_default
[
{
"Name": "composetest_default",
"Id": "b44176ed5bd40560c223aa126b435d597f5f997d436c54fd74f93acee0bef846",
"Created": "2022-02-17T11:11:21.889534121+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"93b3c3dcaa63f0ea9e8fe3f80ef584434673d822e02863666e43f631c415a781": {
"Name": "composetest_redis_1",
"EndpointID": "ccdcf3f6017dd9b4973f4fdd3f52c219439e22654f65a2d37ddaa21b82ebc121",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"a8addeaac625cec16e95f09314b07cddcba6bb1565b7bac87cbb7bef21181cf4": {
"Name": "composetest_web_1",
"EndpointID": "f7c466535346f496c45b7b42e083d56102eb9ec658e60ad55cf09a9fdf4b8a32",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "composetest",
"com.docker.compose.version": "1.25.5"
}
}
]
All services in a project join the same network and can reach each other directly by service name.
7.2 Compose File Syntax
#Three top-level layers
version: ''          # file format version
services:            # the services
  service1:
    #service configuration
    depends_on:
      - service2
  service2:
    #service configuration
#other top-level configuration: networks/volumes, global rules
volumes:
networks:
configs:
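Filling in that skeleton, a minimal hypothetical compose file with a service dependency and a named volume might look like the following (service and volume names are illustrative):

```yaml
version: "3.8"

services:
  web:                  # service 1
    build: .
    ports:
      - "8000:5000"
    depends_on:
      - cache           # start cache before web
  cache:                # service 2
    image: redis:alpine
    volumes:
      - cache_data:/data

volumes:                # top-level: named volumes
  cache_data: {}
```

Note that depends_on only controls start order; it does not wait for the dependency to be ready, which is why the retry logic shown earlier is still needed.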
7.3 One-Command Blog Deployment with Compose
[root@hecs-131220 home]# mkdir my_wordpress
[root@hecs-131220 home]# cd my_wordpress/
[root@hecs-131220 my_wordpress]# vim docker-compose.yml
#docker-compose.yml contents
version: "3.8"
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
  wordpress_data: {}
#Start
[root@hecs-131220 my_wordpress]# docker-compose up -d
Creating network "my_wordpress_default" with the default driver
Creating volume "my_wordpress_db_data" with default driver
Creating volume "my_wordpress_wordpress_data" with default driver
Pulling db (mysql:5.7)...
5.7: Pulling from library/mysql
6552179c3509: Pull complete
Digest: sha256:afc453de0d675083ac00d0538521f8a9a67d1cce180d70fab9925ebcc87a0eba
Status: Downloaded newer image for mysql:5.7
Pulling wordpress (wordpress:latest)...
latest: Pulling from library/wordpress
5eb5b503b376: Pull complete
Digest: sha256:3e28e1e0b732e1828028d7d500eb73f273fc8365215f633414e60cdc631e0d91
Status: Downloaded newer image for wordpress:latest
Creating my_wordpress_db_1 ... done
Creating my_wordpress_wordpress_1 ... done
#Started successfully
Test
Open http://119.3.89.83:8000/ in a browser
8. Docker Swarm
Provision four 1-core/2 GB servers, billed pay-as-you-go.
8.1 Building a Swarm Cluster
#Host 1, advertising its private network address
[root@ecs-1c8e-0003 ~]# docker swarm init --advertise-addr 192.168.0.11
Swarm initialized: current node (2j03yg2iaaz6bgglkmf6xfrfg) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4ihnjf72f5ay2zm4la4qg034wng9i97unjvd8xx42bs1f7qzta-ajva81niyh0my95e7k48r208d 192.168.0.11:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Two related commands:
docker swarm join --token SWMTKN-1-4ihnjf72f5ay2zm4la4qg034wng9i97unjvd8xx42bs1f7qzta-ajva81niyh0my95e7k48r208d 192.168.0.11:2377
#
docker swarm join-token manager
docker swarm join-token worker
#Host 2 joins the cluster
[root@ecs-1c8e-0001 ~]# docker swarm join --token SWMTKN-1-4ihnjf72f5ay2zm4la4qg034wng9i97unjvd8xx42bs1f7qzta-ajva81niyh0my95e7k48r208d 192.168.0.11:2377
This node joined a swarm as a worker.
Check node status
#Host 1
[root@ecs-1c8e-0003 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
vq3t959tp8055s04vh595w8s6 ecs-1c8e-0001 Ready Active 20.10.12
2j03yg2iaaz6bgglkmf6xfrfg * ecs-1c8e-0003 Ready Active Leader 20.10.12
#Host 1: generate the worker join command
[root@ecs-1c8e-0003 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4ihnjf72f5ay2zm4la4qg034wng9i97unjvd8xx42bs1f7qzta-ajva81niyh0my95e7k48r208d 192.168.0.11:2377
#Host 3 joins
[root@ecs-1c8e-0004 ~]# docker swarm join --token SWMTKN-1-4ihnjf72f5ay2zm4la4qg034wng9i97unjvd8xx42bs1f7qzta-ajva81niyh0my95e7k48r208d 192.168.0.11:2377
This node joined a swarm as a worker.
#Host 1: list the cluster nodes
[root@ecs-1c8e-0003 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
vq3t959tp8055s04vh595w8s6 ecs-1c8e-0001 Ready Active 20.10.12
2j03yg2iaaz6bgglkmf6xfrfg * ecs-1c8e-0003 Ready Active Leader 20.10.12
gbbhiz23k2ap2greplpz1izks ecs-1c8e-0004 Ready Active 20.10.12
#Generate a manager join token
[root@ecs-1c8e-0003 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4ihnjf72f5ay2zm4la4qg034wng9i97unjvd8xx42bs1f7qzta-0gktjdr3fypizpv9nwosb6xtv 192.168.0.11:2377
#Node 4 joins as a manager
[root@ecs-1c8e-0002 ~]# docker swarm join --token SWMTKN-1-4ihnjf72f5ay2zm4la4qg034wng9i97unjvd8xx42bs1f7qzta-0gktjdr3fypizpv9nwosb6xtv 192.168.0.11:2377
This node joined a swarm as a manager.
#Host 1: two managers, two workers
[root@ecs-1c8e-0003 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
vq3t959tp8055s04vh595w8s6 ecs-1c8e-0001 Ready Active 20.10.12
jybdg76twdo045ivbpebtou35 ecs-1c8e-0002 Ready Active Reachable 20.10.12
2j03yg2iaaz6bgglkmf6xfrfg * ecs-1c8e-0003 Ready Active Leader 20.10.12
gbbhiz23k2ap2greplpz1izks ecs-1c8e-0004 Ready Active 20.10.12
Steps summarized:
1. Initialize a manager node: docker swarm init
2. Join the remaining nodes (as manager or worker): docker swarm join
8.2 Creating Services Elastically
Swarm managers use Raft consensus, so a majority of manager nodes must stay reachable for the cluster to keep working; with only two managers, losing either one breaks the quorum, which is why three or more managers are recommended in practice.
Keywords: elasticity, scaling in/out, clustering.
In cluster mode: swarm → docker service
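The manager-availability rule above is just quorum arithmetic: a swarm stays manageable only while a strict majority of managers is reachable. A quick sketch of the math (function names are illustrative):

```python
def quorum(managers):
    # Smallest strict majority of manager nodes.
    return managers // 2 + 1

def tolerated_failures(managers):
    # How many managers can go down before quorum is lost.
    return (managers - 1) // 2

for n in (1, 2, 3, 5):
    print(f"{n} managers: quorum={quorum(n)}, can lose {tolerated_failures(n)}")
# With 2 managers, losing either one loses the quorum,
# which is why 3 (or 5) managers are the usual recommendation.
```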
[root@ecs-1c8e-0003 ~]# docker service --help
Usage: docker service COMMAND
Manage services
Commands:
create Create a new service
inspect Display detailed information on one or more services
logs Fetch the logs of a service or task
ls List services
ps List the tasks of one or more services
rm Remove one or more services
rollback Revert changes to a service's configuration
scale Scale one or multiple replicated services
update Update a service
Run 'docker service COMMAND --help' for more information on a command.
Services enable gray (canary) releases:
[root@ecs-1c8e-0003 ~]# docker service create -p 8888:80 --name mynginx nginx
j34hu7vpm0mg3ffc1zzisjdeb
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
docker run starts a single container and cannot scale.
docker service creates a service, which supports scaling and rolling updates.
#service
[root@ecs-1c8e-0003 ~]# docker service ps mynginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xeoi6ausmzi1 mynginx.1 nginx:latest ecs-1c8e-0001 Running Running 4 minutes ago
[root@ecs-1c8e-0003 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
j34hu7vpm0mg mynginx replicated 1/1 nginx:latest *:8888->80/tcp
#Scale out
[root@ecs-1c8e-0003 ~]# docker service update --replicas 3 mynginx
mynginx
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
#Scale in
[root@ecs-1c8e-0003 ~]# docker service update --replicas 1 mynginx
#docker service scale works the same way
[root@ecs-1c8e-0003 ~]# docker service scale mynginx=5
mynginx scaled to 5
overall progress: 5 out of 5 tasks
1/5: running [==================================================>]
2/5: running [==================================================>]
3/5: running [==================================================>]
4/5: running [==================================================>]
5/5: running [==================================================>]
verify: Service converged
8.3 Concept Summary
Swarm
Cluster management and orchestration. Docker can initialize a swarm cluster that other nodes then join. (managers and workers)
Node
A single Docker host. Multiple nodes form the cluster network. (manager or worker)
Service
A task definition that can run on manager or worker nodes. The core concept, and what users actually access.
Task
The command running inside a container; the unit of work within a service.
Docker Stack
Docker Secret
Docker Config
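Docker Stack ties these concepts together: a stack is a group of services deployed from a Compose-format file extended with a deploy section, via docker stack deploy -c <file> <stack>. A minimal hypothetical stack file (service name and image are illustrative):

```yaml
version: "3.8"

services:
  web:
    image: nginx:latest
    ports:
      - "8888:80"
    deploy:
      replicas: 3           # swarm schedules 3 tasks for this service
      update_config:
        parallelism: 1      # roll out updates one task at a time
      restart_policy:
        condition: on-failure
```

The deploy section is only honored by swarm (docker stack deploy); plain docker-compose up ignores it.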