A Detailed Introduction to Docker Containers

1. Docker Overview

1.1 What is Docker

Docker is a container engine: a platform for packaging and deploying applications, not merely a virtualization technology.

It has the following key features and advantages:

1. Lightweight virtualization

Docker containers are lighter and more efficient than traditional virtual machines; they start and stop quickly and consume fewer system resources.

2. Consistency

An application behaves the same across environments (development, testing, production).

Whether running locally or in the cloud, the runtime environment stays identical, reducing problems caused by environment differences.

3. Portability

A Docker container can easily be moved from one platform to another without worrying about differences in dependencies or environment configuration.

For example, a container developed locally can be deployed to a cloud server without changes.

4. Efficient resource utilization

Multiple Docker containers share the host's operating-system kernel, making more effective use of system resources.

5. Easy deployment and scaling

New application instances can be deployed quickly and scaled out horizontally on demand.

In short, Docker greatly simplifies application development, deployment, and management, improving development efficiency and operational convenience. It is widely used in modern software development and cloud computing.

2. Deploying Docker

Everything below is done on RHEL 9.

Here Docker is deployed by importing a prepared tarball of packages.

#### Import and extract the docker tarball
[root@docker-node1 ~]# tar zxf docker.tar.gz 
[root@docker-node1 ~]# ls
anaconda-ks.cfg
busybox-latest.tar.gz
containerd.io-1.7.20-3.1.el9.x86_64.rpm
docker
docker-buildx-plugin-0.16.2-1.el9.x86_64.rpm
docker-ce-27.1.2-1.el9.x86_64.rpm
docker-ce-cli-27.1.2-1.el9.x86_64.rpm
docker-ce-rootless-extras-27.1.2-1.el9.x86_64.rpm
docker-compose-plugin-2.29.1-1.el9.x86_64.rpm
docker.tar.gz
game2048.tar.gz
mario.tar.gz
nginx-latest.tar.gz

#### Install the extracted RPM packages
[root@docker-node1 ~]# dnf install *.rpm -y 

#### Start docker and enable it at boot
[root@docker-node1 ~]# systemctl enable --now docker


#### View docker information
[root@docker-node1 ~]#  docker info 

#### List docker images
[root@docker-node1 ~]# docker images 
REPOSITORY           TAG       IMAGE ID       CREATED         SIZE

#### Configure a docker registry mirror (accelerator)
[root@docker-node1 ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://docker.m.daocloud.io"]
}
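A typo in daemon.json will stop the docker daemon from starting at all, so it is worth syntax-checking the JSON before restarting. A small sketch (it assumes python3 is installed, as it is on RHEL 9):

```shell
# stage the mirror config in a temp file and syntax-check it before installing
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "registry-mirrors": ["https://docker.m.daocloud.io"]
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json OK"
# prints: daemon.json OK
# if the check passes: cp "$tmp" /etc/docker/daemon.json && systemctl restart docker
```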

[root@docker-node1 ~]# systemctl restart docker

#### With the mirror configured, pulls are much faster
[root@docker-node1 ~]# docker pull busybox:latest
latest: Pulling from library/busybox
Digest: sha256:9ae97d36d26566ff84e8893c64a6dc4fe8ca6d1144bf5b87b2b85a32def253c7
Status: Image is up to date for busybox:latest
docker.io/library/busybox:latest

Alternatively, Docker can be deployed by installing it from a network repository.

#### Configure the repositories
]# cd /etc/yum.repos.d
]# vim docker.repo
[docker]
name=docker-ce
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/x86_64/stable/
gpgcheck=0
[centos]
name=extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/7/extras/x86_64
gpgcheck=0



#### Install docker
]# yum install -y docker-ce 

# Edit the docker unit file so it uses iptables for its network setup (the default is nftables)
[root@docker ~]# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=true
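Editing the packaged unit file directly means the change is overwritten on the next docker-ce upgrade. An alternative sketch (the drop-in file name is arbitrary) is a systemd drop-in that overrides only ExecStart:

```ini
# /etc/systemd/system/docker.service.d/iptables.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=true
```

Then run `systemctl daemon-reload && systemctl restart docker`. The empty `ExecStart=` line is required to clear the original value before setting the new one.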

### Start docker
]# systemctl enable --now docker

### View docker information
]# docker info

If the network connection is good, the needed images can be pulled directly with docker pull.

If not, previously prepared image tarballs can be imported into the system.

#### Here the prepared tarballs are loaded with docker load
[root@docker-node1 ~]# docker load -i mario.tar.gz 
[root@docker-node1 ~]# docker load -i nginx-latest.tar.gz 
[root@docker-node1 ~]# docker load -i busybox-latest.tar.gz 

#### List the images
[root@docker-node1 ~]# docker images
REPOSITORY           TAG       IMAGE ID       CREATED         SIZE
nginx                latest    5ef79149e0ec   12 days ago     188MB
busybox              latest    65ad0d468eb1   15 months ago   4.26MB
timinglee/game2048   latest    19299002fdbe   7 years ago     55.5MB
timinglee/mario      latest    9a35a9e43e8c   8 years ago     198MB

### Run nginx: -d runs in the background, --rm deletes the container when it stops, --name webserver sets the name, -p maps ports (host port first, container port second)
[root@docker-node1 ~]# docker run -d --rm --name webserver -p 80:80 nginx
ad97ad260c0403a2365d4c981d2e314f578be39397220f3cca83f65c19fb38a0

### Check the running container
[root@docker-node1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS                               NAMES
ad97ad260c04   nginx     "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp   webserver
#### Then open a browser and visit 172.25.250.100

#### Remove the container
[root@docker-node1 ~]# docker rm -f webserver 
webserver
[root@docker-node1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

#### Run another one
[root@docker-node1 ~]# docker run -d --rm --name game2 -p 80:8080 timinglee/mario
fd3834014cb74e3da4a04a215bf764bd837f7aac0c4c5ca6691004ac759c3208
[root@docker-node1 ~]# docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED              STATUS              PORTS                                   NAMES
fd3834014cb7   timinglee/mario   "python3 -m http.ser…"   About a minute ago   Up About a minute   0.0.0.0:80->8080/tcp, :::80->8080/tcp   game2
#### Then visit 172.25.250.100

3. Basic Docker Operations

3.1 Docker Image Management

3.1.1 Searching for Images

[root@Docker-node1 ~]# docker search nginx
NAME      DESCRIPTION                STARS     OFFICIAL
nginx     Official build of Nginx.   20094     [OK]
... (output truncated)

3.1.2 Pulling Images

# Pull images from a registry
[root@Docker-node1 ~]# docker pull busybox
[root@Docker-node1 ~]# docker pull nginx:1.26-alpine
# List local images
[root@Docker-node1 ~]# docker images

3.1.3 Inspecting Image Information

[root@Docker-node1 ~]# docker image inspect nginx:1.26-alpine

3.1.4 Exporting Images

-o: specifies the output file;

multiple images can be exported into one file;

note that docker save always writes a plain tar stream; naming the file .tar.gz does not by itself compress it (pipe the output through gzip to actually compress).

#### Export an image
[root@docker-node1 ~]# docker image save -o nginx-1.26-alpine.tar.gz nginx:1.26-alpine
[root@docker-node1 ~]# ls
anaconda-ks.cfg
busybox-latest.tar.gz
containerd.io-1.7.20-3.1.el9.x86_64.rpm
docker
docker-buildx-plugin-0.16.2-1.el9.x86_64.rpm
docker-ce-27.1.2-1.el9.x86_64.rpm
docker-ce-cli-27.1.2-1.el9.x86_64.rpm
docker-ce-rootless-extras-27.1.2-1.el9.x86_64.rpm
docker-compose-plugin-2.29.1-1.el9.x86_64.rpm
docker.tar.gz
game2048.tar.gz
mario.tar.gz
nginx-1.26-alpine.tar.gz      #### the exported image tarball
nginx-latest.tar.gz

#### Check the tarball size
[root@docker-node1 ~]# du -sh nginx-1.26-alpine.tar.gz 
43M	nginx-1.26-alpine.tar.gz
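Since `docker save` writes an uncompressed tar stream regardless of the file name, real compression comes from piping through gzip (`docker load` accepts gzipped input transparently). The pipe pattern, demonstrated here on ordinary files so it runs without docker:

```shell
# gzip as a filter: repetitive data compresses dramatically
dir=$(mktemp -d)
yes "the quick brown fox" | head -n 10000 > "$dir/data.txt"
tar cf "$dir/data.tar" -C "$dir" data.txt
gzip -c "$dir/data.tar" > "$dir/data.tar.gz"
echo "tar: $(stat -c %s "$dir/data.tar") bytes, gzipped: $(stat -c %s "$dir/data.tar.gz") bytes"
# the same pattern for an image:  docker save nginx:1.26-alpine | gzip > nginx-1.26-alpine.tar.gz
```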


3.1.5 Saving, Exporting, Importing, and Deleting All Images

[root@docker-node1 ~]# docker images
REPOSITORY           TAG           IMAGE ID       CREATED         SIZE
nginx                1.26-alpine   9703b2608a98   12 days ago     43.3MB
nginx                latest        5ef79149e0ec   12 days ago     188MB
busybox              latest        65ad0d468eb1   15 months ago   4.26MB
timinglee/game2048   latest        19299002fdbe   7 years ago     55.5MB
timinglee/mario      latest        9a35a9e43e8c   8 years ago     198MB


#### Save all images into one file
#### docker images lists all images; awk 'NR>1{print $1":"$2}' keeps every line after the header as REPOSITORY:TAG
[root@Docker-node1 ~]# docker save `docker images | awk 'NR>1{print $1":"$2}'` -o images.tar.gz

[root@docker-node1 ~]# ls
anaconda-ks.cfg
busybox-latest.tar.gz
containerd.io-1.7.20-3.1.el9.x86_64.rpm
docker
docker-buildx-plugin-0.16.2-1.el9.x86_64.rpm
docker-ce-27.1.2-1.el9.x86_64.rpm
docker-ce-cli-27.1.2-1.el9.x86_64.rpm
docker-ce-rootless-extras-27.1.2-1.el9.x86_64.rpm
docker-compose-plugin-2.29.1-1.el9.x86_64.rpm
docker.tar.gz
game2048.tar.gz
images.tar.gz             #### the generated archive containing all the images
mario.tar.gz
nginx-1.26-alpine.tar.gz
nginx-latest.tar.gz

#### Delete all images
[root@docker-node1 ~]# docker rmi `docker images | awk 'NR>1{print $1":"$2}'`

#### All gone. Note this only works when no image is in use; an image used by a running container cannot be removed
[root@docker-node1 ~]# docker images
REPOSITORY        TAG       IMAGE ID       CREATED       SIZE

#### Import images.tar.gz
[root@docker-node1 ~]# docker load -i images.tar.gz 

#### Check again
[root@docker-node1 ~]# docker images
REPOSITORY           TAG           IMAGE ID       CREATED         SIZE
nginx                1.26-alpine   9703b2608a98   12 days ago     43.3MB
nginx                latest        5ef79149e0ec   12 days ago     188MB
busybox              latest        65ad0d468eb1   15 months ago   4.26MB
timinglee/game2048   latest        19299002fdbe   7 years ago     55.5MB
timinglee/mario      latest        9a35a9e43e8c   8 years ago     198MB
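The backquoted expression used above is worth unpacking: awk skips the header row (NR>1) and glues column 1 and column 2 together as REPOSITORY:TAG. Simulated here on canned `docker images` output so it runs anywhere:

```shell
# NR>1 skips the header row; $1":"$2 emits REPOSITORY:TAG for every image
printf 'REPOSITORY TAG IMAGE_ID\nnginx latest 5ef79149e0ec\nbusybox latest 65ad0d468eb1\n' \
  | awk 'NR>1{print $1":"$2}'
# prints:
# nginx:latest
# busybox:latest
```

Current docker versions can also produce this list directly, without awk: `docker images --format '{{.Repository}}:{{.Tag}}'`.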


3.1.6 Deleting Images

#### Delete the image nginx:1.26-alpine
[root@docker-node1 ~]# docker rmi nginx:1.26-alpine 

3.2 Common Container Operations

-d # run in the background

-i # interactive mode

-t # allocate a terminal (TTY)

--name # assign a container name

-p # port mapping; -p 80:8080 maps container port 8080 to host port 80

--rm # automatically delete the container when it stops

--network # specify the network the container uses

3.2.1 Starting, Stopping, and Deleting Containers

#### -d runs in the background
#### -i means interactive, -t allocates a terminal

[root@Docker-node1 ~]# docker run -d --name mario -p 80:8080 timinglee/mario
[root@Docker-node1 ~]# docker run -it --name centos7 centos:7
[root@3ba22e59734f /]# # now inside the container; press <ctrl>+<d> to exit and stop it, or <ctrl>+<p><q> to detach without stopping it

# Re-attach to the container
[root@docker ~]# docker attach centos7
[root@3ba22e59734f /]#
# Execute a command inside a container
[root@docker ~]# docker exec -it test ifconfig
lo       Link encap:Local Loopback
         inet addr:127.0.0.1 Mask:255.0.0.0
         inet6 addr: ::1/128 Scope:Host
         UP LOOPBACK RUNNING MTU:65536 Metric:1
         RX packets:0 errors:0 dropped:0 overruns:0 frame:0
         TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

### Stop the container
[root@docker-node1 ~]# docker stop mario
mario

#### Check
[root@docker-node1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

#### -i means interactive, -t allocates a terminal
[root@docker-node1 ~]# docker run -it --name test busybox
/ # 
/ # 
/ # 
/ # 

#### The container has its own environment: date inside it shows UTC, while the host shows CST
/ # ls
bin    dev    etc    home   lib    lib64  proc   root   sys    tmp    usr    var
/ # date
Tue Aug 27 15:18:08 UTC 2024
/ # exit
[root@docker-node1 ~]# date
Tue Aug 27 11:18:27 PM CST 2024

#### Delete the test container
[root@docker-node1 ~]# docker rm test 
test

#### View the nginx image's layer history (its EXPOSE layer shows the port)
[root@docker-node1 ~]# docker history nginx:latest 

#### List exited or stopped containers as well
[root@docker-node1 ~]# docker ps -a 

#### It ran, but this container needs an interactive terminal, so it exited at once; because of --rm it was deleted on stop, so it does not even show in docker ps -a
[root@docker-node1 ~]# docker run --rm -d --name test busybox:latest 
597edc3e6badf5b1f5a4239a273469d195c055125f359f2e86a81b81788189f6
[root@docker-node1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@docker-node1 ~]#  docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

3.2.2 Committing Container Changes

By default, when a container is deleted, everything done inside it is cleaned up, including files you wanted to keep.

To save changes permanently, commit them; the commit produces a new image.

Running a container from the new image then shows the committed content.

#### Run a container and create file and renfile inside it
[root@docker-node1 ~]# docker run -it --name test busybox
/ # 
/ # touch file
/ # ls
bin    etc    home   lib64  root   tmp    var
dev    file   lib    proc   sys    usr
/ # touch renfile
/ # ls
bin      etc      home     lib64    renfile  sys      usr
dev      file     lib      proc     root     tmp      var
/ # exit

#### After exiting, delete the container and start a new one: nothing was saved
[root@docker-node1 ~]# docker rm test 
test
[root@docker-node1 ~]# docker run -it --name test busybox
/ # ls
bin    dev    etc    home   lib    lib64  proc   root   sys    tmp    usr    var
/ # 

#### Run test again and create renfile
[root@docker-node1 ~]# docker run -it --name test busybox
/ # touch renfile
/ # ls
bin      etc      lib      proc     root     tmp      var
dev      home     lib64    renfile  sys      usr
/ # [root@docker-node1 ~]# docker ps     #### detach with ctrl+p,q and check the process
CONTAINER ID   IMAGE     COMMAND   CREATED          STATUS          PORTS     NAMES
f0c87b065a5b   busybox   "sh"      32 seconds ago   Up 30 seconds             test
#### Commit
[root@docker-node1 ~]# docker commit -m "add renfile" test busybox:v1  
sha256:1783207b16680fbbd51ea3f69bd64a014b09b2842f3ad2b470a6789aaddd4bc9
#### A new busybox:v1 image appears
[root@docker-node1 ~]# docker images
REPOSITORY           TAG           IMAGE ID       CREATED         SIZE
busybox              v1            1783207b1668   8 seconds ago   4.26MB
nginx                1.26-alpine   9703b2608a98   12 days ago     43.3MB
nginx                latest        5ef79149e0ec   12 days ago     188MB
busybox              latest        65ad0d468eb1   15 months ago   4.26MB
timinglee/game2048   latest        19299002fdbe   7 years ago     55.5MB
timinglee/mario      latest        9a35a9e43e8c   8 years ago     198MB
[root@docker-node1 ~]# docker rm -f test 
test
#### A new container from busybox:latest does not contain renfile
[root@docker-node1 ~]# docker run -it --name test busybox:latest 
/ # ls
bin    dev    etc    home   lib    lib64  proc   root   sys    tmp    usr    var
/ # exit
[root@docker-node1 ~]# docker rm test 
test

### A container from busybox:v1 does contain renfile
[root@docker-node1 ~]# docker run -it --name test busybox:v1 
/ # ls
bin      etc      lib      proc     root     tmp      var
dev      home     lib64    renfile  sys      usr
/ # 

#### Inspect busybox:v1: two layers. The commit added a writable layer; committing again would turn the current writable layer into a read-only layer, and so on, up to at most 127 layers
[root@docker-node1 ~]# docker history  busybox:v1
IMAGE          CREATED         CREATED BY                          SIZE      COMMENT
1783207b1668   4 minutes ago   sh                                  17B       add renfile 
65ad0d468eb1   15 months ago   BusyBox 1.36.1 (glibc), Debian 12   4.26MB    

#### busybox:latest: a single (read-only) layer
[root@docker-node1 ~]# docker history  busybox:latest 
IMAGE          CREATED         CREATED BY                          SIZE      COMMENT
65ad0d468eb1   15 months ago   BusyBox 1.36.1 (glibc), Debian 12   4.26MB    

3.2.3 Copying Files Between Host and Container

#### Remove the old container
[root@docker-node1 ~]# docker rm -f test 
test

#### Create a new one and create renfile inside it
[root@docker-node1 ~]# docker run -it --name test busybox:v1 
/ # touch renfile
/ # ls
bin      etc      lib      proc     root     tmp      var
dev      home     lib64    renfile  sys      usr
/ # [root@docker-node1 ~]# docker ps
CONTAINER ID   IMAGE        COMMAND   CREATED          STATUS          PORTS     NAMES
4e2c9822df68   busybox:v1   "sh"      22 seconds ago   Up 21 seconds             test

#### Copy a file from the container to /mnt on the host
[root@docker-node1 ~]# docker cp test:/renfile /mnt
Successfully copied 1.54kB to /mnt
[root@docker-node1 ~]# ls /mnt/
hgfs  renfile

### Copy a file from the host into the container
[root@docker-node1 ~]# mkdir /etc/zhangsan
[root@docker-node1 ~]# docker cp /etc/zhangsan/ test:/
Successfully copied 1.54kB to test:/
[root@docker-node1 ~]# docker attach test 
/ # 
/ # 
/ # ls
bin       etc       lib       proc      root      tmp       var
dev       home      lib64     renfile   sys       usr       zhangsan
/ # 

3.2.4 Viewing Container Logs

[root@docker-node1 ~]# docker logs test

4. Building Docker Images

4.1 Docker Image Structure

Containers share the host's kernel.

A base image provides a minimal Linux distribution.

A single docker host can run multiple Linux distributions side by side.

The biggest benefit of the layered structure is resource sharing.

4.2 How Images Run

Copy-on-Write: the writable container layer

All image layers below the container layer are read-only.

Docker looks up files through the layers from top to bottom.

The container layer holds only the changes; the image itself is never modified.

An image can have at most 127 layers.
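The top-down lookup can be illustrated with a toy model (plain directories standing in for layers; real docker uses an overlay filesystem): the first layer, counted from the writable container layer downward, that contains the file wins.

```shell
# toy model of layered lookup: directories stand in for image layers
root=$(mktemp -d)
mkdir -p "$root/container-layer" "$root/image-layer1" "$root/base-layer"
echo "from base image"      > "$root/base-layer/app.conf"
echo "from container layer" > "$root/container-layer/app.conf"
# search from the top (writable) layer down; the first hit wins
for layer in container-layer image-layer1 base-layer; do
    if [ -e "$root/$layer/app.conf" ]; then
        cat "$root/$layer/app.conf"
        break
    fi
done
# prints: from container layer
```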

4.3 Ways to Obtain Images

Base images are usually provided by the software's official source.

Enterprise images can be built from an official image plus a Dockerfile.

There are two ways to obtain an image:

docker pull <image address>

docker load -i <local image tarball>

4.4 Building an Image

4.4.1 Build Parameters

Example:

[root@docker-node1 ~]# cd /mnt/
[root@docker-node1 mnt]# ls
hgfs  renfile

#### Import the centos:7 image (uploaded here with rz)
[root@docker-node1 mnt]# rz -E
rz waiting to receive.
[root@docker-node1 mnt]# ls
centos-7.tar.gz  hgfs  renfile
[root@docker-node1 mnt]# docker load  -i /mnt/centos-7.tar.gz 
174f56854903: Loading layer  211.7MB/211.7MB
Loaded image: centos:7
[root@docker-node1 mnt]# cd

[root@docker-node2 ~]# cd docker/
[root@docker-node2 docker]# mv /mnt/nginx-1.26.1.tar.gz .
[root@docker-node2 docker]# ls
Dockerfile nginx-1.26.1.tar.gz  


# The Dockerfile
[root@docker-node1 docker]# vim Dockerfile 
FROM centos:repo
LABEL USER=ren@123
ADD nginx-1.26.1.tar.gz /mnt
WORKDIR /mnt/nginx-1.26.1
RUN yum install gcc make pcre-devel openssl-devel -y
RUN ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_stub_status_module
RUN make
RUN make install
EXPOSE 80 443
VOLUME ["/usr/local/nginx/html"]
CMD ["/usr/local/nginx/sbin/nginx","-g","daemon off;"]


# Configure httpd and the mount point
[root@docker-node1 docker]# yum install httpd -y
[root@docker-node1 docker]# vim /etc/httpd/conf/httpd.conf #### change the Listen port to 8888
[root@docker-node1 docker]# systemctl restart httpd


#### Attach the rhel7.9 ISO in VMware (the same CD slot rhel9 uses) so it can be mounted on this host
[root@docker-node1 docker]# mkdir /var/www/html/rhel7.9

[root@docker-node1 ~]# mount /dev/sr1 /var/www/html/rhel7.9/
mount: /var/www/html/rhel7.9: WARNING: source write-protected, mounted read-only.

#### Start a centos:7 container (it must be running for its network to be usable)
[root@docker-node1 ~]# docker run -it --name centos centos:7
[root@99aa9aaff3f3 /]# cd /etc/yum.repos.d/
[root@99aa9aaff3f3 yum.repos.d]# ls
CentOS-Base.repo  CentOS-Debuginfo.repo  CentOS-Sources.repo  CentOS-fasttrack.repo
CentOS-CR.repo    CentOS-Media.repo      CentOS-Vault.repo    CentOS-x86_64-kernel.repo
[root@99aa9aaff3f3 yum.repos.d]# rm -rf *
[root@99aa9aaff3f3 yum.repos.d]# ls

#### Configure the repository
[root@99aa9aaff3f3 yum.repos.d]# vi centos7.repo
[centos7]
name=centos7
baseurl=http://172.17.0.1:8888/rhel7.9
gpgcheck=0


#### Commit
[root@docker-node1 ~]# docker commit -m "add repo" centos centos:repo
sha256:15a9926e439d1447f1629035a85e86bb241a510b1479919ab6aeb13b8e386929

#### The repo-tagged image is now visible
[root@docker-node1 ~]# docker images
REPOSITORY                      TAG           IMAGE ID       CREATED         SIZE
centos                          repo          15a9926e439d   8 seconds ago   204MB
busybox                         v1            1783207b1668   22 hours ago    4.26MB
.
.
.
centos                          7             eeb6ee3f44bd   2 years ago     204MB

#### The container can now be exited and removed
[root@99aa9aaff3f3 yum.repos.d]# exit
exit
[root@docker-node1 ~]# docker rm centos 
centos


[root@docker-node1 ~]# cd docker/
[root@docker-node1 docker]# docker build -t nginx:v1 .
[+] Building 22.2s (12/12) FINISHED                                          docker:default
 => [internal] load build definition from Dockerfile                                   0.0s
 => => transferring dockerfile: 406B                                                   0.0s
 => [internal] load metadata for docker.io/library/centos:repo                         0.0s
 => [internal] load .dockerignore                                                      0.0s
 => => transferring context: 2B                                                        0.0s
 => [internal] load build context                                                      0.0s
 => => transferring context: 42B                                                       0.0s
 => [1/7] FROM docker.io/library/centos:repo                                           0.0s
 => CACHED [2/7] ADD nginx-1.26.1.tar.gz /mnt                                          0.0s
 => CACHED [3/7] WORKDIR /mnt/nginx-1.26.1                                             0.0s
 => CACHED [4/7] RUN yum install gcc make pcre-devel openssl-devel -y                  0.0s
 => [5/7] RUN ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-htt  4.7s
 => [6/7] RUN make                                                                    16.8s
 => [7/7] RUN make install                                                             0.2s
 => exporting to image                                                                 0.5s 
 => => exporting layers                                                                0.5s 
 => => writing image sha256:a9535e22b38f210b4a1bd985c631b678b198b0ac130d46873f523e0b1  0.0s 
 => => naming to docker.io/library/nginx:v1                                            0.0s 
[root@docker-node1 docker]# docker images
#### nginx:v1 is the image we just built, but it is large
nginx                           v1            a9535e22b38f   17 seconds ago   356MB
centos                          repo          3034f2269768   6 minutes ago    204MB
.
.
.
centos                          7             eeb6ee3f44bd   2 years ago     204MB

4.5 Image Optimization

4.5.1 Optimization Strategies

Choose the most minimal base image.

Reduce the number of image layers.

Clean up the intermediate products of the build.

4.5.2 Optimization Examples

Method 1: reduce the number of layers
[root@docker-node1 docker]# vim Dockerfile 
[root@docker-node1 docker]# docker build -t nginx:v1 .
[+] Building 26.0s (9/9) FINISHED                                            docker:default
 => [internal] load build definition from Dockerfile                                   0.0s
 => => transferring dockerfile: 448B                                                   0.0s
 => [internal] load metadata for docker.io/library/centos:repo                         0.0s
 => [internal] load .dockerignore                                                      0.0s
 => => transferring context: 2B                                                        0.0s
 => [internal] load build context                                                      0.0s
 => => transferring context: 42B                                                       0.0s
 => [1/4] FROM docker.io/library/centos:repo                                           0.0s
 => CACHED [2/4] ADD nginx-1.26.1.tar.gz /mnt                                          0.0s
 => CACHED [3/4] WORKDIR /mnt/nginx-1.26.1                                             0.0s
 => [4/4] RUN yum install gcc make pcre-devel openssl-devel -y && ./configure --pref  25.8s
 => exporting to image                                                                 0.2s
 => => exporting layers                                                                0.2s
 => => writing image sha256:67e189e5bd8a8b8b6f5bf738880eba62d52dc37b599e4873f6d79c9b3  0.0s 
 => => naming to docker.io/library/nginx:v1                                            0.0s 
 
#### Smaller than the original 356MB: now 292MB
[root@docker-node1 docker]# docker images                                                   
REPOSITORY                      TAG           IMAGE ID       CREATED             SIZE       
nginx                           v1            67e189e5bd8a   7 seconds ago       292MB

Method 2: multi-stage build

[root@docker-node1 docker]# cat Dockerfile 
FROM centos:repo AS build
LABEL USER=ren@123
ADD nginx-1.26.1.tar.gz /mnt
WORKDIR /mnt/nginx-1.26.1
RUN yum install gcc make pcre-devel openssl-devel -y && ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_stub_status_module && make && make install && rm -fr /mnt/nginx-1.26.1 && yum clean all

FROM centos:repo
LABEL USER=ren@123
COPY --from=build /usr/local/nginx /usr/local/nginx
EXPOSE 80 
VOLUME ["/usr/local/nginx/html"]
CMD ["/usr/local/nginx/sbin/nginx","-g","daemon off;"]


### Build nginx:v2
[root@docker-node1 docker]# docker build -t nginx:v2 .
[+] Building 0.1s (10/10) FINISHED                                           docker:default
 => [internal] load build definition from Dockerfile                                   0.0s
 => => transferring dockerfile: 543B                                                   0.0s
 => [internal] load metadata for docker.io/library/centos:repo                         0.0s
 => [internal] load .dockerignore                                                      0.0s
 => => transferring context: 2B                                                        0.0s
 => [internal] load build context                                                      0.0s
 => => transferring context: 42B                                                       0.0s
 => CACHED [build 1/4] FROM docker.io/library/centos:repo                              0.0s
 => CACHED [build 2/4] ADD nginx-1.26.1.tar.gz /mnt                                    0.0s
 => CACHED [build 3/4] WORKDIR /mnt/nginx-1.26.1                                       0.0s
 => CACHED [build 4/4] RUN yum install gcc make pcre-devel openssl-devel -y && ./conf  0.0s
 => [stage-1 2/2] COPY --from=build /usr/local/nginx /usr/local/nginx                  0.0s
 => exporting to image                                                                 0.0s
 => => exporting layers                                                                0.0s
 => => writing image sha256:57880b19058860d1a36b9cc5a3533fced42073160a40c6e5ccdd13841  0.0s
 => => naming to docker.io/library/nginx:v2                                            0.0s
 
### nginx is smaller still
[root@docker-node1 docker]# docker images
REPOSITORY                      TAG           IMAGE ID       CREATED             SIZE
nginx                           v2            57880b190588   6 seconds ago       210MB
nginx                           v1            67e189e5bd8a   8 minutes ago       292MB

Method 3: use the most minimal base image

#### Load the minimal (distroless) base image into the system
[root@docker-node1 ~]# docker load -i debian11.tar.gz
5342a2647e87: Loading layer  327.7kB/327.7kB
577c8ee06f39: Loading layer   51.2kB/51.2kB
9ed498e122b2: Loading layer  3.379MB/3.379MB
4d049f83d9cf: Loading layer  1.536kB/1.536kB
af5aa97ebe6c: Loading layer   2.56kB/2.56kB
ac805962e479: Loading layer   2.56kB/2.56kB
bbb6cacb8c82: Loading layer   2.56kB/2.56kB
2a92d6ac9e4f: Loading layer  1.536kB/1.536kB
1a73b54f556b: Loading layer  10.24kB/10.24kB
c048279a7d9f: Loading layer  3.072kB/3.072kB
2388d21e8e2b: Loading layer  225.3kB/225.3kB
8451c71f8c1e: Loading layer  12.92MB/12.92MB
24aacbf97031: Loading layer  3.983MB/3.983MB
6835249f577a: Loading layer  1.505MB/1.505MB
Loaded image: gcr.io/distroless/base-debian11:latest

#### Put nginx-1.23.tar.gz into /root/docker
[root@docker-node1 docker]# ls
Dockerfile         nginx-1.26.1         nginx-1.26.1.tar.gz.0  renfile1  renfile3
nginx-1.23.tar.gz  nginx-1.26.1.tar.gz  renfile                renfile2  renfile.gz

[root@docker-node1 docker]# cat Dockerfile 
FROM nginx:1.23 AS base
ARG TIME_ZONE
RUN mkdir -p /opt/var/cache/nginx && \
   cp -a --parents /usr/lib/nginx /opt && \
   cp -a --parents /usr/share/nginx /opt && \
   cp -a --parents /var/log/nginx /opt && \
   cp -aL --parents /var/run /opt && \
   cp -a --parents /etc/nginx /opt && \
   cp -a --parents /etc/passwd /opt && \
   cp -a --parents /etc/group /opt && \
   cp -a --parents /usr/sbin/nginx /opt && \
   cp -a --parents /usr/sbin/nginx-debug /opt && \
   cp -a --parents /lib/x86_64-linux-gnu/ld-* /opt && \
   cp -a --parents /usr/lib/x86_64-linux-gnu/libpcre* /opt && \
   cp -a --parents /lib/x86_64-linux-gnu/libz.so.* /opt && \
   cp -a --parents /lib/x86_64-linux-gnu/libc* /opt && \
   cp -a --parents /lib/x86_64-linux-gnu/libdl* /opt && \
   cp -a --parents /lib/x86_64-linux-gnu/libpthread* /opt && \
   cp -a --parents /lib/x86_64-linux-gnu/libcrypt* /opt && \
   cp -a --parents /usr/lib/x86_64-linux-gnu/libssl.so.* /opt && \
   cp -a --parents /usr/lib/x86_64-linux-gnu/libcrypto.so.* /opt && \
   cp /usr/share/zoneinfo/${TIME_ZONE:-ROC} /opt/etc/localtime
FROM gcr.io/distroless/base-debian11
COPY --from=base /opt /
EXPOSE 80 443
ENTRYPOINT ["nginx", "-g", "daemon off;"]
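Most of the work in this Dockerfile is done by GNU cp's --parents flag, which recreates the full (relative) source path underneath the destination directory. A quick demonstration on ordinary files:

```shell
# --parents keeps the source path intact under the destination directory
work=$(mktemp -d)
mkdir -p "$work/etc/nginx" "$work/opt"
echo "user nginx;" > "$work/etc/nginx/nginx.conf"
cd "$work"
cp -a --parents etc/nginx opt/   # creates opt/etc/nginx/nginx.conf
ls opt/etc/nginx
# prints: nginx.conf
```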

[root@docker-node1 docker]# docker load -i nginx-1.23.tar.gz 
### Build nginx:v3
[root@docker-node1 docker]# docker build -t nginx:v3 .

5. Docker Registry Management

5.1 What is a Docker Registry

A Docker registry is a centralized repository for storing and distributing Docker images.

It works like a large image warehouse: developers can push the images they build to the registry, and pull the images they need from it.

Registries can be public or private:

  • Public registries, such as Docker Hub, are accessible to everyone. Many common software packages provide images on Docker Hub, so users can fetch and use them directly.

    • For example, to deploy an Nginx server you can simply pull the Nginx image from Docker Hub.

  • Private registries are built and managed by an organization or individual, for internal images that should not be public.

    • For example, a company may build customized images for its business applications and keep them in its own private registry for security and access control.

Through registries, developers can conveniently share and reuse images, speeding up application development and deployment.

5.2 Docker Hub

Website: https://hub.docker.com/

Docker Hub is the public image registry service provided officially by Docker.

It is one of the best-known and most widely used registries in the Docker ecosystem, hosting a huge number of official and community-contributed images.

Some key features and advantages of Docker Hub:

  1. Rich image resources: images for common operating systems, language runtimes, databases, web servers, and many other applications.

    For example, you can easily find images for operating systems such as Ubuntu and CentOS, and for databases such as MySQL and Redis.

  2. Official support: a set of important images maintained by Docker itself, with assured quality and security.

  3. Community contributions: developers can freely upload and share the images they create, promoting the sharing of knowledge and resources.

  4. Version management: each image usually comes in multiple versions (tags), so users can fetch the specific version they need.

  5. Easy search: the image you need can be found quickly by keyword.

You can register your own account on the website (Sign up / Sign in).

Once registered, you can log in.

5.2.1 Using Docker Hub

# Log in to the official registry
[root@docker ~]# docker login
Log in with your Docker ID or email address to push and pull images from Docker 
Hub. If you don't have a Docker ID, head over to https://hub.docker.com/ to 
create one.
You can log in with your password or a Personal Access Token (PAT). Using a 
limited-scope PAT grants better security and is required for organizations using 
SSO. Learn more at https://docs.docker.com/go/access-tokens/
Username: timinglee
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
Login Succeeded
# Where the login credentials are stored
[root@docker ~]# cd .docker/
[root@docker .docker]# ls
config.json
[root@docker .docker]# cat config.json
{
        "auths": {
                "https://index.docker.io/v1/": {
                        "auth": "dGltaW5nbGVlOjY3NTE1MTVtaW5nemxu"
               }
       }
}
[root@docker ~]# docker tag gcr.io/distroless/base-debian11:latest timinglee/base-debian11:latest
[root@docker ~]# docker push timinglee/base-debian11:latest
The push refers to repository [docker.io/timinglee/base-debian11]
6835249f577a: Pushed
24aacbf97031: Pushed
8451c71f8c1e: Pushed
2388d21e8e2b: Pushed
c048279a7d9f: Pushed
1a73b54f556b: Pushed
2a92d6ac9e4f: Pushed
bbb6cacb8c82: Pushed
ac805962e479: Pushed
af5aa97ebe6c: Pushed
4d049f83d9cf: Pushed
9ed498e122b2: Pushed
577c8ee06f39: Pushed
5342a2647e87: Pushed
latest: digest: sha256:f8179c20f1f2b1168665003412197549bd4faab5ccc1b140c666f9b8aa958042 size: 3234
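The warning printed during docker login is worth taking seriously: the `auth` value stored in config.json is just `username:password` encoded with base64, not encrypted. With a made-up credential pair:

```shell
# docker's stored "auth" field is plain base64 of user:password
printf 'someuser:s3cret' | base64
# prints: c29tZXVzZXI6czNjcmV0
printf 'c29tZXVzZXI6czNjcmV0' | base64 -d
# prints: someuser:s3cret
```

This is why Docker recommends configuring a credential helper, or logging in with a limited-scope access token instead of your account password.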

5.3 How Docker Registries Work

There are three roles involved:

Index: the docker index service, which maintains information about user accounts, image checksums, and the public namespace.

Registry: the docker image store, holding images and graphs. It has no local database and does not authenticate users itself; authentication is done with tokens from the index auth service.

Registry Client: docker acts as a registry client, handling pushes, pulls, and client-side authorization.

5.3.1 How pull Works

Pulling an image involves the following steps:

1. The docker client sends a pull request to the index and authenticates with it.

2. The index returns an auth token and the image's location to the client.

3. The client connects to the registry indicated by the index, carrying the token.

4. The registry checks the client's token with the index to verify its identity.

5. The index confirms the token is valid.

6. The registry transfers the image to the client as requested.

5.3.2 How push Works

Uploading an image involves the following steps:

1. The client sends an upload request to the index and completes user authentication.

2. The index issues a token to the client to prove its legitimacy.

3. The client connects to the registry, carrying the token provided by the index.

4. The registry verifies the token's validity with the index.

5. The index confirms the token is valid.

6. The registry starts receiving the image uploaded by the client.

5.4 Building a Private Docker Registry

Why build a private registry?

Docker Hub is convenient, but it has limitations:

  • it requires an internet connection and can be slow

  • everyone can access it

  • for security reasons, companies may not allow their images on the public internet

The good news is that Docker has open-sourced registry, so an enterprise private registry can be built quickly.

5.4.1 Building a Simple registry

Download the registry image:

[root@docker-node1 ~]# docker pull registry
Using default tag: latest
latest: Pulling from library/registry
930bdd4d222e: Pull complete 
a15309931e05: Pull complete 
6263fb9c821f: Pull complete 
86c1d3af3872: Pull complete 
a37b1bf6a96f: Pull complete 
Digest: sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d
Status: Downloaded newer image for registry:latest
docker.io/library/registry:latest
[root@docker-node1 ~]# docker images
REPOSITORY           TAG           IMAGE ID       CREATED         SIZE
busybox              v1            1783207b1668   10 hours ago    4.26MB
nginx                1.26-alpine   9703b2608a98   13 days ago     43.3MB
nginx                latest        5ef79149e0ec   13 days ago     188MB
registry             latest        cfb4d9904335   11 months ago   25.4MB
busybox              latest        65ad0d468eb1   15 months ago   4.26MB
timinglee/game2048   latest        19299002fdbe   7 years ago     55.5MB
timinglee/mario      latest        9a35a9e43e8c   8 years ago     198MB
[root@docker-node1 ~]# 

### Check the registry port: 5000
[root@docker-node1 ~]# docker history registry 
IMAGE          CREATED         CREATED BY                                      SIZE      COMMENT
cfb4d9904335   11 months ago   CMD ["/etc/docker/registry/config.yml"]         0B        buildkit.dockerfile.v0
<missing>      11 months ago   ENTRYPOINT ["/entrypoint.sh"]                   0B        buildkit.dockerfile.v0
<missing>      11 months ago   COPY docker-entrypoint.sh /entrypoint.sh # b…   155B      buildkit.dockerfile.v0
<missing>      11 months ago   EXPOSE map[5000/tcp:{}]                         0B        buildkit.dockerfile.v0
<missing>      11 months ago   VOLUME [/var/lib/registry]                      0B        buildkit.dockerfile.v0
<missing>      11 months ago   COPY ./config-example.yml /etc/docker/regist…   295B      buildkit.dockerfile.v0
<missing>      11 months ago   RUN /bin/sh -c set -eux;  version='2.8.3';  …   17.5MB    buildkit.dockerfile.v0
<missing>      11 months ago   RUN /bin/sh -c apk add --no-cache ca-certifi…   531kB     buildkit.dockerfile.v0
<missing>      11 months ago   /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B        
<missing>      11 months ago   /bin/sh -c #(nop) ADD file:5851aef23205a072e…   7.35MB    


#### Start registry; --restart=always restarts it automatically if it ever stops
[root@docker-node1 ~]# docker run -d -p 5000:5000 --restart=always --name registry registry
5c069198b5b9a10a31c7bef6121bf948e2267f7a6604d4008624387c310fec9c
[root@docker-node1 ~]# docker ps
CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS         PORTS                                       NAMES
5c069198b5b9   registry   "/entrypoint.sh /etc…"   5 seconds ago   Up 4 seconds   0.0.0.0:5000->5000/tcp, :::5000->5000/tcp   registry

#### Upload an image to the registry
#### Tag the image to be uploaded
[root@docker-node1 ~]# docker tag busybox:latest 172.25.250.100:5000/busybox:latest

#### docker pushes over HTTPS by default, but no certificates have been set up for HTTPS yet, so the push fails
[root@docker-node1 ~]# docker push 172.25.250.100:5000/busybox:latest 
The push refers to repository [172.25.250.100:5000/busybox]
Get "https://172.25.250.100:5000/v2/": http: server gave HTTP response to HTTPS client
[root@docker-node1 ~]# vim /etc/docker/daemon.json 

##### Configure the registry as an insecure (plain-HTTP) endpoint
[root@docker-node1 ~]# cat /etc/docker/daemon.json 
{
  "insecure-registries": ["http://172.25.250.100:5000"]
}
 
[root@docker-node1 ~]# systemctl restart docker
###### Upload the image
[root@docker-node1 ~]# docker push 172.25.250.100:5000/busybox:latest 
The push refers to repository [172.25.250.100:5000/busybox]
d51af96cf93e: Pushed 
latest: digest: sha256:28e01ab32c9dbcbaae96cf0d5b472f22e231d9e603811857b295e61197e40a9b size: 527

#### Verify the image was uploaded
[root@docker-node1 ~]# curl 172.25.250.100:5000/v2/_catalog
{"repositories":["busybox"]}
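The v2 API used above can also list the tags of a single repository via `curl 172.25.250.100:5000/v2/busybox/tags/list`. A minimal parsing sketch; the JSON below is a hard-coded sample of that response, so it runs without a live registry:

```shell
# Sample of what /v2/busybox/tags/list returns; with a live registry use:
#   resp=$(curl -s 172.25.250.100:5000/v2/busybox/tags/list)
resp='{"name":"busybox","tags":["latest"]}'
# Pull the tag list out with sed/tr only (no jq dependency)
tags=$(echo "$resp" | sed -n 's/.*"tags":\[\([^]]*\)\].*/\1/p' | tr -d '"')
echo "busybox tags: $tags"
```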

4.4.2 Adding TLS-encrypted transport to the Registry

[root@docker-node1 ~]# mkdir certs

##### Add a local hosts entry
[root@docker-node1 ~]# vim /etc/hosts
172.25.250.100  docker-node1 reg.timinglee.org


#### Generate the key and certificate
[root@docker-node1 ~]# openssl req -newkey rsa:4096 \
> -nodes -sha256 -keyout certs/timinglee.org.key \
> -addext "subjectAltName = DNS:reg.timinglee.org" \
> -x509 -days 365 -out certs/timinglee.org.crt
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:shanxi
Locality Name (eg, city) [Default City]:xian
Organization Name (eg, company) [Default Company Ltd]:timinglee
Organizational Unit Name (eg, section) []:docker
Common Name (eg, your name or your server's hostname) []:reg.timinglee.org
Email Address []:admin@timinglee.org
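The interactive prompts above can also be skipped with `-subj`. A sketch of that non-interactive variant, writing into a temp directory so it does not touch `certs/`, plus a check that the SAN really landed in the certificate (modern Docker clients validate the SAN, not the CN):

```shell
# Non-interactive equivalent of the openssl req above; -subj replaces
# the prompts. Written to a temp dir to leave certs/ untouched.
dir=$(mktemp -d)
openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout "$dir/timinglee.org.key" \
  -addext "subjectAltName = DNS:reg.timinglee.org" \
  -subj "/C=CN/O=timinglee/CN=reg.timinglee.org" \
  -x509 -days 365 -out "$dir/timinglee.org.crt" 2>/dev/null
# Confirm the SAN is present (requires OpenSSL 1.1.1+ for -ext)
openssl x509 -in "$dir/timinglee.org.crt" -noout -ext subjectAltName
```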


#### Start the registry with TLS
#### /root/certs on the host is mounted at /certs inside the container
[root@docker-node1 ~]# docker run -d -p 443:443 --restart=always --name registry \
> -v /opt/registry:/var/lib/registry \
> -v /root/certs:/certs \
> -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
> -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/timinglee.org.crt \
> -e REGISTRY_HTTP_TLS_KEY=/certs/timinglee.org.key registry

#### Test
#### The Docker client does not yet trust the certificate, so TLS verification fails
[root@docker-node1 ~]# docker push reg.timinglee.org/busybox:latest 
The push refers to repository [reg.timinglee.org/busybox]
Get "https://reg.timinglee.org/v2/": tls: failed to verify certificate: x509: certificate signed by unknown authority

#### Install the certificate for the client
[root@docker-node1 ~]# mkdir /etc/docker/certs.d/reg.timinglee.org/ -p
[root@docker-node1 ~]# cp /root/certs/timinglee.org.crt /etc/docker/certs.d/reg.timinglee.org/ca.crt
[root@docker-node1 ~]# ls /etc/docker/certs.d/reg.timinglee.org/ca.crt

/etc/docker/certs.d/reg.timinglee.org/ca.crt

#### Restart docker
[root@docker-node1 ~]# systemctl restart docker

### Push again
[root@docker-node1 ~]# docker push reg.timinglee.org/busybox:latest
The push refers to repository [reg.timinglee.org/busybox]
d51af96cf93e: Pushed 
latest: digest: sha256:28e01ab32c9dbcbaae96cf0d5b472f22e231d9e603811857b295e61197e40a9b size: 527

### Verify
[root@docker-node1 ~]# curl -k https://reg.timinglee.org/v2/_catalog
{"repositories":["busybox"]}

4.4.3 Adding login authentication to the registry

#### Install the package that provides the htpasswd tool
[root@docker-node1 ~]# yum install httpd-tools -y

### Create the credentials file
[root@docker-node1 ~]# mkdir auth

### -B forces bcrypt, the strongest scheme htpasswd offers; the default is MD5
[root@docker-node1 ~]# htpasswd -Bc auth/htpasswd timinglee
New password:               ###redhat
Re-type new password: 		###redhat
Adding password for user timinglee
### To add more users later, use -B without -c; -Bc would overwrite the existing file

# Recreate the registry container with authentication enabled
[root@docker-node1 ~]# docker run -d -p 443:443 --restart=always --name registry \
> -v /opt/registry:/var/lib/registry \
> -v /root/certs:/certs \
> -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
> -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/timinglee.org.crt \
> -e REGISTRY_HTTP_TLS_KEY=/certs/timinglee.org.key \
> -v /root/auth:/auth \
> -e "REGISTRY_AUTH=htpasswd" \
> -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
> -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
> registry



#### Login test
[root@docker-node1 ~]# docker login reg.timinglee.org
Username: timinglee            #### the user created above
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores

Login Succeeded


#### Push again after logging in

### Verify
[root@docker-node1 harbor]# curl -k https://reg.timinglee.org/v2/_catalog -u timinglee:redhat
{"repositories":["busybox"]}

#### Once the registry has authentication enabled, you must log in before pushing images

####### Without logging in you can neither push nor pull images
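As the login warning above says, `docker login` merely base64-encodes the credential into `/root/.docker/config.json`; this is encoding, not encryption. A quick sketch using this section's `timinglee:redhat` credential:

```shell
# What docker login stores in /root/.docker/config.json is just
# base64("user:password") -- anyone who can read the file can
# recover the password.
auth=$(printf '%s' 'timinglee:redhat' | base64)
echo "stored auth string: $auth"
printf '%s' "$auth" | base64 -d    # decodes straight back to user:password
echo
```

This is why the CLI suggests a credential helper for anything beyond a lab setup.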

4.5 Building an enterprise-grade private registry: Harbor

Harbor provides the following main features:

  1. Role-based access control (RBAC): different permissions can be assigned to users and groups, improving security and management flexibility.

  2. Image replication: images can be replicated between Harbor instances, making distribution across data centers or environments easy.

  3. Graphical user interface (UI): an intuitive web UI for managing repositories, projects, users, and more.

  4. Audit logs: all operations on the registry are recorded, helping with tracking and review.

  5. Garbage collection: images no longer in use can be cleaned up to reclaim storage.

4.5.1 Deploying Harbor

[root@harbotr ~]# yum install lrz* -y

#### Import the Docker images and packages
[root@harbotr ~]# ls
1panel-v1.10.13-lts-linux-amd64.tar.gz             docker.tar.gz
anaconda-ks.cfg                                    game2048.tar.gz
busybox-latest.tar.gz                              haproxy-2.3.tar.gz
centos-7.tar.gz                                    harbor
certs                                              harbor-offline-installer-v2.5.4.tgz
containerd.io-1.7.20-3.1.el9.x86_64.rpm            mario.tar.gz
debian11.tar.gz                                    mysql-5.7.tar.gz
docker-buildx-plugin-0.16.2-1.el9.x86_64.rpm       nginx-1.23.tar.gz
docker-ce-27.1.2-1.el9.x86_64.rpm                  nginx-latest.tar.gz
docker-ce-cli-27.1.2-1.el9.x86_64.rpm              phpmyadmin-latest.tar.gz
docker-ce-rootless-extras-27.1.2-1.el9.x86_64.rpm  registry.tag.gz
docker-compose-plugin-2.29.1-1.el9.x86_64.rpm      ubuntu-latest.tar.gz
docker-images.tar.gz


#### Deploy Docker
[root@docker-node1 ~]# tar zxf docker.tar.gz 
[root@docker-node1 ~]# ls
anaconda-ks.cfg
busybox-latest.tar.gz
containerd.io-1.7.20-3.1.el9.x86_64.rpm
docker
docker-buildx-plugin-0.16.2-1.el9.x86_64.rpm
docker-ce-27.1.2-1.el9.x86_64.rpm
docker-ce-cli-27.1.2-1.el9.x86_64.rpm
docker-ce-rootless-extras-27.1.2-1.el9.x86_64.rpm
docker-compose-plugin-2.29.1-1.el9.x86_64.rpm
docker.tar.gz
game2048.tar.gz
mario.tar.gz
nginx-latest.tar.gz

[root@docker-node1 ~]# dnf install *.rpm -y 

[root@docker-node1 ~]# systemctl restart docker

####### Inspect docker
[root@docker-node1 ~]#  docker info 

#### List docker images
[root@docker-node1 ~]# docker images 
REPOSITORY           TAG       IMAGE ID       CREATED         SIZE


##### Deploy harbor
[root@docker-node1 ~]# tar zxf harbor-offline-installer-v2.5.4.tgz 
[root@docker-node1 ~]# cd harbor/

[root@docker-node1 harbor]# ls
common.sh  harbor.v2.5.4.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@docker-node1 harbor]# cp harbor.yml.tmpl harbor.yml
[root@docker-node1 harbor]# vim harbor.yml
 
 hostname: reg.renhz.org
 certificate: /data/certs/renhz.org.crt
 private_key: /data/certs/renhz.org.key
 harbor_admin_password: ren



[root@docker-node1 harbor]# ./install.sh --help
Please set --with-notary        # certificate signing
Please set --with-trivy         # security scanning
Please set --with-chartmuseum if needs enable Chartmuseum in Harbor

[root@docker-node1 ~]# mkdir certs

#### Add a local hosts entry on the system
[root@harbotr ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.250.111  harbotr reg.renhz.org


#### Generate the key and certificate
[root@harbotr ~]# openssl req -newkey rsa:4096 \
> -nodes -sha256 -keyout certs/renhz.org.key \
> -addext "subjectAltName = DNS:reg.renhz.org" \
> -x509 -days 365 -out certs/renhz.org.crt
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:shanxi
Locality Name (eg, city) [Default City]:xian
Organization Name (eg, company) [Default Company Ltd]:renhz
Organizational Unit Name (eg, section) []:docker
Common Name (eg, your name or your server's hostname) []:reg.renhz.org
Email Address []:admin@renhz.org


[root@harbotr ~]# cd certs/
[root@harbotr certs]# ls
renhz.org.crt  renhz.org.key

[root@harbotr ~]# cp /root/certs/ /data -r
[root@harbotr ~]# ls /data/
certs  secret
[root@harbotr ~]# cd harbor/
[root@harbotr harbor]# ls common
config
[root@harbotr harbor]# ls /data/certs/
renhz.org.crt  renhz.org.key


[root@harbotr harbor]# ./install.sh --with-chartmuseum
[root@harbotr harbor]# docker compose up -d


[root@harbotr ~]# for f in *.tar.gz; do docker load -i $f; done
[root@harbotr ~]# docker images 

[root@harbotr harbor]# vim /etc/docker/daemon.json
{
     "insecure-registries": ["reg.renhz.org"]
}

[root@harbotr harbor]# systemctl restart docker

###### Add the hosts entry on Windows as well
172.25.250.111 reg.renhz.org




### Then browse to 172.25.250.111 or reg.renhz.org and log in to the Harbor UI

### Now images can be uploaded
### Log in first; if you are still logged in to another registry, run docker logout
[root@harbotr harbor]# docker login reg.renhz.org
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores

Login Succeeded


### Tag every image in one go
[root@harbotr harbor]# docker images | awk 'NR>1{system("docker tag "$1":"$2" reg.renhz.org/renhz/"$1":"$2)}'

### Push every tagged image in one go
[root@harbotr harbor]# docker images | awk '/reg.renhz.org/{system("docker push "$1":"$2)}'
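The two awk one-liners above can be dry-run first: generate the commands from the `docker images` table and review them before executing anything. A sketch using a hard-coded sample table (swap in real `docker images` output; the registry path matches this section's reg.renhz.org/renhz project):

```shell
# Sample of `docker images` output; replace with the real command's output.
sample='REPOSITORY   TAG      IMAGE ID   CREATED   SIZE
nginx        1.23     aaa        now       1MB
busybox      latest   bbb        now       1MB'
# Print (do not run) the tag+push commands so they can be reviewed first.
cmds=$(echo "$sample" | awk -v reg=reg.renhz.org/renhz 'NR>1 {
    printf "docker tag %s:%s %s/%s:%s\n", $1, $2, reg, $1, $2
    printf "docker push %s/%s:%s\n", reg, $1, $2
}')
echo "$cmds"
```

Once the output looks right, pipe it to `sh` (or fall back to the original one-liners).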

[root@harbotr harbor]# docker compose down
[root@harbotr harbor]# docker compose up -d

 

5. Docker networking

Installing Docker automatically creates three networks: bridge, host, and none.

##### List the networks; the ones from earlier experiments are still there
[root@docker-node1 ~]# docker network ls
NETWORK ID     NAME                        DRIVER    SCOPE
7caa58f74693   bridge                      bridge    local
fd818e2f3512   harbor_harbor               bridge    local
8c39912f8483   harbor_harbor-chartmuseum   bridge    local
0cebe6483454   host                        host      local
9928aa9119dd   none                        null      local

### Take harbor down
[root@docker-node1 ~]# cd harbor/
[root@docker-node1 harbor]# docker compose down

#### List again
[root@docker-node1 harbor]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
7caa58f74693   bridge    bridge    local
0cebe6483454   host      host      local
9928aa9119dd   none      null      local

5.1 Docker's native bridge network

When Docker is installed it creates a Linux bridge named docker0, and new containers are bridged onto it automatically.

In bridge mode a container has no public IP; only the host can reach it directly, and it is invisible to external hosts.

Containers reach the outside world through the host's NAT rules.

All containers get out through docker0.

For example, start an nginx container and an Ubuntu container: their IPs are 172.17.0.2 and 172.17.0.3, and both attach to 172.17.0.1 (docker0).

Docker then sends all their outbound traffic to eth0; docker0 and eth0 are linked by kernel forwarding (net.ipv4.ip_forward = 1).
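The forwarding switch mentioned above can be checked directly. Docker turns it on at startup; without it, containers behind docker0 cannot reach the outside world (a read-only sketch, safe to run anywhere):

```shell
# 1 means the kernel forwards packets between interfaces (docker0 <-> eth0);
# 0 means container traffic stops at the host.
cat /proc/sys/net/ipv4/ip_forward
# the sysctl view of the same switch (guarded in case sysctl is absent)
sysctl net.ipv4.ip_forward 2>/dev/null || true
```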

5.2 Docker's native host network

Host mode is selected at container creation with --network=host

Host mode lets the container share the host's network stack. The benefit is that external hosts can talk to the container directly; the drawback is that the container's network loses its isolation.

[root@docker-node1 ~]# docker run -it --name test --network host busybox
/ # ifconfig 

docker0   Link encap:Ethernet  HWaddr 02:42:18:22:5C:5B  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:3725 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3968 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:273938 (267.5 KiB)  TX bytes:86944139 (82.9 MiB)

eth0      Link encap:Ethernet  HWaddr 00:0C:29:76:B2:5D  
          inet addr:172.25.250.100  Bcast:172.25.250.255  Mask:255.255.255.0
          inet6 addr: fe80::89cc:8717:7d0a:579e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3728 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2772 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:693585 (677.3 KiB)  TX bytes:610378 (596.0 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:12453 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12453 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1085542 (1.0 MiB)  TX bytes:1085542 (1.0 MiB)

/ # 

When the host network is shared, all network resources are shared too: a running nginx container occupies the host's port 80, so starting a second nginx container will fail.

5.3 Docker's native none network

None mode disables networking entirely; the container has only the lo interface. It is selected at container creation with --network=none.

[root@docker-node1 ~]# docker run -it --name test --network none busybox
/ # ifconfig 
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 

5.4 Docker custom networks

Docker provides three drivers for custom networks:

bridge

overlay

macvlan

The bridge driver behaves like the default bridge network but adds new features; overlay and macvlan are for networks that span multiple hosts.

Custom networks are recommended: they let you control which containers can talk to each other, and they automatically resolve container names to IP addresses.

5.4.1 Custom bridge networks

The native bridge network has no DNS resolution.

A custom network uses the bridge driver by default.

[root@docker-node1 ~]# docker network create my_net1
0b1da84f7f2520232c527c123d0b1a2ea37c279f0248cdcd60a77f98bb1bd99c
[root@docker-node1 ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
7caa58f74693   bridge    bridge    local
0cebe6483454   host      host      local
0b1da84f7f25   my_net1   bridge    local
9928aa9119dd   none      null      local
[root@docker-node1 ~]# ifconfig 
br-0b1da84f7f25: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        ether 02:42:80:a5:23:1c  txqueuelen 0  (Ethernet)
        RX packets 716  bytes 111932 (109.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 940  bytes 4538991 (4.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:18:22:5c:5b  txqueuelen 0  (Ethernet)
        RX packets 716  bytes 111932 (109.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 940  bytes 4538991 (4.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@docker-node1 ~]# docker network create my_net2 --subnet 172.25.0.0/24 --gateway 172.25.0.100
feb3e36d07476cbc24c8d838c4eb33a7c90446453ba452be33e1de84e11987a0
[root@docker-node1 ~]# docker network inspect my_net2
[
    {
        "Name": "my_net2",
        "Id": "feb3e36d07476cbc24c8d838c4eb33a7c90446453ba452be33e1de84e11987a0",
        "Created": "2024-08-28T16:09:48.179462788+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.25.0.0/24",
                    "Gateway": "172.25.0.100"
                }
            ]
        },

#### Create a custom network
[root@docker-node1 ~]# docker network create mynet1
253ee3939eac793828a806575740e3e262f6a72915349f58a57ef92b8664c731
[root@docker-node1 ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
e1e8f05bb85b   bridge    bridge    local
0cebe6483454   host      host      local
253ee3939eac   mynet1    bridge    local
9928aa9119dd   none      null      local

####test
[root@docker-node1 ~]# docker run -it --name test --network mynet1 busybox
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02  
          inet addr:172.18.0.2  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:38 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:5524 (5.3 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

####test1
[root@docker-node1 ~]# docker run -it --name test1 --network mynet1 busybox
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:AC:12:00:03  
          inet addr:172.18.0.3  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2058 (2.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 

#### ping test1 from test
/ # ping test1
PING test1 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.312 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.145 ms
64 bytes from 172.18.0.3: seq=2 ttl=64 time=0.167 ms
64 bytes from 172.18.0.3: seq=3 ttl=64 time=0.100 ms


#### With a custom bridge (unlike the native bridge, which has no DNS) you no longer care what a container's IP is; knowing the container's name is enough to reach it

5.4.2 Why use a custom bridge

[root@docker-node1 ~]# docker run -d --name web1 nginx

[root@docker-node1 ~]# docker inspect web1

[root@docker-node1 ~]# docker run -d --name web2 nginx

[root@docker-node1 ~]# docker inspect web2

Stop the containers, then restart them with the start order swapped
[root@docker-node1 ~]# docker stop web1 web2
web1
web2
[root@docker-node1 ~]# docker start web2
web2
[root@docker-node1 ~]# docker start web1
web1

#### The IPs are now swapped: web1 goes from 172.17.0.2 to 172.17.0.3,
#### and web2 goes from 172.17.0.3 to 172.17.0.2
### This happens because the start order changed: whoever starts first gets the lower IP

The Docker engine hands out IPs in container start order, first come first served, so they change dynamically.

Accessing containers by IP is therefore unreliable; accessing them by container name is far more stable.

Docker's native networks do not support DNS resolution; custom networks have DNS built in.

[root@docker-node1 ~]# docker run -d --network my_net1 --name web nginx
3c98dda5baf0b41798414ed7b749f404f0446d17713ae5c481c36987da29580e

#### ping web from the test container
[root@docker-node1 ~]# docker run -it --network my_net1 --name test busybox
/ # ping web
PING web (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.229 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.095 ms
64 bytes from 172.18.0.2: seq=2 ttl=64 time=0.156 ms
64 bytes from 172.18.0.2: seq=3 ttl=64 time=0.104 ms

Note: different custom networks cannot communicate with each other.

# rhel7 isolates networks with iptables; rhel9 uses nftables
[root@docker ~]# nft list ruleset         # shows the isolation rules

5.4.3 Making different custom networks communicate

[root@docker-node1 ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
e1e8f05bb85b   bridge    bridge    local
0cebe6483454   host      host      local
253ee3939eac   mynet1    bridge    local
9928aa9119dd   none      null      local
[root@docker-node1 ~]# docker network  rm mynet1 
mynet1
[root@docker-node1 ~]# docker network create mynet1
580ec60f63fea59bcfdad0e2b728d4a98cc194b1da07a704ffc28692e1e6e992
[root@docker-node1 ~]# docker network create mynet2
2fff115e89824cc69d3c88d35b73d3542e74b0c60e0960f88cfb4eda7e6a42dc
[root@docker-node1 ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
e1e8f05bb85b   bridge    bridge    local
0cebe6483454   host      host      local
580ec60f63fe   mynet1    bridge    local
2fff115e8982   mynet2    bridge    local
9928aa9119dd   none      null      local

####test   mynet1
[root@docker-node1 ~]# docker run -it --name test --network mynet1 busybox
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02  
          inet addr:172.18.0.2  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:30 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:4324 (4.2 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 


### test1 on mynet2; note it currently has only one interface
[root@docker-node1 ~]# docker run -it --name test1 --network mynet2 busybox
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:AC:13:00:02  
          inet addr:172.19.0.2  Bcast:172.19.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:19 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2488 (2.4 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping test
ping: bad address 'test'               ##### cannot ping
/ # 


### Now attach test1 to mynet1 as well
[root@docker-node1 ~]# docker network connect mynet1 test1

#### test1 on mynet2 now shows two interfaces
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:AC:13:00:02  
          inet addr:172.19.0.2  Bcast:172.19.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:44 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:6026 (5.8 KiB)  TX bytes:212 (212.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:03  
          inet addr:172.18.0.3  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2058 (2.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:350 (350.0 B)  TX bytes:350 (350.0 B)

/ # ping test                      #### now the ping succeeds
PING test (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.303 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.083 ms

5.4.4 Joined container network

Joined is a special network mode, selected at container creation with --network=container:vm1 (vm1 is the name of an already-running container).

Containers in this mode share a single network stack, so the two containers can communicate quickly and efficiently over localhost.

### test   mynet1    172.18.0.2
[root@docker-node1 ~]# docker run -it --name test --network mynet1 busybox
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02  
          inet addr:172.18.0.2  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2042 (1.9 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 

#### test1 shares test's network (container:test), so it is also 172.18.0.2
[root@docker-node1 ~]# docker run -it --name test1 --network container:test busybox
/ # ifconfig 
eth0      Link encap:Ethernet  HWaddr 02:42:AC:12:00:02  
          inet addr:172.18.0.2  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:20 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2528 (2.4 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # 
/ # exit
[root@docker-node1 ~]# docker rm -f test
test
[root@docker-node1 ~]# docker rm -f test1 
test1


[root@docker-node1 ~]# docker run -d --name test --network mynet1 nginx 
f6ae93f32af31d2c4cd0fd95204a48d2345ee55f24e0929614e44325d97c0ec9

##### Access nginx over lo
[root@docker-node1 ~]# docker run -it --name test1 --network container:test centos:7
[root@f6ae93f32af3 /]# curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

5.4.5 Joined network demo

Deploy phpMyAdmin in a container to manage MySQL

[root@docker-node1 ~]# docker load -i mysql-5.7.tar.gz 
cff044e18624: Loading layer    145MB/145MB
7ff7abf4911b: Loading layer  11.26kB/11.26kB
8b2952eb02aa: Loading layer  2.383MB/2.383MB
d76a5f910f6b: Loading layer  13.91MB/13.91MB
8527ccd6bd85: Loading layer  7.168kB/7.168kB
4555572a6bb2: Loading layer  3.072kB/3.072kB
0d9e9a9ce9e4: Loading layer  79.38MB/79.38MB
532b66f4569d: Loading layer  3.072kB/3.072kB
337ec6bae222: Loading layer  278.8MB/278.8MB
73cb62467b8f: Loading layer  17.41kB/17.41kB
441e16cac4fe: Loading layer  1.536kB/1.536kB
Loaded image: mysql:5.7


[root@docker-node1 ~]# docker load -i phpmyadmin-latest.tar.gz 
4cae4ea97049: Loading layer  3.584kB/3.584kB
7f0d23b78477: Loading layer  320.2MB/320.2MB
8f42af1dd50e: Loading layer   5.12kB/5.12kB
7285b46fc0b1: Loading layer  51.28MB/51.28MB
886076bbd0e5: Loading layer  9.728kB/9.728kB
fe49c1c8ccdc: Loading layer   7.68kB/7.68kB
c98461c57e2d: Loading layer  13.41MB/13.41MB
4646cbc7a84d: Loading layer  4.096kB/4.096kB
7183cf0cacbe: Loading layer  49.48MB/49.48MB
923288b71444: Loading layer   12.8kB/12.8kB
eb4f3a0b1a71: Loading layer  4.608kB/4.608kB
43cd9aa62af4: Loading layer  4.608kB/4.608kB
9f9985f7ecbd: Loading layer  9.134MB/9.134MB
25d63a36933d: Loading layer  6.656kB/6.656kB
13ccf69b5807: Loading layer  53.35MB/53.35MB
a65e8a0ad246: Loading layer  8.192kB/8.192kB
26f3cdf867bf: Loading layer  3.584kB/3.584kB
Loaded image: phpmyadmin:latest

##### Run phpmyadmin
##### PMA_ARBITRARY=1 lets you enter the database address and port on the web page
[root@docker-node1 ~]# docker run -d --name mysqladmin --network mynet1 \
> -e PMA_ARBITRARY=1 \
> -p 80:80 phpmyadmin:latest
0269bd477ba7609fcb59c6f9f0a1485b90978e8750d35b54b53ebf5ba64890ce


[root@docker-node1 ~]# docker ps
CONTAINER ID   IMAGE               COMMAND                  CREATED         STATUS         PORTS                               NAMES
0269bd477ba7   phpmyadmin:latest   "/docker-entrypoint.…"   7 seconds ago   Up 7 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp   mysqladmin

##### Run the database, joining it to the phpmyadmin container's network
##### MYSQL_ROOT_PASSWORD sets the database root password
[root@docker-node1 ~]# docker run -d --name mysql --network container:mysqladmin \
> -e MYSQL_ROOT_PASSWORD='redhat' \
> mysql:5.7
7b2a0545e8bc47b83add0f148b005bd1f361d276b6a4b4e21d7d650f4c6108de


[root@docker-node1 ~]# docker ps
CONTAINER ID   IMAGE               COMMAND                  CREATED         STATUS         PORTS                               NAMES
7b2a0545e8bc   mysql:5.7           "docker-entrypoint.s…"   4 seconds ago   Up 3 seconds                                       mysql
0269bd477ba7   phpmyadmin:latest   "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp   mysqladmin


[root@docker-node1 ~]# docker exec -it mysql bash          ######### enter the database container

bash-4.2# mysql -uroot -predhat                ###### log in to the database

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

mysql> 

Enter 172.25.250.100:80 in a browser.

The freshly started phpmyadmin container contains no databases of its own.

Enter localhost:3306 on the page: it works because the mysql container and the phpmyadmin container share one network stack.

5.5 Container access to and from the outside

5.5.1 Containers accessing the outside world

  • On rhel7, Docker containers reach the outside world through iptables masquerade rules

  • On releases after rhel7, nftables masquerade rules are used instead

#### On rhel9
[root@docker-node1 ~]# nft list ruleset

## On rhel7
[root@docker ~]# iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  6    --  172.17.0.2           172.17.0.2           tcp dpt:80 			# NAT rule for container traffic going out
Chain DOCKER (0 references)
target     prot opt source               destination
DNAT       6    --  0.0.0.0/0            0.0.0.0/0           tcp dpt:80 to:172.17.0.2:80

5.5.2 Outside access to Docker containers

Port mapping: -p hostport:containerport exposes a container port on the host so it can be reached from outside.

[root@docker-node1 ~]# grubby --update-kernel ALL --args iptables=true
[root@docker-node1 ~]# docker run -d --name test --rm -p 80:80 nginx
dcbd5cc45de246bf17b14176936fba52511e316432d4d2c62563bc866f99024c
[root@docker-node1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS                               NAMES
dcbd5cc45de2   nginx     "/docker-entrypoint.…"   7 seconds ago   Up 7 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp   test

[root@docker-node1 ~]# curl 172.25.250.100
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
.
.
.
[root@docker-node1 ~]# iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     0    --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     0    --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  0    --  172.17.0.0/16        0.0.0.0/0           
MASQUERADE  6    --  172.17.0.2           172.17.0.2           tcp dpt:80

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     0    --  0.0.0.0/0            0.0.0.0/0           
DNAT       6    --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80 ########## note: all inbound traffic on port 80 is forwarded to 172.17.0.2

####### Inspecting the container confirms its IP is 172.17.0.2
[root@docker-node1 ~]# docker inspect test 
...
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "DNSNames": null
...


####我们把策略删掉
[root@docker-node1 ~]# iptables -t nat -D DOCKER 2
[root@docker-node1 ~]# iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     0    --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     0    --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  0    --  172.17.0.0/16        0.0.0.0/0           
MASQUERADE  6    --  172.17.0.2           172.17.0.2           tcp dpt:80

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     0    --  0.0.0.0/0            0.0.0.0/0 

####还是可以访问
[root@docker-node1 ~]# curl 172.25.250.100
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;

####因为设了双保险:docker-proxy 和 iptables 都为端口映射提供了各自的转发方案,删掉防火墙策略后流量仍可由 docker-proxy 转发,所以外网访问容器不只依赖防火墙策略,还依赖 proxy

5.6 docker跨主机网络

在生产环境中,我们的容器不可能都在同一个系统中,所以需要容器具备跨主机通信的能力

  • 跨主机网络解决方案

    • docker原生overlay和macvlan

    • 第三方flannel,weave,calico

  • macvlan网络方式实现跨主机通信

    • libnetwork docker容器网络库

    • CNM(Container Network Model)这个模型对容器网络进行了抽象

5.6.1 CNM (Container Network Model)

CNM分三类组件

Sandbox:容器网络栈,包含容器接口、dns、路由表。(namespace)

Endpoint:作用是将sandbox接入network (veth pair)

Network:包含一组endpoint,同一network的endpoint可以通信

5.6.2 macvlan网络方式实现跨主机通信

macvlan网络方式

Linux kernel(内核)提供的一种网卡虚拟化技术。

无需Linux bridge,直接使用物理接口,性能极好

容器的接口直接与主机网卡连接,无需NAT或端口映射。

macvlan会独占主机网卡,但可以使用vlan子接口实现多macvlan网络

vlan可以将物理二层网络划分为4094个逻辑网络,彼此隔离,vlan id取值为1~4094
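
上面提到 vlan id 的取值范围是 1~4094,可以用一个简单的 shell 校验函数(仅作示意)表达这一约束:

```shell
# vlan id 的合法范围是 1~4094(0 和 4095 为保留值)
valid_vlan() {
  [ "$1" -ge 1 ] && [ "$1" -le 4094 ]
}
valid_vlan 10   && echo "vlan 10: ok"          # 输出: vlan 10: ok
valid_vlan 4095 || echo "vlan 4095: 超出范围"   # 输出: vlan 4095: 超出范围
```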

macvlan网络间的隔离和连通

macvlan网络在二层上是隔离的,所以不同macvlan网络的容器是不能通信的

可以在三层上通过网关将macvlan网络连通起来

docker本身不做任何限制,像传统vlan网络那样管理即可

实现方法如下:

1.在两台docker主机上各添加一块网卡(仅主机模式),打开网卡混杂模式:

[root@docker-node1 ~]# ip link set eth1 promisc on
[root@docker-node1 ~]# ip link set eth1 up
[root@docker-node1 ~]# ifconfig eth1
eth1: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        ether 00:0c:29:76:b2:67  txqueuelen 1000  (Ethernet)
        RX packets 39  bytes 6323 (6.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 2.添加macvlan网络(两台主机都添加)

[root@docker-node1 ~]# docker network create \
-d macvlan \
--subnet 1.1.1.0/24 \
--gateway 1.1.1.1 \
-o parent=eth1 mynet1

3.测试(跨主机通信)

####docker-node1中
[root@docker-node1 ~]# docker run -it --name busybox --network mynet1 --ip 1.1.1.100 --rm busybox
/ # ping 1.1.1.200
PING 1.1.1.200 (1.1.1.200): 56 data bytes
64 bytes from 1.1.1.200: seq=0 ttl=64 time=0.415 ms
64 bytes from 1.1.1.200: seq=1 ttl=64 time=0.749 ms

###docker-node2中
[root@docker-node2 ~]# docker run -it --name busybox --network mynet1 --ip 1.1.1.200 --rm busybox            
/ # ping 1.1.1.100
PING 1.1.1.100 (1.1.1.100): 56 data bytes
64 bytes from 1.1.1.100: seq=0 ttl=64 time=0.748 ms
64 bytes from 1.1.1.100: seq=1 ttl=64 time=0.426 ms
64 bytes from 1.1.1.100: seq=2 ttl=64 time=0.408 ms

6、docker的数据卷的管理和优化

如果不设置数据卷,容器内的数据不会与宿主机交互,容器一旦被删除,容器内产生的数据也随之丢失

Docker 数据卷是一个可供容器使用的特殊目录,它绕过了容器的文件系统,直接将数据存储在宿主机上。

这样可以实现以下几个重要的目的:

  • 数据持久化:即使容器被删除或重新创建,数据卷中的数据仍然存在,不会丢失。

  • 数据共享:多个容器可以同时挂载同一个数据卷,实现数据的共享和交互。

  • 独立于容器生命周期:数据卷的生命周期独立于容器,不受容器的启动、停止和删除的影响。

6.1 为什么要用数据卷

docker分层文件系统

性能差

生命周期与容器相同

docker数据卷 mount到主机中,绕开分层文件系统

和主机磁盘性能相同,容器删除后依然保留

仅限本地磁盘,不能随容器迁移

docker提供了两种卷:

bind mount //把宿主机本地目录直接挂载进容器

docker managed volume //由 docker 自行创建和管理的卷,默认存放在 /var/lib/docker/volumes

6.2 bind mount 数据卷

是将主机上的目录或文件mount到容器里。

使用直观高效,易于理解。

使用 -v 选项指定挂载,格式为 -v 宿主机路径:容器路径[:选项];-v 指定的宿主机路径如果不存在,挂载时会自动创建
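
-v 的参数本质上是 宿主机路径:容器路径[:模式] 的三段式字符串,下面用纯 shell 的参数展开(与 docker 本身无关,仅作示意)拆解这一格式:

```shell
# 拆解 -v 参数的 "宿主机路径:容器路径:模式" 三段格式(纯 shell 示意)
spec="/ren:/data1:rw"
host=${spec%%:*}        # 第一段:宿主机路径 /ren
rest=${spec#*:}
ctr=${rest%%:*}         # 第二段:容器内挂载点 /data1
mode=${rest#*:}         # 第三段:rw/ro,省略时 docker 默认按 rw 处理
echo "host=$host ctr=$ctr mode=$mode"
# 输出: host=/ren ctr=/data1 mode=rw
```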

[root@docker-node1 ~]# mkdir /ren
[root@docker-node1 ~]# touch /ren/renfile
[root@docker-node1 ~]# touch /ren/renfile{1..5}
[root@docker-node1 ~]# ls /ren/
renfile  renfile1  renfile2  renfile3  renfile4  renfile5

###  -v /ren:/data1:rw  把宿主机的 /ren 挂载到容器的 /data1,rw 表示读写挂载
[root@docker-node1 ~]# docker run -it --rm --name test -v /ren:/data1:rw -v /etc/passwd:/data2/passwd busybox
/ # ls   ####可以看到挂载进来的 data1 和 data2
bin    data2  etc    lib    proc   sys    usr
data1  dev    home   lib64  root   tmp    var
/ # touch data1/renfile6         ####在data1下可以建立文件,因为挂载时指定了它为rw  读写状态
/ # ls
bin    data2  etc    lib    proc   sys    usr
data1  dev    home   lib64  root   tmp    var
/ # ls /data1
renfile   renfile1  renfile2  renfile3  renfile4  renfile5  renfile6
 
###data2也是默认为读写状态
###如果要让data2为只读状态
[root@docker-node1 ~]# docker run -it --rm --name test -v /ren:/data1:rw -v /etc/passwd:/data2/passwd:ro busybox


####这是bind mount挂载

6.3 docker managed 数据卷

//由 docker 自动建立和管理

  • bind mount必须指定host文件系统路径,限制了移植性

  • docker managed volume 不需要指定mount源,docker自动为容器创建数据卷目录

  • 默认创建的数据卷目录都在 /var/lib/docker/volumes 中

  • 如果挂载时指向容器内已有的目录,原有数据会被复制到volume中

[root@docker-node1 ~]# docker run -d --name mysql -e MYSQL_ROOT_PASSWORD='ren' mysql:5.7
198c259ca5b2c5bb5417c792a428bcd59376e58f21e09c07b514a853efde3846
[root@docker-node1 ~]# docker inspect mysql
...
 "Mounts": [
            {
                "Type": "volume",
                "Name": "e3a2785e369e06db7a286d90b3e58f7c6f6caec0076ce221aa990fc7cd771ae8",      ###名字
                "Source": "/var/lib/docker/volumes/e3a2785e369e06db7a286d90b3e58f7c6f6caec0076ce221aa990fc7cd771ae8/_data",   ###宿主机的目录
                "Destination": "/var/lib/mysql",   ##容器内的挂载点
...

####如果新建一个库就会存在这里
[root@docker-node1 ~]# ll "/var/lib/docker/volumes/e3a2785e369e06db7a286d90b3e58f7c6f6caec0076ce221aa990fc7cd771ae8/_data"
total 188484
-rw-r----- 1 systemd-coredump input       56 Aug 30 21:48 auto.cnf
-rw------- 1 systemd-coredump input     1680 Aug 30 21:48 ca-key.pem
-rw-r--r-- 1 systemd-coredump input     1112 Aug 30 21:48 ca.pem
-rw-r--r-- 1 systemd-coredump input     1112 Aug 30 21:48 client-cert.pem
-rw------- 1 systemd-coredump input     1680 Aug 30 21:48 client-key.pem
-rw-r----- 1 systemd-coredump input     1318 Aug 30 21:48 ib_buffer_pool
-rw-r----- 1 systemd-coredump input 79691776 Aug 30 21:48 ibdata1
-rw-r----- 1 systemd-coredump input 50331648 Aug 30 21:48 ib_logfile0
-rw-r----- 1 systemd-coredump input 50331648 Aug 30 21:48 ib_logfile1
-rw-r----- 1 systemd-coredump input 12582912 Aug 30 21:48 ibtmp1
drwxr-x--- 2 systemd-coredump input     4096 Aug 30 21:48 mysql
lrwxrwxrwx 1 systemd-coredump input       27 Aug 30 21:48 mysql.sock -> /var/run/mysqld/mysqld.sock
drwxr-x--- 2 systemd-coredump input     8192 Aug 30 21:48 performance_schema
-rw------- 1 systemd-coredump input     1680 Aug 30 21:48 private_key.pem
-rw-r--r-- 1 systemd-coredump input      452 Aug 30 21:48 public_key.pem
-rw-r--r-- 1 systemd-coredump input     1112 Aug 30 21:48 server-cert.pem
-rw------- 1 systemd-coredump input     1676 Aug 30 21:48 server-key.pem
drwxr-x--- 2 systemd-coredump input     8192 Aug 30 21:48 sys

####这是一个匿名卷:如果删除容器时带上 -v(或容器以 --rm 运行),这个卷会被一并清理,里面的数据随之消失
#####建立数据卷
[root@docker-node1 volumes]# docker volume create mysqldata
mysqldata
[root@docker-node1 volumes]# ll /var/lib/docker/volumes/
total 52
brw------- 1 root root 253, 0 Aug 30 20:10 backingFsBlockDev
drwx-----x 3 root root     19 Aug 30 21:48 e3a2785e369e06db7a286d90b3e58f7c6f6caec0076ce221aa990fc7cd771ae8
drwx-----x 3 root root     19 Aug 30 14:08 leevol1
-rw------- 1 root root  65536 Aug 30 22:01 metadata.db
drwx-----x 3 root root     19 Aug 30 22:01 mysqldata   ###已建立
drwx-----x 3 root root     19 Aug 30 13:53 renvol1

###查看卷
[root@docker ~]# docker volume ls

###使用建立的数据卷
[root@docker-node1 volumes]# docker run -d --name mysql -e MYSQL_ROOT_PASSWORD='ren' -v mysqldata:/var/lib/mysql mysql:5.7
21f116f89ed894a8d4b75123f69f8538c4827d107ebf8082515ea591192f63a0

[root@docker-node1 volumes]# docker inspect  mysql
...

"Mounts": [
            {
                "Type": "volume",
                "Name": "mysqldata",
                "Source": "/var/lib/docker/volumes/mysqldata/_data",
                "Destination": "/var/lib/mysql",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
            }
...

[root@docker-node1 volumes]# cd mysqldata/
[root@docker-node1 mysqldata]# ls
_data
[root@docker-node1 mysqldata]# cd _data/

####看到mysql.sock
[root@docker-node1 _data]# ls
auto.cnf    client-cert.pem  ibdata1      ibtmp1      performance_schema  server-cert.pem
ca-key.pem  client-key.pem   ib_logfile0  mysql       private_key.pem     server-key.pem
ca.pem      ib_buffer_pool   ib_logfile1  mysql.sock  public_key.pem      sys

####删掉容器数据依旧存在
[root@docker-node1 _data]# docker rm mysql 
mysql
[root@docker-node1 _data]# cd /var/lib/docker/volumes/
[root@docker-node1 volumes]# ls
backingFsBlockDev                                                 leevol1      mysqldata
e3a2785e369e06db7a286d90b3e58f7c6f6caec0076ce221aa990fc7cd771ae8  metadata.db  renvol1
[root@docker-node1 volumes]# cd mysqldata/
[root@docker-node1 mysqldata]# ls
_data
[root@docker-node1 mysqldata]# cd _data/

###可以看到mysql.sock
[root@docker-node1 _data]# ls
auto.cnf         client-key.pem  ib_logfile1         private_key.pem  sys
ca-key.pem       ib_buffer_pool  mysql               public_key.pem
ca.pem           ibdata1         mysql.sock          server-cert.pem
client-cert.pem  ib_logfile0     performance_schema  server-key.pem

6.4 数据卷容器(Data Volume Container)

数据卷容器(Data Volume Container)是 Docker 中一种特殊的容器,主要用于方便地在多个容器之间 共享数据卷。

1.建立数据卷容器

[root@docker ~]# docker run -d --name datavol \
-v /tmp/data1:/data1:rw \
-v /tmp/data2:/data2:ro \
-v /etc/resolv.conf:/etc/hosts busybox

2.使用数据卷容器

[root@docker ~]# docker run -it --name test --rm --volumes-from datavol busybox
/ # ls
bin    data1  data2  dev    etc    home   lib    lib64  proc   root   sys    tmp    usr    var
/ # cat /etc/resolv.conf
# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.
nameserver 114.114.114.114
search timinglee.org
# Based on host file: '/etc/resolv.conf' (legacy)
# Overrides: []
/ # touch data1/leefile1
/ # touch /data2/leefile1
touch: /data2/leefile1: Read-only file system
/ #

6.5 bind mount 数据卷和docker managed 数据卷的对比

相同点:

  • 两者都是 host 文件系统中的某个路径

不同点:

  • 创建方式:bind mount 需手动指定宿主机路径;managed volume 由 docker 自动创建,无需指定源

  • 存放位置:bind mount 可以是宿主机任意路径;managed volume 固定在 /var/lib/docker/volumes 下

  • 对挂载点已有数据:bind mount 会遮盖容器内挂载点的原有内容;managed volume 会把原有数据复制到卷中

  • 可移植性:bind mount 依赖宿主机目录结构,移植性差;managed volume 移植性更好

6.6 备份与迁移数据卷

备份数据卷

#建立容器并挂载要备份的数据卷;把当前目录挂载到容器的 /backup,用于保存备份文件
[root@docker ~]# docker run --volumes-from datavol \
-v `pwd`:/backup busybox \
tar zcf /backup/data1.tar.gz /data1    #把 /data1 打包备份到宿主机当前目录

数据恢复

[root@docker ~]# docker run -it --name test -v leevol1:/data1 -v `pwd`:/backup busybox \
/bin/sh -c "tar zxf /backup/data1.tar.gz;/bin/sh"
/ # ls
backup  bin    data1  dev    etc    home   lib    lib64  proc   root   sys    tmp    usr    var
/ # cd data1/ #查看数据迁移情况
/data1 # ls
index.html  leefile1
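
备份与恢复的核心只是 tar 的打包与解包。下面用纯本地目录(不依赖 docker,目录名均为临时生成,仅作示意)把同样的流程走一遍:

```shell
# 模拟 "数据卷 -> 备份包 -> 新卷" 的流程(纯本地示意)
src=$(mktemp -d)         # 相当于要备份的数据卷目录
backup=$(mktemp -d)      # 相当于挂载进容器的 /backup
restore=$(mktemp -d)     # 相当于恢复目标卷

touch "$src/index.html" "$src/leefile1"          # 卷里原有的数据
tar zcf "$backup/data1.tar.gz" -C "$src" .       # 备份:打包
tar zxf "$backup/data1.tar.gz" -C "$restore"     # 恢复:解包
ls "$restore"                                    # 可以看到 index.html 与 leefile1
```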

7.docker的安全优化

Docker容器的安全性,很大程度上依赖于Linux系统自身

评估Docker的安全性时,主要考虑以下几个方面:

  • Linux内核的命名空间机制提供的容器隔离安全

  • Linux控制组机制对容器资源的控制能力安全。

  • Linux内核的能力机制所带来的操作权限安全

  • Docker程序(特别是服务端)本身的抗攻击性。

  • 其他安全增强机制对容器安全性的影响

#在rhel9中默认使用cgroup-v2,但cgroup-v2不利于观察docker的资源限制情况,所以推荐使用cgroup-v1
[root@docker ~]# grubby --update-kernel=/boot/vmlinuz-$(uname -r) \
--args="systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller"

7.1 命名空间隔离的安全

当docker run启动一个容器时,Docker将在后台为容器创建一个独立的命名空间。命名空间提供了最基础也最直接的隔离。

与虚拟机方式相比,通过Linux namespace来实现的隔离不是那么彻底。

容器只是运行在宿主机上的一种特殊的进程,那么多个容器之间使用的就还是同一个宿主机的操作系统内核。

在 Linux 内核中,有很多资源和对象是不能被 Namespace 化的,比如:磁盘等等

####cgroup隔离机制
[root@docker-node1 _data]# cd /sys/fs/cgroup/
[root@docker-node1 cgroup]# ls
blkio  cpuacct      cpuset   freezer  memory  net_cls           net_prio    pids  systemd
cpu    cpu,cpuacct  devices  hugetlb  misc    net_cls,net_prio  perf_event  rdma
[root@docker-node1 cgroup]# 

###cpu信息
[root@docker-node1 cgroup]# cd cpu
[root@docker-node1 cpu]# ls
cgroup.clone_children      cpuacct.usage_user   notify_on_release
cgroup.procs               cpu.cfs_burst_us     release_agent
cgroup.sane_behavior       cpu.cfs_period_us    sys-fs-fuse-connections.mount
cpuacct.stat               cpu.cfs_quota_us     sys-kernel-config.mount
cpuacct.usage              cpu.idle             sys-kernel-debug.mount
cpuacct.usage_all          cpu.shares           sys-kernel-tracing.mount
cpuacct.usage_percpu       cpu.stat             system.slice
cpuacct.usage_percpu_sys   dev-hugepages.mount  tasks
cpuacct.usage_percpu_user  dev-mqueue.mount     user.slice
cpuacct.usage_sys          docker

####我们运行一个容器
[root@docker-node1 cpu]# docker run -d --name test --rm nginx
6dd1e34070464817a90b64fd3019da867bb2c433e694492fe6be33188136cf8f
[root@docker-node1 cpu]# ls
cgroup.clone_children      cpuacct.usage_user   notify_on_release
cgroup.procs               cpu.cfs_burst_us     release_agent
cgroup.sane_behavior       cpu.cfs_period_us    sys-fs-fuse-connections.mount
cpuacct.stat               cpu.cfs_quota_us     sys-kernel-config.mount
cpuacct.usage              cpu.idle             sys-kernel-debug.mount
cpuacct.usage_all          cpu.shares           sys-kernel-tracing.mount
cpuacct.usage_percpu       cpu.stat             system.slice
cpuacct.usage_percpu_sys   dev-hugepages.mount  tasks
cpuacct.usage_percpu_user  dev-mqueue.mount     user.slice
cpuacct.usage_sys          docker

####这里面是一个cpu限制
[root@docker-node1 cpu]# cd docker/
[root@docker-node1 docker]# ls
6dd1e34070464817a90b64fd3019da867bb2c433e694492fe6be33188136cf8f  cpuacct.usage_user
cgroup.clone_children                                             cpu.cfs_burst_us
cgroup.procs                                                      cpu.cfs_period_us
cpuacct.stat                                                      cpu.cfs_quota_us
cpuacct.usage                                                     cpu.idle
cpuacct.usage_all                                                 cpu.shares
cpuacct.usage_percpu                                              cpu.stat
cpuacct.usage_percpu_sys                                          notify_on_release
cpuacct.usage_percpu_user                                         tasks
cpuacct.usage_sys
##6dd1e34070464817a90b64fd3019da867bb2c433e694492fe6be33188136cf8f 是我们docker的id

[root@docker-node1 docker]# cd 6dd1e34070464817a90b64fd3019da867bb2c433e694492fe6be33188136cf8f/
[root@docker-node1 6dd1e34070464817a90b64fd3019da867bb2c433e694492fe6be33188136cf8f]# ls
cgroup.clone_children  cpuacct.usage_percpu       cpu.cfs_burst_us   cpu.stat
cgroup.procs           cpuacct.usage_percpu_sys   cpu.cfs_period_us  notify_on_release
cpuacct.stat           cpuacct.usage_percpu_user  cpu.cfs_quota_us   tasks
cpuacct.usage          cpuacct.usage_sys          cpu.idle
cpuacct.usage_all      cpuacct.usage_user         cpu.shares


[root@docker-node1 6dd1e34070464817a90b64fd3019da867bb2c433e694492fe6be33188136cf8f]# cat tasks 
11160
11208

[root@docker-node1 ~]# docker ps 
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS     NAMES
6dd1e3407046   nginx     "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes   80/tcp    test
[root@docker-node1 ~]# docker inspect test | grep Pid
            "Pid": 11160,    ####和前面tasks中一样
            "PidMode": "",
            "PidsLimit": null,

###然后我们关掉容器
[root@docker-node1 ~]# docker rm  -f test
test

####没有这个文件了
[root@docker-node1 6dd1e34070464817a90b64fd3019da867bb2c433e694492fe6be33188136cf8f]# cat tasks 
cat: tasks: No such file or directory
####这就是docker的基础隔离,但是它不彻底

####我们再次开启容器
[root@docker-node1 ~]# docker run -d --name test nginx
e1ee6e4c8964c3394dfb01e79731c35cbbb805e94cd10d8ea95a8c1031644407
[root@docker-node1 ~]# docker inspect test  | grep Pid
            "Pid": 11945,
            "PidMode": "",
            "PidsLimit": null,
####现在不是原来的那个id了
[root@docker-node1 6dd1e34070464817a90b64fd3019da867bb2c433e694492fe6be33188136cf8f]# cat tasks
cat: tasks: No such file or directory

###我们返回上级目录再查看发现这次是新的id
[root@docker-node1 6dd1e34070464817a90b64fd3019da867bb2c433e694492fe6be33188136cf8f]# cd ..
[root@docker-node1 docker]# ls
cgroup.clone_children      cpu.cfs_burst_us
cgroup.procs               cpu.cfs_period_us
cpuacct.stat               cpu.cfs_quota_us
cpuacct.usage              cpu.idle
cpuacct.usage_all          cpu.shares
cpuacct.usage_percpu       cpu.stat
cpuacct.usage_percpu_sys   e1ee6e4c8964c3394dfb01e79731c35cbbb805e94cd10d8ea95a8c1031644407
cpuacct.usage_percpu_user  notify_on_release
cpuacct.usage_sys          tasks
cpuacct.usage_user
[root@docker-node1 docker]# cd e1ee6e4c8964c3394dfb01e79731c35cbbb805e94cd10d8ea95a8c1031644407/
[root@docker-node1 e1ee6e4c8964c3394dfb01e79731c35cbbb805e94cd10d8ea95a8c1031644407]# cat tasks 
11945           ####依旧是一样11945
11987

[root@docker-node1 e1ee6e4c8964c3394dfb01e79731c35cbbb805e94cd10d8ea95a8c1031644407]# cat tasks 
11945
11987

####进入 11945 这个进程的目录,会发现 ns 这个目录,进程的命名空间信息都在这里
[root@docker-node1 e1ee6e4c8964c3394dfb01e79731c35cbbb805e94cd10d8ea95a8c1031644407]# cd /proc/11945/
[root@docker-node1 11945]# ls
arch_status         cwd                maps           pagemap       stack
attr                environ            mem            patch_state   stat
autogroup           exe                mountinfo      personality   statm
auxv                fd                 mounts         projid_map    status
cgroup              fdinfo             mountstats     root          syscall
clear_refs          gid_map            net            sched         task
cmdline             io                 ns             schedstat     timens_offsets
comm                ksm_merging_pages  numa_maps      sessionid     timers
coredump_filter     limits             oom_adj        setgroups     timerslack_ns
cpu_resctrl_groups  loginuid           oom_score      smaps         uid_map
cpuset              map_files          oom_score_adj  smaps_rollup  wchan

####进入 ns 目录:只有这里列出的命名空间对应的资源被隔离,其余资源都没有被隔离
[root@docker-node1 11945]# cd ns/
[root@docker-node1 ns]# ls
cgroup  ipc  mnt  net  pid  pid_for_children  time  time_for_children  user  uts
####所以它的隔离并不彻底:容器的安全性只靠命名空间隔离来保证,命名空间之外的资源仍与宿主机共享
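
命名空间在 /proc 下是可以直接观察的:每个进程的 /proc/&lt;pid&gt;/ns 目录下有一组符号链接,链接指向的 inode 相同即表示共享同一命名空间(以下命令仅适用于 Linux,这里用当前 shell 自身的 pid 做示意):

```shell
# 每个进程的命名空间以符号链接形式暴露在 /proc/<pid>/ns 下(仅限 Linux)
readlink /proc/$$/ns/pid     # 形如 pid:[4026531836],方括号内是 inode 号
readlink /proc/$$/ns/net     # 形如 net:[4026531840]
# 两个进程的同名链接若指向相同 inode,说明它们共享该命名空间
```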

7.2 控制组资源控制的安全

当docker run启动一个容器时,Docker将在后台为容器创建一个独立的控制组策略集合。

Linux Cgroups提供了很多有用的特性,确保各容器可以公平地分享主机的内存、CPU、磁盘IO等资源。

确保容器内发生的资源压力不会影响到宿主机系统和其他容器,这在防止拒绝服务攻击(DoS)方面必不可少

###在这个文件里面给启动的容器创建隔离策略
[root@docker-node1 ns]# cd /sys/fs/cgroup/devices/
[root@docker-node1 devices]# ls
cgroup.clone_children  devices.deny       release_agent                  system.slice
cgroup.procs           devices.list       sys-fs-fuse-connections.mount  tasks
cgroup.sane_behavior   dev-mqueue.mount   sys-kernel-config.mount        user.slice
dev-hugepages.mount    docker             sys-kernel-debug.mount
devices.allow          notify_on_release  sys-kernel-tracing.mount

[root@docker-node1 devices]# cd ..
[root@docker-node1 cgroup]# ls
blkio  cpuacct      cpuset   freezer  memory  net_cls           net_prio    pids  systemd
cpu    cpu,cpuacct  devices  hugetlb  misc    net_cls,net_prio  perf_event  rdma
[root@docker-node1 cgroup]# cd memory/
[root@docker-node1 memory]# ls
cgroup.clone_children               memory.memsw.max_usage_in_bytes
cgroup.event_control                memory.memsw.usage_in_bytes
cgroup.procs                        memory.move_charge_at_immigrate
cgroup.sane_behavior                memory.numa_stat
dev-hugepages.mount                 memory.oom_control
dev-mqueue.mount                    memory.pressure_level
docker                              memory.soft_limit_in_bytes
memory.failcnt                      memory.stat
memory.force_empty                  memory.swappiness
memory.kmem.failcnt                 memory.usage_in_bytes
memory.kmem.limit_in_bytes          memory.use_hierarchy
memory.kmem.max_usage_in_bytes      notify_on_release
memory.kmem.slabinfo                release_agent
memory.kmem.tcp.failcnt             sys-fs-fuse-connections.mount
memory.kmem.tcp.limit_in_bytes      sys-kernel-config.mount
memory.kmem.tcp.max_usage_in_bytes  sys-kernel-debug.mount
memory.kmem.tcp.usage_in_bytes      sys-kernel-tracing.mount
memory.kmem.usage_in_bytes          system.slice
memory.limit_in_bytes               tasks
memory.max_usage_in_bytes           user.slice
memory.memsw.failcnt                x1
memory.memsw.limit_in_bytes
[root@docker-node1 memory]# cd docker/
[root@docker-node1 docker]# ls
cgroup.clone_children
cgroup.event_control
cgroup.procs
e1ee6e4c8964c3394dfb01e79731c35cbbb805e94cd10d8ea95a8c1031644407
memory.failcnt
memory.force_empty
memory.kmem.failcnt
memory.kmem.limit_in_bytes
memory.kmem.max_usage_in_bytes
memory.kmem.slabinfo
memory.kmem.tcp.failcnt
memory.kmem.tcp.limit_in_bytes
memory.kmem.tcp.max_usage_in_bytes
memory.kmem.tcp.usage_in_bytes
memory.kmem.usage_in_bytes
memory.limit_in_bytes
memory.max_usage_in_bytes
memory.memsw.failcnt
memory.memsw.limit_in_bytes
memory.memsw.max_usage_in_bytes
memory.memsw.usage_in_bytes
memory.move_charge_at_immigrate
memory.numa_stat
memory.oom_control
memory.pressure_level
memory.soft_limit_in_bytes
memory.stat
memory.swappiness
memory.usage_in_bytes
memory.use_hierarchy
notify_on_release
tasks

#####这是对docker内存的限制
[root@docker-node1 docker]# pwd
/sys/fs/cgroup/memory/docker


[root@docker-node1 memory]# cd ..
[root@docker-node1 cgroup]# ls
blkio  cpuacct      cpuset   freezer  memory  net_cls           net_prio    pids  systemd
cpu    cpu,cpuacct  devices  hugetlb  misc    net_cls,net_prio  perf_event  rdma
[root@docker-node1 cgroup]# cd blkio/
[root@docker-node1 blkio]# ls
blkio.bfq.io_service_bytes                 cgroup.procs
blkio.bfq.io_service_bytes_recursive       cgroup.sane_behavior
blkio.bfq.io_serviced                      dev-hugepages.mount
blkio.bfq.io_serviced_recursive            dev-mqueue.mount
blkio.reset_stats                          docker
blkio.throttle.io_service_bytes            notify_on_release
blkio.throttle.io_service_bytes_recursive  release_agent
blkio.throttle.io_serviced                 sys-fs-fuse-connections.mount
blkio.throttle.io_serviced_recursive       sys-kernel-config.mount
blkio.throttle.read_bps_device             sys-kernel-debug.mount
blkio.throttle.read_iops_device            sys-kernel-tracing.mount
blkio.throttle.write_bps_device            system.slice
blkio.throttle.write_iops_device           tasks
cgroup.clone_children                      user.slice

[root@docker-node1 blkio]# cd docker/
[root@docker-node1 docker]# ls
blkio.bfq.io_service_bytes
blkio.bfq.io_service_bytes_recursive
blkio.bfq.io_serviced
blkio.bfq.io_serviced_recursive
blkio.bfq.weight
blkio.bfq.weight_device
blkio.reset_stats
blkio.throttle.io_service_bytes
blkio.throttle.io_service_bytes_recursive
blkio.throttle.io_serviced
blkio.throttle.io_serviced_recursive
blkio.throttle.read_bps_device
blkio.throttle.read_iops_device
blkio.throttle.write_bps_device
blkio.throttle.write_iops_device
cgroup.clone_children
cgroup.procs
e1ee6e4c8964c3394dfb01e79731c35cbbb805e94cd10d8ea95a8c1031644407
notify_on_release
tasks

####这是对磁盘的控制
[root@docker-node1 docker]# pwd
/sys/fs/cgroup/blkio/docker

7.3 内核能力机制

能力机制(Capability)是Linux内核一个强大的特性,可以提供细粒度的权限访问控制。

大部分情况下,容器并不需要“真正的”root权限,容器只需要少数的能力即可。

默认情况下,Docker采用“白名单”机制,禁用“必需功能”之外的其他权限。

7.4 Docker服务端防护

使用Docker容器的核心是Docker服务端,确保只有可信的用户才能访问到Docker服务。

将容器的root用户映射到本地主机上的非root用户,减轻容器和主机之间因权限提升而引起的安全问题。

允许Docker 服务端在非root权限下运行,利用安全可靠的子进程来代理执行需要特权权限的操作。这些子进程只允许在特定范围内进行操作。

7.5 docker的资源限制

Linux Cgroups 的全称是 Linux Control Group

  • 是限制一个进程组能够使用的资源上限,包括 CPU、内存、磁盘、网络带宽等等。

  • 对进程进行优先级设置、审计,以及将进程挂起和恢复等操作。

Linux Cgroups 给用户暴露出来的操作接口是文件系统

  • 它以文件和目录的方式组织在操作系统的 /sys/fs/cgroup 路径下。

  • 执行此命令查看:mount -t cgroup

7.5.1限制cpu使用

7.5.1.1.限制cpu的使用量
[root@docker-node1 ~]# docker run -it --rm --name test \
--cpu-period 100000 \
--cpu-quota 20000 ubuntu
#--cpu-period 设置 CPU 调度周期的长度,单位为微秒(通常为 100000,即 100 毫秒)
#--cpu-quota 设置容器在一个周期内可以使用的 CPU 时间,单位也是微秒


[root@docker-node1 ~]# docker run -it --rm --name test \
> --cpu-period 100000 \
> --cpu-quota 20000 ubuntu
root@2f02e62dc66d:/# dd if=/dev/zero of=/dev/null &
[1] 9
root@2f02e62dc66d:/# top
top - 15:21:22 up  7:48,  0 user,  load average: 0.00, 0.00, 0.00
Tasks:   3 total,   2 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.1 us,  5.7 sy,  0.0 ni, 93.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st 
MiB Mem :   1743.5 total,    300.0 free,    744.2 used,   1062.0 buff/cache     
MiB Swap:   4096.0 total,   4093.2 free,      2.8 used.    999.2 avail Mem 

PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND             
 9 root      20   0    2736   1280   1280 R  19.9   0.1   0:01.65 dd                  
 1 root      20   0    4588   3840   3328 S   0.0   0.2   0:00.02 bash                
10 root      20   0    8868   5120   3072 R   0.0   0.3   0:00.00 top                 

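cpu.cfs_quota_us 与 cpu.cfs_period_us 的比值就是容器的 CPU 使用上限,上面 top 中 dd 占用约 20% 正对应 20000/100000,可以用一个简单算式验证:

```shell
# cpu.cfs_quota_us / cpu.cfs_period_us 即为容器的 CPU 使用上限
period=100000
quota=20000
echo "$((quota * 100 / period))%"     # 输出: 20%
```
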
然后我们看看这个限制 cpu 的值存放在哪里

[root@docker-node1 ~]# cd /sys/fs/cgroup/
[root@docker-node1 cgroup]# ls
blkio  cpuacct      cpuset   freezer  memory  net_cls           net_prio    pids  systemd
cpu    cpu,cpuacct  devices  hugetlb  misc    net_cls,net_prio  perf_event  rdma
[root@docker-node1 cgroup]# cd cpu
[root@docker-node1 cpu]# ls
cgroup.clone_children      cpuacct.usage_user   notify_on_release
cgroup.procs               cpu.cfs_burst_us     release_agent
cgroup.sane_behavior       cpu.cfs_period_us    sys-fs-fuse-connections.mount
cpuacct.stat               cpu.cfs_quota_us     sys-kernel-config.mount
cpuacct.usage              cpu.idle             sys-kernel-debug.mount
cpuacct.usage_all          cpu.shares           sys-kernel-tracing.mount
cpuacct.usage_percpu       cpu.stat             system.slice
cpuacct.usage_percpu_sys   dev-hugepages.mount  tasks
cpuacct.usage_percpu_user  dev-mqueue.mount     user.slice
cpuacct.usage_sys          docker
[root@docker-node1 cpu]# cd docker/
[root@docker-node1 docker]# ls
2f02e62dc66de4df6fb3bd2958b2fe3dd3c16082cc4e62d39d8ceb9dc16c7927  cpuacct.usage_user
cgroup.clone_children                                             cpu.cfs_burst_us
cgroup.procs                                                      cpu.cfs_period_us
cpuacct.stat                                                      cpu.cfs_quota_us
cpuacct.usage                                                     cpu.idle
cpuacct.usage_all                                                 cpu.shares
cpuacct.usage_percpu                                              cpu.stat
cpuacct.usage_percpu_sys                                          notify_on_release
cpuacct.usage_percpu_user                                         tasks
cpuacct.usage_sys
[root@docker-node1 docker]# cd 2f02e62dc66de4df6fb3bd2958b2fe3dd3c16082cc4e62d39d8ceb9dc16c7927/
[root@docker-node1 2f02e62dc66de4df6fb3bd2958b2fe3dd3c16082cc4e62d39d8ceb9dc16c7927]# ls
cgroup.clone_children  cpuacct.usage_percpu       cpu.cfs_burst_us   cpu.stat
cgroup.procs           cpuacct.usage_percpu_sys   cpu.cfs_period_us  notify_on_release
cpuacct.stat           cpuacct.usage_percpu_user  cpu.cfs_quota_us   tasks
cpuacct.usage          cpuacct.usage_sys          cpu.idle
cpuacct.usage_all      cpuacct.usage_user         cpu.shares
[root@docker-node1 2f02e62dc66de4df6fb3bd2958b2fe3dd3c16082cc4e62d39d8ceb9dc16c7927]# cat cpu.cfs_quota_us 
20000 ############这就是我们设置的限制值,即容器最多使用约 20% 的 cpu

#####这个路径
[root@docker-node1 2f02e62dc66de4df6fb3bd2958b2fe3dd3c16082cc4e62d39d8ceb9dc16c7927]# pwd
/sys/fs/cgroup/cpu/docker/2f02e62dc66de4df6fb3bd2958b2fe3dd3c16082cc4e62d39d8ceb9dc16c7927


####我们可以修改
[root@docker-node1 2f02e62dc66de4df6fb3bd2958b2fe3dd3c16082cc4e62d39d8ceb9dc16c7927]# echo 30000 > cpu.cfs_quota_us
[root@docker-node1 2f02e62dc66de4df6fb3bd2958b2fe3dd3c16082cc4e62d39d8ceb9dc16c7927]# cat cpu.cfs_quota_us 
30000

###然后查看cpu使用率

7.5.1.2 cpu优先级
#当cpu都不空闲时才会出现争抢的情况,为了实验效果我们关闭一个cpu核心
[root@docker ~]# echo 0 > /sys/devices/system/cpu/cpu1/online
[root@docker ~]# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family     : 6
model           : 58
model name     : Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz
stepping       : 9
microcode       : 0x21
cpu MHz         : 3901.000
cache size     : 8192 KB
physical id     : 0
siblings       : 1
core id         : 0
cpu cores       : 1 ##cpu核心数为1
apicid         : 0
initial apicid : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp             : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc 
arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni 
pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes 
xsave avx f16c rdrand hypervisor lahf_lm pti ssbd ibrs ibpb stibp fsgsbase 
tsc_adjust smep arat md_clear flush_l1d arch_capabilities
bugs           : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds 
swapgs itlb_multihit srbds mmio_unknown
bogomips       : 7802.00
clflush size   : 64
cache_alignment : 64
address sizes   : 45 bits physical, 48 bits virtual
power management:

#开启容器并限制资源
[root@docker ~]# docker run -it --rm --cpu-shares 100 ubuntu #设定cpu相对权重,默认为1024,值越大争抢时分到的cpu越多
root@dc066aa1a1f0:/# dd if=/dev/zero of=/dev/null &
[1] 8
root@dc066aa1a1f0:/# top
top - 12:16:56 up 1 day,  2:22,  0 user, load average: 1.20, 0.37, 0.20
Tasks:   3 total,   2 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s): 37.3 us, 61.4 sy,  0.0 ni,  0.0 id,  0.0 wa,  1.0 hi,  0.3 si,  0.0 st
MiB Mem :   3627.1 total,    502.5 free,    954.5 used,   2471.7 buff/cache
MiB Swap:   2063.0 total,   2062.3 free,      0.7 used.   2672.6 avail Mem
   PID USER     PR NI   VIRT   RES   SHR S %CPU %MEM     TIME+ COMMAND
      8 root      20   0    2736   1536   1536 R   3.6   0.0   0:16.74 dd 
#cpu使用被限制
      1 root      20   0    4588   3968   3456 S   0.0   0.1   0:00.03 bash
      9 root      20   0    8856   5248   3200 R   0.0   0.1   0:00.00 top
      
#开启另外一个容器,不限制cpu优先级(使用默认的 --cpu-shares 1024)
[root@docker ~]# docker run -it --rm ubuntu
root@17f8c9d66fde:/# dd if=/dev/zero of=/dev/null &
[1] 8
root@17f8c9d66fde:/# top
top - 12:17:55 up 1 day,  2:23,  0 user, load average: 1.84, 0.70, 0.32
Tasks:   3 total,   2 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s): 36.2 us, 62.1 sy,  0.0 ni,  0.0 id,  0.0 wa,  1.3 hi,  0.3 si,  0.0 st
MiB Mem :   3627.1 total,    502.3 free,    954.6 used,   2471.7 buff/cache
MiB Swap:   2063.0 total,   2062.3 free,      0.7 used.   2672.5 avail Mem
   PID USER     PR NI   VIRT   RES   SHR S %CPU %MEM     TIME+ COMMAND
      8 root      20   0    2736   1408   1408 R  94.0   0.0   1:09.34 dd 
#cpu未被限制
      1 root      20   0    4588   3968   3456 S   0.0   0.1   0:00.02 bash
      9 root      20   0    8848   5248   3200 R   0.0   0.1   0:00.01 top
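
--cpu-shares 是相对权重:发生争抢时,每个容器理论上分到 自身权重/权重总和 的 CPU(实际占比还会受宿主机其他进程影响,下面的估算仅作示意):

```shell
# 按 --cpu-shares 权重估算争抢时的理论 CPU 份额(整数截断,仅作示意)
share() { echo "$(( $1 * 100 / ($1 + $2) ))%"; }
share 100 1024     # 受限容器,输出: 8%
share 1024 100     # 默认权重容器,输出: 91%
```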

7.5.2限制内存使用

[root@docker-node1 ~]# cd /sys/fs/cgroup/memory/
[root@docker-node1 memory]# ls
cgroup.clone_children               memory.memsw.max_usage_in_bytes
cgroup.event_control                memory.memsw.usage_in_bytes
cgroup.procs                        memory.move_charge_at_immigrate
cgroup.sane_behavior                memory.numa_stat
dev-hugepages.mount                 memory.oom_control
dev-mqueue.mount                    memory.pressure_level
docker                              memory.soft_limit_in_bytes
memory.failcnt                      memory.stat
memory.force_empty                  memory.swappiness
memory.kmem.failcnt                 memory.usage_in_bytes
memory.kmem.limit_in_bytes          memory.use_hierarchy
memory.kmem.max_usage_in_bytes      notify_on_release
memory.kmem.slabinfo                release_agent
memory.kmem.tcp.failcnt             sys-fs-fuse-connections.mount
memory.kmem.tcp.limit_in_bytes      sys-kernel-config.mount
memory.kmem.tcp.max_usage_in_bytes  sys-kernel-debug.mount
memory.kmem.tcp.usage_in_bytes      sys-kernel-tracing.mount
memory.kmem.usage_in_bytes          system.slice
memory.limit_in_bytes               tasks
memory.max_usage_in_bytes           user.slice
memory.memsw.failcnt                x1
memory.memsw.limit_in_bytes
[root@docker-node1 memory]# cd docker/
[root@docker-node1 docker]# ls
b7f1b2056bcef00d28b00812f2eecd5d3c45417f99f975873d44f69d4b6692ad
cgroup.clone_children
cgroup.event_control
cgroup.procs
memory.failcnt
memory.force_empty
memory.kmem.failcnt
memory.kmem.limit_in_bytes
memory.kmem.max_usage_in_bytes
memory.kmem.slabinfo
memory.kmem.tcp.failcnt
memory.kmem.tcp.limit_in_bytes
memory.kmem.tcp.max_usage_in_bytes
memory.kmem.tcp.usage_in_bytes
memory.kmem.usage_in_bytes
memory.limit_in_bytes
memory.max_usage_in_bytes
memory.memsw.failcnt
memory.memsw.limit_in_bytes
memory.memsw.max_usage_in_bytes
memory.memsw.usage_in_bytes
memory.move_charge_at_immigrate
memory.numa_stat
memory.oom_control
memory.pressure_level
memory.soft_limit_in_bytes
memory.stat
memory.swappiness
memory.usage_in_bytes
memory.use_hierarchy
notify_on_release
tasks
[root@docker-node1 docker]# cd b7f1b2056bcef00d28b00812f2eecd5d3c45417f99f975873d44f69d4b6692ad/
[root@docker-node1 b7f1b2056bcef00d28b00812f2eecd5d3c45417f99f975873d44f69d4b6692ad]# ls
cgroup.clone_children               memory.memsw.failcnt
cgroup.event_control                memory.memsw.limit_in_bytes
cgroup.procs                        memory.memsw.max_usage_in_bytes
memory.failcnt                      memory.memsw.usage_in_bytes
memory.force_empty                  memory.move_charge_at_immigrate
memory.kmem.failcnt                 memory.numa_stat
memory.kmem.limit_in_bytes          memory.oom_control
memory.kmem.max_usage_in_bytes      memory.pressure_level
memory.kmem.slabinfo                memory.soft_limit_in_bytes
memory.kmem.tcp.failcnt             memory.stat
memory.kmem.tcp.limit_in_bytes      memory.swappiness
memory.kmem.tcp.max_usage_in_bytes  memory.usage_in_bytes
memory.kmem.tcp.usage_in_bytes      memory.use_hierarchy
memory.kmem.usage_in_bytes          notify_on_release
memory.limit_in_bytes               tasks
memory.max_usage_in_bytes
[root@docker-node1 b7f1b2056bcef00d28b00812f2eecd5d3c45417f99f975873d44f69d4b6692ad]# cat memory.memsw.limit_in_bytes 
209715200
[root@docker-node1 b7f1b2056bcef00d28b00812f2eecd5d3c45417f99f975873d44f69d4b6692ad]# pwd
/sys/fs/cgroup/memory/docker/b7f1b2056bcef00d28b00812f2eecd5d3c45417f99f975873d44f69d4b6692ad


#查看容器内存使用限制
[root@docker-node1 b7f1b2056bcef00d28b00812f2eecd5d3c45417f99f975873d44f69d4b6692ad]# cat memory.limit_in_bytes 
209715200
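这里的 209715200 就是 --memory 200M 按 1M=1024*1024 字节换算出的结果,可以用 shell 简单验证:

```shell
# 200M 换算成字节,应与 memory.limit_in_bytes 中的值一致
echo $((200 * 1024 * 1024))   # 输出 209715200
```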
#开启容器并限制容器使用内存大小
[root@docker system.slice]# docker run -d --name test --memory 200M --memory-swap 200M nginx
#查看容器内存使用限制
[root@docker ~]# cd /sys/fs/cgroup/memory/docker/d09100472de41824bf0省略部分id96b977369dad843740a1e8e599f430/
[root@docker d091004723d4de41824f6b38a7be9b77369dad843740a1e8e599f430]# cat memory.limit_in_bytes
209715200
[root@docker d091004723d4de41824f6b38a7be9977369dad843740a1e8e599f430]# cat memory.memsw.limit_in_bytes
209715200

#测试容器内存限制:在容器中直接测试内存限制效果不明显,可以利用cgexec工具模拟容器向内存中写入数据
#在系统中/dev/shm这个目录被挂载到内存中
[root@docker cgroup]# docker run -d --name test --rm --memory 200M --memory-swap 200M nginx
f5017485d69b50cf2e294bf6c65fcd5e679002e25bd9b0eaf9149eee2e379eec
[root@docker cgroup]# cgexec -g memory:docker/f5017485d69b50cf2e294bf6c65fcd5e679002e25bd9b0eaf9149eee2e379eec dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=150
记录了150+0 的读入
记录了150+0 的写出
157286400字节(157 MB,150 MiB)已复制,0.0543126 s,2.9 GB/s
[root@docker cgroup]# cgexec -g memory:docker/f5017485d69b50cf2e294bf6c65fcd5e679002e25bd9b0eaf9149eee2e379eec dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=180
记录了180+0 的读入
记录了180+0 的写出
188743680字节(189 MB,180 MiB)已复制,0.0650658 s,2.9 GB/s
[root@docker cgroup]# cgexec -g memory:docker/f5017485d69b50cf2e294bf6c65fcd5e679002e25bd9b0eaf9149eee2e379eec dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=120
记录了120+0 的读入
记录了120+0 的写出
125829120字节(126 MB,120 MiB)已复制,0.044017 s,2.9 GB/s
[root@docker cgroup]# cgexec -g memory:docker/f5017485d69b50cf2e294bf6c65fcd5e679002e25bd9b0eaf9149eee2e379eec dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=200
已杀死
#也可以自建控制器
[root@docker ~]# mkdir -p /sys/fs/cgroup/memory/x1/
[root@docker ~]# ls /sys/fs/cgroup/memory/x1/
cgroup.clone_children               memory.memsw.failcnt
cgroup.event_control                memory.memsw.limit_in_bytes
cgroup.procs                        memory.memsw.max_usage_in_bytes
memory.failcnt                      memory.memsw.usage_in_bytes
memory.force_empty                  memory.move_charge_at_immigrate
memory.kmem.failcnt                 memory.numa_stat
memory.kmem.limit_in_bytes          memory.oom_control
memory.kmem.max_usage_in_bytes      memory.pressure_level
memory.kmem.slabinfo                memory.soft_limit_in_bytes
memory.kmem.tcp.failcnt             memory.stat
memory.kmem.tcp.limit_in_bytes      memory.swappiness
memory.kmem.tcp.max_usage_in_bytes  memory.usage_in_bytes
memory.kmem.tcp.usage_in_bytes      memory.use_hierarchy
memory.kmem.usage_in_bytes          notify_on_release
memory.limit_in_bytes               tasks
memory.max_usage_in_bytes
[root@docker ~]# echo 209715200 > /sys/fs/cgroup/memory/x1/memory.limit_in_bytes #内存可用大小限制
[root@docker ~]# cat /sys/fs/cgroup/memory/x1/tasks #查看此控制器被哪些进程调用
[root@docker ~]# cgexec -g memory:x1 dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=100
记录了100+0 的读入
记录了100+0 的写出
104857600字节(105 MB,100 MiB)已复制,0.0388935 s,2.7 GB/s
[root@docker ~]# free -m
               total       used       free     shared buff/cache   available
Mem:            3627        1038        1813         109        1131        2589
Swap:           2062           0        2062
[root@docker ~]# cgexec -g memory:x1 dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=300
记录了300+0 的读入
记录了300+0 的写出
314572800字节(315 MB,300 MiB)已复制,0.241256 s,1.3 GB/s
[root@docker ~]# free -m
               total       used       free     shared buff/cache   available
Mem:            3627        1125        1725         181        1203        2501
Swap:           2062         129        1933 #内存溢出部分被写入swap交换分区
[root@docker ~]# rm -fr /dev/shm/bigfile
[root@docker ~]# echo 209715200 > /sys/fs/cgroup/memory/x1/memory.memsw.limit_in_bytes #内存+swap总量控制
[root@docker ~]# cgexec -g memory:x1 dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=200
已杀死
[root@docker ~]# cgexec -g memory:x1 dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=199
已杀死
[root@docker ~]# rm -fr /dev/shm/bigfile
[root@docker ~]# cgexec -g memory:x1 dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=180
记录了180+0 的读入
记录了180+0 的写出
188743680字节(189 MB,180 MiB)已复制,0.0660052 s,2.9 GB/s
[root@docker ~]# cgexec -g memory:x1 dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=190
记录了190+0 的读入
记录了190+0 的写出
199229440字节(199 MB,190 MiB)已复制,0.0682285 s,2.9 GB/s
[root@docker ~]# cgexec -g memory:x1 dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=200
已杀死

7.5.3限制docker的磁盘io

[root@docker ~]# docker run -it --rm \
--device-write-bps /dev/nvme0n1:30M \
ubuntu
#--device-write-bps 用于限制容器对磁盘io的写入速率,/dev/nvme0n1是指定系统的磁盘,30M即每秒最多写入30M数据
root@a4e9567a666d:/# dd if=/dev/zero of=bigfile #开启容器后会发现速度和设定不匹配,是因为系统的缓存机制
^C592896+0 records in
592895+0 records out
303562240 bytes (304 MB, 289 MiB) copied, 2.91061 s, 104 MB/s
root@a4e9567a666d:/# ^C
root@a4e9567a666d:/# dd if=/dev/zero of=bigfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0515779 s, 2.0 GB/s
root@a4e9567a666d:/# dd if=/dev/zero of=bigfile bs=1M count=100 oflag=direct #设定dd命令直接写入磁盘
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 3.33545 s, 31.4 MB/s
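磁盘io限制同样可以写进 compose 文件。下面是一个示意片段(假设所用 compose 版本支持 blkio_config 字段,服务名 io-limited 为举例自拟):

```yaml
services:
  io-limited:
    image: ubuntu
    command: ["sleep", "infinity"]
    blkio_config:
      device_write_bps:
        - path: /dev/nvme0n1   # 指定系统的磁盘,与上文一致
          rate: '30mb'         # 等价于 --device-write-bps /dev/nvme0n1:30M
```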

7.6docker的安全加固

7.6.1 docker默认隔离性

在系统中运行容器,我们会发现资源并没有完全隔离开

[root@docker-node1 ~]# docker ps -a    #当前没有运行任何容器
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@docker-node1 ~]# free -m    #系统内存使用情况
               total        used        free      shared  buff/cache   available
Mem:            1743         748         295         202        1061         995
Swap:           4095           2        4093

####我们开启一个容器
[root@docker-node1 ~]# docker run --rm --memory 200M -it ubuntu

#####容器内存使用情况
root@e1770cba2d9a:/# free -m
               total        used        free      shared  buff/cache   available
Mem:            1743         791         252         202        1062         952
Swap:           4095           2        4093
#虽然我们限制了容器的内存使用,但是查看到的信息依然是系统中内存的使用信息,并没有隔离开

7.6.2 解决Docker的默认隔离性

LXCFS 是一个为 LXC(Linux Containers)容器提供增强文件系统功能的工具

主要功能

  1. 资源可见性: LXCFS 可以使容器内的进程看到准确的 CPU、内存和磁盘 I/O 等资源使用信息。在没有 LXCFS 时,容器内看到的资源信息可能不准确,这会影响到在容器内运行的应用程序对资源的评估和管理。

  2. 性能监控: 方便对容器内的资源使用情况进行监控和性能分析。通过提供准确的资源信息,管理员和开发人员可以更好地了解容器化应用的性能瓶颈,并进行相应的优化。

安装lxcfs

[root@docker-node1 ~]# dnf install lxc*.rpm -y    #lxcfs服务运行后会在/var/lib/lxcfs/下生成虚拟的proc文件

[root@docker-node1 ~]# docker run -it --rm --name test \
> -v /var/lib/lxcfs/proc/cpuinfo:/proc/cpuinfo:rw \
> -v /var/lib/lxcfs/proc/meminfo:/proc/meminfo:rw \
> -v /var/lib/lxcfs/proc/diskstats:/proc/diskstats:rw \
> -v /var/lib/lxcfs/proc/stat:/proc/stat:rw \
> -v /var/lib/lxcfs/proc/swaps:/proc/swaps:rw \
> -m 200M \
> ubuntu
root@bad846a736f0:/# free -m
               total        used        free      shared  buff/cache   available
Mem:             200           1         198           0           0         198
Swap:            400           0         400
root@bad846a736f0:/# exit
exit

[root@docker-node1 ~]# free -m
               total        used        free      shared  buff/cache   available
Mem:            1743         807         190         202        1125         936
Swap:           4095           2        4093
####这就表明容器内看到的资源信息已经和宿主机隔离开了
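上面的五个 -v 挂载也可以固化到 compose 文件里,避免每次手敲。示意片段如下(假设所用 compose 版本支持 mem_limit 字段,服务名 isolated 为举例自拟):

```yaml
services:
  isolated:
    image: ubuntu
    command: ["sleep", "infinity"]
    mem_limit: 200m    # 等价于 docker run -m 200M
    volumes:
      - /var/lib/lxcfs/proc/cpuinfo:/proc/cpuinfo:rw
      - /var/lib/lxcfs/proc/meminfo:/proc/meminfo:rw
      - /var/lib/lxcfs/proc/diskstats:/proc/diskstats:rw
      - /var/lib/lxcfs/proc/stat:/proc/stat:rw
      - /var/lib/lxcfs/proc/swaps:/proc/swaps:rw
```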

7.6.3 容器特权

在容器中默认情况下即使我是容器的超级用户也无法修改某些系统设定,比如网络

[root@docker-node1 ~]# docker run --rm -it busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
54: eth0@if55: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ip a a 1.1.1.4/24 dev eth0
ip: RTNETLINK answers: Operation not permitted
/ # exit

 这是因为容器使用的很多资源都是和系统真实主机共用的,如果允许容器修改这些重要资源,系统的稳定性会变得非常差

但是由于某些需求,容器需要控制一些默认控制不了的资源,如何解决此问题,这时我们就要设置容器特权

[root@docker-node1 ~]# docker run --rm -it --privileged busybox
/ # id root
uid=0(root) gid=0(root) groups=0(root),10(wheel)
/ # ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
56: eth0@if57: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ip a a 1.1.1.2/24 dev eth0
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
56: eth0@if57: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 1.1.1.2/24 scope global eth0
       valid_lft forever preferred_lft forever
/ # 
/ # 
/ # 
/ # 
/ # fdisk -l
Disk /dev/nvme0n1: 20 GB, 21474836480 bytes, 41943040 sectors
82241 cylinders, 255 heads, 2 sectors/track
Units: sectors of 1 * 512 = 512 bytes

Device       Boot StartCHS    EndCHS        StartLBA     EndLBA    Sectors  Size Id Type
/dev/nvme0n1p1 *  4,4,1       1023,254,2        2048    2099199    2097152 1024M 83 Linux
/dev/nvme0n1p2    1023,254,2  1023,254,2     2099200   41943039   39843840 18.9G 8e Linux LVM
Disk /dev/dm-0: 15 GB, 16101933056 bytes, 31449088 sectors
1957 cylinders, 255 heads, 63 sectors/track
Units: sectors of 1 * 512 = 512 bytes

Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 4096 MB, 4294967296 bytes, 8388608 sectors
522 cylinders, 255 heads, 63 sectors/track
Units: sectors of 1 * 512 = 512 bytes

Disk /dev/dm-1 doesn't contain a valid partition table
/ # exit
#如果添加了--privileged 参数开启容器,容器获得权限近乎于宿主机的root用户

7.6.4 容器白名单

--privileged=true 的权限非常大,接近于宿主机的权限。为了防止用户滥用,需要增加限制,只提供给容器必需的权限。此时Docker提供了权限白名单机制,使用--cap-add添加必要的权限

[root@docker-node1 ~]# docker run --rm -it --cap-add NET_ADMIN busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
58: eth0@if59: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ip a a 1.1.1.2/24 dev eth0            ####网络可以设定
/ # fdisk -l                                ####无法管理磁盘
/ # 
/ # 
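白名单方式同样可以写进 compose 文件,并配合 cap_drop 做最小化授权。示意片段如下(cap_drop: ALL 为补充的收紧写法,并非上文实验内容):

```yaml
services:
  netadmin:
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3000"]
    cap_add:
      - NET_ADMIN    # 只放行网络管理权限,对应 --cap-add NET_ADMIN
    cap_drop:
      - ALL          # 其余capability全部丢弃,进一步收紧权限
```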

8.容器编排工具Docker Compose

主要功能

1. 定义服务:

使用 YAML 格式的配置文件来定义一组相关的容器服务。每个服务可以指定镜像、端口映射、 环境变量、存储卷等参数。

例如,可以在配置文件中定义一个 Web 服务和一个数据库服务,以及它们之间的连接关系。

2. 一键启动和停止:

通过一个简单的命令,可以启动或停止整个应用程序所包含的所有容器。这大大简化了多容器 应用的部署和管理过程。

例如,使用 docker-compose up 命令可以启动配置文件中定义的所有服务,使用 docker-compose down 命令可以停止并删除这些服务。

3. 服务编排:

可以定义容器之间的依赖关系,确保服务按照正确的顺序启动和停止。

例如,可以指定数据库服务必须在 Web 服务之前启动。

支持网络配置,使不同服务的容器可以相互通信。可以定义一个自定义的网络,将所有相关的容器连接到这个网络上。

4. 环境变量管理: 可以在配置文件中定义环境变量,并在容器启动时传递给容器。这使得在不同环境(如开发、 测试和生产环境)中使用不同的配置变得更加容易。

例如,可以定义一个数据库连接字符串的环境变量,在不同环境中可以设置不同的值。

工作原理

1. 读取配置文件: Docker Compose 读取 YAML 配置文件,解析其中定义的服务和参数。

2. 创建容器:

根据配置文件中的定义,Docker Compose 调用 Docker 引擎创建相应的容器。它会下载所需 的镜像(如果本地没有),并设置容器的各种参数。

3. 管理容器生命周期:

Docker Compose 监控容器的状态,并在需要时启动、停止、重启容器。

它还可以处理容器的故障恢复,例如自动重启失败的容器。

Docker Compose 中的管理层

1. 服务 (service) 一个应用的容器,实际上可以包括若干运行相同镜像的容器实例

2. 项目 (project) 由一组关联的应用容器组成的一个完整业务单元,在 docker-compose.yml 文件中 定义

3. 容器(container)容器是服务的具体实例,每个服务可以有一个或多个容器。容器是基于服务定义 的镜像创建的运行实例

8.1Docker Compose 的常用命令参数

docker-compose up :

启动配置文件中定义的所有服务。

可以使用 -d 参数在后台启动服务。

可以使用-f 来指定yml文件

例如: docker-compose up -d 。

[root@docker-node1 ~]# mkdir test
[root@docker-node1 ~]# cd test
[root@docker-node1 test]# cat docker-compose.yaml 
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"

  testdone:
    image: busybox:latest
    command: ["/bin/sh","-c","sleep 100000"]

[root@docker-node1 test]# docker compose up -d
[+] Running 2/2
 ✔ Container test-testdone-1  Started                                       0.4s 
 ✔ Container test-web-1       Started         
 
 
 ####停止并删除配置文件中定义的所有服务以及相关的网络和存储卷
 [root@docker-node1 test]# docker compose down 
[+] Running 3/3
 ✔ Container test-testdone-1  Removed                                      10.1s 
 ✔ Container test-web-1       Removed                                       0.1s 
 ✔ Network test_default       Removed                                       0.1s 
[root@docker-node1 test]# docker compose up -d
[+] Running 3/3
 ✔ Network test_default       Created                                       0.1s 
 ✔ Container test-web-1       Started                                       0.6s 
 ✔ Container test-testdone-1  Started                                       0.5s 

###停止正在运行的服务
[root@docker-node1 test]# docker compose stop
[+] Stopping 2/2
 ✔ Container test-web-1       Stopped                                       0.1s 
 ✔ Container test-testdone-1  Stopped                                      10.1s 

####重启服务。
[root@docker-node1 test]# docker compose restart 
[+] Restarting 2/2
 ✔ Container test-web-1       Started                                       0.5s 
 ✔ Container test-testdone-1  Started                                       0.4s 
[root@docker-node1 test]# 



[root@docker-node1 test]# docker compose -f docker-compose.yaml up -d
[+] Running 2/0
 ✔ Container test-web-1       Running                                       0.0s 
 ✔ Container test-testdone-1  Running              
 
 
 [root@docker-node1 test]# docker compose -f docker-compose.yaml exec testdone sh
/ # 



[root@docker-node1 test]# ls
docker-compose.yaml
[root@docker-node1 test]# mv docker-compose.yaml ren.yaml
####文件改名后,直接把文件名当参数执行会报错
[root@docker-node1 test]# docker compose ren.yaml 

Usage:  docker compose [OPTIONS] COMMAND

####可以使用 -f 来指定yml文件
[root@docker-node1 test]# docker compose -f ren.yaml up -d
[+] Running 2/0
 ✔ Container test-testdone-1  Running                                       0.0s 
 ✔ Container test-web-1       Running                                       0.0s 

服务状态查看

1. docker-compose ps :
列出正在运行的服务以及它们的状态,包括容器 ID、名称、端口映射等信息。
[root@docker test]# docker compose ps

2. docker-compose logs :
查看服务的日志输出。可以指定服务名称来查看特定服务的日志。
[root@docker test]# docker compose logs web

其他操作

1. docker-compose exec :
在正在运行的服务容器中执行命令。
services:
 web:
   image: busybox
   command: ["/bin/sh","-c","sleep 3000"]
   restart: always
   container_name: busybox1
[root@docker test]# docker compose -f ren.yml up -d
[root@docker test]# docker compose -f ren.yml exec web sh    #exec后跟服务名,这里的服务名为web
/ #


2. docker-compose pull :
拉取配置文件中定义的服务所使用的镜像。
[root@docker test]# docker compose -f ren.yml pull
[+] Pulling 2/2
 ✔ test Pulled
 ✔ ec562eabd705 Pull complete  


3. docker-compose config :
验证并查看解析后的 Compose 文件内容
[root@docker test]# docker compose -f ren.yml config
name: test
services:
 web:
   command:
     - /bin/sh
     - -c
     - sleep 3000
   container_name: busybox1
   image: busybox
   networks:
     default: null
   restart: always
networks:
 default:
   name: test_default
[root@docker test]# docker compose -f ren.yml config -q

8.2docker compose的yml文件

服务(services)

1. 服务名称(service1_name/service2_name 等):

每个服务在配置文件中都有一个唯一的名称,用于在命令行和其他部分引用该服务。

services:
 web:
   # 服务1的配置
 mysql:
   # 服务2的配置

2. 镜像(image):

指定服务所使用的 Docker 镜像名称和标签。例如, image: nginx:latest 表示使用 nginx 镜像的最新版本

services:
 web:
   image: nginx
 mysql:
   image: mysql:5.7

3. 端口映射(ports):

将容器内部的端口映射到主机的端口,以便外部可以访问容器内的服务。例如, - "8080:80" 表示将主机的 8080 端口映射到容器内部的 80 端口

services:
 web:
   image: timinglee/mario
   container_name: game #指定容器名称
   restart: always #docker容器开机自动启动
   expose:
     - 1234 #指定容器暴露哪些端口,这些端口仅对链接的服务可见,不会映射到主机的端口
   ports:
     - "80:8080"

4. 环境变量(environment):

为容器设置环境变量,可以在容器内部的应用程序中使用。例如, VAR1: value1 设置环境变 量 VAR1 的值为 value1

services:
 web:
   image: mysql:5.7
   environment:
     MYSQL_ROOT_PASSWORD: ren

5. 存储卷(volumes)

将主机上的目录或文件挂载到容器中,以实现数据持久化或共享。例如,-/host/data:/container/data 将主机上的 /host/data 目录挂载到容器内的 /container/data 路径。

 

services:
 test:
   image: busybox
   command: ["/bin/sh","-c","sleep 3000"]
   restart: always
   container_name: busybox1
   volumes:
     - /etc/passwd:/tmp/passwd:ro #只读挂载本地文件到指定位置

6. 网络(networks):

将服务连接到特定的网络,以便不同服务的容器可以相互通信

services:
 web:
   image: nginx
   container_name: webserver
   network_mode: bridge #使用本机自带bridge网络

7. 命令(command):

覆盖容器启动时默认执行的命令。例如, command: python app.py 指定容器启动时运行 python app.py 命令

[root@docker test]# vim busybox.yml
services:
 web:
   image: busybox
   container_name: busybox
   #network_mode: mynet2
   command: ["/bin/sh","-c","sleep 10000000"]

网络(networks)

定义 Docker Compose 应用程序中使用的网络。可以自定义网络名称和驱动程序等属性。

默认情况下docker compose 在执行时会自动建立网路

services:
 test:
   image: busybox
   command: ["/bin/sh","-c","sleep 3000"]
   restart: always
   network_mode: default
   container_name: busybox
 test1:
   image: busybox
   command: ["/bin/sh","-c","sleep 3000"]
   restart: always
   container_name: busybox1
   networks:
     - mynet1
 test3:
   image: busybox
   command: ["/bin/sh","-c","sleep 3000"]
   restart: always
   container_name: busybox3
   networks:
     - mynet1
networks:
 mynet1:
   driver: bridge #使用桥接驱动,也可以使用macvlan用于跨主机连接
 default:
   external: true #不建立新的网络而使用外部资源
   name: bridge #指定外部资源网络名字
 mynet2:
   ipam:
     driver: default
     config:
       - subnet: 172.28.0.0/16
         gateway: 172.28.0.254
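在定义了带 subnet 的 mynet2 之后,还可以给服务分配固定地址。示意片段如下(172.28.0.10 为举例地址,只要落在该网段内即可):

```yaml
services:
  fixed-ip:
    image: busybox
    command: ["/bin/sh","-c","sleep 3000"]
    networks:
      mynet2:
        ipv4_address: 172.28.0.10   # 必须在 172.28.0.0/16 网段内
networks:
  mynet2:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
          gateway: 172.28.0.254
```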

存储卷(volumes)

定义 Docker Compose 应用程序中使用的存储卷。可以自定义卷名称和存储位置等属性。

services:
 test:
   image: busybox
   command: ["/bin/sh","-c","sleep 3000"]
   restart: always
   container_name: busybox1
   volumes:
     - data:/test #挂载data卷
     - /etc/passwd:/tmp/passwd:ro #只读挂载本地文件到指定位置
volumes:
 data:
   name: ren #指定建立卷的名字

8.3企业示例

利用容器编排完成haproxy和nginx负载均衡架构实施

[root@docker-node2 ~]# yum install haproxy -y --downloadonly --downloaddir=/mnt

[root@docker-node2 ~]# mkdir test
[root@docker-node2 ~]# ls
anaconda-ks.cfg
busybox-latest.tar.gz
containerd.io-1.7.20-3.1.el9.x86_64.rpm
docker-buildx-plugin-0.16.2-1.el9.x86_64.rpm
docker-ce-27.1.2-1.el9.x86_64.rpm
docker-ce-cli-27.1.2-1.el9.x86_64.rpm
docker-ce-rootless-extras-27.1.2-1.el9.x86_64.rpm
docker-compose-plugin-2.29.1-1.el9.x86_64.rpm
docker.tar.gz
test
[root@docker-node2 ~]# cd test/
[root@docker-node2 test]# vim docker-compose.yaml
[root@docker-node2 test]# ls
docker-compose.yaml
[root@docker-node2 test]# cd
[root@docker-node2 ~]# cd /mnt/
[root@docker-node2 mnt]# ls
haproxy-2.4.22-3.el9_3.x86_64.rpm  hgfs
[root@docker-node2 mnt]# rpm2cpio haproxy-2.4.22-3.el9_3.x86_64.rpm | cpio -id
13488 blocks
[root@docker-node2 mnt]# ls
etc  haproxy-2.4.22-3.el9_3.x86_64.rpm  hgfs  usr  var
[root@docker-node2 mnt]# cd etc/
[root@docker-node2 etc]# ls
haproxy  logrotate.d  sysconfig
[root@docker-node2 etc]# cd haproxy/
[root@docker-node2 haproxy]# ls
conf.d  haproxy.cfg


[root@docker-node2 haproxy]# mkdir /var/lib/docker/volumes/conf
[root@docker-node2 haproxy]# cp haproxy.cfg /var/lib/docker/volumes/conf/
[root@docker-node2 haproxy]# cd /var/lib/docker/volumes/conf/
[root@docker-node2 conf]# ls
haproxy.cfg
[root@docker-node2 conf]# vim haproxy.cfg 

listen webcluster
  bind *:80
  balance roundrobin
  server web1 web1:80 check inter 3 fall 3 rise 5
  server web2 web2:80 check inter 3 fall 3 rise 5


[root@docker-node2 test]# vim docker-compose.yaml 
services:
  web1:
    image: nginx:latest
    container_name: web1
    restart: always
    networks:
      - mynet1
    expose:
      - 80
    volumes:
      - data_web1:/usr/share/nginx/html
  web2:
    image: nginx:latest
    container_name: web2
    restart: always
    networks:
      - mynet1
    expose:
      - 80
    volumes:
      - data_web2:/usr/share/nginx/html

  haproxy:
    image: haproxy:2.3
    container_name: haproxy
    restart: always
    networks:
      - mynet1
      - mynet2
    volumes:
      - /var/lib/docker/volumes/conf/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    ports:
      - "80:80"
networks:
  mynet1:
    driver: bridge

  mynet2:
    driver: bridge

volumes:
  data_web1:
    name: data_web1

  data_web2:
    name: data_web2
    
[root@docker-node2 test]# docker compose -f docker-compose.yaml up -d
[+] Running 5/5
 ✔ Network test_mynet1  Created                                                       0.1s 
 ✔ Network test_mynet2  Created                                                       0.1s 
 ✔ Container haproxy    Started                                                       0.8s 
 ✔ Container web1       Started                                                       0.7s 
 ✔ Container web2       Started                                                       0.6s 
[root@docker-node2 test]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS                               NAMES
122b96e21418   nginx:latest   "/docker-entrypoint.…"   10 seconds ago   Up 10 seconds   80/tcp                              web1
8cb3407c453c   haproxy:2.3    "docker-entrypoint.s…"   10 seconds ago   Up 9 seconds    0.0.0.0:80->80/tcp, :::80->80/tcp   haproxy
58008ed7ffef   nginx:latest   "/docker-entrypoint.…"   10 seconds ago   Up 10 seconds   80/tcp                              web2
[root@docker-node2 test]# netstat -antlupe | grep 80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      0          330206     52375/docker-proxy  
tcp6       0      0 :::80                   :::*                    LISTEN      0          329592     52385/docker-proxy  

####实现负载均衡
[root@docker-node2 test]# curl 172.25.250.200
webserver1
[root@docker-node2 test]# curl 172.25.250.200
webserver2
[root@docker-node2 test]# curl 172.25.250.200
webserver1
[root@docker-node2 test]# curl 172.25.250.200
webserver2
[root@docker-node2 test]# curl 172.25.250.200
webserver1
[root@docker-node2 test]# curl 172.25.250.200
webserver2
