Container Learning Notes (2)

Contents

1. Demonstrate and summarize cross-host container communication

2. Common Dockerfile instructions

3. Build an Nginx image with a Dockerfile

4. Deploy a standalone Harbor registry and push/pull images

5. Limit container CPU and memory usage via systemd

Extensions:

1. Summary of the layered image build process

2. Summary of lxcfs-based memory and CPU limits for containers — brief


1. Demonstrate and summarize cross-host container communication

With the default configuration, installing docker on a host creates a docker0 bridge interface, and containers started with docker run normally all land on the same 172.17.0.0/16 subnet. The following experiment traces where the traffic actually flows when containers communicate across hosts.
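A quick check of the default bridge subnet (the output below is what the test hosts report; it may differ elsewhere):

root@ubuntu2204-server1:~# docker network inspect bridge | grep Subnet
                    "Subnet": "172.17.0.0/16"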

Network interfaces on the test machines

root@ubuntu2204-server1:~# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.208  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe67:6cdd  prefixlen 64  scopeid 0x20<link>

veth8fda2d4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::4446:c2ff:fef3:e196  prefixlen 64  scopeid 0x20<link>
        ether 46:46:c2:f3:e1:96  txqueuelen 0  (Ethernet)

root@ubuntu2204-server2:~# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.206  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe5a:367d  prefixlen 64  scopeid 0x20<link>

veth6367947: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::3c28:55ff:fef3:6947  prefixlen 64  scopeid 0x20<link>
        ether 3e:28:55:f3:69:47  txqueuelen 0  (Ethernet)

Capture packets on host 1; in a second window, exec into the nginx container and send a request to host 2

root@ubuntu2204-server1:~# tcpdump -nn -vvv -i veth8fda2d4 ! port 22 and ! arp and ! port 53 -w 1-veth8fda2d4.pcap

root@ubuntu2204-server1:~# tcpdump -nn -vvv -i docker0 ! port 22 and ! arp and ! port 53 -w 2-docker0.pcap

root@ubuntu2204-server1:~# tcpdump -nn -vvv -i ens33 ! port 22 and ! arp and ! port 53 -w 3-ens33.pcap

/ # curl http://10.0.0.206:8080

Capture packets on destination host 2

root@ubuntu2204-server2:~# tcpdump -nn -vvv -i ens33 ! port 22 and ! arp and ! port 53 -w 4-ens33.pcap

root@ubuntu2204-server2:~# tcpdump -nn -vvv -i docker0 ! port 22 and ! arp and ! port 53 -w 5-docker0.pcap

root@ubuntu2204-server2:~# tcpdump -nn -vvv -i veth6367947 ! port 22 and ! arp and ! port 53 -w 6-veth6367947.pcap
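The MAC addresses cited in the analysis below can be read back from the capture files with tcpdump's -e flag (prints link-layer headers), e.g.:

root@ubuntu2204-server1:~# tcpdump -e -nn -r 1-veth8fda2d4.pcap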

Interface details for the hosts and containers:

root@ubuntu2204-server1

root@ubuntu2204-server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:67:6c:dd brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 10.0.0.208/24 metric 100 brd 10.0.0.255 scope global dynamic ens33
       valid_lft 1666sec preferred_lft 1666sec
    inet6 fe80::20c:29ff:fe67:6cdd/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:9d:58:cb:00 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:9dff:fe58:cb00/64 scope link
       valid_lft forever preferred_lft forever
5: veth8fda2d4@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 46:46:c2:f3:e1:96 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::4446:c2ff:fef3:e196/64 scope link
       valid_lft forever preferred_lft forever
root@ubuntu2204-server1:~#
root@ubuntu2204-server1:~# docker exec -it b167216dfafc sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #

 root@ubuntu2204-server2

root@ubuntu2204-server2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:5a:36:7d brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 10.0.0.206/24 metric 100 brd 10.0.0.255 scope global dynamic ens33
       valid_lft 1585sec preferred_lft 1585sec
    inet6 fe80::20c:29ff:fe5a:367d/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:93:70:6c:07 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:93ff:fe70:6c07/64 scope link
       valid_lft forever preferred_lft forever
5: veth6367947@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 3e:28:55:f3:69:47 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::3c28:55ff:fef3:6947/64 scope link
       valid_lft forever preferred_lft forever
root@ubuntu2204-server2:~#
root@ubuntu2204-server2:~# docker exec -it d14468ad6c45 sh
/usr/local/tomcat # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Analyze the captures to determine the traffic path

Capture 1: 1-veth8fda2d4.pcap

src: host 1 container: 02:42:ac:11:00:02

dst: host 1 docker0: 02:42:9d:58:cb:00

Capture 2: 2-docker0.pcap

src: host 1 container: 02:42:ac:11:00:02

dst: host 1 docker0: 02:42:9d:58:cb:00

Capture 3: 3-ens33.pcap

src: host 1 ens33: 00:0c:29:67:6c:dd

dst: host 2 ens33: 00:0c:29:5a:36:7d

Capture 4: 4-ens33.pcap

src: host 1 ens33: 00:0c:29:67:6c:dd

dst: host 2 ens33: 00:0c:29:5a:36:7d

Capture 5: 5-docker0.pcap

src: host 2 docker0: 02:42:93:70:6c:07

dst: host 2 container: 02:42:ac:11:00:02

Capture 6: 6-veth6367947.pcap

src: host 2 docker0: 02:42:93:70:6c:07

dst: host 2 container: 02:42:ac:11:00:02

The traffic path is therefore:

host 1 container → host 1 docker0 → host 1 ens33 → host 2 ens33 → host 2 docker0 → host 2 container

Cross-host container traffic is forwarded hop by hop: when it leaves host 1 the source is rewritten (iptables MASQUERADE) to the host's own IP, and on host 2 the published port is DNATed to the target container — which is why the ens33 captures show only the two hosts' MAC addresses. Containers on the same host talk to each other directly through docker0.
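The NAT rules responsible can be inspected on each host; a hedged sketch assuming the default bridge network and the container IPs above:

#SNAT on host 1, added by docker for the default bridge:
root@ubuntu2204-server1:~# iptables -t nat -S POSTROUTING | grep MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

#DNAT on host 2 for the published port (assuming 8080 maps to the tomcat container):
root@ubuntu2204-server2:~# iptables -t nat -S DOCKER | grep 8080
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:8080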

2. Common Dockerfile instructions

FROM alpine #base image to build on
LABEL maintainer="zezehu" #maintainer metadata (LABEL takes key=value pairs)
ADD [--chown=root:root ]src dest #copy files from the build context into the image; auto-extracts tar.gz archives, but not zip
COPY [--chown=root:root ]src dest #copy files from the build context into the image; never extracts archives
ENV NAME="huzewei" #set an environment variable
USER <user>[:<group>] or USER <UID>[:<GID>] #set the user that subsequent commands run as
RUN apt install -y vim unzip  && cd /etc/nginx  #run a command at build time; interactive commands cannot be used
VOLUME ["/data/nginx"] #define a volume mount point
WORKDIR /data/data1 #set the working directory for subsequent instructions
EXPOSE <port> #document the listening port(s); does not publish them by itself

CMD ["可执行脚本或者程序","参数1","参数2"]
ENTRYPOINT ["可执行脚本或者程序","-b","-c"]
CMD与ENTRYPOINT结合使用,把CMD的命令当作参数给ENTRYPOINT后面的脚本
ENTERYPOINT ["top","-b"]
CMD ["-c"]
等于 
ENTRYPOINT ["top","-b","-c"]

Note: put frequently changing files/configs toward the end of the Dockerfile so that earlier layers stay cached on rebuilds.
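A hedged illustration of the CMD/ENTRYPOINT behavior above: arguments given to docker run replace CMD but are still passed to the ENTRYPOINT (<image> is a placeholder for an image built with ENTRYPOINT ["top","-b"] and CMD ["-c"]):

docker run --rm <image>        #runs: top -b -c
docker run --rm <image> -n 1   #runs: top -b -n 1 (CMD replaced at run time)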

3. Build an Nginx image with a Dockerfile

root@docker1:/data/nginx# ll
total 1116
drwxr-xr-x 2 root root    4096 Jan 22 09:28 ./
drwxr-xr-x 5 root root    4096 Jan 12 08:00 ../
-rw-r--r-- 1 root root     265 Aug 25 16:54 build-command.sh
-rw-r--r-- 1 root root     869 Jan 12 08:05 Dockerfile
-rw-r--r-- 1 root root   38751 Aug 25 16:54 frontend.tar.gz
-rw-r--r-- 1 root root 1073322 Aug 25 16:55 nginx-1.22.0.tar.gz
-rw-r--r-- 1 root root    2812 Jan 12 08:46 nginx.conf
-rw-r--r-- 1 root root    1139 Aug 25 16:54 sources.list
  • Edit the Dockerfile
root@docker1:/data/nginx# vim Dockerfile 
root@docker1:/data/nginx# cat Dockerfile 
FROM ubuntu:22.04
MAINTAINER "huzewei"

ADD sources.list /etc/apt/sources.list
RUN apt update && apt  install -y iproute2  ntpdate  tcpdump telnet traceroute nfs-kernel-server nfs-common  lrzsz tree  openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev ntpdate tcpdump telnet traceroute  gcc openssh-server lrzsz tree  openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev ntpdate tcpdump telnet traceroute iotop unzip zip make


ADD nginx-1.22.0.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.22.0 && ./configure --prefix=/apps/nginx && make && make install  && ln -sv /apps/nginx/sbin/nginx /usr/bin
RUN groupadd  -g 2088 nginx && useradd  -g nginx -s /usr/sbin/nologin -u 2088 nginx && chown -R nginx.nginx /apps/nginx
ADD nginx.conf /apps/nginx/conf/
ADD frontend.tar.gz /apps/nginx/html/


EXPOSE 80 443
#ENTRYPOINT ["nginx"]
CMD ["nginx","-g","daemon off;"]
root@docker1:/data/nginx# 
  • Build the image and push it to the registry
root@docker1:/data/nginx# vi build-command.sh 
root@docker1:/data/nginx#cat build-command.sh 
#!/bin/bash
docker build -t zezehu.cloud/myserver/nginx:20240122 .
docker push zezehu.cloud/myserver/nginx:20240122

#/usr/local/bin/nerdctl build -t harbor.magedu.net/magedu/nginx-base:1.22.0 .
#/usr/local/bin/nerdctl push harbor.magedu.net/magedu/nginx-base:1.22.0

root@docker1:/data/nginx# sh build-command.sh 
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon  1.123MB
Step 1/11 : FROM ubuntu:22.04
22.04: Pulling from library/ubuntu
29202e855b20: Pull complete 
Digest: sha256:e6173d4dc55e76b87c4af8db8821b1feae4146dd47341e4d431118c7dd060a74
Status: Downloaded newer image for ubuntu:22.04
 ---> e34e831650c1
Step 2/11 : MAINTAINER "huzewei"
 ---> Running in 09bbfeef352f
Removing intermediate container 09bbfeef352f
 ---> 06eb6b6d5e67
Step 3/11 : ADD sources.list /etc/apt/sources.list
 ---> 5d5fa7e0b2a5
Step 4/11 : RUN apt update && apt  install -y iproute2  ntpdate  tcpdump telnet traceroute nfs-kernel-server nfs-common  lrzsz tree  openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev ntpdate tcpdump telnet traceroute  gcc openssh-server lrzsz tree  openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev ntpdate tcpdump telnet traceroute iotop unzip zip make
 ---> Running in 017365c09752



root@docker1:/data/nginx# docker images
REPOSITORY                                        TAG        IMAGE ID       CREATED          SIZE
zezehu.cloud/myserver/nginx                       20240122   209a8378c051   38 seconds ago   453MB
zezehu.cloud/pub-images/tomcat-base               v8.5.43    66094737053a   6 days ago       1.2GB
harbor.linuxarchitect.io/pub-images/tomcat-base   v8.5.43    66094737053a   6 days ago       1.2GB
zezehu.cloud/pub-images/jdk-base                  v8.212     5198cb715eb5   6 days ago       1.17GB
zezehu.cloud/baseimages/magedu-centos-base        7.9.2009   37bb3db69be0   6 days ago       766MB
ubuntu                                            22.04      e34e831650c1   10 days ago      77.9MB
tomcat                                            latest     f1fd4dbf53f6   12 days ago      454MB
mysql                                             latest     73246731c4b0   4 weeks ago      619MB
alpine                                            3.16.2     9c6f07244728   17 months ago    5.54MB
zezehu.cloud/myserver/alpine                      3.16.2     9c6f07244728   17 months ago    5.54MB
centos                                            7.9.2009   eeb6ee3f44bd   2 years ago      204MB
root@docker1:/data/nginx#
  • Create and run the nginx container
root@docker1:/data/nginx# docker run -it  zezehu.cloud/myserver/nginx:20240122 bash 
root@a37e8420d5a0:/# cd /apps/
root@a37e8420d5a0:/apps# ls
nginx
root@a37e8420d5a0:/apps# 
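Note that passing bash above overrides the image's CMD, so nginx itself is not started. A hedged way to verify the image actually serves traffic (the host port mapping is an arbitrary choice):

root@docker1:/data/nginx# docker run -d -p 80:80 zezehu.cloud/myserver/nginx:20240122
root@docker1:/data/nginx# curl -I http://127.0.0.1/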

4. Deploy a standalone Harbor registry and push/pull images

  • Create a new data disk mounted at /data/harbor for the harbor registry
root@harbor1:~# df -hT
Filesystem     Type   Size  Used Avail Use% Mounted on
tmpfs          tmpfs  388M  1.5M  387M   1% /run
/dev/sda2      ext4    20G  5.8G   13G  32% /
tmpfs          tmpfs  1.9G     0  1.9G   0% /dev/shm
tmpfs          tmpfs  5.0M     0  5.0M   0% /run/lock
/dev/sda3      ext4   2.0G  129M  1.7G   8% /boot
/dev/sda5      ext4   3.9G   56K  3.7G   1% /home
tmpfs          tmpfs  388M  4.0K  388M   1% /run/user/0
/dev/sdb1      ext4  1007G   28K  956G   1% /data/harbor
  • Install docker #omitted

  • Download the harbor release package #https://github.com/goharbor/harbor

  • Edit the harbor.yml configuration file
root@harbor1:~# mkdir /apps
root@harbor1:~# cd /apps/
root@harbor1:/apps# tar xvf harbor-offline-installer-v2.6.1.tgz
root@harbor1:/apps# cd harbor/

#edit the harbor configuration file
root@harbor1:/apps/harbor# cp harbor.yml.tmpl harbor.yml
root@harbor1:/apps/harbor# vim harbor.yml 

#comment out the https section
# https related config
#https:
  # https port for harbor, default is 443
# port: 443
  # The path of cert and key files for nginx
 # certificate: /your/certificate/path
 # private_key: /your/private/key/path

#set the hostname
hostname: zezehu.cloud
#set the admin password
harbor_admin_password: 12345678
  • Deploy harbor
#run the installer
root@harbor1:/apps/harbor# ./install.sh --with-trivy

[Step 0]: checking if docker is installed ...

Note: docker version: 24.0.2

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.28.6

[Step 2]: loading Harbor images ...

#on the docker client host, configure name resolution and trust the insecure registry
root@docker1:~# vim /etc/hosts
	192.168.10.15 zezehu.cloud
root@docker1:~# vim /etc/docker/daemon.json
	"insecure-registries": ["zezehu.cloud","www.zezehu.cloud",,"192.168.10.15"], #加上这条配置
root@docker1:~# systemctl restart docker	#restart docker
root@docker1:~# docker login zezehu.cloud	#log in to harbor
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
root@docker1:~# docker push zezehu.cloud/myserver/alpine:3.16.2	#push the image
The push refers to repository [zezehu.cloud/myserver/alpine]
994393dc58e7: Pushed 
3.16.2: digest: sha256:1304f174557314a7ed9eddb4eab12fed12cb0cd9809e4c28f29af86979a3c870 size: 528

#after a successful login, docker stores the (base64-encoded) credentials in the home directory:
root@docker1:~# cat ~/.docker/config.json 
{
	"auths": {
		"zezehu.cloud": {
			"auth": "YWRtaW46MTIzNDU2Nzg="
		}
	}
}
root@docker1:~# 
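For the download side, a hedged sketch of pulling the image back on another docker host (docker2 is a placeholder; it is assumed to have the same /etc/hosts entry and insecure-registries setting):

root@docker2:~# docker pull zezehu.cloud/myserver/alpine:3.16.2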

5. Limit container CPU and memory usage via systemd

  • Principle: docker/kubelet uses systemd to drive cgroups, which enforce the container resource limits; set the cgroup driver in daemon.json and restart docker (a verification sketch follows below)
root@docker1:/etc/docker# vim daemon.json

"exec-opts": ["native.cgroupdriver=systemd"],
  • docker flags for resource limits:
-m, --memory	#memory limit (e.g. -m 256m)
--cpus	#CPU limit (e.g. --cpus 2)
  • Start a container with the limits applied
root@docker1:/etc/docker# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
root@docker1:/etc/docker# docker run -it --rm -m 256m --cpus 2 centos:7.9.2009
[root@9037eb7916e0 /]# free -h
              total        used        free      shared  buff/cache   available
Mem:           3.8G        423M        358M        752K        3.0G        3.1G
Swap:          4.0G        2.5M        4.0G
[root@9037eb7916e0 /]# lscpu 
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             2
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
Stepping:              1
CPU MHz:               2394.453
BogoMIPS:              4788.90
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              35840K
NUMA node0 CPU(s):     0-3
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap xsaveopt arat md_clear flush_l1d arch_capabilities
[root@9037eb7916e0 /]# 

Because /proc inside the container is still the host's /proc, commands such as free and lscpu report the host's resources (3.8G of memory and 4 CPUs above) rather than the container's limits of -m 256m --cpus 2; LXCFS solves this.
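The limits are enforced all the same; a hedged sketch for checking them from the host, assuming cgroup v2 with the systemd driver (the container must still be running):

root@docker1:~# CID=$(docker ps -q --no-trunc | head -1)
root@docker1:~# cat /sys/fs/cgroup/system.slice/docker-${CID}.scope/memory.max
268435456	#256m
root@docker1:~# cat /sys/fs/cgroup/system.slice/docker-${CID}.scope/cpu.max
200000 100000	#quota/period = 2 CPUs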

Extensions:

1. Summary of the layered image build process

Principle:

  • An image is made up of one or more merged layers
  • An image can be built on top of another image; the referenced image is called the parent image

When building an image with a Dockerfile, put the frequently changing files/configs toward the end; a rebuild after a change can then reuse the cached earlier layers and save time.

In a layered build workflow, the operating system, JDK, Python, Go, nginx, and other infrastructure pieces are usually packaged as base images, and business images are then built on top of them (see the sketch after this list). This approach has the following advantages:

  • Image layers are reused, saving resources
  • Rebuilding after a change takes less time
  • Easier to manage
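A minimal hedged sketch of such a business image, built on the nginx base image from section 3 (myapp.tar.gz is a hypothetical application archive):

FROM zezehu.cloud/myserver/nginx:20240122
#only the business content changes; the OS and nginx layers are reused from the parent image
ADD myapp.tar.gz /apps/nginx/html/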

2. Summary of lxcfs-based memory and CPU limits for containers — brief

#install
apt install lxcfs
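A hedged usage sketch: bind-mount the lxcfs-maintained /proc files into the container so that free/top reflect the cgroup limits (paths assume lxcfs's default mount point /var/lib/lxcfs):

docker run -it -m 256m \
  -v /var/lib/lxcfs/proc/cpuinfo:/proc/cpuinfo:rw \
  -v /var/lib/lxcfs/proc/meminfo:/proc/meminfo:rw \
  -v /var/lib/lxcfs/proc/stat:/proc/stat:rw \
  -v /var/lib/lxcfs/proc/swaps:/proc/swaps:rw \
  -v /var/lib/lxcfs/proc/uptime:/proc/uptime:rw \
  ubuntu:22.04 bash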