1. Introduction to Containerd
Containerd is an industry-standard container runtime with an emphasis on simplicity, robustness, and portability. It runs as a daemon on both Linux and Windows and can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, and low-level storage and networking. Unlike Docker, containerd is designed to be embedded into larger systems such as Kubernetes rather than to face developers directly; it is first and foremost a container runtime whose job is to host running containers.
1.1 Where Containerd Came From
Once upon a time, Docker rose to power and swept the entire martial-arts world with its killer move, the "image", dealing a crushing, dimension-reducing blow to every other container-technology clan, leaving them no chance to fight back; even a prestigious house like Google was no exception. To avoid being wiped out, Google swallowed its pride (though it was hardly going to grovel) and proposed that Docker join it in promoting an open-source container runtime as Docker's core dependency, or else they would be sworn enemies. Docker felt its intelligence had been insulted and flatly refused to play along with Google: we'll see who's afraid of whom! Today it is obvious that this one decision threw away Docker's bright future and led to its present tragedy.
Right after that, Google joined forces with industry heavyweights such as Red Hat and IBM and talked Docker into donating libcontainer to a neutral community, the OCI (Open Container Initiative), where it was renamed runc, thoroughly erasing Docker's fingerprints from it~~~~~
And it did not stop there. To break Docker's dominance, the same heavyweights founded a foundation, the CNCF (Cloud Native Computing Foundation), a name you are surely all familiar with by now, so I won't introduce it further. The CNCF's goal was clear: since Docker could not be beaten at the container level, they would simply go up a dimension, to large-scale container orchestration, and take Docker down from there. Docker was no pushover either: it brought out Swarm to do battle with Kubernetes. You all know how that ended: Swarm lost. Docker then, having no better option, donated its core dependency Containerd to the CNCF in order to position Docker as a PaaS platform. Clearly, this only hastened its own demise.
Google, Red Hat, IBM and the other big players were baffled: back then we asked you to build a neutral core runtime together and you refused, insisting on rolling your own; fine, but now you've donated it? What kind of move is that?? On second thought: since you donated it, we'll just take Containerd and use it directly, which saves us the trouble. First of all, to underline Kubernetes' neutrality, a standardized container runtime interface was of course needed, so that any runtime implementing it could join the party, and naturally the first runtime to support it was Containerd. As for the interface's name, you've surely guessed it. That's right! It's the CRI (Container Runtime Interface).
The big players decided that still wasn't enough. To lull Docker, Kubernetes temporarily humbled itself and built a shim (think of it as an adapter) into its own components to translate CRI calls into Docker API calls, letting Docker believe the two were happily playing together. Boil the frog slowly, and eat it once it's cooked~~~~
In this way, Kubernetes openly pretended to march alongside Docker while secretly and relentlessly hardening Containerd and polishing the smoothness of its CRI integration. Now that Containerd's training has reached max level, it is time to show the cards and wave Docker goodbye. The rest, as you all know, is history~~~
Docker the image technology succeeded, but Docker the company failed.
1.2 Containerd Architecture
Today, Containerd has become an industrial-grade container runtime, and it even has a slogan: super simple! super robust! super portable! Of course, to keep Docker from thinking its lunch was being eaten, Containerd claims it is designed primarily to be embedded into a larger system (read: Kubernetes) rather than used directly by developers or end users. In practice, Containerd does just about everything anyway: developers and end users can manage the complete container lifecycle on a host, including image transfer and storage, container execution and management, storage, networking, and more.
First, take a look at Containerd's architecture diagram:
As you can see, Containerd still uses the standard C/S architecture: the server exposes a stable API over gRPC, and clients call that API to perform high-level operations. For the sake of decoupling, Containerd assigns different responsibilities to different components; each component is effectively a subsystem, and the components that connect different subsystems are called modules. At the top level, Containerd is divided into two subsystems:
💠 Bundle: in Containerd, a Bundle contains the configuration, metadata, and root filesystem data; you can think of it as a container's on-disk filesystem. The Bundle subsystem lets users extract and pack Bundles from images.
💠 Runtime: the Runtime subsystem executes Bundles, for example creating containers.
The behavior of each subsystem is implemented by one or more cooperating modules (the Core part of the architecture diagram). Each type of module is integrated into Containerd as a plugin, and plugins depend on one another. For instance, each long dashed box in the diagram represents a type of plugin, including the Service Plugin, Metadata Plugin, GC Plugin, Runtime Plugin, and so on, where the Service Plugin in turn depends on the Metadata Plugin, GC Plugin, and Runtime Plugin. Each small box represents a finer-grained plugin, for example the Metadata Plugin depends on the Containers Plugin, Content Plugin, and others. In short: everything is a plugin, and a plugin is just a plugin.
Here are a few commonly used plugins:
💠 Content Plugin: provides access to the content-addressable storage of images; all immutable content is stored here.
💠 Snapshot Plugin: manages filesystem snapshots of container images. Each layer of an image is unpacked into a filesystem snapshot, analogous to the graphdriver in Docker.
💠 Metrics: exposes monitoring metrics for each component.
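Since everything is a plugin, a quick way to see this for yourself (once containerd is installed, as covered in the next chapter) is ctr plugins ls, which lists every plugin registered in the running daemon. The output below is an abbreviated sketch; the exact plugin set varies with the containerd version:
➜ ~ ctr plugins ls
TYPE                            ID                    PLATFORMS      STATUS
io.containerd.content.v1        content               -              ok
io.containerd.snapshotter.v1    overlayfs             linux/amd64    ok
io.containerd.metadata.v1.bolt  bolt                  -              ok
io.containerd.gc.v1             scheduler             -              ok
io.containerd.service.v1        containers-service    -              ok
io.containerd.runtime.v2        task                  linux/amd64    ok
io.containerd.grpc.v1           cri                   linux/amd64    ok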
Overall, Containerd is divided into three major pieces: Storage, Metadata, and Runtime. Distilling the architecture diagram above gives the following:
Performance-testing Docker, CRI-O, and Containerd with Bucketbench, comparing the time they take to start, stop, and delete containers, yields the following results:
Containerd holds up well across the board, and its overall performance is better than both Docker and CRI-O.
1.3 Containerd's Goals and Vision
With containerd and runC established as the cornerstone of standardized container services, higher-level applications can build directly on top of them. The container platforms shown above already support the containerd and runC combination, and more platforms like them can be expected. Note: Containerd is designed to be embedded into a larger system rather than used directly by developers or end users, which is exactly why containerd carries such a grand vision.
2. Installing Containerd
2.1 Prerequisites for Installing Containerd.io
2.1.1 Containerd.io requires a CentOS kernel version higher than 3.10; check the prerequisites on this page to verify that your CentOS version supports Containerd.io.
Check your current kernel version with uname -r:
[root@k8s-master ~]# uname -r
4.18.0-80.el8.x86_64
2.1.2 Log in to CentOS as root and make sure the yum packages are up to date.
[root@k8s-master ~]# yum update
Repository AppStream is listed more than once in the configuration
Repository BaseOS is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository PowerTools is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Last metadata expiration check: 0:09:18 ago on Fri 25 Dec 2020 09:04:58 PM CST.
Dependencies resolved.
Nothing to do.
Complete!
[root@k8s-master ~]#
2.1.3 To avoid conflicts with podman, first run the following:
[root@k8s-master ~]# yum erase podman buildah -y
2.2 Two Installation Methods
2.2.1 RPM Package Installation
[root@k8s-master ~]# yum install https://download.docker.com/linux/centos/8/x86_64/stable/Packages/containerd.io-1.4.9-3.1.el8.x86_64.rpm -y
Installation on CentOS 7:
[root@node1 ~]# yum install http://ftp.sjtu.edu.cn/sites/docker-ce/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
Or install via the docker-ce repository directly:
# Step 1: install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: rewrite the repo to use the Aliyun mirror
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: search for the package
yum list | grep containerd
containerd.io.x86_64 1.6.6-3.1.el7 docker-ce-stable
# Step 5: install the package
[root@node1 ~]# yum install containerd.io -y
Start the service and verify it works:
[root@node1 ~]# systemctl enable containerd.service --now
[root@node1 ~]# ctr version
Client:
Version: 1.6.6
Revision: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
Go version: go1.17.11
Server:
Version: 1.6.6
Revision: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
UUID: ce2e1733-9b0e-4c7c-b8b0-150fbb27f5e8
⚠️ This command may fail because of download timeouts; in that case, download the package beforehand and install it offline:
yum install containerd.io-1.4.9-3.1.el8.x86_64.rpm -y
2.2.2 Binary Installation
Containerd ships two kinds of release archives:
- The first, containerd-xxx, is fine for standalone testing; it does not bundle runc, which must be installed separately.
- The second, cri-containerd-cni-xxx, bundles runc plus the files Kubernetes needs. A Kubernetes cluster needs this one; although it bundles runc, it relies on the system's libseccomp (seccomp, secure computing mode, restricts the system calls a container may issue).
2.2.2.1 Download the Containerd release archive
wget https://github.com/containerd/containerd/releases/download/v1.7.24/cri-containerd-cni-1.7.24-linux-amd64.tar.gz
2.2.2.2 Install containerd
1. Check the downloaded archive
[root@localhost ~]# ls
cri-containerd-cni-1.7.24-linux-amd64.tar.gz
2. Unpack the archive
tar xf cri-containerd-cni-1.7.24-linux-amd64.tar.gz
3. Inspect the unpacked directories
[root@localhost ~]# ls
cri-containerd.DEPRECATED.txt etc opt usr
4. The etc directory mainly contains the containerd service-management configuration and the cni network configuration files
[root@localhost ~]# ls etc
cni crictl.yaml systemd
[root@localhost ~]# ls etc/systemd/
system
[root@localhost ~]# ls etc/systemd/system/
containerd.service
5. The opt directory mainly contains the containerd configuration used in GCE environments, plus the cni plugins
[root@localhost ~]# ls opt
cni containerd
[root@localhost ~]# ls opt/containerd/
cluster
[root@localhost ~]# ls opt/containerd/cluster
gce version
[root@localhost ~]# ls opt/containerd/cluster/gce
cloud-init cni.template configure.sh env
6. The usr directory contains the containerd runtime binaries, including runc (a small build, without static files)
[root@localhost ~]# ls usr
local
[root@localhost ~]# ls usr/local/
bin sbin
[root@localhost ~]# ls usr/local/bin
containerd containerd-shim containerd-shim-runc-v1 containerd-shim-runc-v2 containerd-stress crictl critest ctd-decoder ctr
[root@localhost ~]# ls usr/local/sbin
runc
7. Install containerd.service as a systemd unit
[root@node2 ~]# cp /etc/systemd/system/containerd.service /usr/lib/systemd/system/
8. Generate the configuration file
[root@node2 ~]# mkdir /etc/containerd
[root@node2 ~]# containerd config default > /etc/containerd/config.toml
9. Start the service
[root@node2 ~]# systemctl enable --now containerd
[root@node2 ~]# ctr version
Client:
Version: v1.6.6
Revision: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
Go version: go1.17.11
Server:
Version: v1.6.6
Revision: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
UUID: dbce7710-68a0-46ed-bb8a-25d286674826
2.2.2.3 Install runc
1. Download runc
# download with wget
[root@node2 ~]# wget -c https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64
# copy runc into a directory on the PATH
[root@node2 ~]# cp runc.amd64 /usr/sbin/runc
# make it executable
[root@node2 ~]# chmod +x /usr/sbin/runc
# check the installed version
[root@node2 ~]# runc -v
runc version 1.1.3
commit: v1.1.3-0-g6724737f
spec: 1.0.2-dev
go: go1.17.10
libseccomp: 2.5.4
Installed successfully!
2. Generate the default configuration and modify the default registry address
[root@k8s-master ~]# cd /etc/containerd
[root@k8s-master ~]# ls
config.toml
[root@k8s-master ~]# containerd config default | tee config.toml
..........
............
[root@node1 containerd]# containerd config default | tee config.toml
containerd: error while loading shared libraries: libseccomp.so.2: cannot open shared object file: No such file or directory
Fix:
[root@node1 containerd]# yum install libseccomp
# restart containerd
[root@k8s-master ~]# systemctl restart containerd
Failed to restart containerd.service: Access denied
See system logs and 'systemctl status containerd.service' for details.
# if you hit the error above, run the following
[root@k8s-master ~]# kill -TERM 1
# then retry the restart
[root@k8s-master ~]# systemctl restart containerd
Using the systemd cgroup driver
To use the systemd cgroup driver with runc, set the following in /etc/containerd/config.toml:
.......
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true # add this line
.....
[plugins."io.containerd.grpc.v1.cri".registry]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
# endpoint = ["https://registry-1.docker.io"] # comment out this line
endpoint = ["https://xxxx.mirror.aliyuncs.com"] # add this line; look up your mirror address in the Aliyun console
.....
[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd" # change this path (optional)
After applying this change, make sure to restart containerd again:
systemctl restart containerd
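If you prefer not to edit config.toml by hand, the same change can be scripted. This sketch assumes the file was generated by containerd config default, which emits SystemdCgroup = false; if your file lacks the key, add it manually as shown above:
# flip SystemdCgroup from false to true in the generated config
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# confirm the change took effect, then restart
grep SystemdCgroup /etc/containerd/config.toml
systemctl restart containerd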
3. Verify with a few commands
Compared with docker, containerd adds the concept of namespaces: every image and container is visible only within its own namespace. Kubernetes currently uses k8s.io as its namespace; ctr ns ls lists the namespaces.
ctr is the command-line tool that ships with containerd; run ctr -h for the full command reference:
[root@k8s-master ~]# ctr -h
NAME:
ctr -
__
_____/ /______
/ ___/ __/ ___/
/ /__/ /_/ /
\___/\__/_/
containerd CLI
USAGE:
ctr [global options] command [command options] [arguments...]
VERSION:
1.4.9
DESCRIPTION:
ctr is an unsupported debug and administrative client for interacting
with the containerd daemon. Because it is unsupported, the commands,
options, and operations are not guaranteed to be backward compatible or
stable from release to release of the containerd project.
COMMANDS:
plugins, plugin provides information about containerd plugins
version print the client and server versions
containers, c, container manage containers
content manage content
events, event display containerd events
images, image, i manage images
leases manage leases
namespaces, namespace, ns manage namespaces
pprof provide golang pprof outputs for containerd
run run a container
snapshots, snapshot manage snapshots
tasks, t, task manage tasks
install install a new package
oci OCI tools
shim interact with a shim directly
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--debug enable debug output in logs
--address value, -a value address for containerd's GRPC server (default: "/run/containerd/containerd.sock") [$CONTAINERD_ADDRESS]
--timeout value total timeout for ctr commands (default: 0s)
--connect-timeout value timeout for connecting to containerd (default: 0s)
--namespace value, -n value namespace to use with commands (default: "default") [$CONTAINERD_NAMESPACE]
--help, -h show help
--version, -v print the version
[root@k8s-master ~]#
Success again!
3. Configuring Containerd
3.1 Generating the Configuration File
Containerd's default configuration file lives at /etc/containerd/config.toml; a default configuration can be generated with:
[root@node ~]# mkdir /etc/containerd
[root@node ~]# containerd config default > /etc/containerd/config.toml
3.2 Registry Mirrors (Image Acceleration)
For various reasons, pulling from public registries inside China is painfully slow, so to save time for actually writing code we configure registry mirrors for Containerd. Containerd's registry mirrors differ from Docker's in two ways:
💠 Containerd: mirrors apply only to images pulled through the CRI, i.e. they take effect only when pulls are made via crictl or Kubernetes; pulls made with ctr do not use them.
💠 Docker: supports mirrors only for Docker Hub, whereas Containerd supports configuring a mirror for any registry.
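One prerequisite worth stating before the snippets below: the certs.d/hosts.toml mechanism only takes effect if containerd's CRI plugin is pointed at that directory via config_path (available in containerd 1.5 and later; see the hosts.md reference at the end of this chapter). A minimal config.toml fragment:
# /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry]
   config_path = "/etc/containerd/certs.d"
Restart containerd after setting this (systemctl restart containerd), then create the per-registry directories as follows.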
# Docker Hub mirror
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
capabilities = ["pull", "resolve"]
[host."https://docker.m.daocloud.io"]
capabilities = ["pull", "resolve"]
[host."https://reg-mirror.qiniu.com"]
capabilities = ["pull", "resolve"]
[host."https://registry.docker-cn.com"]
capabilities = ["pull", "resolve"]
[host."http://hub-mirror.c.163.com"]
capabilities = ["pull", "resolve"]
EOF
# registry.k8s.io mirror
mkdir -p /etc/containerd/certs.d/registry.k8s.io
tee /etc/containerd/certs.d/registry.k8s.io/hosts.toml << 'EOF'
server = "https://registry.k8s.io"
[host."https://k8s.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# docker.elastic.co mirror
mkdir -p /etc/containerd/certs.d/docker.elastic.co
tee /etc/containerd/certs.d/docker.elastic.co/hosts.toml << 'EOF'
server = "https://docker.elastic.co"
[host."https://elastic.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# gcr.io mirror
mkdir -p /etc/containerd/certs.d/gcr.io
tee /etc/containerd/certs.d/gcr.io/hosts.toml << 'EOF'
server = "https://gcr.io"
[host."https://gcr.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# ghcr.io mirror
mkdir -p /etc/containerd/certs.d/ghcr.io
tee /etc/containerd/certs.d/ghcr.io/hosts.toml << 'EOF'
server = "https://ghcr.io"
[host."https://ghcr.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# k8s.gcr.io mirror
mkdir -p /etc/containerd/certs.d/k8s.gcr.io
tee /etc/containerd/certs.d/k8s.gcr.io/hosts.toml << 'EOF'
server = "https://k8s.gcr.io"
[host."https://k8s-gcr.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# mcr.microsoft.com mirror
mkdir -p /etc/containerd/certs.d/mcr.microsoft.com
tee /etc/containerd/certs.d/mcr.microsoft.com/hosts.toml << 'EOF'
server = "https://mcr.microsoft.com"
[host."https://mcr.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# nvcr.io mirror
mkdir -p /etc/containerd/certs.d/nvcr.io
tee /etc/containerd/certs.d/nvcr.io/hosts.toml << 'EOF'
server = "https://nvcr.io"
[host."https://nvcr.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# quay.io mirror
mkdir -p /etc/containerd/certs.d/quay.io
tee /etc/containerd/certs.d/quay.io/hosts.toml << 'EOF'
server = "https://quay.io"
[host."https://quay.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# registry.jujucharms.com mirror
mkdir -p /etc/containerd/certs.d/registry.jujucharms.com
tee /etc/containerd/certs.d/registry.jujucharms.com/hosts.toml << 'EOF'
server = "https://registry.jujucharms.com"
[host."https://jujucharms.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
# rocks.canonical.com mirror
mkdir -p /etc/containerd/certs.d/rocks.canonical.com
tee /etc/containerd/certs.d/rocks.canonical.com/hosts.toml << 'EOF'
server = "https://rocks.canonical.com"
[host."https://rocks-canonical.m.daocloud.io"]
capabilities = ["pull", "resolve", "push"]
EOF
Note that, apart from the docker.io registry, all the registries above are mirrored through daocloud. The daocloud mirror does not support every image; the list of images it supports is as follows (the daocloud mirror support list):
docker.elastic.co/eck/eck-operator
docker.elastic.co/elasticsearch/elasticsearch
docker.elastic.co/kibana/kibana
docker.elastic.co/kibana/kibana-oss
docker.io/alpine
docker.io/alpine/helm
docker.io/amambadev/jenkins
docker.io/amambadev/jenkins-agent-base
docker.io/amambadev/jenkins-agent-go
docker.io/amambadev/jenkins-agent-maven
docker.io/amambadev/jenkins-agent-nodejs
docker.io/amambadev/jenkins-agent-python
docker.io/amazon/aws-alb-ingress-controller
docker.io/amazon/aws-ebs-csi-driver
docker.io/apache/skywalking-java-agent
docker.io/apache/skywalking-oap-server
docker.io/apache/skywalking-ui
docker.io/apitable/backend-server
docker.io/apitable/init-appdata
docker.io/apitable/init-db
docker.io/apitable/openresty
docker.io/apitable/room-server
docker.io/apitable/web-server
docker.io/aquasec/kube-bench
docker.io/aquasec/kube-hunter
docker.io/aquasec/trivy
docker.io/arey/mysql-client
docker.io/bitnami/bitnami-shell
docker.io/bitnami/contour
docker.io/bitnami/elasticsearch
docker.io/bitnami/elasticsearch-curator
docker.io/bitnami/elasticsearch-exporter
docker.io/bitnami/envoy
docker.io/bitnami/grafana
docker.io/bitnami/grafana-operator
docker.io/bitnami/kafka
docker.io/bitnami/kubeapps-apis
docker.io/bitnami/kubeapps-apprepository-controller
docker.io/bitnami/kubeapps-dashboard
docker.io/bitnami/kubeapps-kubeops
docker.io/bitnami/kubectl
docker.io/bitnami/kubernetes-event-exporter
docker.io/bitnami/mariadb
docker.io/bitnami/minideb
docker.io/bitnami/nginx
docker.io/bitnami/postgresql
docker.io/bitnami/wordpress
docker.io/bitpoke/mysql-operator
docker.io/bitpoke/mysql-operator-orchestrator
docker.io/bitpoke/mysql-operator-sidecar-5.7
docker.io/bitpoke/mysql-operator-sidecar-8.0
docker.io/busybox
docker.io/byrnedo/alpine-curl
docker.io/caddy
docker.io/calico/apiserver
docker.io/calico/cni
docker.io/calico/csi
docker.io/calico/kube-controllers
docker.io/calico/node
docker.io/calico/node-driver-registrar
docker.io/calico/pod2daemon-flexvol
docker.io/calico/typha
docker.io/cdkbot/hostpath-provisioner-amd64
docker.io/cdkbot/registry-amd64
docker.io/centos
docker.io/centos/tools
docker.io/cfmanteiga/alpine-bash-curl-jq
docker.io/cfssl/cfssl
docker.io/cilium/json-mock
docker.io/clickhouse/clickhouse-server
docker.io/clickhouse/integration-helper
docker.io/cloudnativelabs/kube-router
docker.io/coredns/coredns
docker.io/csiplugin/snapshot-controller
docker.io/curlimages/curl
docker.io/datawire/ambassador
docker.io/datawire/ambassador-operator
docker.io/debian
docker.io/directxman12/k8s-prometheus-adapter
docker.io/docker
docker.io/dpage/pgadmin4
docker.io/elastic/filebeat
docker.io/envoyproxy/envoy
docker.io/envoyproxy/envoy-distroless
docker.io/envoyproxy/nighthawk-dev
docker.io/f5networks/f5-ipam-controller
docker.io/f5networks/k8s-bigip-ctlr
docker.io/fabulousjohn/kafka-manager
docker.io/falcosecurity/event-generator
docker.io/falcosecurity/falco-driver-loader
docker.io/falcosecurity/falco-exporter
docker.io/falcosecurity/falco-no-driver
docker.io/falcosecurity/falcosidekick
docker.io/falcosecurity/falcosidekick-ui
docker.io/fellah/gitbook
docker.io/flannelcni/flannel-cni-plugin
docker.io/flant/shell-operator
docker.io/fluent/fluent-bit
docker.io/fluent/fluentd
docker.io/fortio/fortio
docker.io/foxdalas/kafka-manager
docker.io/frrouting/frr
docker.io/goharbor/chartmuseum-photon
docker.io/goharbor/harbor-core
docker.io/goharbor/harbor-db
docker.io/goharbor/harbor-exporter
docker.io/goharbor/harbor-jobservice
docker.io/goharbor/harbor-operator
docker.io/goharbor/harbor-portal
docker.io/goharbor/harbor-registryctl
docker.io/goharbor/nginx-photon
docker.io/goharbor/notary-server-photon
docker.io/goharbor/notary-signer-photon
docker.io/goharbor/redis-photon
docker.io/goharbor/registry-photon
docker.io/goharbor/trivy-adapter-photon
docker.io/golang
docker.io/grafana/grafana
docker.io/grafana/tempo
docker.io/halverneus/static-file-server
docker.io/haproxy
docker.io/honkit/honkit
docker.io/integratedcloudnative/ovn4nfv-k8s-plugin
docker.io/istio/citadel
docker.io/istio/examples-bookinfo-details-v1
docker.io/istio/examples-bookinfo-productpage-v1
docker.io/istio/examples-bookinfo-ratings-v1
docker.io/istio/examples-bookinfo-reviews-v1
docker.io/istio/examples-bookinfo-reviews-v2
docker.io/istio/examples-bookinfo-reviews-v3
docker.io/istio/examples-helloworld-v1
docker.io/istio/examples-helloworld-v2
docker.io/istio/galley
docker.io/istio/install-cni
docker.io/istio/kubectl
docker.io/istio/mixer
docker.io/istio/operator
docker.io/istio/pilot
docker.io/istio/proxyv2
docker.io/istio/sidecar_injector
docker.io/jaegertracing/all-in-one
docker.io/jaegertracing/jaeger-agent
docker.io/jaegertracing/jaeger-collector
docker.io/jaegertracing/jaeger-es-index-cleaner
docker.io/jaegertracing/jaeger-es-rollover
docker.io/jaegertracing/jaeger-operator
docker.io/jaegertracing/jaeger-query
docker.io/jaegertracing/spark-dependencies
docker.io/java
docker.io/jboss/keycloak
docker.io/jenkins/jnlp-slave
docker.io/jertel/elastalert2
docker.io/jimmidyson/configmap-reload
docker.io/joosthofman/wget
docker.io/joseluisq/static-web-server
docker.io/jujusolutions/juju-db
docker.io/jujusolutions/jujud-operator
docker.io/k8scloudprovider/cinder-csi-plugin
docker.io/karmada/karmada-agent
docker.io/karmada/karmada-aggregated-apiserver
docker.io/karmada/karmada-controller-manager
docker.io/karmada/karmada-descheduler
docker.io/karmada/karmada-scheduler
docker.io/karmada/karmada-scheduler-estimator
docker.io/karmada/karmada-search
docker.io/karmada/karmada-webhook
docker.io/kedacore/keda
docker.io/kedacore/keda-metrics-apiserver
docker.io/kennethreitz/httpbin
docker.io/keyval/otel-go-agent
docker.io/kindest/base
docker.io/kindest/haproxy
docker.io/kindest/node
docker.io/kiwigrid/k8s-sidecar
docker.io/kubeedge/cloudcore
docker.io/kubeovn/kube-ovn
docker.io/kuberhealthy/dns-resolution-check
docker.io/kuberhealthy/kuberhealthy
docker.io/kubernetesui/dashboard
docker.io/kubernetesui/dashboard-amd64
docker.io/kubernetesui/metrics-scraper
docker.io/library/alpine
docker.io/library/busybox
docker.io/library/caddy
docker.io/library/centos
docker.io/library/debian
docker.io/library/docker
docker.io/library/golang
docker.io/library/haproxy
docker.io/library/java
docker.io/library/mariadb
docker.io/library/mongo
docker.io/library/mysql
docker.io/library/nats-streaming
docker.io/library/nextcloud
docker.io/library/nginx
docker.io/library/node
docker.io/library/openjdk
docker.io/library/percona
docker.io/library/perl
docker.io/library/phpmyadmin
docker.io/library/postgres
docker.io/library/python
docker.io/library/rabbitmq
docker.io/library/redis
docker.io/library/registry
docker.io/library/traefik
docker.io/library/ubuntu
docker.io/library/wordpress
docker.io/library/zookeeper
docker.io/longhornio/backing-image-manager
docker.io/longhornio/csi-attacher
docker.io/longhornio/csi-node-driver-registrar
docker.io/longhornio/csi-provisioner
docker.io/longhornio/csi-resizer
docker.io/longhornio/csi-snapshotter
docker.io/longhornio/longhorn-engine
docker.io/longhornio/longhorn-instance-manager
docker.io/longhornio/longhorn-manager
docker.io/longhornio/longhorn-share-manager
docker.io/longhornio/longhorn-ui
docker.io/mariadb
docker.io/merbridge/merbridge
docker.io/metallb/controller
docker.io/metallb/speaker
docker.io/minio/console
docker.io/minio/kes
docker.io/minio/logsearchapi
docker.io/minio/mc
docker.io/minio/minio
docker.io/minio/operator
docker.io/mirantis/k8s-netchecker-agent
docker.io/mirantis/k8s-netchecker-server
docker.io/mirrorgooglecontainers/defaultbackend-amd64
docker.io/mirrorgooglecontainers/hpa-example
docker.io/moby/buildkit
docker.io/mohsinonxrm/mongodb-agent
docker.io/mohsinonxrm/mongodb-kubernetes-operator
docker.io/mohsinonxrm/mongodb-kubernetes-operator-version-upgrade-post-start-hook
docker.io/mohsinonxrm/mongodb-kubernetes-readiness
docker.io/mongo
docker.io/multiarch/qemu-user-static
docker.io/mysql
docker.io/n8nio/n8n
docker.io/nacos/nacos-server
docker.io/nats-streaming
docker.io/neuvector/controller
docker.io/neuvector/enforcer
docker.io/neuvector/manager
docker.io/neuvector/scanner
docker.io/neuvector/updater
docker.io/nextcloud
docker.io/nfvpe/multus
docker.io/nginx
docker.io/nginxdemos/hello
docker.io/node
docker.io/oamdev/cluster-gateway
docker.io/oamdev/kube-webhook-certgen
docker.io/oamdev/terraform-controller
docker.io/oamdev/vela-apiserver
docker.io/oamdev/vela-core
docker.io/oamdev/vela-rollout
docker.io/oamdev/velaux
docker.io/oliver006/redis_exporter
docker.io/openebs/admission-server
docker.io/openebs/linux-utils
docker.io/openebs/m-apiserver
docker.io/openebs/node-disk-manager
docker.io/openebs/node-disk-operator
docker.io/openebs/openebs-k8s-provisioner
docker.io/openebs/provisioner-localpv
docker.io/openebs/snapshot-controller
docker.io/openebs/snapshot-provisioner
docker.io/openjdk
docker.io/openpolicyagent/gatekeeper
docker.io/openstorage/stork
docker.io/openzipkin/zipkin
docker.io/osixia/openldap
docker.io/otel/demo
docker.io/otel/opentelemetry-collector
docker.io/otel/opentelemetry-collector-contrib
docker.io/percona
docker.io/percona/mongodb_exporter
docker.io/perl
docker.io/phpmyadmin
docker.io/phpmyadmin/phpmyadmin
docker.io/pingcap/coredns
docker.io/portainer/portainer-ce
docker.io/postgres
docker.io/prom/alertmanager
docker.io/prom/mysqld-exporter
docker.io/prom/node-exporter
docker.io/prom/prometheus
docker.io/prometheuscommunity/postgres-exporter
docker.io/python
docker.io/rabbitmq
docker.io/rabbitmqoperator/cluster-operator
docker.io/rancher/helm-controller
docker.io/rancher/k3d-tools
docker.io/rancher/k3s
docker.io/rancher/kubectl
docker.io/rancher/local-path-provisioner
docker.io/rclone/rclone
docker.io/redis
docker.io/redislabs/redisearch
docker.io/registry
docker.io/sonobuoy/cluster-inventory
docker.io/sonobuoy/kube-bench
docker.io/sonobuoy/sonobuoy
docker.io/sonobuoy/systemd-logs
docker.io/squidfunk/mkdocs-material
docker.io/swaggerapi/swagger-codegen-cli
docker.io/tgagor/centos-stream
docker.io/thanosio/thanos
docker.io/timberio/vector
docker.io/traefik
docker.io/ubuntu
docker.io/velero/velero
docker.io/victoriametrics/operator
docker.io/victoriametrics/victoria-logs
docker.io/victoriametrics/victoria-metrics
docker.io/victoriametrics/vmagent
docker.io/victoriametrics/vmalert
docker.io/victoriametrics/vminsert
docker.io/victoriametrics/vmselect
docker.io/victoriametrics/vmstorage
docker.io/weaveworks/scope
docker.io/weaveworks/weave-kube
docker.io/weaveworks/weave-npc
docker.io/wordpress
docker.io/xueshanf/install-socat
docker.io/zenko/kafka-manager
docker.io/zookeeper
gcr.io/cadvisor/cadvisor
gcr.io/distroless/base
gcr.io/distroless/static
gcr.io/distroless/static-debian11
gcr.io/google-containers/pause
gcr.io/google.com/cloudsdktool/cloud-sdk
gcr.io/google_containers/hyperkube
gcr.io/heptio-images/ks-guestbook-demo
gcr.io/istio-release/app_sidecar_base_centos_7
gcr.io/istio-release/app_sidecar_base_centos_8
gcr.io/istio-release/base
gcr.io/istio-release/distroless
gcr.io/istio-release/iptables
gcr.io/istio-testing/app
gcr.io/istio-testing/build-tools
gcr.io/istio-testing/buildkit
gcr.io/istio-testing/dotdotpwn
gcr.io/istio-testing/ext-authz
gcr.io/istio-testing/fake-gce-metadata
gcr.io/istio-testing/fake-stackdriver
gcr.io/istio-testing/fuzz_tomcat
gcr.io/istio-testing/jwttool
gcr.io/istio-testing/kind-node
gcr.io/istio-testing/kindest/node
gcr.io/istio-testing/mynewproxy
gcr.io/istio-testing/myproxy
gcr.io/istio-testing/operator
gcr.io/istio-testing/pilot
gcr.io/istio-testing/proxyv2
gcr.io/k8s-staging-etcd/etcd
gcr.io/k8s-staging-gateway-api/admission-server
gcr.io/k8s-staging-kube-state-metrics/kube-state-metrics
gcr.io/k8s-staging-nfd/node-feature-discovery
gcr.io/k8s-staging-test-infra/krte
gcr.io/kaniko-project/executor
gcr.io/knative-releases/knative.dev/client/cmd/kn
gcr.io/knative-releases/knative.dev/eventing/cmd/apiserver_receive_adapter
gcr.io/knative-releases/knative.dev/eventing/cmd/controller
gcr.io/knative-releases/knative.dev/eventing/cmd/in_memory/channel_controller
gcr.io/knative-releases/knative.dev/eventing/cmd/in_memory/channel_dispatcher
gcr.io/knative-releases/knative.dev/eventing/cmd/mtbroker/filter
gcr.io/knative-releases/knative.dev/eventing/cmd/mtbroker/ingress
gcr.io/knative-releases/knative.dev/eventing/cmd/mtchannel_broker
gcr.io/knative-releases/knative.dev/eventing/cmd/mtping
gcr.io/knative-releases/knative.dev/eventing/cmd/webhook
gcr.io/knative-releases/knative.dev/net-istio/cmd/controller
gcr.io/knative-releases/knative.dev/net-istio/cmd/webhook
gcr.io/knative-releases/knative.dev/net-kourier/cmd/kourier
gcr.io/knative-releases/knative.dev/serving/cmd/activator
gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler
gcr.io/knative-releases/knative.dev/serving/cmd/controller
gcr.io/knative-releases/knative.dev/serving/cmd/default-domain
gcr.io/knative-releases/knative.dev/serving/cmd/domain-mapping
gcr.io/knative-releases/knative.dev/serving/cmd/domain-mapping-webhook
gcr.io/knative-releases/knative.dev/serving/cmd/queue
gcr.io/knative-releases/knative.dev/serving/cmd/webhook
gcr.io/kuar-demo/kuard-amd64
gcr.io/kubebuilder/kube-rbac-proxy
gcr.io/kubecost1/cost-model
gcr.io/kubecost1/frontend
gcr.io/tekton-releases/github.com/tektoncd/dashboard/cmd/dashboard
gcr.io/tekton-releases/github.com/tektoncd/operator/cmd/kubernetes/operator
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter
gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/webhook
gcr.io/tekton-releases/github.com/tektoncd/results/cmd/api
gcr.io/tekton-releases/github.com/tektoncd/results/cmd/watcher
gcr.io/tekton-releases/github.com/tektoncd/triggers/cmd/controllers
gcr.io/tekton-releases/github.com/tektoncd/triggers/cmd/webhook
ghcr.io/aquasecurity/trivy
ghcr.io/aquasecurity/trivy-db
ghcr.io/aquasecurity/trivy-java-db
ghcr.io/chaos-mesh/chaos-daemon
ghcr.io/chaos-mesh/chaos-dashboard
ghcr.io/chaos-mesh/chaos-dlv
ghcr.io/chaos-mesh/chaos-kernel
ghcr.io/chaos-mesh/chaos-mesh
ghcr.io/clusterpedia-io/clusterpedia/apiserver
ghcr.io/clusterpedia-io/clusterpedia/clustersynchro-manager
ghcr.io/daocloud/ckube
ghcr.io/daocloud/dao-2048
ghcr.io/dependabot/dependabot-core
ghcr.io/dependabot/dependabot-core-development
ghcr.io/dexidp/dex
ghcr.io/dtzar/helm-kubectl
ghcr.io/ferryproxy/ferry/ferry-controller
ghcr.io/ferryproxy/ferry/ferry-tunnel
ghcr.io/fluxcd/helm-controller
ghcr.io/fluxcd/kustomize-controller
ghcr.io/fluxcd/notification-controller
ghcr.io/fluxcd/source-controller
ghcr.io/helm/chartmuseum
ghcr.io/hwameistor/admission
ghcr.io/hwameistor/apiserver
ghcr.io/hwameistor/drbd-reactor
ghcr.io/hwameistor/drbd9-bionic
ghcr.io/hwameistor/drbd9-focal
ghcr.io/hwameistor/drbd9-jammy
ghcr.io/hwameistor/drbd9-rhel7
ghcr.io/hwameistor/drbd9-rhel8
ghcr.io/hwameistor/drbd9-shipper
ghcr.io/hwameistor/evictor
ghcr.io/hwameistor/hwameistor-ui
ghcr.io/hwameistor/local-disk-manager
ghcr.io/hwameistor/local-storage
ghcr.io/hwameistor/operator
ghcr.io/hwameistor/scheduler
ghcr.io/hwameistor/self-signed
ghcr.io/k8snetworkplumbingwg/multus-cni
ghcr.io/k8snetworkplumbingwg/network-resources-injector
ghcr.io/k8snetworkplumbingwg/sriov-cni
ghcr.io/k8snetworkplumbingwg/sriov-network-device-plugin
ghcr.io/k8snetworkplumbingwg/sriov-network-operator
ghcr.io/k8snetworkplumbingwg/sriov-network-operator-config-daemon
ghcr.io/k8snetworkplumbingwg/sriov-network-operator-webhook
ghcr.io/klts-io/kubernetes-lts/coredns
ghcr.io/klts-io/kubernetes-lts/etcd
ghcr.io/klts-io/kubernetes-lts/kube-apiserver
ghcr.io/klts-io/kubernetes-lts/kube-controller-manager
ghcr.io/klts-io/kubernetes-lts/kube-proxy
ghcr.io/klts-io/kubernetes-lts/kube-scheduler
ghcr.io/klts-io/kubernetes-lts/pause
ghcr.io/ksmartdata/logical-backup
ghcr.io/kube-vip/kube-vip
ghcr.io/kubean-io/kubean-operator
ghcr.io/kubean-io/kubespray
ghcr.io/kubean-io/spray-job
ghcr.io/megacloudcontainer/kube-hunter
ghcr.io/megacloudcontainer/kubeaudit
ghcr.io/open-telemetry/demo
ghcr.io/open-telemetry/opentelemetry-go-instrumentation/autoinstrumentation-go
ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet
ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java
ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs
ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python
ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator
ghcr.io/openfaas/basic-auth
ghcr.io/openfaas/faas-netes
ghcr.io/openfaas/gateway
ghcr.io/openfaas/queue-worker
ghcr.io/openinsight-proj/demo
ghcr.io/openinsight-proj/elastic-alert
ghcr.io/openinsight-proj/openinsight
ghcr.io/openinsight-proj/opentelemetry-demo-helm-chart/adservice
ghcr.io/openinsight-proj/opentelemetry-demo-helm-chart/sentinel
ghcr.io/ovn-org/ovn-kubernetes/ovn-kube-f
ghcr.io/ovn-org/ovn-kubernetes/ovn-kube-u
ghcr.io/projectcontour/contour
ghcr.io/pterodactyl/yolks
ghcr.io/scholzj/zoo-entrance
ghcr.io/spidernet-io/cni-plugins/meta-plugins
ghcr.io/spidernet-io/egressgateway-agent
ghcr.io/spidernet-io/egressgateway-controller
ghcr.io/spidernet-io/spiderdoctor-agent
ghcr.io/spidernet-io/spiderdoctor-controller
ghcr.io/spidernet-io/spiderpool/spiderpool-agent
ghcr.io/spidernet-io/spiderpool/spiderpool-base
ghcr.io/spidernet-io/spiderpool/spiderpool-controller
ghcr.io/sumologic/tailing-sidecar
ghcr.io/sumologic/tailing-sidecar-operator
quay.io/argoproj/argo-events
quay.io/argoproj/argo-rollouts
quay.io/argoproj/argocd
quay.io/argoproj/argocd-applicationset
quay.io/argoproj/argocli
quay.io/argoproj/argoexec
quay.io/argoproj/kubectl-argo-rollouts
quay.io/argoproj/workflow-controller
quay.io/argoprojlabs/argocd-image-updater
quay.io/brancz/kube-rbac-proxy
quay.io/calico/apiserver
quay.io/calico/cni
quay.io/calico/ctl
quay.io/calico/kube-controllers
quay.io/calico/node
quay.io/calico/pod2daemon-flexvol
quay.io/calico/typha
quay.io/cilium/certgen
quay.io/cilium/cilium
quay.io/cilium/cilium-etcd-operator
quay.io/cilium/cilium-init
quay.io/cilium/clustermesh-apiserver
quay.io/cilium/hubble-relay
quay.io/cilium/hubble-ui
quay.io/cilium/hubble-ui-backend
quay.io/cilium/json-mock
quay.io/cilium/operator
quay.io/cilium/operator-alibabacloud
quay.io/cilium/operator-generic
quay.io/cilium/startup-script
quay.io/containers/skopeo
quay.io/coreos/etcd
quay.io/coreos/flannel
quay.io/datawire/ambassador-operator
quay.io/external_storage/cephfs-provisioner
quay.io/external_storage/local-volume-provisioner
quay.io/external_storage/nfs-client-provisioner
quay.io/external_storage/rbd-provisioner
quay.io/fluentd_elasticsearch/elasticsearch
quay.io/fluentd_elasticsearch/fluentd
quay.io/goswagger/swagger
quay.io/grafana-operator/grafana_plugins_init
quay.io/iovisor/bcc
quay.io/jaegertracing/jaeger-operator
quay.io/jetstack/cert-manager-cainjector
quay.io/jetstack/cert-manager-controller
quay.io/jetstack/cert-manager-ctl
quay.io/jetstack/cert-manager-webhook
quay.io/k8scsi/csi-attacher
quay.io/k8scsi/csi-node-driver-registrar
quay.io/k8scsi/csi-provisioner
quay.io/k8scsi/csi-resizer
quay.io/k8scsi/csi-snapshotter
quay.io/k8scsi/livenessprobe
quay.io/k8scsi/snapshot-controller
quay.io/keycloak/keycloak
quay.io/kiali/kiali
quay.io/kiwigrid/k8s-sidecar
quay.io/kubespray/kubespray
quay.io/kubevirt/cdi-apiserver
quay.io/kubevirt/cdi-cloner
quay.io/kubevirt/cdi-controller
quay.io/kubevirt/cdi-importer
quay.io/kubevirt/cdi-operator
quay.io/kubevirt/cdi-uploadproxy
quay.io/kubevirt/cdi-uploadserver
quay.io/kubevirt/virt-api
quay.io/kubevirt/virt-controller
quay.io/kubevirt/virt-exportserver
quay.io/kubevirt/virt-handler
quay.io/kubevirt/virt-launcher
quay.io/kubevirt/virt-operator
quay.io/l23network/k8s-netchecker-agent
quay.io/l23network/k8s-netchecker-server
quay.io/metallb/controller
quay.io/metallb/speaker
quay.io/minio/minio
quay.io/mongodb/mongodb-agent
quay.io/mongodb/mongodb-kubernetes-operator
quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook
quay.io/mongodb/mongodb-kubernetes-readinessprobe
quay.io/nmstate/kubernetes-nmstate-handler
quay.io/nmstate/kubernetes-nmstate-operator
quay.io/operator-framework/olm
quay.io/opstree/redis
quay.io/opstree/redis-exporter
quay.io/opstree/redis-operator
quay.io/piraeusdatastore/drbd-reactor
quay.io/piraeusdatastore/drbd9-centos7
quay.io/piraeusdatastore/piraeus-client
quay.io/piraeusdatastore/piraeus-csi
quay.io/piraeusdatastore/piraeus-ha-controller
quay.io/piraeusdatastore/piraeus-operator
quay.io/prometheus-operator/prometheus-config-reloader
quay.io/prometheus-operator/prometheus-operator
quay.io/prometheus/alertmanager
quay.io/prometheus/blackbox-exporter
quay.io/prometheus/node-exporter
quay.io/prometheus/prometheus
quay.io/prometheuscommunity/elasticsearch-exporter
quay.io/spotahome/redis-operator
quay.io/strimzi/jmxtrans
quay.io/strimzi/kafka
quay.io/strimzi/kafka-bridge
quay.io/strimzi/kaniko-executor
quay.io/strimzi/maven-builder
quay.io/strimzi/operator
quay.io/submariner/submariner
quay.io/submariner/submariner-gateway
quay.io/submariner/submariner-globalnet
quay.io/submariner/submariner-networkplugin-syncer
quay.io/submariner/submariner-operator
quay.io/submariner/submariner-operator-index
quay.io/submariner/submariner-route-agent
quay.io/tigera/operator
registry.k8s.io/addon-resizer
registry.k8s.io/build-image/debian-iptables
registry.k8s.io/build-image/go-runner
registry.k8s.io/build-image/kube-cross
registry.k8s.io/cluster-api/cluster-api-controller
registry.k8s.io/cluster-api/kubeadm-bootstrap-controller
registry.k8s.io/cluster-api/kubeadm-control-plane-controller
registry.k8s.io/conformance
registry.k8s.io/coredns
registry.k8s.io/coredns/coredns
registry.k8s.io/cpa/cluster-proportional-autoscaler
registry.k8s.io/cpa/cluster-proportional-autoscaler-amd64
registry.k8s.io/cpa/cluster-proportional-autoscaler-arm64
registry.k8s.io/debian-base
registry.k8s.io/dns/k8s-dns-node-cache
registry.k8s.io/etcd
registry.k8s.io/etcd/etcd
registry.k8s.io/ingress-nginx/controller
registry.k8s.io/ingress-nginx/e2e-test-runner
registry.k8s.io/ingress-nginx/kube-webhook-certgen
registry.k8s.io/kube-apiserver
registry.k8s.io/kube-apiserver-amd64
registry.k8s.io/kube-controller-manager
registry.k8s.io/kube-controller-manager-amd64
registry.k8s.io/kube-proxy
registry.k8s.io/kube-proxy-amd64
registry.k8s.io/kube-registry-proxy
registry.k8s.io/kube-scheduler
registry.k8s.io/kube-scheduler-amd64
registry.k8s.io/kube-state-metrics/kube-state-metrics
registry.k8s.io/kueue/kueue
registry.k8s.io/kwok/cluster
registry.k8s.io/kwok/kwok
registry.k8s.io/metrics-server
registry.k8s.io/metrics-server-amd64
registry.k8s.io/metrics-server/metrics-server
registry.k8s.io/metrics-server/metrics-server-amd64
registry.k8s.io/nfd/node-feature-discovery
registry.k8s.io/node-problem-detector/node-problem-detector
registry.k8s.io/node-test
registry.k8s.io/node-test-amd64
registry.k8s.io/pause
registry.k8s.io/prometheus-adapter/prometheus-adapter
registry.k8s.io/sig-storage/csi-attacher
registry.k8s.io/sig-storage/csi-node-driver-registrar
registry.k8s.io/sig-storage/csi-provisioner
registry.k8s.io/sig-storage/csi-resizer
registry.k8s.io/sig-storage/csi-snapshotter
registry.k8s.io/sig-storage/livenessprobe
registry.k8s.io/sig-storage/local-volume-provisioner
registry.k8s.io/sig-storage/nfs-subdir-external-provisioner
registry.k8s.io/sig-storage/snapshot-controller
registry.opensource.zalan.do/acid/logical-backup
registry.opensource.zalan.do/acid/pgbouncer
registry.opensource.zalan.do/acid/postgres-operator
registry.opensource.zalan.do/acid/spilo-14
registry.opensource.zalan.do/acid/spilo-15
Verifying the registry mirror acceleration
1. The mirror configuration is as follows
root@containerd:~# tree /etc/containerd/certs.d/
/etc/containerd/certs.d/
├── 192.168.11.20
│ └── hosts.toml
├── docker.io
│ └── hosts.toml
├── gcr.io
│ └── hosts.toml
├── k8s.gcr.io
│ └── hosts.toml
└── registry.k8s.io
└── hosts.toml
5 directories, 5 files
root@containerd:~#
root@containerd:~#
root@containerd:~# nerdctl images
REPOSITORY TAG IMAGE ID CREATED PLATFORM SIZE BLOB SIZE
root@containerd:~#
root@containerd:~#
2. Verifying the k8s.gcr.io mirror
root@containerd:~# nerdctl --debug=true image pull k8s.gcr.io/kube-apiserver:v1.17.3
DEBU[0000] verifying process skipped
DEBU[0000] Found hosts dir "/etc/containerd/certs.d"
DEBU[0000] Ignoring hosts dir "/etc/docker/certs.d" error="stat /etc/docker/certs.d: no such file or directory"
DEBU[0000] The image will be unpacked for platform {"amd64" "linux" "" [] ""}, snapshotter "overlayfs".
DEBU[0000] fetching image="k8s.gcr.io/kube-apiserver:v1.17.3"
DEBU[0000] loading host directory dir=/etc/containerd/certs.d/k8s.gcr.io
DEBU[0000] resolving host=k8s-gcr.m.daocloud.io
DEBU[0000] do request host=k8s-gcr.m.daocloud.io request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=containerd/1.7.1+unknown request.method=HEAD url="https://k8s-gcr.m.daocloud.io/v2/kube-apiserver/manifests/v1.17.3?ns=k8s.gcr.io"
k8s.gcr.io/kube-apiserver:v1.17.3: resolving |--------------------------------------|
elapsed: 1.6 s total: 0.0 B (0.0 B/s)
DEBU[0001] fetch response received host=k8s-gcr.m.daocloud.io response.header.cache-status=MISS response.header.connection=keep-alive response.header.content-length=1665 response.header.content-type=application/vnd.docker.distribution.manifest.list.v2+json response.header.date="Fri, 28 Jul 2023 02:34:13 GMT" response.header.docker-content-digest="sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900" response.header.docker-distribution-api-version=registry/2.0 response.header.etag="\"sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900\"" response.header.server=nginx response.header.x-content-type-options=nosniff response.status="200 OK" url="https://k8s-gcr.m.daocloud.io/v2/kube-apiserver/manifests/v1.17.3?ns=k8s.gcr.io"
DEBU[0001] resolved desc.digest="sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900" host=k8s-gcr.m.daocloud.io
DEBU[0001] loading host directory dir=/etc/containerd/certs.d/k8s.gcr.io
DEBU[0001] fetch digest="sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1665
DEBU[0001] do request digest="sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900" mediatype=application/vnd.docker.distribution.manifest.list.v2+json request.header.acck8s.gcr.io/kube-apiserver:v1.17.3: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900: downloading |--------------------------------------| 0.0 B/1.6 KiB
elapsed: 2.0 s total: 0.0 B (0.0 B/s)
DEBU[0002] fetch response received digest="sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900" mediatype=application/vnd.docker.distribution.manifest.list.v2+json response.header.cache-status=MISS response.header.connection=keep-alive response.header.content-length=1665 response.header.content-type=application/vnd.docker.distribution.manifest.list.v2+json response.header.date="Fri, 28 Jul 2023 02:34:14 GMT" response.header.docker-content-digest="sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900" response.header.docker-distribution-api-version=registry/2.0 response.header.etag="\"sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900\"" response.header.server=nginx response.header.x-content-type-options=nosniff response.status="200 OK" size=1665 url="https://k8s-gcr.m.daocloud.io/v2/kube-apiserver/manifests/sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900?ns=k8s.gcr.io"
DEBU[0002] fetch digest="sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62" mediatype=application/vnd.docker.distribution.manifest.v2+json size=741
k8s.gcr.io/kube-apiserver:v1.17.3: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62: downloading |--------------------------------------| 0.0 B/741.0 B
elapsed: 2.6 s total: 1.6 Ki (639.0 B/s)
DEBU[0002] fetch response received digest="sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62" mediatype=application/vnd.docker.distribution.manifest.v2+json response.header.cache-status=MISS response.header.connection=keep-alive response.header.content-length=741 response.header.content-type=application/vnd.docker.distribution.manifest.v2+json response.header.date="Fri, 28 Jul 2023 02:34:14 GMT" respok8s.gcr.io/kube-apiserver:v1.17.3: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62: downloading |--------------------------------------| 0.0 B/741.0 B
k8s.gcr.io/kube-apiserver:v1.17.3: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b: downloading |--------------------------------------| 0.0 B/1.7 KiB
elapsed: 3.2 s total: 2.3 Ki (751.0 B/s)
DEBU[0003] fetch response received digest="sha256:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b" mediatype=application/vnd.docker.container.image.v1+json response.header.cache-status=MISS response.header.connection=keep-alive response.header.content-length=1767 response.header.content-type=text/html response.header.date="Fri, 28 Jul 2023 02:34:15 GMT" response.header.docker-content-digest="sha256:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b" response.header.docker-distribution-api-version=registry/2.0 response.header.etag="sha256:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b" response.header.server=nginx response.header.x-content-type-options=nosniff response.status="200 OK" size=1767 url="https://k8s-gcr.m.daocloud.io/v2/kube-apiserver/blobs/sha256:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b?ns=k8s.gcr.io"
DEBU[0003] fetch digest="sha256:694976bfeffdb162655017f2c99283712340bd8c23e50c78e3e8d8aa002e9c95" mediatype=application/vnd.docker.image.rootfs.diff.tar.gzip size=29540037
DEBU[0003] fetch digest="sha256:597de8ba0c30cdd0b372023aa2ea3ca9b3affbcba5ac8db922f57d6cb67db7c8" mediatype=application/vnd.docker.image.rootfs.diff.tar.gzip size=21089561
DEBU[0003] do request digest="sha256:597de8ba0c30cdd0b372023aa2ea3ca9b3affbcba5ac8db922f57d6cb67db7c8" mediatype=application/vnd.docker.image.rootfs.diff.tar.gzip request.header.accept="appk8s.gcr.io/kube-apiserver:v1.17.3: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:694976bfeffdb162655017f2c99283712340bd8c23e50c78e3e8d8aa002e9c95: downloading |--------------------------------------| 0.0 B/28.2 MiB
k8s.gcr.io/kube-apiserver:v1.17.3: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:694976bfeffdb162655017f2c99283712340bd8c23e50c78e3e8d8aa002e9c95: downloading |--------------------------------------| 0.0 B/28.2 MiB
k8s.gcr.io/kube-apiserver:v1.17.3: resolved |++++++++++++++++++++++++++++++++++++++|
k8s.gcr.io/kube-apiserver:v1.17.3: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:90d27391b7808cde8d9a81cfa43b1e81de5c4912b4b52a7dccb19eb4fe3c236b: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:694976bfeffdb162655017f2c99283712340bd8c23e50c78e3e8d8aa002e9c95: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:597de8ba0c30cdd0b372023aa2ea3ca9b3affbcba5ac8db922f57d6cb67db7c8: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 133.0s total: 48.3 M (371.8 KiB/s)
root@containerd:~# nerdctl images
REPOSITORY TAG IMAGE ID CREATED PLATFORM SIZE BLOB SIZE
k8s.gcr.io/kube-apiserver v1.17.3 33400ea29255 About a minute ago linux/amd64 167.3 MiB 48.3 MiB
registry.k8s.io/sig-storage/csi-provisioner v3.5.0 d078dc174323 6 minutes ago linux/amd64 66.1 MiB 27.0 MiB
3. Verifying the docker.io mirror
root@containerd:~# nerdctl --debug=true image pull docker.io/library/ubuntu:20.04
DEBU[0000] verifying process skipped
DEBU[0000] Found hosts dir "/etc/containerd/certs.d"
DEBU[0000] Ignoring hosts dir "/etc/docker/certs.d" error="stat /etc/docker/certs.d: no such file or directory"
DEBU[0000] The image will be unpacked for platform {"amd64" "linux" "" [] ""}, snapshotter "overlayfs".
DEBU[0000] fetching image="docker.io/library/ubuntu:20.04"
DEBU[0000] loading host directory dir=/etc/containerd/certs.d/docker.io
DEBU[0000] resolving host=hub-mirror.c.163.com
DEBU[0000] do request host=hub-mirror.c.163.com request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=containerd/1.7.1+unknown request.method=HEAD url="https://hub-mirror.c.163.com/v2/library/ubuntu/manifests/20.04?ns=docker.io"
docker.io/library/ubuntu:20.04: resolving |--------------------------------------|
elapsed: 1.2 s total: 0.0 B (0.0 B/s)
DEBU[0001] fetch response received host=hub-mirror.c.163.com response.header.connection=keep-alive response.header.content-length=1201 response.header.content-type=application/vnd.docker.distribution.manifest.list.v2+json response.header.date="Fri, 28 Jul 2023 02:42:20 GMT" response.header.docker-content-digest="sha256:b872b0383a2149196c67d16279f051c3e36f2acb32d7eb04ef364c8863c6264f" response.header.docker-distribution-api-version=registry/2.0 response.header.etag="\"sha256:b872b0383a2149196c67d16279f051c3e36f2acb32d7eb04ef364c8863c6264f\"" response.header.server=nginx/1.10.1 response.status="200 OK" url="https://hub-mirror.c.163.com/v2/library/ubuntu/manifests/20.04?ns=docker.io"
DEBU[0001] resolved desc.digest="sha256:b872b0383a2149196c67d16279f051c3e36f2acb32d7eb04ef364c8863c6264f" host=hub-mirror.c.163.com
DEBU[0001] loading host directory dir=/etc/containerd/certs.d/docker.io
DEBU[0001] fetch digest="sha256:b872b0383a2149196c67d16279f051c3e36f2acb32d7eb04ef364c8863c6264f" mediatype=application/vnd.docker.distribution.manifest.list.v2+json size=1201
DEBU[0001] fetch digest="sha256:8eb87f3d6c9f2feee114ff0eff93ea9dfd20b294df0a0353bd6a4abf403336fe" mediatype=application/vnd.docker.distribution.manifest.v2+json size=529
docker.io/library/ubuntu:20.04: resolved |++++++++++++++++++++++++++++++++++++++|
docker.io/library/ubuntu:20.04: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:b872b0383a2149196c67d16279f051c3e36f2acb32d7eb04ef364c8863c6264f: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:8eb87f3d6c9f2feee114ff0eff93ea9dfd20b294df0a0353bd6a4abf403336fe: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:d5447fc01ae62c20beffbfa50bc51b2797f9d7ebae031b8c2245b5be8ff1c75b: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:846c0b181fff0c667d9444f8378e8fcfa13116da8d308bf21673f7e4bea8d580: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 2.6 s total: 27.3 M (10.5 MiB/s)
root@containerd:~#
root@containerd:~#
root@containerd:~# nerdctl images
REPOSITORY TAG IMAGE ID CREATED PLATFORM SIZE BLOB SIZE
ubuntu 20.04 b872b0383a21 40 seconds ago linux/amd64 75.8 MiB 27.3 MiB
k8s.gcr.io/kube-apiserver v1.17.3 33400ea29255 9 minutes ago linux/amd64 167.3 MiB 48.3 MiB
registry.k8s.io/sig-storage/csi-provisioner v3.5.0 d078dc174323 14 minutes ago linux/amd64 66.1 MiB 27.0 MiB
root@containerd:~#
References:
https://github.com/containerd/containerd/blob/main/docs/hosts.md
https://github.com/containerd/containerd/blob/main/docs/cri/registry.md
4. Using ctr with Containerd
4.1 Images
- Pulling images:
Pull an image with ctr image pull, for example the official Docker Hub image nginx:alpine. Note that the image reference must include the docker.io prefix:
ctr image pull docker.io/library/nginx:alpine
Note:
When pulling with ctr, the image name cannot be written simply as "nginx:alpine".
You can also use the --platform option to pull the image built for a specific platform. There is a corresponding push command, ctr image push; for a private registry, pass the registry user name and password with --user when pushing.
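For example (the credentials and the harbor.k8s.local registry below are placeholders for illustration):
# pull the arm64 variant of an image
ctr image pull --platform linux/arm64 docker.io/library/nginx:alpine
# push to a private registry with credentials (myuser:mypassword are placeholders)
ctr image push --user myuser:mypassword harbor.k8s.local/course/nginx:alpine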
4.2 Listing Local Images
~ ctr image ls
~ ctr image list
The -q (--quiet) option prints only the image names.
4.3 Checking Local Images
ctr image check
Look mainly at the STATUS column: complete means the image is complete and usable.
4.4 Retagging
We can also give an image a new tag:
➜ ~ ctr image tag docker.io/library/nginx:alpine harbor.k8s.local/course/nginx:alpine
harbor.k8s.local/course/nginx:alpine
➜ ~ ctr image ls -q
docker.io/library/nginx:alpine
harbor.k8s.local/course/nginx:alpine
4.5 Deleting Images
Images you no longer need can be deleted with ctr image rm:
➜ ~ ctr image rm harbor.k8s.local/course/nginx:alpine
harbor.k8s.local/course/nginx:alpine
➜ ~ ctr image ls -q
docker.io/library/nginx:alpine
Adding the --sync option deletes the image together with all of its associated resources.
4.6 Mounting an Image onto a Host Directory
➜ ~ ctr image mount docker.io/library/nginx:alpine /mnt
sha256:c3554b2d61e3c1cffcaba4b4fa7651c644a3354efaafa2f22cb53542f6c600dc
/mnt
➜ ~ tree -L 1 /mnt
/mnt
├── bin
├── dev
├── docker-entrypoint.d
├── docker-entrypoint.sh
├── etc
├── home
├── lib
├── media
├── mnt
├── opt
├── proc
├── root
├── run
├── sbin
├── srv
├── sys
├── tmp
├── usr
└── var
18 directories, 1 file
4.7 Unmounting an Image from a Host Directory
➜ ~ ctr image unmount /mnt
/mnt
4.8 Exporting an Image as an Archive
➜ ~ ctr image export nginx.tar.gz docker.io/library/nginx:alpine
4.9 Importing an Image from an Archive
➜ ~ ctr image import nginx.tar.gz
# ctr import fails with: ctr: content digest sha256:xxxxxx not found
Symptom:
Importing an image directly may fail with an error like ctr: content digest sha256:xxxxxx not found.
Fix:
Add --all-platforms both when pulling and when exporting the image, then import it with ctr i import nginx.tar.gz.
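Concretely, the workaround looks like this sketch (using nginx:alpine as the example image):
# pull every platform variant so all referenced digests exist locally
ctr image pull --all-platforms docker.io/library/nginx:alpine
# export all platforms into the archive
ctr image export --all-platforms nginx.tar.gz docker.io/library/nginx:alpine
# import on the target host
ctr image import --all-platforms nginx.tar.gz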
4.10 Container Operations
Container-related operations are available under ctr container.
Create a container:
➜ ~ ctr container create docker.io/library/nginx:alpine nginx
List containers:
➜ ~ ctr container ls
CONTAINER IMAGE RUNTIME
nginx docker.io/library/nginx:alpine io.containerd.runc.v2
The -q option again trims the listing:
➜ ~ ctr container ls -q
nginx
Inspect a container's detailed configuration
Similar to docker inspect:
➜ ~ ctr container info nginx
{
"ID": "nginx",
"Labels": {
"io.containerd.image.config.stop-signal": "SIGQUIT"
},
"Image": "docker.io/library/nginx:alpine",
"Runtime": {
"Name": "io.containerd.runc.v2",
"Options": {
"type_url": "containerd.runc.v1.Options"
}
},
"SnapshotKey": "nginx",
"Snapshotter": "overlayfs",
"CreatedAt": "2021-08-12T08:23:13.792871558Z",
"UpdatedAt": "2021-08-12T08:23:13.792871558Z",
"Extensions": null,
"Spec": {
......
Delete a container:
➜ ~ ctr container rm nginx
➜ ~ ctr container ls
CONTAINER IMAGE RUNTIME
4.11 Tasks
The container we created above with container create is not running; it is a static container. A container object merely bundles the resources and configuration needed to run a container: the namespaces, rootfs, and configuration have all been initialized successfully, but the user process has not started yet. A container is actually brought to life by a Task; a Task can set up network interfaces for the container and attach tooling to monitor it.
Task-related operations are available under ctr task; here we start the container via a Task:
➜ ~ ctr task start -d nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
Once the container is started, task ls shows the running containers:
➜ ~ ctr task ls
TASK PID STATUS
nginx 3630 RUNNING
You can also enter the container with the exec command:
➜ ~ ctr task exec --exec-id 0 -t nginx sh
/ #
Note that the --exec-id argument is mandatory; the id can be anything, as long as it is unique.
Pause a container, analogous to docker pause:
➜ ~ ctr task pause nginx
After pausing, the container status becomes PAUSED:
➜ ~ ctr task ls
TASK PID STATUS
nginx 3630 PAUSED
Likewise, the resume command unpauses it:
➜ ~ ctr task resume nginx
➜ ~ ctr task ls
TASK PID STATUS
nginx 3630 RUNNING
Note that ctr has no stop operation; a container can only be paused or killed. Kill it with task kill:
➜ ~ ctr task kill nginx
➜ ~ ctr task ls
TASK PID STATUS
nginx 3630 STOPPED
After the kill, the container status becomes STOPPED. The Task can then be removed with task rm:
➜ ~ ctr task rm nginx
➜ ~ ctr task ls
TASK PID STATUS
Beyond that, we can also fetch the container's cgroup information: the task metrics command reports the container's memory, CPU, and PID limits and usage.
# start the container again first
➜ ~ ctr task metrics nginx
ID TIMESTAMP
nginx 2021-08-12 08:50:46.952769941 +0000 UTC
METRIC VALUE
memory.usage_in_bytes 8855552
memory.limit_in_bytes 9223372036854771712
memory.stat.cache 0
cpuacct.usage 22467106
cpuacct.usage_percpu [2962708 860891 1163413 1915748 1058868 2888139 6159277 5458062]
pids.current 9
pids.limit 0
The task ps command shows the host PIDs of every process inside the container:
➜ ~ ctr task ps nginx
PID INFO
3984 -
4029 -
4030 -
4031 -
4032 -
4033 -
4034 -
4035 -
4036 -
➜ ~ ctr task ls
TASK PID STATUS
nginx 3984 RUNNING
The first PID, 3984, is PID 1 inside our container.
4.12 Namespaces
Containerd also supports the concept of namespaces. List them with:
➜ ~ ctr ns ls
NAME LABELS
default
If none is specified, ctr uses the default namespace. A new namespace can be created with ns create:
➜ ~ ctr ns create test
➜ ~ ctr ns ls
NAME LABELS
default
test
Use remove or rm to delete a namespace:
➜ ~ ctr ns rm test
test
➜ ~ ctr ns ls
NAME LABELS
default
With namespaces in place, you can scope any operation to one of them; for example, to list the images in the test namespace, add the -n test option:
➜ ~ ctr -n test image ls
REF TYPE DIGEST SIZE PLATFORMS LABELS
As we know, Docker itself calls containerd by default, and the containerd namespace Docker uses is moby rather than default. So if containers were started with docker, we can locate them with ctr -n moby:
➜ ~ ctr -n moby container ls
Likewise, the containerd used by Kubernetes defaults to the k8s.io namespace, so ctr -n k8s.io shows the containers created by Kubernetes.
5. crictl
crictl is part of the Kubernetes cri-tools project, built specifically for Kubernetes' use of containerd; it provides management commands for Pods, containers, images, and other resources.
Note: crictl cannot see or debug containers and images created outside Kubernetes; for example, a container started with ctr run without specifying a namespace is invisible to crictl. ctr, for its part, can use -n k8s.io to target the k8s.io namespace and thereby see and operate on the containers and images of the Kubernetes cluster. Put simply: crictl always operates with containerd's namespace set to k8s.io.
Install crictl
# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
# tar xf crictl-v1.24.2-linux-amd64.tar.gz -C /usr/local/bin/
Create the crictl configuration file
# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
pull-image-on-create: false
EOF
Use crictl
[root@node2 ~]# crictl pull busybox
Image is up to date for sha256:9d5226e6ce3fb6aee2822206a5ef85f38c303d2b37bfc894b419fca2c0501269
[root@node2 ~]# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/library/busybox latest 9d5226e6ce3fb 777kB
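A few more everyday crictl subcommands, all part of cri-tools (Pods and containers will only show up once Kubernetes has created some, and <container-id> is a placeholder):
[root@node2 ~]# crictl pods                                      # list Pod sandboxes
[root@node2 ~]# crictl ps -a                                     # list containers, including exited ones
[root@node2 ~]# crictl inspecti docker.io/library/busybox:latest # inspect an image
[root@node2 ~]# crictl logs <container-id>                       # fetch a container's logs
[root@node2 ~]# crictl rmi docker.io/library/busybox:latest      # delete an image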
6. nerdctl
ctr is minimal, and for anyone used to the docker CLI it is not friendly (for example, it cannot do many of the things the docker CLI can). This is where nerdctl comes in. **nerdctl is a containerd CLI that is compatible with the docker CLI's style**, and it has been adopted as a subproject of the containerd project. Starting with nerdctl 0.8, it is directly compatible with docker compose syntax (excluding swarm), which greatly improves the experience of using containerd directly for local development, testing, and single-host container deployment.
Note: **after installing nerdctl, you must also install the CNI tools and plugins before nerdctl is fully usable**. containerd implements no networking itself; container networking features such as port mapping require the CNI tooling and plugins.
In addition, **nerdctl also accepts** `-n` **to specify the namespace to use**.
- Install nerdctl
Download nerdctl:
wget https://github.com/containerd/nerdctl/releases/download/v0.22.0/nerdctl-0.22.0-linux-amd64.tar.gz
Unpack the archive into the target directory:
tar xf nerdctl-0.22.0-linux-amd64.tar.gz -C /usr/local/
cp /usr/local/nerdctl /usr/local/bin/nerdctl
- Verify nerdctl:
[root@node2 ~]# nerdctl version
WARN[0000] unable to determine buildctl version: exec: "buildctl": executable file not found in $PATH
Client:
Version: v0.22.0
OS/Arch: linux/amd64
Git commit: 8e278e2aa61a89d4e50d1a534217f264bd1a5ddf
buildctl:
Version:
Server:
containerd:
Version: v1.6.10
GitCommit: 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d1
- Install buildkit (optional, required for nerdctl build). Building an image from a Dockerfile with `nerdctl build` complains that buildkit is missing, so it needs to be installed:
# mkdir buildkit
# cd buildkit/
# wget https://github.com/moby/buildkit/releases/download/v0.8.3/buildkit-v0.8.3.linux-amd64.tar.gz
# tar xf buildkit-v0.8.3.linux-amd64.tar.gz -C /usr/local
- Write the buildkitd systemd unit file:
[root@node2 ~]# cat /etc/systemd/system/buildkit.service
[Unit]
Description=BuildKit
Documentation=https://github.com/moby/buildkit

[Service]
ExecStart=/usr/local/bin/buildkitd --oci-worker=false --containerd-worker=true

[Install]
WantedBy=multi-user.target
- Start the buildkitd daemon:
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl enable buildkit --now
Created symlink from /etc/systemd/system/multi-user.target.wants/buildkit.service to /etc/systemd/system/buildkit.service.
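With buildkit running, nerdctl behaves much like the docker CLI. A few illustrative commands (the image name, port, and tag below are arbitrary examples):
# run a container with a port mapping (requires the CNI plugins mentioned above)
[root@node2 ~]# nerdctl run -d --name nginx -p 8080:80 docker.io/library/nginx:alpine
# list running containers, docker-style
[root@node2 ~]# nerdctl ps
# build an image from a Dockerfile in the current directory (uses buildkitd)
[root@node2 ~]# nerdctl build -t myimage:latest .
# operate in the Kubernetes namespace
[root@node2 ~]# nerdctl -n k8s.io images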