How containers are managed in k8s
There are three container-runtime options when creating a k8s cluster:
- cri-o
cri-o is the most direct way for Kubernetes to create containers; when building a cluster, the cri-o plugin is what actually backs cluster creation.
- cri-containerd
The default runtime k8s uses when creating a cluster.
- cri-docker
docker is still the most widely used runtime. Although Kubernetes removed its built-in docker support (dockershim) in version 1.24, a cluster can still be created on docker via cri-docker.
Note: both cri-docker and cri-o require adjusting the kubelet startup parameters, as sketched below.
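Concretely, what changes is the CRI socket that kubelet talks to. A minimal sketch of the typical endpoints (illustrative; exact paths can vary by packaging):
#typical --container-runtime-endpoint values, one per runtime:
#  cri-o:       unix:///var/run/crio/crio.sock
#  containerd:  unix:///run/containerd/containerd.sock
#  cri-dockerd: unix:///var/run/cri-dockerd.sock
#after kubeadm init/join, the endpoint chosen for kubelet is recorded here:
[root@k8s-master ~]# cat /var/lib/kubelet/kubeadm-flags.env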
Deploying the k8s cluster
This continues from the previous post on setting up the k8s environment (https://blog.csdn.net/huaz_md/article/details/142676163?spm=1001.2014.3001.5501). With the base environment and Docker ready, we now deploy the kubernetes cluster with kubeadm, here on rhel9.
Hostname | IP | Role |
---|---|---|
harbor.huazi.org | 172.25.254.250 | harbor registry |
k8s-master.org | 172.25.254.100 | master, k8s control-plane node |
k8s-node1.org | 172.25.254.10 | worker, k8s worker node |
k8s-node2.org | 172.25.254.20 | worker, k8s worker node |
Installing the k8s deployment tools
There are many ways to install k8s; here we use the most straightforward one, kubeadm.
Set up the package repository by adding a k8s source.
k8s-master
[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# vim k8s.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
gpgcheck=0
#--showduplicates: list all available historical versions
[root@k8s-master yum.repos.d]# yum list kubelet --showduplicates
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
k8s 37 kB/s | 17 kB 00:00
Available Packages
kubelet.aarch64 1.30.0-150500.1.1 k8s
kubelet.ppc64le 1.30.0-150500.1.1 k8s
kubelet.s390x 1.30.0-150500.1.1 k8s
kubelet.src 1.30.0-150500.1.1 k8s
kubelet.x86_64 1.30.0-150500.1.1 k8s
kubelet.aarch64 1.30.1-150500.1.1 k8s
kubelet.ppc64le 1.30.1-150500.1.1 k8s
kubelet.s390x 1.30.1-150500.1.1 k8s
kubelet.src 1.30.1-150500.1.1 k8s
kubelet.x86_64 1.30.1-150500.1.1 k8s
[root@k8s-master ~]# yum list kubeadm --showduplicates
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 0:06:24 ago on Wed 02 Oct 2024 08:59:31.
Installed Packages
kubeadm.x86_64 1.30.0-150500.1.1 @k8s
Available Packages
kubeadm.aarch64 1.30.0-150500.1.1 k8s
kubeadm.ppc64le 1.30.0-150500.1.1 k8s
kubeadm.s390x 1.30.0-150500.1.1 k8s
kubeadm.src 1.30.0-150500.1.1 k8s
kubeadm.x86_64 1.30.0-150500.1.1 k8s
kubeadm.aarch64 1.30.1-150500.1.1 k8s
kubeadm.ppc64le 1.30.1-150500.1.1 k8s
kubeadm.s390x 1.30.1-150500.1.1 k8s
kubeadm.src 1.30.1-150500.1.1 k8s
kubeadm.x86_64 1.30.1-150500.1.1 k8s
[root@k8s-master ~]# yum list kubectl --showduplicates
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 0:07:00 ago on Wed 02 Oct 2024 08:59:31.
Installed Packages
kubectl.x86_64 1.30.0-150500.1.1 @k8s
Available Packages
kubectl.aarch64 1.30.0-150500.1.1 k8s
kubectl.ppc64le 1.30.0-150500.1.1 k8s
kubectl.s390x 1.30.0-150500.1.1 k8s
kubectl.src 1.30.0-150500.1.1 k8s
kubectl.x86_64 1.30.0-150500.1.1 k8s
kubectl.aarch64 1.30.1-150500.1.1 k8s
kubectl.ppc64le 1.30.1-150500.1.1 k8s
kubectl.s390x 1.30.1-150500.1.1 k8s
kubectl.src 1.30.1-150500.1.1 k8s
kubectl.x86_64 1.30.1-150500.1.1 k8s
Install kubelet, kubeadm, and kubectl on k8s-master:
kubectl: the command-line management tool
kubeadm: the cluster installation tool
kubelet: the node agent that manages containers
[root@k8s-master ~]# yum install \
> kubelet-1.30.0-150500.1.1 \
> kubeadm-1.30.0-150500.1.1 \
> kubectl-1.30.0-150500.1.1 -y
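A quick sanity check (not part of the original run) that the pinned versions landed:
[root@k8s-master ~]# rpm -q kubelet kubeadm kubectl
[root@k8s-master ~]# kubeadm version -o short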
k8s-node1
- On k8s-node1, only kubelet and kubeadm are needed
[root@k8s-node1 ~]# cat /etc/yum.repos.d/k8s.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
gpgcheck=0
[root@k8s-node1 ~]# yum install \
> kubelet-1.30.0-150500.1.1 \
> kubeadm-1.30.0-150500.1.1 -y
k8s-node2
- On k8s-node2, only kubelet and kubeadm are needed
[root@k8s-node2 ~]# cat /etc/yum.repos.d/k8s.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
gpgcheck=0
[root@k8s-node2 ~]# yum install \
> kubelet-1.30.0-150500.1.1 \
> kubeadm-1.30.0-150500.1.1 -y
Enabling kubectl command completion
k8s-master
- Since kubectl is installed only on the master, only the master needs this change
[root@k8s-master ~]# yum install bash-completion -y
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 0:14:33 ago on Wed 02 Oct 2024 08:59:31.
Package bash-completion-1:2.11-4.el9.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@k8s-master ~]# source ~/.bashrc
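If desired, completion can be enabled for kubeadm the same way:
[root@k8s-master ~]# echo "source <(kubeadm completion bash)" >> ~/.bashrc
[root@k8s-master ~]# source ~/.bashrc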
Installing cri-docker in the k8s cluster
k8s removed dockershim starting with version 1.24, so the cri-docker plugin must be installed in order to use docker.
- Download: https://github.com/Mirantis/cri-dockerd/tags
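For example, the rpm can be fetched straight from the releases page (the asset URL below is an assumption based on the v0.3.14 release naming; verify it against the tags page):
[root@k8s-master ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14-3.el8.x86_64.rpm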
k8s-master
[root@k8s-master ~]# yum install \
> cri-dockerd-0.3.14-3.el8.x86_64.rpm \
> libcgroup-0.41-19.el8.x86_64.rpm -y
k8s-node1
[root@k8s-node1 ~]# yum install \
> cri-dockerd-0.3.14-3.el8.x86_64.rpm \
> libcgroup-0.41-19.el8.x86_64.rpm -y
k8s-node2
[root@k8s-node2 ~]# yum install \
> cri-dockerd-0.3.14-3.el8.x86_64.rpm \
> libcgroup-0.41-19.el8.x86_64.rpm -y
Starting the cri-docker service in the k8s cluster
k8s-master
[root@k8s-master ~]# systemctl enable --now cri-docker
k8s-node1
[root@k8s-node1 ~]# systemctl enable --now cri-docker
k8s-node2
[root@k8s-node2 ~]# systemctl enable --now cri-docker
Pulling the required k8s images on k8s-master and pushing them to harbor
[root@k8s-master ~]# kubeadm config print init-defaults
#location of the cri-dockerd socket
[root@k8s-master ~]# ll /var/run/cri-dockerd.sock
srw-rw---- 1 root docker 0 Oct 2 09:39 /var/run/cri-dockerd.sock
k8s-master
[root@k8s-master ~]# kubeadm config images pull \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.30.0 \
> --cri-socket=unix:///var/run/cri-dockerd.sock
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.30.0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.11.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.12-0
- We can see the images have been pulled down successfully
[root@k8s-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-apiserver v1.30.0 c42f13656d0b 5 months ago 117MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.30.0 c7aad43836fa 5 months ago 111MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.30.0 259c8277fcbb 5 months ago 62MB
registry.aliyuncs.com/google_containers/kube-proxy v1.30.0 a0bf559e280c 5 months ago 84.7MB
registry.aliyuncs.com/google_containers/etcd 3.5.12-0 3861cfcd7c04 7 months ago 149MB
registry.aliyuncs.com/google_containers/coredns v1.11.1 cbb01a7bd410 13 months ago 59.8MB
registry.aliyuncs.com/google_containers/pause 3.9 e6f181688397 23 months ago 744kB
- Create a project named k8s on harbor
- Push the images to harbor
[root@k8s-master ~]# docker images | awk '/google/{ print $1":"$2}' \
> | awk -F "/" '{system("docker tag "$0" harbor.huazi.org/k8s/"$3)}'
[root@k8s-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-apiserver v1.30.0 c42f13656d0b 5 months ago 117MB
harbor.huazi.org/k8s/kube-apiserver v1.30.0 c42f13656d0b 5 months ago 117MB
harbor.huazi.org/k8s/kube-controller-manager v1.30.0 c7aad43836fa 5 months ago 111MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.30.0 c7aad43836fa 5 months ago 111MB
harbor.huazi.org/k8s/kube-scheduler v1.30.0 259c8277fcbb 5 months ago 62MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.30.0 259c8277fcbb 5 months ago 62MB
harbor.huazi.org/k8s/kube-proxy v1.30.0 a0bf559e280c 5 months ago 84.7MB
registry.aliyuncs.com/google_containers/kube-proxy v1.30.0 a0bf559e280c 5 months ago 84.7MB
harbor.huazi.org/k8s/etcd 3.5.12-0 3861cfcd7c04 7 months ago 149MB
registry.aliyuncs.com/google_containers/etcd 3.5.12-0 3861cfcd7c04 7 months ago 149MB
harbor.huazi.org/k8s/coredns v1.11.1 cbb01a7bd410 13 months ago 59.8MB
registry.aliyuncs.com/google_containers/coredns v1.11.1 cbb01a7bd410 13 months ago 59.8MB
harbor.huazi.org/k8s/pause 3.9 e6f181688397 23 months ago 744kB
registry.aliyuncs.com/google_containers/pause 3.9 e6f181688397 23 months ago 744kB
#push the tagged images
[root@k8s-master ~]# docker images | awk '/k8s/{system("docker push "$1":"$2)}'
- The images are uploaded successfully
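Written out long-hand for a single image, those two awk one-liners are equivalent to:
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/pause:3.9 harbor.huazi.org/k8s/pause:3.9
[root@k8s-master ~]# docker push harbor.huazi.org/k8s/pause:3.9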
Editing the cri-docker.service file
- Specify the network plugin name and the base (pause) container image
k8s-master
#specify the network plugin name and the base container image
[root@k8s-master ~]# vim /lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=harbor.huazi.org/k8s/pause:3.9
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart cri-docker
k8s-node1
[root@k8s-node1 ~]# vim /lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=harbor.huazi.org/k8s/pause:3.9
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart cri-docker
k8s-node2
[root@k8s-node2 ~]# vim /lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=harbor.huazi.org/k8s/pause:3.9
[root@k8s-node2 ~]# systemctl daemon-reload
[root@k8s-node2 ~]# systemctl restart cri-docker
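A quick check (not from the original run) that the edited ExecStart took effect on each node:
[root@k8s-node2 ~]# systemctl cat cri-docker | grep ExecStart
[root@k8s-node2 ~]# systemctl is-active cri-docker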
Starting kubelet in the k8s cluster
k8s-master
[root@k8s-master ~]# systemctl enable --now kubelet
k8s-node1
[root@k8s-node1 ~]# systemctl enable --now kubelet
k8s-node2
[root@k8s-node2 ~]# systemctl enable --now kubelet
Initializing the k8s cluster
k8s-master
[root@k8s-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 \
> --image-repository harbor.huazi.org/k8s \
> --kubernetes-version v1.30.0 \
> --cri-socket=unix:///var/run/cri-dockerd.sock
[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-master ~]# source ~/.bash_profile
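For a regular (non-root) user, the standard alternative suggested in kubeadm's post-init output is:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config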
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.org NotReady control-plane 11m v1.30.0
If initialization fails partway through, the node has to be reset. How is that done?
[root@k8s-master ~]# kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
The node status above is NotReady, so we need to install the flannel network plugin.
Installing the flannel network plugin
- Download the flannel yaml deployment file
- On k8s-master, download the images
[root@k8s-master ~]# docker pull docker.io/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker pull docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1
[root@k8s-master ~]# vim kube-flannel.yml
[root@k8s-master ~]# docker load -i flannel-0.25.5.tag.gz
ef7a14b43c43: Loading layer [==================================================>] 8.079MB/8.079MB
1d9375ff0a15: Loading layer [==================================================>] 9.222MB/9.222MB
4af63c5dc42d: Loading layer [==================================================>] 16.61MB/16.61MB
2b1d26302574: Loading layer [==================================================>] 1.544MB/1.544MB
d3dd49a2e686: Loading layer [==================================================>] 42.11MB/42.11MB
7278dc615b95: Loading layer [==================================================>] 5.632kB/5.632kB
c09744fc6e92: Loading layer [==================================================>] 6.144kB/6.144kB
0a2b46a5555f: Loading layer [==================================================>] 1.923MB/1.923MB
5f70bf18a086: Loading layer [==================================================>] 1.024kB/1.024kB
601effcb7aab: Loading layer [==================================================>] 1.928MB/1.928MB
Loaded image: flannel/flannel:v0.25.5
21692b7dc30c: Loading layer [==================================================>] 2.634MB/2.634MB
Loaded image: flannel/flannel-cni-plugin:v1.5.1-flannel1
- Push the flannel images to harbor
[root@k8s-master ~]# docker images flannel/*
REPOSITORY TAG IMAGE ID CREATED SIZE
flannel/flannel v0.25.5 b9f4beb93d68 2 months ago 80.4MB
flannel/flannel-cni-plugin v1.5.1-flannel1 0b2af0d15971 3 months ago 10.4MB
[root@k8s-master ~]# docker tag flannel/flannel:v0.25.5 harbor.huazi.org/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker tag flannel/flannel-cni-plugin:v1.5.1-flannel1 harbor.huazi.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
[root@k8s-master ~]# docker push harbor.huazi.org/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker push harbor.huazi.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
The push succeeds.
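Before applying it, the kube-flannel.yml edited earlier must have its image references pointed at the local harbor; assuming the stock manifest references docker.io/flannel images, a sed equivalent of that vim edit would be:
[root@k8s-master ~]# grep 'image:' kube-flannel.yml
[root@k8s-master ~]# sed -i 's#docker.io/flannel#harbor.huazi.org/flannel#g' kube-flannel.yml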
- Install the flannel network plugin
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
- The node is now Ready
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.org Ready control-plane 12h v1.30.0
Adding worker nodes
Scaling out simply means joining the node machines to the master.
During cluster initialization, kubeadm prints a join token like the one below; record it.
If you lose it, no problem; a new one can be generated:
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 172.25.254.100:6443 --token 3vikps.lp7x33vo9cd7l29t --discovery-token-ca-cert-hash sha256:cdb056a4a33dfb3604855fc1500f62253da2fec1ca882fd038fe9afb661a572e
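Tokens expire after 24 hours by default, so regenerating one later is normal; existing tokens can be listed with:
[root@k8s-master ~]# kubeadm token list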
Copy the join command to each node and run it there to join the cluster.
k8s-node1
Paste the command and append --cri-socket=unix:///var/run/cri-dockerd.sock at the end:
[root@k8s-node1 ~]# kubeadm join 172.25.254.100:6443 --token 3vikps.lp7x33vo9cd7l29t --discovery-token-ca-cert-hash sha256:cdb056a4a33dfb3604855fc1500f62253da2fec1ca882fd038fe9afb661a572e \
> --cri-socket=unix:///var/run/cri-dockerd.sock
- node1 has joined
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.org Ready control-plane 12h v1.30.0
k8s-node1.org Ready <none> 52s v1.30.0
k8s-node2
Paste the command and append --cri-socket=unix:///var/run/cri-dockerd.sock at the end:
[root@k8s-node2 ~]# kubeadm join 172.25.254.100:6443 --token 3vikps.lp7x33vo9cd7l29t --discovery-token-ca-cert-hash sha256:cdb056a4a33dfb3604855fc1500f62253da2fec1ca882fd038fe9afb661a572e \
> --cri-socket=unix:///var/run/cri-dockerd.sock
- node2 has joined
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.org Ready control-plane 12h v1.30.0
k8s-node1.org Ready <none> 6m40s v1.30.0
k8s-node2.org Ready <none> 2m36s v1.30.0
[root@k8s-master ~]# kubectl -n kube-flannel get pods
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-m7ksl 1/1 Running 0 2m40s
kube-flannel-ds-q55gr 1/1 Running 0 8m38s
kube-flannel-ds-twvv4 1/1 Running 0 110s
Container status in the k8s cluster
Notice that every k8s component itself runs as a container.
k8s-master
[root@k8s-master ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f2cf5e76a528 harbor.huazi.org/k8s/pause:3.9 "/pause" 26 minutes ago Up 26 minutes k8s_POD_kube-flannel-ds-mnwh6_kube-flannel_90a37e3b-3463-46f4-bc05-38983ae72022_0
8d4c35e030e1 a0bf559e280c "/usr/local/bin/kube…" 13 hours ago Up 13 hours k8s_kube-proxy_kube-proxy-v59ls_kube-system_89adb3d7-bf7e-4435-814a-9f2028c35b21_0
9c0b964d9950 harbor.huazi.org/k8s/pause:3.9 "/pause" 13 hours ago Up 13 hours k8s_POD_kube-proxy-v59ls_kube-system_89adb3d7-bf7e-4435-814a-9f2028c35b21_0
f13c7025a49b 259c8277fcbb "kube-scheduler --au…" 13 hours ago Up 13 hours k8s_kube-scheduler_kube-scheduler-k8s-master.org_kube-system_f7a1b780be204295e245a244d33633b5_0
05b49eaef33b c7aad43836fa "kube-controller-man…" 13 hours ago Up 13 hours k8s_kube-controller-manager_kube-controller-manager-k8s-master.org_kube-system_c40cd2ff1ec7ca43fa916fb4e4a51400_0
27ac68a80c74 3861cfcd7c04 "etcd --advertise-cl…" 13 hours ago Up 13 hours k8s_etcd_etcd-k8s-master.org_kube-system_e9d747ad8b7195b88bd5c791f2186262_0
6ff27e184297 c42f13656d0b "kube-apiserver --ad…" 13 hours ago Up 13 hours k8s_kube-apiserver_kube-apiserver-k8s-master.org_kube-system_ef3485df7f45f1c606bdfcd3a47e3e0f_0
2ff1e1a79bee harbor.huazi.org/k8s/pause:3.9 "/pause" 13 hours ago Up 13 hours k8s_POD_kube-scheduler-k8s-master.org_kube-system_f7a1b780be204295e245a244d33633b5_0
d902bad78435 harbor.huazi.org/k8s/pause:3.9 "/pause" 13 hours ago Up 13 hours k8s_POD_kube-controller-manager-k8s-master.org_kube-system_c40cd2ff1ec7ca43fa916fb4e4a51400_0
6ee3ceb21a17 harbor.huazi.org/k8s/pause:3.9 "/pause" 13 hours ago Up 13 hours k8s_POD_etcd-k8s-master.org_kube-system_e9d747ad8b7195b88bd5c791f2186262_0
f2bf6b9523ea harbor.huazi.org/k8s/pause:3.9 "/pause" 13 hours ago Up 13 hours k8s_POD_kube-apiserver-k8s-master.org_kube-system_ef3485df7f45f1c606bdfcd3a47e3e0f_0
k8s-node1
[root@k8s-node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9802c5ade6cb harbor.huazi.org/k8s/kube-proxy "/usr/local/bin/kube…" 11 minutes ago Up 11 minutes k8s_kube-proxy_kube-proxy-tc5hv_kube-system_07feb7be-a06c-4d60-a0b2-fb39a849b972_0
795c112bd550 harbor.huazi.org/k8s/pause:3.9 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-flannel-ds-2mqsw_kube-flannel_bd721f60-26d8-4899-a2c0-fd88f8ca32ce_0
c965aff3504b harbor.huazi.org/k8s/pause:3.9 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-proxy-tc5hv_kube-system_07feb7be-a06c-4d60-a0b2-fb39a849b972_0
k8s-node2
[root@k8s-node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
649b4574e0f7 harbor.huazi.org/k8s/kube-proxy "/usr/local/bin/kube…" 9 minutes ago Up 9 minutes k8s_kube-proxy_kube-proxy-k2m2r_kube-system_dab05120-0d16-4919-a593-4e52efec736c_0
0486735102d1 harbor.huazi.org/k8s/pause:3.9 "/pause" 10 minutes ago Up 10 minutes k8s_POD_kube-flannel-ds-c2rt5_kube-flannel_74c06ccf-7ea0-4273-8bef-2d9a61ab5196_0
18dd7567d8f9 harbor.huazi.org/k8s/pause:3.9 "/pause" 10 minutes ago Up 10 minutes k8s_POD_kube-proxy-k2m2r_kube-system_dab05120-0d16-4919-a593-4e52efec736c_0
Testing the k8s cluster
[root@k8s-master ~]# kubectl version
Client Version: v1.30.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
[root@k8s-master ~]# kubectl run webserver1 --image nginx
pod/webserver1 created
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
webserver1 0/1 ContainerCreating 0 2m19s <none> k8s-node2.org <none> <none>
[root@k8s-master ~]# kubectl run ubuntu --image ubuntu
pod/ubuntu created
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ubuntu 0/1 ContainerCreating 0 2s
webserver1 0/1 ContainerCreating 0 16m
- One pod runs on k8s-node1 and the other on k8s-node2: k8s automatically picks a suitable node to run each container
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ubuntu 0/1 ContainerCreating 0 24s <none> k8s-node1.org <none> <none>
webserver1 0/1 ContainerCreating 0 16m <none> k8s-node2.org <none> <none>
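If a pod stays in ContainerCreating for a while, the pod events usually explain why (often an image pull still in progress); a quick way to look:
[root@k8s-master ~]# kubectl describe pod webserver1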
How to fully reset the k8s cluster
Once the cluster is built, how do we reset everything?
- Delete all nodes on the master
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.org Ready control-plane 43h v1.30.0
k8s-node1.org Ready <none> 31h v1.30.0
k8s-node2.org Ready <none> 31h v1.30.0
[root@k8s-master ~]# kubectl delete nodes k8s-node1.org
node "k8s-node1.org" deleted
[root@k8s-master ~]# kubectl delete nodes k8s-node2.org
node "k8s-node2.org" deleted
[root@k8s-master ~]# kubectl delete nodes k8s-master.org
node "k8s-master.org" deleted
[root@k8s-master ~]# kubectl get nodes
No resources found
- Reset the master
[root@k8s-master ~]# kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
- Initialize again
[root@k8s-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 \
> --image-repository harbor.huazi.org/k8s \
> --kubernetes-version v1.30.0 \
> --cri-socket=unix:///var/run/cri-dockerd.sock
Reapply flannel
[root@k8s-master ~]# ls
kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Reset on k8s-node1
[root@k8s-node1 ~]# kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
On k8s-node1, paste the join command and append --cri-socket=unix:///var/run/cri-dockerd.sock:
[root@k8s-node1 ~]# kubeadm join 172.25.254.100:6443 --token sgnnhn.nrd19s6bbfw1afw3 --discovery-token-ca-cert-hash sha256:f429541942f8f0fa21179410284c9ce94dc658fae14564d2607079287f37e2dd \
> --cri-socket=unix:///var/run/cri-dockerd.sock
Reset on k8s-node2
[root@k8s-node2 ~]# kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
On k8s-node2, paste the join command and append --cri-socket=unix:///var/run/cri-dockerd.sock:
[root@k8s-node2 ~]# kubeadm join 172.25.254.100:6443 --token sgnnhn.nrd19s6bbfw1afw3 --discovery-token-ca-cert-hash sha256:f429541942f8f0fa21179410284c9ce94dc658fae14564d2607079287f37e2dd \
> --cri-socket=unix:///var/run/cri-dockerd.sock
- Test 1
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.org Ready control-plane 8m59s v1.30.0
k8s-node1.org Ready <none> 101s v1.30.0
k8s-node2.org Ready <none> 51s v1.30.0
- Test 2
[root@k8s-master ~]# kubectl -n kube-flannel get pods
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-m7ksl 1/1 Running 0 2m40s
kube-flannel-ds-q55gr 1/1 Running 0 8m38s
kube-flannel-ds-twvv4 1/1 Running 0 110s
Only when both of these tests pass is the rebuild complete.
Whether containers run successfully in k8s depends on three things
- Whether all nodes are in Ready status
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master.org Ready control-plane 23d v1.30.0
k8s-node1.org Ready <none> 23d v1.30.0
k8s-node2.org Ready <none> 23d v1.30.0
- Whether the cluster network plugin is installed successfully
[root@k8s-master ~]# kubectl -n kube-flannel get pods
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-m7ksl 1/1 Running 6 (2d21h ago) 23d
kube-flannel-ds-q55gr 1/1 Running 6 (2d21h ago) 23d
kube-flannel-ds-twvv4 1/1 Running 7 (2d21h ago) 23d
- Whether docker images can be pulled normally
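A quick spot-check of this third point, run on any node (assumes the node can reach the registry and the harbor project allows pulls):
[root@k8s-node1 ~]# docker pull harbor.huazi.org/k8s/pause:3.9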