Quick Deployment of a Kubernetes 1.29.0 Cluster with kubeadm

1. Kubernetes Cluster Node Preparation

1.1 Host Operating System

No.   OS and Version   Notes
1     CentOS 7u9

1.2 Host Hardware Requirements

CPU   Memory   Disk     Role            Hostname
8C    8G       1024GB   master          k8s-master01
8C    16G      1024GB   worker (node)   k8s-worker01
8C    16G      1024GB   worker (node)   k8s-worker02

1.3 Host Configuration

1.3.1 Hostname Configuration

This deployment uses three hosts: one master node named k8s-master01, and two worker nodes named k8s-worker01 and k8s-worker02.

On the master node
# hostnamectl set-hostname k8s-master01
On the worker01 node
# hostnamectl set-hostname k8s-worker01
On the worker02 node
# hostnamectl set-hostname k8s-worker02

1.3.2 Host IP Address Configuration

The k8s-master01 node's IP address is 192.168.233.87/24
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
IPADDR=192.168.233.87
GATEWAY=192.168.233.2
NETMASK=255.255.255.0
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="54e9909d-4ac7-4e13-af4d-80129ba1336d"
DEVICE="ens33"
ONBOOT="yes"
DNS1=114.114.114.114
DNS2=114.114.115.115
The k8s-worker01 node's IP address is 192.168.233.88/24
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
IPADDR=192.168.233.88
GATEWAY=192.168.233.2
NETMASK=255.255.255.0
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="54e9909d-4ac7-4e13-af4d-80129ba1336d"
DEVICE="ens33"
ONBOOT="yes"
DNS1=114.114.114.114
DNS2=114.114.115.115
The k8s-worker02 node's IP address is 192.168.233.89/24
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
IPADDR=192.168.233.89
GATEWAY=192.168.233.2
NETMASK=255.255.255.0
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="54e9909d-4ac7-4e13-af4d-80129ba1336d"
DEVICE="ens33"
ONBOOT="yes"
DNS1=114.114.114.114
DNS2=114.114.115.115

1.3.3 Hostname and IP Address Resolution

This must be configured on all cluster hosts.

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.233.87 k8s-master01
192.168.233.88 k8s-worker01
192.168.233.89 k8s-worker02
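Since all three hosts need identical entries, they can be generated once and appended to /etc/hosts on every node. A minimal sketch, not from the original; it writes to a throwaway file (point `target` at /etc/hosts on a real host), with the node IPs taken from section 1.3.2:

```shell
#!/bin/bash
# Generate the cluster name-resolution entries once, then append the
# result to /etc/hosts on every node. IPs match section 1.3.2.
hosts_block='192.168.233.87 k8s-master01
192.168.233.88 k8s-worker01
192.168.233.89 k8s-worker02'

# Demo target only; on a real host append to /etc/hosts instead.
target=/tmp/hosts.demo
printf '%s\n' "$hosts_block" > "$target"
grep -c 'k8s-' "$target"    # prints 3 (one entry per node)
```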

1.3.4 Firewall Configuration

Required on all hosts.

Stop and disable the existing firewalld firewall (the two commands can be combined as systemctl disable --now firewalld)
# systemctl disable firewalld
# systemctl stop firewalld

Check firewalld status
# firewall-cmd --state
not running

1.3.5 SELinux Configuration

Required on all hosts. A reboot is needed for this SELinux change to take effect; setenforce 0 switches to permissive mode for the current boot without rebooting.

# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# sestatus

1.3.6 Time Synchronization

Required on all hosts. On a minimal OS install, the ntpdate package must be installed first.

# crontab -l
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com

1.3.7 Upgrading the OS Kernel

Required on all hosts.

Import the elrepo GPG key
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
Install the elrepo YUM repository
# yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Install a newer kernel. kernel-ml is the mainline (latest) kernel; kernel-lt is the long-term support kernel. Here we install kernel-lt:
# yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64
Set the default GRUB2 boot entry to 0
# grub2-set-default 0
Regenerate the GRUB2 configuration file
# grub2-mkconfig -o /boot/grub2/grub.cfg
After the update, reboot so the upgraded kernel takes effect.
# reboot
After rebooting, verify that the running kernel is the new version
# uname -r

1.3.8 Enabling Kernel IP Forwarding and Bridge Filtering

Required on all hosts.

Create the bridge-filtering and IP-forwarding configuration file
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Load the br_netfilter module
# modprobe br_netfilter
Check that it is loaded
# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
Apply the settings
# sysctl --system

1.3.9 Installing ipset and ipvsadm

Required on all hosts.

Install ipset and ipvsadm
# yum -y install ipset ipvsadm
Configure automatic loading of the ipvs kernel modules
Add the modules that need to be loaded
# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Set permissions, run the script, and check that the modules are loaded
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

1.3.10 Disabling the Swap Partition

A reboot is required after editing /etc/fstab; to disable swap immediately without rebooting, run swapoff -a.

To disable swap permanently (reboot required), comment out the swap entry in /etc/fstab:
# cat /etc/fstab
......

# /dev/mapper/centos-swap swap                    swap    defaults        0 0

That is, add a # at the beginning of the swap line, as shown above.
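Commenting out the swap line can also be done with sed instead of by hand. A sketch, not from the original, demonstrated against a throwaway copy of fstab (operate on /etc/fstab itself on a real host), followed by swapoff -a for the current boot:

```shell
#!/bin/bash
# Demo on a copy; on a real host operate on /etc/fstab itself.
fstab=/tmp/fstab.demo
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /                       xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
EOF

# Prefix any uncommented entry whose filesystem type is swap with '#'
sed -ri 's/^([^#].*\sswap\s+swap\s.*)$/#\1/' "$fstab"
grep swap "$fstab"    # the swap line is now commented out

# Then disable swap for the current boot (requires root):
# swapoff -a
```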

2. Docker CE and cri-dockerd Preparation

2.1 Preparing the Docker YUM Repository

Use the Alibaba Cloud open-source software mirror.

# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2.2 Installing Docker

# yum -y install docker-ce

2.3 Starting the Docker Service

# systemctl enable --now docker

2.4 Changing the cgroup Driver

/etc/docker/daemon.json does not exist by default and must be created. Add the following content to it:

# cat > /etc/docker/daemon.json <<EOF
{
        "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# systemctl restart docker
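A malformed daemon.json will prevent the Docker daemon from starting, so it is worth validating the JSON before restarting. A sketch, not from the original, using python3 as the validator and a demo path (use /etc/docker/daemon.json on a real host):

```shell
#!/bin/bash
# Write the cgroup-driver setting and confirm it is well-formed JSON
# before restarting Docker. Demo path; real hosts use /etc/docker/daemon.json.
f=/tmp/daemon.json.demo
cat > "$f" <<'EOF'
{
        "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

if python3 -m json.tool "$f" > /dev/null; then
    echo "daemon.json is valid JSON; safe to restart docker"
    # systemctl restart docker
else
    echo "daemon.json is NOT valid JSON; fix it before restarting" >&2
fi
```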

2.5 Installing cri-dockerd


# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8-3.el7.x86_64.rpm
# yum -y install cri-dockerd-0.3.8-3.el7.x86_64.rpm
# vim /usr/lib/systemd/system/cri-docker.service

Modify line 10 as follows:
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.k8s.io/pause:3.9 --container-runtime-endpoint fd://
# systemctl start cri-docker
# systemctl enable cri-docker

3. Kubernetes 1.29.0 Cluster Deployment

3.1 Cluster Software and Versions

Component   Version   Installed on        Purpose
kubeadm     1.29.0    all cluster hosts   initializes and manages the cluster
kubelet     1.29.0    all cluster hosts   receives api-server instructions and manages the pod lifecycle
kubectl     1.29.0    all cluster hosts   command-line tool for managing cluster applications

3.2 Preparing the Kubernetes YUM Repository

Use the Kubernetes community YUM repository

# cat > /etc/yum.repos.d/k8s.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
#exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

3.3 Installing the Cluster Software

Install on all nodes

Default installation (latest version)
# yum -y install  kubeadm  kubelet kubectl
Install a specific version
# yum -y install  kubeadm-1.29.0-150500.1.1  kubelet-1.29.0-150500.1.1 kubectl-1.29.0-150500.1.1
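Since kubeadm, kubelet and kubectl must all carry the same pinned version, the install line can be generated from a single version variable to avoid typos. A small convenience sketch, not from the original:

```shell
#!/bin/bash
# Build the pinned-version install command from one version variable,
# so all three packages are guaranteed to match.
ver=1.29.0-150500.1.1
pkgs=""
for p in kubeadm kubelet kubectl; do
  pkgs="$pkgs ${p}-${ver}"
done
echo "yum -y install${pkgs}"
# → yum -y install kubeadm-1.29.0-150500.1.1 kubelet-1.29.0-150500.1.1 kubectl-1.29.0-150500.1.1
```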

3.4 Configuring kubelet

To keep the cgroup driver used by kubelet consistent with the one Docker uses, it is recommended to modify the following file:

# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
Just enable kubelet to start on boot; since its configuration file has not been generated yet, it will start automatically after cluster initialization
# systemctl enable kubelet

3.5 Preparing the Cluster Images

A VPN may be required to download these images directly.

Every node needs the images locally.

# kubeadm config images list --kubernetes-version=v1.29.0

registry.k8s.io/kube-apiserver:v1.29.0
registry.k8s.io/kube-controller-manager:v1.29.0
registry.k8s.io/kube-scheduler:v1.29.0
registry.k8s.io/kube-proxy:v1.29.0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.10-0

If registry.k8s.io is unreachable, pull the same images from the Alibaba Cloud mirror instead:
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.0
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.0
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.0
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.29.0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.11.1
docker pull registry.aliyuncs.com/google_containers/pause:3.9
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.10-0

Then re-tag them with the registry.k8s.io names that kubeadm expects:
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.0 registry.k8s.io/kube-apiserver:v1.29.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.0 registry.k8s.io/kube-controller-manager:v1.29.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.0 registry.k8s.io/kube-scheduler:v1.29.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.29.0 registry.k8s.io/kube-proxy:v1.29.0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.11.1 registry.k8s.io/coredns/coredns:v1.11.1
docker tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.10-0 registry.k8s.io/etcd:3.5.10-0


# cat image_download.sh
#!/bin/bash
images_list='
<image list goes here>'

for i in $images_list
do
        docker pull $i
done

docker save -o k8s-1-29-0.tar $images_list
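Since every node needs the images locally, the saved tar archive can be copied to the workers and loaded there. A dry-run sketch, not from the original: it only prints the scp/ssh commands it would run, using the worker IPs from this guide:

```shell
#!/bin/bash
# Dry run: print the commands to copy the saved image bundle to each
# worker and load it there. IPs are the worker addresses from this guide.
tarball=k8s-1-29-0.tar
for worker in 192.168.233.88 192.168.233.89; do
  echo "scp ${tarball} root@${worker}:/root/"
  echo "ssh root@${worker} docker load -i /root/${tarball}"
done
```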

3.6 Cluster Initialization

Remember to change the master IP address here to your own, otherwise initialization will fail.

[root@k8s-master01 ~]# kubeadm init --kubernetes-version=v1.29.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.233.87   --cri-socket unix:///var/run/cri-dockerd.sock
Output of the initialization process:
[init] Using Kubernetes version: v1.29.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.233.87]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.233.87 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.233.87 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.510531 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: bid4gb.831cyurw8u5vl1iy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.233.87:6443 --token bid4gb.831cyurw8u5vl1iy \
	--discovery-token-ca-cert-hash sha256:516b5cae726fcc2b4e8ee0c04bc61a96b92fbfaf8f700b2f8ef67c7c9469443b 

Port 6443 is the api-server port.
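The --discovery-token-ca-cert-hash value in the join command can be recomputed at any time from the cluster CA; the pipeline below is the one given in the Kubernetes documentation. As a sketch it runs against a throwaway self-signed certificate purely to demonstrate the pipeline; on the real master, point it at /etc/kubernetes/pki/ca.crt instead:

```shell
#!/bin/bash
# Demo: compute a sha256 discovery hash the same way kubeadm does.
# On the real master, replace the generated cert with /etc/kubernetes/pki/ca.crt.
ca=/tmp/demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out "$ca" -days 1 -subj "/CN=kubernetes" 2>/dev/null

# sha256 over the DER-encoded public key of the CA certificate
hash=$(openssl x509 -pubkey -in "$ca" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:${hash}"
```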

3.7 Preparing the Cluster Management Client Configuration File

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

3.8 Deploying the Cluster Network Plugin (calico)

Use calico to deploy the cluster network

Installation reference: https://projectcalico.docs.tigera.io/about/about-calico


Due to network conditions in mainland China, the calico images may fail to pull; they can be fetched with the quay.io/calico/ prefix instead, but it is best to save them locally.

Every node needs the images locally

calico/typha:v3.27.0
calico/kube-controllers:v3.27.0
calico/cni:v3.27.0
calico/node-driver-registrar:v3.27.0
calico/csi:v3.27.0
calico/pod2daemon-flexvol:v3.27.0
calico/node:v3.27.0
docker.io/calico/apiserver:v3.27.0
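The docker.io-to-quay.io prefix swap mentioned above can be generated mechanically. A dry-run sketch, not from the original: it only prints the pull and re-tag commands for the v3.27.0 image list in this section, and assumes the manifests reference the docker.io/calico/... names shown above:

```shell
#!/bin/bash
# Dry run: print pull-from-quay.io and retag-to-docker.io commands
# for the calico v3.27.0 images listed above.
images="typha kube-controllers cni node-driver-registrar csi pod2daemon-flexvol node apiserver"
ver=v3.27.0
for img in $images; do
  echo "docker pull quay.io/calico/${img}:${ver}"
  echo "docker tag quay.io/calico/${img}:${ver} docker.io/calico/${img}:${ver}"
done
```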
Apply the operator resource manifest; this requires the image quay.io/tigera/operator:v1.32.3
[root@k8s-master01 ~]# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
Install via custom resources; do not run this yet. Download the file locally, modify it, then apply it
[root@k8s-master01 ~]# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
Modify line 13 of the file to the IP range passed to kubeadm init --pod-network-cidr
[root@k8s-master01 ~]# vim custom-resources.yaml
......
 11     ipPools:
 12     - blockSize: 26
 13       cidr: 10.244.0.0/16 
 14       encapsulation: VXLANCrossSubnet
......
Apply the resource manifest
[root@k8s-master01 ~]# kubectl create -f custom-resources.yaml
Watch the pods in the calico-system namespace
[root@k8s-master01 ~]# watch kubectl get pods -n calico-system

[root@k8s-master01 ~]# kubectl get pod -n calico-system
NAME                                       READY   STATUS    RESTARTS       AGE
calico-kube-controllers-66756674f6-sdmzz   1/1     Running   1 (176m ago)   41h
calico-node-cg9vl                          1/1     Running   1 (176m ago)   41h
calico-node-kmh64                          1/1     Running   0              3h28m
calico-node-phq7n                          1/1     Running   0              3h28m
calico-typha-7577588bff-9fxcx              1/1     Running   1 (176m ago)   41h
calico-typha-7577588bff-9pz92              1/1     Running   0              3h28m
csi-node-driver-6rrvf                      2/2     Running   0              173m
csi-node-driver-l9jl8                      2/2     Running   0              144m
csi-node-driver-q54hn                      2/2     Running   0              148m

[root@k8s-master01 ~]# kubectl get pod -n calico-apiserver
NAME                                READY   STATUS    RESTARTS       AGE
calico-apiserver-66bc79b7c9-5wdc7   1/1     Running   5 (5h4m ago)   144d
calico-apiserver-66bc79b7c9-r9wjd   1/1     Running   5 (5h4m ago)   144d

As shown, there are multiple node-driver, node, and typha pods. The master node is only an entry point; it can retrieve pod information for the entire cluster. Because of the image download problems, I initially overlooked loading the images onto the other worker nodes, so a few pods could not find their images: the image files must be present on every node.

Wait until each pod has the STATUS of Running.

All pods are now running
[root@k8s-master01 ~]# kubectl get pods -n calico-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-666bb9949-dzp68   1/1     Running   0          11m
calico-node-jhcf4                         1/1     Running   4          11m
calico-typha-68b96d8d9c-7qfq7             1/1     Running   2          11m


3.9 Adding the Worker Nodes

Because container images download slowly, errors may appear; the main one is that the CNI (cluster network plugin) is not ready yet. If the network is reachable, just wait patiently. If the bootstrap token has expired (tokens are valid for 24 hours by default), generate a fresh join command on the master with kubeadm token create --print-join-command.

[root@k8s-worker01 ~]# kubeadm join 192.168.233.87:6443 --token bid4gb.831cyurw8u5vl1iy --discovery-token-ca-cert-hash sha256:516b5cae726fcc2b4e8ee0c04bc61a96b92fbfaf8f700b2f8ef67c7c9469443b --cri-socket unix:///var/run/cri-dockerd.sock
[root@k8s-worker02 ~]# kubeadm join 192.168.233.87:6443 --token bid4gb.831cyurw8u5vl1iy --discovery-token-ca-cert-hash sha256:516b5cae726fcc2b4e8ee0c04bc61a96b92fbfaf8f700b2f8ef67c7c9469443b --cri-socket unix:///var/run/cri-dockerd.sock

If the network plugin has not been installed beforehand, kubectl get nodes will show the nodes' STATUS as NotReady

4. Verifying Kubernetes Cluster Availability

List all nodes
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   25m   v1.29.0
k8s-worker01   Ready    <none>          24m   v1.29.0
k8s-worker02   Ready    <none>          24m   v1.29.0

kubectl label node k8s-master01 node-role.kubernetes.io/master=master
kubectl label node k8s-worker01 node-role.kubernetes.io/worker=worker
kubectl label node k8s-worker02 node-role.kubernetes.io/worker=worker

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
k8s-master01   Ready    control-plane,master   140d   v1.29.0
k8s-worker01   Ready    worker                 137d   v1.29.0
k8s-worker02   Ready    worker                 137d   v1.29.0
Check cluster health
[root@k8s-master01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   ok
Check the pods running in the kubernetes cluster
[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS       AGE
coredns-76f75df574-dvwd5               1/1     Running   1 (3h1m ago)   2d6h
coredns-76f75df574-hzzp9               1/1     Running   1 (3h1m ago)   2d6h
etcd-k8s-master01                      1/1     Running   2 (3h1m ago)   2d6h
kube-apiserver-k8s-master01            1/1     Running   2 (3h1m ago)   2d6h
kube-controller-manager-k8s-master01   1/1     Running   2 (3h1m ago)   2d6h
kube-proxy-2gjdc                       1/1     Running   0              3h34m
kube-proxy-b248f                       1/1     Running   0              3h34m
kube-proxy-zntpd                       1/1     Running   2 (3h1m ago)   2d6h
kube-scheduler-k8s-master01            1/1     Running   2 (3h1m ago)   2d6h

Analysis of Problems Encountered

1. Kubernetes image download problems: switch to the Alibaba Cloud repository; in addition, the images must also be loaded onto the other nodes.

2. calico image download problems: switch to the quay.io registry; in addition, the images must also be loaded onto the other nodes.

3. The master node is the operations entry point: the pod resources it lists belong to the entire cluster, not just the master itself. When some pods in a namespace cannot pull their images, the images may simply be missing locally on the nodes those pods were scheduled to.

Use -o wide to see which node each pod is running on

[root@k8s-master01 ~]# kubectl get pods -n calico-system -o wide
NAME                                       READY   STATUS    RESTARTS       AGE     IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-66756674f6-sdmzz   1/1     Running   1 (3h7m ago)   41h     10.244.32.136    k8s-master01   <none>           <none>
calico-node-cg9vl                          1/1     Running   1 (3h7m ago)   41h     192.168.233.87   k8s-master01   <none>           <none>
calico-node-kmh64                          1/1     Running   0              3h39m   192.168.233.88   k8s-worker01   <none>           <none>
calico-node-phq7n                          1/1     Running   0              3h39m   192.168.233.89   k8s-worker02   <none>           <none>
calico-typha-7577588bff-9fxcx              1/1     Running   1 (3h7m ago)   41h     192.168.233.87   k8s-master01   <none>           <none>
calico-typha-7577588bff-9pz92              1/1     Running   0              3h39m   192.168.233.89   k8s-worker02   <none>           <none>
csi-node-driver-6rrvf                      2/2     Running   0              3h4m    10.244.32.138    k8s-master01   <none>           <none>
csi-node-driver-l9jl8                      2/2     Running   0              155m    10.244.69.195    k8s-worker02   <none>           <none>
csi-node-driver-q54hn                      2/2     Running   0              159m    10.244.79.68     k8s-worker01   <none>           <none>