Deploying a Kubernetes Cluster with kubeadm on CentOS 7 (containerd as the Container Runtime)

I. Environment Preparation

  Version information

Name         Version
OS           CentOS Linux release 7.9.2009 (Core)
Kernel       5.4.180-1.el7.elrepo.x86_64
kubeadm      v1.22.7
containerd   1.4.12

  Version requirements
To run containerd, the upstream recommendation is a 4.x or newer kernel. CentOS 7 ships with a 3.10 kernel by default, so upgrading the kernel is recommended.

[Screenshot: the official recommendation, with a Google-translated version]

1. Upgrading the CentOS 7 kernel

  See the earlier article:
  Upgrading the kernel on CentOS 7
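  For convenience, a minimal sketch of the ELRepo long-term-kernel upgrade that the referenced article covers (the repository URL and package name here are assumptions; check them against that article before running):

# Hypothetical sketch: upgrade CentOS 7 to an ELRepo long-term (kernel-lt) kernel.
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

# Install the long-term-support kernel from the elrepo-kernel repository.
yum --enablerepo=elrepo-kernel install -y kernel-lt

# Boot the newly installed kernel by default, reboot, then confirm with `uname -r`.
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot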

2. Differences between containerd and Docker

  [Diagram: architecture]

  [Diagram: Docker as the runtime]

  [Diagram: containerd as the runtime]

Using containerd as the runtime bypasses dockershim and reduces complexity.

  The main differences in command usage between Docker and containerd (via crictl) are as follows:

Operation                      Docker            containerd (crictl)
List local images              docker images     crictl images
Pull an image                  docker pull       crictl pull
Push an image                  docker push       -
Remove a local image           docker rmi        crictl rmi
Inspect details                docker inspect    crictl inspect
List containers                docker ps         crictl ps
Create a container             docker create     crictl create
Start a container              docker start      crictl start
Stop a container               docker stop       crictl stop
Remove a container             docker rm         crictl rm
View logs                      docker logs       crictl logs
View resource usage            docker stats      crictl stats
Run a command in a container   docker exec       crictl exec
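  As a quick illustration of the mapping above, a few crictl invocations a Docker user might reach for (the image tag and the container ID below are placeholders):

# Pull and list an image (equivalent to docker pull / docker images).
crictl pull docker.io/library/nginx:1.19.1
crictl images

# List pods and containers running on this node (roughly docker ps).
crictl pods
crictl ps

# Inspect, read logs from, and exec into a container by its ID (take <container-id> from `crictl ps`).
crictl inspect <container-id>
crictl logs <container-id>
crictl exec -it <container-id> sh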

3. Node initialization (run on all nodes)

  Use clean machines with statically configured IP addresses that will not change, then run the following script.
(Worker nodes do not need kubectl; kubectl is used on the control-plane node.)

#!/bin/bash

# Update installed packages
yum update -y

# Stop and remove firewalld
systemctl stop firewalld
yum remove firewalld -y

# Stop and remove NetworkManager
systemctl stop NetworkManager
yum remove NetworkManager -y

# Synchronize server time
yum install chrony -y
systemctl enable --now chronyd
chronyc sources

# Install iptables (optional; installing after the cluster is built is simpler)
#yum install -y iptables iptables-services && systemctl enable --now iptables.service

# Disable SELinux
setenforce 0
sed -i '/^SELINUX=/cSELINUX=disabled' /etc/selinux/config
getenforce

# Disable swap
swapoff -a # temporary
sed -i '/ swap / s/^/# /g' /etc/fstab # permanent

# Install common tools
yum install -y net-tools sysstat vim wget lsof unzip zip bind-utils lrzsz telnet

# Dependencies for keepalived, nginx, or haproxy (only needed when deploying a highly available cluster)
#yum install -y zlib zlib-devel openssl openssl-devel pcre pcre-devel gcc gcc-c++ automake autoconf make

# If this server previously ran Docker and is being reused for Kubernetes,
# it is recommended to clear /etc/sysctl.conf first.
# The following command deletes every line that is not a comment:
# sed -i '/^#/!d' /etc/sysctl.conf

# Load IPVS kernel modules
# Note: on kernels >= 4.19 (such as the 5.4 kernel used here), nf_conntrack_ipv4
# has been merged into nf_conntrack, so load nf_conntrack instead.
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
yum install ipset ipvsadm -y

# Allow iptables to see bridged traffic
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sysctl --system

cat <<EOF | tee /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
modprobe br_netfilter
lsmod | grep netfilter
sysctl -p /etc/sysctl.d/k8s.conf

# Install containerd
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list containerd.io --showduplicates
yum install -y containerd.io
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: "unix:///run/containerd/containerd.sock"
image-endpoint: "unix:///run/containerd/containerd.sock"
timeout: 10
debug: false
pull-image-on-create: false
disable-pull-on-run: false
EOF

# Switch the sandbox (pause) image to the Aliyun mirror and use the systemd cgroup driver
sed -i "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g"  /etc/containerd/config.toml
#sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
# This script does not pin a containerd version; newer containerd releases already generate
# a "SystemdCgroup = false" line, so flip it to true with the following command:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i "s#https://registry-1.docker.io#https://registry.aliyuncs.com#g"  /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd

# Add the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet, and kubectl
yum list kubeadm --showduplicates
yum install -y kubelet-1.22.7 kubeadm-1.22.7 kubectl-1.22.7 --disableexcludes=kubernetes

# Enable services at boot
systemctl daemon-reload
systemctl enable --now kubelet
# kubelet will now restart every few seconds, crash-looping until kubeadm gives it instructions

# Shell command completion
yum install -y bash-completion
source <(crictl completion bash)
crictl completion bash >/etc/bash_completion.d/crictl
source <(kubectl completion bash)
kubectl completion bash >/etc/bash_completion.d/kubectl
source /usr/share/bash-completion/bash_completion
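  Before moving on, it is worth confirming on each node that the script took effect; a quick read-only sanity check (these commands only inspect state):

# Kernel version and swap: expect a 5.x kernel and no swap entries.
uname -r
swapon -s

# The bridge/netfilter sysctls written by the script should all report 1.
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

# containerd should be active and configured with the systemd cgroup driver.
systemctl is-active containerd
grep SystemdCgroup /etc/containerd/config.toml

# kubelet is expected to be restart-looping at this point; that is normal until kubeadm init/join runs.
systemctl status kubelet --no-pager | head -n 5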

If you prefer installing from binaries, here is a script that installs runc, containerd, and kubectl from release binaries (the versions in the script were the latest at the time of writing; adjust to your situation):

# runc
wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc
# containerd
wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.6.8/containerd-1.6.8-linux-amd64.tar.gz
tar Cxzvf /usr/local containerd-1.6.8-linux-amd64.tar.gz
# Fetch the raw systemd unit file (the GitHub "blob" URL would return an HTML page, not the unit file)
wget https://ghproxy.com/https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
cp containerd.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable --now containerd

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Point the pause image at the Shanghai Jiao Tong University (SJTU) mirror
sed -i 's/k8s.gcr.io/k8s-gcr-io.mirrors.sjtug.sjtu.edu.cn/' /etc/containerd/config.toml
systemctl restart containerd.service
systemctl status containerd.service

#kubectl
curl -LO https://dl.k8s.io/release/v1.25.1/bin/linux/amd64/kubectl
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client --output=yaml

4. Rename the hosts and update the hosts file

# Set the hostname on each server (run the matching command on the corresponding machine)
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
	
# Update the hosts file (adjust to your environment, then run on all nodes)
cat >> /etc/hosts << EOF 
192.168.1.15 k8s-master
192.168.1.65 k8s-node1
192.168.1.26 k8s-node2
EOF
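A quick check that the names resolve consistently from every node (the hostnames and IPs are the example values above):

# Show the hostname that kubelet will register with.
hostnamectl status | grep "Static hostname"

# Confirm each name resolves through /etc/hosts and answers a ping.
for h in k8s-master k8s-node1 k8s-node2; do
  getent hosts "$h"
  ping -c 1 -W 1 "$h" > /dev/null && echo "$h reachable" || echo "$h NOT reachable"
done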

  All of the steps above must be performed on every host in the Kubernetes cluster.

II. Deploying the Kubernetes Master

1. Common kubeadm commands

Command             Effect
kubeadm init        Bootstraps a control-plane node
kubeadm join        Bootstraps a worker node and joins it to the cluster
kubeadm upgrade     Upgrades a Kubernetes cluster to a newer version
kubeadm config      Configures a cluster for kubeadm upgrade if it was initialized with kubeadm v1.7.x or lower
kubeadm token       Manages the tokens used by kubeadm join
kubeadm reset       Reverts any changes made to the host by kubeadm init or kubeadm join
kubeadm certs       Manages Kubernetes certificates
kubeadm kubeconfig  Manages kubeconfig files
kubeadm version     Prints the kubeadm version
kubeadm alpha       Previews a set of features made available for gathering feedback from the community

  Example deployment command

# Deploy the Kubernetes master
	
kubeadm init \
--apiserver-advertise-address=192.168.1.15 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

2. Deploying with a configuration file (recommended)

  The default configuration

# Print the default init configuration
kubeadm config print init-defaults
# List the required images
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
# Export the default configuration to the current directory
kubeadm config print init-defaults > kubeadm.yaml
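The exported defaults only contain the InitConfiguration and ClusterConfiguration documents. The kube-proxy and kubelet sections used in the file below can be generated as well; a sketch using the --component-configs flag (available in kubeadm v1.22):

# Also print default KubeletConfiguration and KubeProxyConfiguration documents,
# matching the last two sections of kubeadm-config.yaml below.
kubeadm config print init-defaults \
  --component-configs KubeletConfiguration,KubeProxyConfiguration > kubeadm.yaml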

  Reference configuration file

kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: qjbkjd.zp1ta327pwur2k8g
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.15
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock 
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.22.7
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

This configuration selects the IPVS proxy mode and the systemd cgroup driver (kubelet and containerd must use the same cgroup driver, and systemd is the officially recommended choice), and pulls images from the Aliyun mirror. Fill in the rest according to your environment.

  Pre-pull the images

kubeadm config images pull --config kubeadm-config.yaml

3. Cluster initialization

kubeadm init --config kubeadm-config.yaml

  Partial output

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.15:6443 --token qjbkjd.zp1ta327pwur2k8g \
	--discovery-token-ca-cert-hash sha256:9bbdaf992cd0becdb2de36fc6fccbaa5034733bcc60c34eadbe61fcd83a6d9e5

  Follow the prompt to set up the kubeconfig

mkdir -p $HOME/.kube && \
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && \
chown $(id -u):$(id -g) $HOME/.kube/config
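  Once the control plane is up, you can confirm that the choices made in kubeadm-config.yaml (IPVS mode, systemd cgroup driver) took effect; a read-only sketch:

# kube-proxy mode: the kube-proxy ConfigMap should contain "mode: ipvs",
# and ipvsadm should list virtual servers once kube-proxy is running.
kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"
ipvsadm -Ln | head -n 20

# kubelet cgroup driver: the config written by kubeadm should say "cgroupDriver: systemd".
grep cgroupDriver /var/lib/kubelet/config.yaml

# containerd side: SystemdCgroup should be true.
grep SystemdCgroup /etc/containerd/config.toml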

  Run the join command on the other nodes to add them to the cluster.
Example:

[root@k8s-node1 ~]# kubeadm join 192.168.1.15:6443 --token qjbkjd.zp1ta327pwur2k8g \
> --discovery-token-ca-cert-hash sha256:9bbdaf992cd0becdb2de36fc6fccbaa5034733bcc60c34eadbe61fcd83a6d9e5
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

4. Deploying the Calico plugin

  Deploy the network plugin using the "Kubernetes API datastore" install option (for 50 nodes or fewer).

  Download version 3.20:

curl -O https://docs.projectcalico.org/archive/v3.20/manifests/calico.yaml

  To use the latest mainline version instead:

curl -O https://docs.projectcalico.org/manifests/calico.yaml

When using the Calico plugin, it is best for the network interface to have a common name such as eth* or eno*. Names like enp0s6 can cause some pods to fail. This problem is most common on physical servers; virtual machines and cloud hosts usually have uniform interface names.

To rename the interface:

  See "Renaming a network interface on CentOS 7".
  If you prefer not to rename the interface, you can instead specify it in the YAML file. Example:

      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: docker.io/calico/node:v3.20.4
          envFrom:
          - configMapRef:
              # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
              name: kubernetes-services-endpoint
              optional: true
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Two added lines: auto-detect the node IP on interfaces matching eno*
            - name: IP_AUTODETECTION_METHOD
              value: "interface=eno.*"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"

  Adjust the manifest to your environment, for example the pod network CIDR:

POD_CIDR="10.244.0.0/16" && sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml

  Deploy with kubectl apply

kubectl apply -f calico.yaml

  Check the pods

kubectl get pods --all-namespaces -o wide

  After waiting a while, check how the cluster is running and look at the nodes

kubectl get nodes
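  If you prefer to block until everything settles rather than polling by hand, a small sketch:

# Wait up to 5 minutes for every node to report Ready.
kubectl wait --for=condition=Ready node --all --timeout=300s

# Wait for all kube-system pods (calico, coredns, kube-proxy, etc.) to become Ready.
kubectl wait --for=condition=Ready pod --all -n kube-system --timeout=300s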

  Example output

[root@k8s-master ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-654b987fd9-xn65b   1/1     Running   0          73s     10.244.235.193   k8s-master   <none>           <none>
kube-system   calico-node-57wrb                          1/1     Running   0          73s     192.168.1.65     k8s-node1    <none>           <none>
kube-system   calico-node-dkw4m                          1/1     Running   0          73s     192.168.1.26     k8s-node2    <none>           <none>
kube-system   calico-node-hjp5d                          1/1     Running   0          73s     192.168.1.15     k8s-master   <none>           <none>
kube-system   coredns-7f6cbbb7b8-b8xw9                   1/1     Running   0          4m4s    10.244.235.194   k8s-master   <none>           <none>
kube-system   coredns-7f6cbbb7b8-srp2g                   1/1     Running   0          4m4s    10.244.235.195   k8s-master   <none>           <none>
kube-system   etcd-k8s-master                            1/1     Running   0          4m19s   192.168.1.15     k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          4m18s   192.168.1.15     k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          4m11s   192.168.1.15     k8s-master   <none>           <none>
kube-system   kube-proxy-75dc9                           1/1     Running   0          4m4s    192.168.1.15     k8s-master   <none>           <none>
kube-system   kube-proxy-sp729                           1/1     Running   0          3m16s   192.168.1.26     k8s-node2    <none>           <none>
kube-system   kube-proxy-wxmf6                           1/1     Running   0          3m19s   192.168.1.65     k8s-node1    <none>           <none>
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          4m19s   192.168.1.15     k8s-master   <none>           <none>
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   4m33s   v1.22.7
k8s-node1    Ready    <none>                 3m31s   v1.22.7
k8s-node2    Ready    <none>                 3m28s   v1.22.7

  With the nodes and pods all showing Ready, the Kubernetes cluster has been deployed successfully.
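  As an optional smoke test of pod networking and cluster DNS (the busybox tag is just a commonly used example image):

# Start a throwaway pod and resolve the kubernetes service through CoreDNS.
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default

# With the service CIDR above, the output should show the cluster DNS server (10.96.0.10 by default)
# resolving kubernetes.default to 10.96.0.1.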

5. Checking certificate expiration

kubeadm certs check-expiration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Mar 03, 2023 09:27 UTC   364d            ca                      no      
apiserver                  Mar 03, 2023 09:27 UTC   364d            ca                      no      
apiserver-etcd-client      Mar 03, 2023 09:27 UTC   364d            etcd-ca                 no      
apiserver-kubelet-client   Mar 03, 2023 09:27 UTC   364d            ca                      no      
controller-manager.conf    Mar 03, 2023 09:27 UTC   364d            ca                      no      
etcd-healthcheck-client    Mar 03, 2023 09:27 UTC   364d            etcd-ca                 no      
etcd-peer                  Mar 03, 2023 09:27 UTC   364d            etcd-ca                 no      
etcd-server                Mar 03, 2023 09:27 UTC   364d            etcd-ca                 no      
front-proxy-client         Mar 03, 2023 09:27 UTC   364d            front-proxy-ca          no      
scheduler.conf             Mar 03, 2023 09:27 UTC   364d            ca                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 29, 2032 09:27 UTC   9y              no      
etcd-ca                 Feb 29, 2032 09:27 UTC   9y              no      
front-proxy-ca          Feb 29, 2032 09:27 UTC   9y              no

For a cluster deployed with kubeadm, most of the certificates are valid for one year.
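Before they expire, the leaf certificates can be renewed with kubeadm; a sketch (run on the control-plane node):

# Renew all certificates managed by kubeadm, then re-check the expiration dates.
kubeadm certs renew all
kubeadm certs check-expiration

# The control-plane static pods (apiserver, controller-manager, scheduler, etcd) must be restarted
# to pick up the new certificates, and admin.conf should be copied to ~/.kube/config again.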

6. Tearing down the cluster

If the deployment runs into problems, you can tear down the cluster and redeploy.

Run the following script on the worker nodes:

vim k8s-reset.sh

#!/bin/bash

kubeadm reset
rm -rf /etc/cni/net.d/ /root/.kube/
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C

#crictl rmi $(crictl images -q)

On the master node:

kubectl delete nodes k8s-node1
kubectl delete nodes k8s-node2

After the nodes are deleted, run k8s-reset.sh on the master node as well.
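If the cluster is still running workloads, it is cleaner to drain each node before deleting it; a sketch using the node names from this article:

# Evict workloads from the node, ignoring DaemonSet-managed pods, then remove it from the cluster.
kubectl drain k8s-node1 --ignore-daemonsets --delete-emptydir-data
kubectl delete node k8s-node1

# Repeat for the remaining workers, run k8s-reset.sh on each of them, and finally on the master.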

III. Deploying Common Add-ons

1. Deploying the Dashboard

mkdir -p /home/yaml/dashboard && cd /home/yaml/dashboard
# Download
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
# Rename
mv recommended.yaml dashboard-recommended.yaml

If the download is blocked, an alternative download link:

https://download.csdn.net/download/weixin_44254035/83056768

  Edit dashboard-recommended.yaml and change the Service type to NodePort:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort    # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30005    # added
  selector:
    k8s-app: kubernetes-dashboard
kubectl apply -f dashboard-recommended.yaml

  Create a ServiceAccount and a ClusterRoleBinding
vim auth.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
kubectl apply -f auth.yaml

  Get the token needed to access the Kubernetes Dashboard

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-9vxkn
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 555b7e24-7892-4706-93f8-1126ece36ecf

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ildhc0dkNlE3WTJCWFpwRkdCYzJqMDRvSXA4dnlaajVINkFFWUNBcWJKV0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTl2eGtuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1NTViN2UyNC03ODkyLTQ3MDYtOTNmOC0xMTI2ZWNlMzZlY2YiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.kCeNyv9rqi0auPFPyDMEdR56MQF6miI1XOSXAFB4-mNwGnhfiw7N7CygTT5I7alEOkp5Utwq8VSdSDf1rJFuJDGhEgmOdBhD4ea25Wfzal-aOajAS4AUXIjGsbv1ifKI0c-W36oj2U7f8pyOZMA80ufYX0uhdxxD2lrcOasE0YrGa38eyB4Br3olwPhHk5ooFXpzIXOrp7lltli_35pDCmjKr49QrzbjPOa5sU0CjeiNKY4Dpd0Vo9dCztvY2IY9oKTXHI267pHfEHmkSSj0wO5pTXVV49hLb_Xk_Pm1N1WLCD23MeToFtPxgTC9dVz6ZANqZ00iwO3fknkkmFOFoQ

  Check that the services are running

kubectl get services --all-namespaces -o wide

  Open in a browser

https://192.168.1.15:30005

[Screenshot: Dashboard login page]

Paste the copied token to log in.

  A simple test
vim nginx-test.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 8
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.1
        ports:
        - containerPort: 80
kubectl apply -f nginx-test.yaml


  Logging in with a kubeconfig file

Get the token:

kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
admin-user-token-wzwk5             kubernetes.io/service-account-token   3      15h
default-token-9qd7q                kubernetes.io/service-account-token   3      15h
kubernetes-dashboard-certs         Opaque                                0      15h
kubernetes-dashboard-csrf          Opaque                                1      15h
kubernetes-dashboard-key-holder    Opaque                                2      15h
kubernetes-dashboard-token-tlgkl   kubernetes.io/service-account-token   3      15h

Use the admin-user-token-** secret:

DASH_TOKEN=$(kubectl -n kubernetes-dashboard get secret admin-user-token-wzwk5 -o jsonpath={.data.token} | base64 -d)
# Set the cluster entry
cd /etc/kubernetes/pki
kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://192.168.1.15:6443" --embed-certs=true  --kubeconfig=/root/dashboard-admin.conf

# Set the user entry
kubectl config set-credentials dashboard-admin --token=$DASH_TOKEN --kubeconfig=/root/dashboard-admin.conf
# Set the context entry
kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/dashboard-admin.conf
# Set the current context
kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/dashboard-admin.conf
# Download the kubeconfig to your workstation (sz comes from the lrzsz package installed earlier)
sz /root/dashboard-admin.conf
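A quick check that the generated kubeconfig actually authenticates before handing it out (read-only):

# List nodes using only the dashboard-admin kubeconfig; this should succeed because the
# token is bound to the cluster-admin role.
kubectl --kubeconfig=/root/dashboard-admin.conf get nodes
kubectl --kubeconfig=/root/dashboard-admin.conf config current-context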

2. Deploying metrics-server

  Download URL

https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.2/components.yaml

If the download is blocked, an alternative download link:

https://download.csdn.net/download/weixin_44254035/83205018

  Modify the YAML file

spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls    # skip kubelet certificate verification
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.5.2    # Aliyun mirror
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server

  Deploy with kubectl apply -f

kubectl apply -f metrics-0.5.2.yaml
# Check the pods and wait for them to become Ready
kubectl get pods -n kube-system

If there are errors, check the pod's logs to diagnose the problem:
kubectl logs <pod-name> -n kube-system
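It is also worth checking that the APIService registered by metrics-server has become available; a sketch:

# The v1beta1.metrics.k8s.io APIService should show AVAILABLE=True once metrics-server is healthy.
kubectl get apiservices v1beta1.metrics.k8s.io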

  Use the kubectl top command

kubectl top nodes
kubectl top pods

Example output

[root@k8s-master metrics]# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master   382m         9%     1394Mi          36%       
k8s-node1    149m         0%     897Mi           2%        
k8s-node2    368m         0%     1097Mi          0%        
[root@k8s-master metrics]# kubectl top pods
NAME                                CPU(cores)   MEMORY(bytes)   
nginx-deployment-5bbdfb5879-5thlk   0m           6Mi             
nginx-deployment-5bbdfb5879-6djzg   0m           5Mi             
nginx-deployment-5bbdfb5879-74ccs   0m           4Mi             
nginx-deployment-5bbdfb5879-bgb8b   0m           6Mi             
nginx-deployment-5bbdfb5879-cvnf9   0m           5Mi             
nginx-deployment-5bbdfb5879-kfb8p   0m           6Mi             
nginx-deployment-5bbdfb5879-l9f7r   0m           5Mi             
nginx-deployment-5bbdfb5879-ntvd6   0m           6Mi