Building a Kubernetes v1.25.6 Cluster with cri-dockerd

[toc]


1. Installation Requirements

Before you begin, the machines used to deploy the Kubernetes cluster must meet the following requirements:

  • One or more machines running CentOS 7.x x86_64
  • Hardware: 2 GB RAM or more, 2 or more CPUs, 30 GB of disk or more
  • Full network connectivity between all machines in the cluster
  • Outbound internet access, needed to pull images
  • Swap disabled

2. Environment Preparation

Kubernetes architecture diagram (figure)

[Cluster environment]

| Role | IP |
| --- | --- |
| k8s-master-01 | 192.168.4.114 |
| k8s-node1 | 192.168.4.115 |
| k8s-node2 | 192.168.4.116 |
| k8s-node3 | 192.168.4.118 |

[Versions]

| Component | Version |
| --- | --- |
| docker | 23.0.1 |
| kubeadm | 1.25.6 |
| cri-dockerd | 0.3.1 |
| calico | v3.25.0 |
| kube-prometheus | 0.12 |
| grafana | 9.3.2 |
| metrics | 2.7.0 |
| node_exporter | 2.7.0 |
| blackbox-exporter | 0.23.0 |
#Set the hostname:
hostnamectl set-hostname <hostname>
#Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

#Disable swap:
# temporary
swapoff -a  
# permanent
sed -ri 's/.*swap.*/#&/' /etc/fstab  

#Add hosts entries on the master:
cat >> /etc/hosts << EOF
192.168.4.114 k8s-master-01 master.example.com
192.168.4.115 k8s-node1 node1.example.com
192.168.4.116 k8s-node2 node2.example.com
192.168.4.118 k8s-node3 node3.example.com
EOF

#Pass bridged IPv4 traffic to iptables chains. Pods on the same node communicate at layer 2 over the Linux bridge; without this setting, return traffic does not pass back through iptables, so a Pod's request to a Service may never see a reply and times out. Setting bridge-nf-call-iptables makes layer-2 bridge traffic also traverse the layer-3 iptables rules. vm.swappiness=0 effectively forbids swap use, allowing it only when the system would otherwise OOM.
cat <<EOF> /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
modprobe br_netfilter &&
sysctl -p /etc/sysctl.d/kubernetes.conf
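Optionally, verify that the keys kube-proxy and the CNI depend on are really in effect, and make sure br_netfilter is loaded on every boot. (Note: net.ipv4.tcp_tw_recycle was removed in kernel 4.12+, so after the kernel upgrade below sysctl may warn about that line; it can simply be deleted.)

#Verify the bridge/forwarding settings
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
#Load br_netfilter automatically at boot
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf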


#Configure YUM repositories
cd /etc/yum.repos.d/
mkdir back && mv *.repo back
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all -y && yum makecache fast -y

#Time synchronization:
yum install ntpdate -y
ntpdate time.windows.com

#Set the time zone
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

#Set the system locale
sudo echo 'LANG="en_US.UTF-8"' >> /etc/profile;source /etc/profile

#Switch the firewall to iptables and flush all rules
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables
iptables -F && iptables-save

#安装常用软件
yum install wget vim lrzsz -y
rz    # upload the kernel-lt rpm used below
#Upgrade the kernel
cp /etc/default/grub /etc/default/grub-bak
rpm -ivh kernel-lt-5.4.186-1.el7.elrepo.x86_64.rpm 
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-editenv list
yum makecache
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
rm -f kernel-lt-5.4.186-1.el7.elrepo.x86_64.rpm
reboot
uname -r
yum update -y
#If IPVS is not enabled, kube-proxy falls back to iptables, which is less efficient, so enabling the IPVS kernel modules is recommended
yum install ipset ipvsadm -y
modprobe br_netfilter

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe \${kernel_module}
fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
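As an alternative to the legacy /etc/sysconfig/modules mechanism above (a sketch; either approach works on CentOS 7), the core modules can also be loaded at boot through systemd-modules-load:

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load
lsmod | grep -e ip_vs -e nf_conntrack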
#Make journald logs persistent
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Maximum disk usage: 10G
SystemMaxUse=10G
# Maximum size of a single journal file: 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald

3. Installing Docker

#Remove any existing Docker packages
sudo yum -y remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-ce-cli \
                  docker-engine
#Check whether any Docker components remain
rpm -qa|grep docker
#If any remain, remove them with yum -y remove XXX, for example:
#yum remove docker-ce-cli

#Install the latest Docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
#To install a specific version, list the available versions with: yum list docker-ce --showduplicates | sort -r
#then install it with yum -y install docker-ce-xx-xx.x, for example:
#yum  -y install docker-ce-20.10.10-3.el7  

#Reload and start
systemctl daemon-reload
systemctl start docker
systemctl enable --now docker
docker ps 
#Check the current Docker data directory
docker info |grep "Docker Root Dir"
docker --version
#Configure a registry mirror and the Docker data root:
mkdir -p /data/docker
cat >/etc/docker/daemon.json <<EOF
{
"data-root": "/data/docker" ,
 "exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://almtd3fa.mirror.aliyuncs.com"]
}
EOF
#Remove the old data directory
rm -rvf  /var/lib/docker
systemctl daemon-reload
systemctl restart docker
docker ps 
#Confirm the data directory has changed
docker info |grep "Docker Root Dir"
docker --version
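Optionally confirm that Docker now reports the systemd cgroup driver (it must match the kubelet's driver later) and the new data root:

docker info --format '{{.CgroupDriver}}'    # expected: systemd
docker info --format '{{.DockerRootDir}}'   # expected: /data/docker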

4. Installing cri-dockerd [all nodes]

Since Kubernetes v1.24 the built-in dockershim has been removed, so to keep using Docker Engine as the container runtime, cri-dockerd must be installed as the CRI adapter (which is why Docker was installed first).

[Installation guide] https://github.com/Mirantis/cri-dockerd/blob/master/README.md

#Install
rpm -ivh https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd-0.3.1-3.el7.x86_64.rpm

#Modify the cri-docker.service file
sed -i "s/ExecStart/#ExecStart/g"  /usr/lib/systemd/system/cri-docker.service
sed -i '11i ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8' /usr/lib/systemd/system/cri-docker.service

#Must stop first; do not use restart here
systemctl daemon-reload
systemctl stop cri-docker
systemctl start cri-docker
#Reload and start cri-docker; its status must be running
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
systemctl restart cri-docker.socket 
systemctl status  cri-docker.socket 
systemctl status cri-docker
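Optionally, point crictl at the cri-dockerd socket so it can be used to inspect this runtime (a sketch; crictl itself ships with the cri-tools package, which is pulled in as a dependency of kubeadm in the next section):

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
timeout: 10
debug: false
EOF
#Once crictl is installed, this should answer without errors
crictl info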

5. Installing kubeadm/kubelet

1) Install kubeadm

Because releases change frequently, pin the version explicitly:

yum install -y kubelet-1.25.6 kubeadm-1.25.6 kubectl-1.25.6

#To keep the cgroup driver used by Docker consistent with the one used by the kubelet:
#sed -i  's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"/g' /etc/sysconfig/kubelet
#cat /etc/sysconfig/kubelet

#Just enable kubelet to start on boot; no config file has been generated yet, so it will start automatically once the cluster is initialized.
systemctl enable --now kubelet
systemctl is-active kubelet
kubelet --version

2) Initialize the master with kubeadm

  • Run on 192.168.4.114 (Master).
#Reset any previous kubeadm state on this node
kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock

#Initialize the master
kubeadm init --kubernetes-version=v1.25.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.4.114 \
--cri-socket unix:///var/run/cri-dockerd.sock \
--image-repository=registry.aliyuncs.com/google_containers

# On servers in mainland China the image repository must be specified; the default upstream registry is blocked and unusable
#  --ignore-preflight-errors=all  ignores preflight errors (left commented out)
  • --apiserver-advertise-address: the address the cluster is advertised on
  • --image-repository: the default registry k8s.gcr.io is unreachable from China, so the Aliyun mirror is used instead
  • --kubernetes-version: the K8s version, matching the packages installed above
  • --service-cidr: the cluster-internal virtual (Service) network, the unified entry point for accessing Pods
  • --pod-network-cidr: the Pod network; it must match the CNI manifest applied below
  • --cri-socket unix:///var/run/cri-dockerd.sock: selects cri-dockerd as the CRI (an equivalent kubeadm config file is sketched after this list)
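The same initialization can also be expressed declaratively; a minimal sketch of an equivalent kubeadm config file (the file name kubeadm-config.yaml is arbitrary, and the values simply mirror the flags above):

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.4.114
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.25.6
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
EOF
#kubeadm init --config kubeadm-config.yaml   # alternative to the flag form above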
#Configure kubectl for the current user (standard method)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Alternatively, point kubectl at the admin kubeconfig directly (made persistent via .bash_profile)
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
$ kubectl get nodes
NAME            STATUS     ROLES           AGE   VERSION
k8s-master-01   NotReady   control-plane   20s   v1.25.6

3) Install shell completion

# Configure command completion (needed on every node)
 yum -y install bash-completion
# Generate kubectl and kubeadm completion scripts; they take effect at the next login
 kubectl completion bash > /etc/bash_completion.d/kubectl
 kubeadm completion bash > /etc/bash_completion.d/kubeadm
#Log out and back in for the completion to take effect
 exit
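To enable completion in the current shell without logging out again (optional):

source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
source <(kubeadm completion bash)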

4) Changing the data directory

  • https://blog.csdn.net/qq_39826987/article/details/126473129?spm=1001.2014.3001.5502

6. Joining Kubernetes Nodes

Run on 192.168.4.115/116/118 (the Node machines).

Node deployment
Following the hint printed at the end of kubeadm init, join each node to the master: copy the join command from your own init log, paste it on each node, and press Enter (note: you must append --cri-socket unix:///var/run/cri-dockerd.sock, otherwise the join will fail).

kubeadm join 192.168.4.114:6443 --token oomoka.p189u3qkokp5ylob \
	--discovery-token-ca-cert-hash sha256:68a0d0c4f036f163e7093b7a5e2ce2fcd47b7bf17260245ba6c697d72520fd1c  --cri-socket unix:///var/run/cri-dockerd.sock

The default token is valid for 24 hours; once it expires it can no longer be used and a new one must be created, which can be done quickly with:

kubeadm token create --print-join-command
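If only the CA certificate hash for --discovery-token-ca-cert-hash needs to be recomputed (optional; assumes the default PKI path on the master):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'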

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/

7. Deploying the Container Network (CNI)

1) Official docs

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

Note: deploy only one of the options below; Calico is recommended.

Calico is a pure layer-3 data center networking solution. It supports a wide range of platforms, including Kubernetes and OpenStack.

On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) that handles data forwarding, and each vRouter propagates the routes of the workloads running on it to the rest of the Calico network via BGP.

In addition, Calico implements Kubernetes network policy, providing ACL functionality.

[Official documentation]

  • https://docs.projectcalico.org/getting-started/kubernetes/quickstart

[Git repository]

  • https://github.com/projectcalico/calico

[Manifest]

  • https://docs.tigera.io/archive/v3.25/manifests/calico.yaml

[Supported versions]

  • https://docs.tigera.io/archive/v3.25/getting-started/kubernetes/requirements

Supported versions
Calico v3.25 is tested against the following Kubernetes versions:

  • v1.22
  • v1.23
  • v1.24
    Because of changes in the Kubernetes API, Calico v3.25 will not run on Kubernetes v1.15 or earlier. v1.16-v1.18 may work, but they are no longer tested. Newer versions may also work, but upgrading to a Calico release tested against the newer Kubernetes version is recommended.

2) Download and modify the manifest

# --no-check-certificate
wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml --no-check-certificate

After downloading, you still need to change the Pod network (CALICO_IPV4POOL_CIDR) defined in the manifest so that it matches the value passed to kubeadm init earlier.

Make the following changes, then apply the manifest (the kubectl apply command comes after the troubleshooting notes below):

#Change to:
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"


                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # add the following
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens160"
    # ens160 is the name of the local NIC

3) Switching the mode to BGP

#Change CALICO_IPV4POOL_IPIP to Never. This only works for a first-time deployment; if IPIP mode has already been deployed, the change has no effect unless Calico is removed and redeployed with the modification.
#Switching from IPIP to BGP mode improves network performance, but it requires that the nodes can route to each other directly (no overlay between them).
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
#Change in the yaml to:
            - name: CALICO_IPV4POOL_IPIP
              value: "Never"
[Error] Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget

[Cause] The API version is out of date and needs to be updated:
change apiVersion: policy/v1beta1 to apiVersion: policy/v1
[Error]
 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0b2f62ae0e97048b9d2b94bd5b2642fd49d1bc977c8db06493d66d1483c6cf45": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
 [Fix]
 # add the following
  - name: IP_AUTODETECTION_METHOD
    value: "interface=ens160"
    # ens160 is the name of the local NIC
#Apply
kubectl apply -f calico.yaml
kubectl get pods -n kube-system
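Optionally wait for the calico-node pods to become Ready and confirm that the nodes switch from NotReady to Ready:

kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
kubectl get nodes -o wide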

4) Switching kube-proxy to IPVS mode

  • Before switching, confirm the kernel has been upgraded (5.x)
#Change the proxy mode to IPVS
[root@k8s-master ~]# kubectl edit -n kube-system cm kube-proxy
Change: mode: ""
to:     mode: "ipvs"
then :wq to save and quit

#Find the kube-proxy pods in the kube-system namespace and delete them; Kubernetes recreates them automatically, and the new pods pick up the IPVS mode just configured
kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'

#Flush the old firewall rules
iptables -t filter -F; iptables -t filter -X; iptables -t nat -F; iptables -t nat -X;

#Check the logs to confirm IPVS is in use; in the command below, replace the pod name with one from your own output
[root@k8s-master ~]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-2bvgz                           1/1     Running   0              34s
kube-proxy-hkctn                           1/1     Running   0              35s
kube-proxy-pjf5j                           1/1     Running   0              38s
kube-proxy-t5qbc                           1/1     Running   0              36s
[root@k8s-master ~]# kubectl logs -n kube-system kube-proxy-pjf5j
I0303 04:32:34.613408       1 node.go:163] Successfully retrieved node IP: 192.168.50.116
I0303 04:32:34.613505       1 server_others.go:138] "Detected node IP" address="192.168.50.116"
I0303 04:32:34.656137       1 server_others.go:269] "Using ipvs Proxier"
I0303 04:32:34.656170       1 server_others.go:271] "Creating dualStackProxier for ipvs"

#Use ipvsadm to confirm that IPVS is working
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.0.1:32684 rr
  -> 10.244.36.65:80              Masq    1      0          0         
TCP  192.168.50.114:32684 rr
  -> 10.244.36.65:80              Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 192.168.50.114:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.36.64:53              Masq    1      0          0         
  -> 10.244.169.128:53            Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.36.64:9153            Masq    1      0          0         
  -> 10.244.169.128:9153          Masq    1      0          0         
TCP  10.105.5.170:80 rr
  -> 10.244.36.65:80              Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.36.64:53              Masq    1      0          0         
  -> 10.244.169.128:53            Masq    1      0          0         

8. Testing the Kubernetes Cluster

  • Verify that Pods run
  • Verify Pod network communication
  • Verify DNS resolution (a check is sketched below)

Create a Pod in the Kubernetes cluster and verify that it runs correctly:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

Access URL: http://NodeIP:Port
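To also verify in-cluster DNS resolution (the third check listed above), a throwaway busybox Pod can be used (a sketch; busybox:1.28 is chosen because its nslookup works reliably):

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default
#Expect the kubernetes Service (10.96.0.1) to be resolved via the cluster DNS at 10.96.0.10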

#View logs
journalctl -u kube-scheduler 
journalctl -xefu kubelet # follow in real time
journalctl -u kube-apiserver 
journalctl -u kubelet |tail
journalctl -xe

[Reference] https://blog.csdn.net/agonie201218/article/details/127878279

9. Deploying the Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

By default the Dashboard is only reachable from inside the cluster. Change its Service to NodePort to expose it externally:

$ vi recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
...

$ kubectl apply -f recommended.yaml
$ kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6b4884c9d5-gl8nr   1/1     Running   0          13m
kubernetes-dashboard-7f99b75bf4-89cds        1/1     Running   0          13m

Access URL: https://NodeIP:30000

Create a service account and bind it to the default cluster-admin role:

# Create the user
kubectl create serviceaccount dashboard-admin -n kube-system
# Grant the user the cluster-admin role
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
#Work around errors on the web page (note: this grants cluster-admin to anonymous users and should only be used in test environments)
kubectl create clusterrolebinding system:anonymous   --clusterrole=cluster-admin  --user=system:anonymous

#Access: https://<node IP>:30000/

https://192.168.4.115:30000/

# Get the user token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output.
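Note: since Kubernetes v1.24, creating a ServiceAccount no longer creates a long-lived token Secret automatically, so on this v1.25 cluster the describe secrets command above may return nothing. In that case a token can be issued directly:

kubectl -n kube-system create token dashboard-admin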

10. Fixing Chrome Being Unable to Log In to the Dashboard

#List the existing secrets
kubectl get secrets  -n  kubernetes-dashboard

#Delete the default secret; a new one will be created from a self-signed certificate
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard

#Create a CA
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -out ca.crt -days 3650 -subj "/C=CN/ST=HB/L=WH/O=DM/OU=YPT/CN=CA"
openssl x509 -in ca.crt -noout -text

#Issue the Dashboard certificate
openssl genrsa -out dashboard.key 2048
openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=white/CN=dashboard"
openssl x509 -req -in dashboard.csr  -CA ca.crt -CAkey ca.key -CAcreateserial -out dashboard.crt -days 3650


#Create the new secret
kubectl create secret generic kubernetes-dashboard-certs  --from-file=dashboard.crt=/opt/dashboard/dashboard.crt --from-file=dashboard.key=/opt/dashboard/dashboard.key -n kubernetes-dashboard

# Alternative: create the secret from the apiserver certificates instead (commented out)
#kubectl create secret generic #kubernetes-dashboard-certs \
#--from-file=/etc/kubernetes/pki/apiserver.key \
#--from-file=/etc/kubernetes/pki/apiserver.crt \ 
#-n kubernetes-dashboard

# vim recommended.yaml - add the certificate paths
          args:
            - --auto-generate-certificates
            - --tls-key-file=dashboard.key
            - --tls-cert-file=dashboard.crt
#           - --tls-key-file=apiserver.key
#           - --tls-cert-file=apiserver.crt

#Re-apply
kubectl apply  -f recommended.yaml 

#Delete the pods so the change takes effect
kubectl get pod -n kubernetes-dashboard | grep -v NAME | awk '{print "kubectl delete po " $1 " -n kubernetes-dashboard"}' | sh

Access: https://192.168.4.116:30000/#/login

# Get the user token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

11. Installing metrics-server

#Pull the image
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.5.0
#Retag it
docker tag   registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.5.0 k8s.gcr.io/metrics-server/metrics-server:v0.5.0
#Remove the original tag
docker rmi -f registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.5.0
#Create the working directory and save the manifest below into it (e.g. as components.yaml)
mkdir -p /opt/k8s/metrics-server
cd /opt/k8s/metrics-server
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: k8s.gcr.io/metrics-server/metrics-server:v0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
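Apply the manifest and check that metrics start flowing (assuming the YAML above was saved as components.yaml in /opt/k8s/metrics-server):

kubectl apply -f components.yaml
kubectl -n kube-system get pods -l k8s-app=metrics-server
#After a minute or two, node and pod metrics should be available
kubectl top nodes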


12. Managing Containers with crictl

crictl is a command-line interface for CRI-compatible container runtimes. You can use it to inspect and debug container runtimes and applications on a Kubernetes node. crictl and its source code live in the cri-tools repository.

[Video] https://asciinema.org/a/179047

[Usage docs] https://kubernetes.io/zh/docs/tasks/debug-application-cluster/crictl/
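A few typical invocations against the cri-dockerd endpoint configured earlier (a sketch; the exact output depends on what is running on the node):

crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps   #running containers
crictl pods      #pod sandboxes on this node (uses /etc/crictl.yaml if present)
crictl images    #images known to the runtime
crictl logs <container-id>    #logs of a specific container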

13. Common Commands

1. Common commands

#Check the status of the master components:
kubectl get cs

#Check node status:
kubectl get node

#Show the URLs proxied by the API server:
kubectl cluster-info

#Dump detailed cluster information:
kubectl cluster-info dump

#Describe a resource:
kubectl describe <resource> <name>

#List the API resources known to the cluster
kubectl api-resources

2. Command walkthrough


[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
#Edit the yaml manifests and comment out the --port flag
$ vim /etc/kubernetes/manifests/kube-scheduler.yaml
$ vim /etc/kubernetes/manifests/kube-controller-manager.yaml
#    - --port=0

#Restart
systemctl restart kubelet

#Check the master components again
[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

Command comparison

| Operation | docker | ctr (containerd) | crictl (Kubernetes) |
| --- | --- | --- | --- |
| List running containers | docker ps | ctr task ls / ctr container ls | crictl ps |
| List images | docker images | ctr image ls | crictl images |
| View container logs | docker logs | - | crictl logs |
| Inspect a container | docker inspect | ctr container info | crictl inspect |
| View container resource usage | docker stats | - | crictl stats |
| Start/stop an existing container | docker start/stop | ctr task start/kill | crictl start/stop |
| Run a new container | docker run | ctr run | - (smallest unit is a Pod) |
| Tag an image | docker tag | ctr image tag | - |
| Create a new container | docker create | ctr container create | crictl create |
| Import an image | docker load | ctr -n k8s.io i import | - |
| Export an image | docker save | ctr -n k8s.io i export | - |
| Delete a container | docker rm | ctr container rm | crictl rm |
| Delete an image | docker rmi | ctr -n k8s.io i rm | crictl rmi |
| Pull an image | docker pull | ctr -n k8s.io i pull -k | crictl pull |
| Push an image | docker push | ctr -n k8s.io i push -k | - |
| Execute a command inside a container | docker exec | - | crictl exec |
