Quick Deployment of a Kubernetes 1.23 Cluster

OS: CentOS 7.6

Recommendations for production:

1. Within a minor release, pick a patch version that is at least five patches in, e.g. v1.23.8

2. Deploy etcd on dedicated machines with SSD disks

3. Deploy the cluster in a highly available topology

Hostname   IP              Minimum CPU / memory
master     192.168.10.64   2 cores / 4 GB
node01     192.168.10.65   2 cores / 4 GB
node02     192.168.10.66   2 cores / 4 GB

The required YAML files (components.yaml, kube-flannel.yml, recommended.yaml) can be downloaded from:
Link: https://pan.baidu.com/s/1O84Flt_PZg_AltaIWuk4RA
Extraction code: hndu

1. Prepare the environment (all nodes)

#1. Set the hostnames
# On 192.168.10.64:
hostnamectl set-hostname k8s-master && bash
# On 192.168.10.65:
hostnamectl set-hostname k8s-node01 && bash
# On 192.168.10.66:
hostnamectl set-hostname k8s-node02 && bash
#2. Configure the yum repositories
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Remove intranet-only Aliyun mirror entries that are unreachable outside Alibaba Cloud
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
#3. Install base packages
yum install net-tools vim tree htop iftop iotop lrzsz sl wget unzip telnet nmap nc psmisc dos2unix bash-completion bash-completion-extras sysstat rsync nfs-utils httpd-tools -y
#4. Disable the firewalld firewall
systemctl disable firewalld
systemctl stop firewalld
#5. Disable SELinux (the config change takes effect after a reboot; setenforce 0 disables it immediately)
sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
setenforce 0
#6. Disable the swap partition
swapoff -a && sysctl -w vm.swappiness=0
# Comment out any uncommented swap entry in /etc/fstab
sed -i '/^[^#]*swap/s@^@#@' /etc/fstab
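As a quick check of the sed expression above, here is what it does to a pair of hypothetical fstab lines: only the uncommented swap entry gets a leading `#`.

```shell
# Hypothetical sample fstab content to demonstrate the swap-commenting sed
fstab='/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0'
echo "$fstab" | sed '/^[^#]*swap/s@^@#@'
# → /dev/mapper/centos-root /    xfs  defaults 0 0
# → #/dev/mapper/centos-swap swap  swap defaults 0 0
```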

2. Configure passwordless SSH login

#1. Edit the hosts file on k8s-master
[root@k8s-master ~]# vim /etc/hosts
192.168.10.64 k8s-master
192.168.10.65 k8s-node01
192.168.10.66 k8s-node02
#2. Create a key pair on k8s-master (192.168.10.64)
[root@k8s-master ~]# ssh-keygen     # press Enter at every prompt
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:zyioU+3C1L08gT3wBZbcXXx1TNB3sy8tHzz9Va9buYs root@k8s-master
The key's randomart image is:
+---[RSA 2048]----+
|       . o . oo==|
|        = . . ..B|
|       . .     .=|
|      .   .    ..|
|     o *S.    ..=|
|    o.+ *+    o+B|
|   +.....+o    **|
|  ..o ..+     o.+|
|  .. .   .   E.+.|
+----[SHA256]-----+
#3. Copy the key and the hosts file to the other nodes
[root@k8s-master ~]# for i in k8s-master k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
[root@k8s-master ~]# scp -rp /etc/hosts  k8s-node01:/etc/
[root@k8s-master ~]# scp -rp /etc/hosts  k8s-node02:/etc/

3. Configure bridge filtering and IP forwarding

#1. Edit /etc/sysctl.d/k8s.conf so bridged traffic passes through the iptables chains
# (the br_netfilter module must be loaded for the bridge-nf settings to exist)
[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# vim /etc/sysctl.d/k8s.conf
[root@k8s-master ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
#2. Apply it, then push the file to the other nodes (it must land in /etc/sysctl.d/ to be picked up)
sysctl --system
scp -rp /etc/sysctl.d/k8s.conf  k8s-node01:/etc/sysctl.d/
scp -rp /etc/sysctl.d/k8s.conf  k8s-node02:/etc/sysctl.d/
#3. Apply on the other two nodes and confirm the settings take effect
[root@k8s-node01 ~]# sysctl --system
[root@k8s-node02 ~]# sysctl --system

4. Install Docker (all nodes)

#1. Install Docker CE 19.03 on every node
yum install docker-ce-19.03.* -y
#2. Recent kubelet versions expect the systemd cgroup driver, so switch Docker's CgroupDriver to systemd
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Start Docker and enable it on boot ------> all nodes
systemctl daemon-reload && systemctl start docker && systemctl enable docker && systemctl status docker
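A syntax error in daemon.json silently prevents dockerd from starting, so the file is worth validating before the restart. A minimal sketch, written against a temporary directory rather than the real /etc/docker:

```shell
# Recreate daemon.json in a temp dir (hypothetical path) and confirm it parses as JSON
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# json.tool exits non-zero on malformed JSON
python3 -m json.tool "$tmp/daemon.json" > /dev/null && echo "daemon.json: valid JSON"
```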

5. Install Kubernetes 1.23 packages (all nodes)

# The following steps are required on all three nodes
# 1. Add the Kubernetes repository --- all nodes
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# 2. Install the tools (all nodes); note the rpm versions carry no "v" prefix
yum install kubeadm-1.23.4 kubelet-1.23.4 kubectl-1.23.4 -y
# 3. Point kubelet at the systemd cgroup driver
mkdir -p /var/lib/kubelet/
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# 4. Start kubelet and enable it on boot (it will restart in a loop until the node is initialized; that is expected)
systemctl start kubelet && systemctl enable kubelet
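Mismatched cgroup drivers between Docker and kubelet are a classic source of kubelet crash loops, so it is worth confirming the config file says systemd. A sketch against a temp copy rather than the real /var/lib/kubelet path:

```shell
# Write the kubelet config to a temp dir (hypothetical path) and check the cgroup driver
tmp=$(mktemp -d)
cat > "$tmp/config.yaml" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# Must match the "native.cgroupdriver=systemd" set for Docker earlier
grep -q '^cgroupDriver: systemd$' "$tmp/config.yaml" && echo "cgroup driver: systemd"
```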

6. Initialize the master node

Note: 192.168.10.64 is the k8s-master IP address, v1.23.4 is the Kubernetes version, 10.96.0.0/16 is the Service CIDR, and 10.244.0.0/16 is the Pod CIDR.

#1. Initialize the master
kubeadm init  --apiserver-advertise-address=192.168.10.64  --image-repository \
registry.aliyuncs.com/google_containers  --kubernetes-version=v1.23.4   \
--service-cidr=10.96.0.0/16  --pod-network-cidr=10.244.0.0/16
#2. Set up kubectl access as the init output instructs
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# If the first attempt fails, roll it back with the commands below and retry
kubeadm reset; rm -rf ~/.kube /etc/kubernetes/
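The Service CIDR and Pod CIDR passed to kubeadm init must not overlap with each other, or with the node network (192.168.10.0/24 here). A small sanity check, assuming exactly those three ranges:

```shell
# Check (sketch) that none of the three networks used by this cluster overlap
python3 - <<'EOF'
import ipaddress
nets = {
    "service": ipaddress.ip_network("10.96.0.0/16"),
    "pod":     ipaddress.ip_network("10.244.0.0/16"),
    "nodes":   ipaddress.ip_network("192.168.10.0/24"),
}
names = list(nets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if nets[a].overlaps(nets[b]):
            raise SystemExit(f"{a} overlaps {b}")
print("no CIDR overlaps")
EOF
```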

7. Join the worker nodes to the cluster

#1. On every worker node, edit /etc/sysctl.conf
vim /etc/sysctl.conf
......

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

#2. Apply it on every worker node
[root@k8s-node01 ~]# sysctl -p

[root@k8s-node01 ~]# kubeadm join 192.168.10.64:6443 --token qsla1f.753kh7dofom2u5uv --discovery-token-ca-cert-hash sha256:80ebc2946431d509cf1b47697ef22deab9ad2b9e89952278364cce53bf7c2103
[root@k8s-node02 ~]# kubeadm join 192.168.10.64:6443 --token qsla1f.753kh7dofom2u5uv \
> --discovery-token-ca-cert-hash sha256:80ebc2946431d509cf1b47697ef22deab9ad2b9e89952278364cce53bf7c2103

# If the join fails, reset the node and join again
kubeadm reset; rm -rf ~/.kube /etc/kubernetes/
# If the original token has expired, print a fresh join command on the master:
# kubeadm token create --print-join-command

#3. Check from the master node
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   51m     v1.23.4
k8s-node01   NotReady   <none>                 17m     v1.23.4
k8s-node02   NotReady   <none>                 7m22s   v1.23.4

#4. kubectl fails on the worker nodes
[root@k8s-node01 ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

# This happens because kubectl needs the kubernetes-admin credentials. To fix it, copy /etc/kubernetes/admin.conf from the master to the same path on each worker node, then point the KUBECONFIG environment variable at it:

[root@k8s-master01 ~]# scp -rp /etc/kubernetes/admin.conf k8s-node01:/etc/kubernetes/admin.conf
admin.conf  
[root@k8s-master01 ~]# scp -rp /etc/kubernetes/admin.conf k8s-node02:/etc/kubernetes/admin.conf
admin.conf

[root@k8s-node01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-node01 ~]# source ~/.bash_profile
[root@k8s-node01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master01   Ready    control-plane,master   4h37m   v1.23.4
k8s-node01     Ready    <none>                 4h26m   v1.23.4
k8s-node02     Ready    <none>                 4h26m   v1.23.4

[root@k8s-node02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-node02 ~]# source ~/.bash_profile
[root@k8s-node02 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master01   Ready    control-plane,master   4h38m   v1.23.4
k8s-node01     Ready    <none>                 4h26m   v1.23.4
k8s-node02     Ready    <none>                 4h26m   v1.23.4

8. Deploy the flannel network plugin

# Deploy the flannel network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Note: this URL is sometimes unreachable due to network issues; if so, download kube-flannel.yml in a browser, upload it, and apply the local file
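flannel carries its network in the net-conf.json ConfigMap inside kube-flannel.yml, and it must equal the --pod-network-cidr used at init time. A sketch of that check against a sample of the config (the sample string below mirrors the file's default, not a live copy):

```shell
# Sample of flannel's net-conf.json (hypothetical copy of the default)
netconf='{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }'
pod_cidr='10.244.0.0/16'
echo "$netconf" | grep -q "\"Network\": \"$pod_cidr\"" \
  && echo "flannel Network matches --pod-network-cidr"
```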

9. Check the cluster

#1. Check the status of all system components
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-bg656              1/1     Running   0          54m
kube-system   coredns-6d8c4cb4d-bptsx              1/1     Running   0          54m
kube-system   etcd-k8s-master                      1/1     Running   0          54m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          54m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          54m
kube-system   kube-flannel-ds-58559                1/1     Running   0          47s
kube-system   kube-flannel-ds-llw6q                1/1     Running   0          47s
kube-system   kube-flannel-ds-lmpw5                1/1     Running   0          47s
kube-system   kube-proxy-d9flc                     1/1     Running   0          20m
kube-system   kube-proxy-phd55                     1/1     Running   0          54m
kube-system   kube-proxy-w7llv                     1/1     Running   0          10m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          54m
#2. Check node status
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   55m   v1.23.4
k8s-node01   Ready    <none>                 22m   v1.23.4
k8s-node02   Ready    <none>                 11m   v1.23.4
#3. List the default namespaces
[root@k8s-master ~]# kubectl get namespace
NAME              STATUS   AGE
default           Active   56m
kube-node-lease   Active   56m
kube-public       Active   56m
kube-system       Active   56m

10. Validate the cluster

# 1. Create a pod
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-85b98978db-tkn7v   1/1     Running   0          30s
# 2. Scale the deployment
[root@k8s-master ~]# kubectl scale deployment nginx --replicas=3
[root@k8s-master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-85b98978db-fkxks   1/1     Running   0          32s
nginx-85b98978db-tkn7v   1/1     Running   0          113s
nginx-85b98978db-v5nzp   1/1     Running   0          32s
# 3. Expose a port
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@k8s-master ~]#  kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        60m
nginx        NodePort    10.96.129.45   <none>        80:31608/TCP   16s
# 4. Test
[root@k8s-master ~]# curl 10.96.129.45
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
# 5. Common commands
kubectl cluster-info     # show cluster info
kubectl api-resources    # list resource types
kubectl get nodes        # list nodes
kubectl get pods         # list pods
kubectl get pods -n kube-system       # list pods in the kube-system namespace

11. Install the metrics-server monitoring component

Recent Kubernetes versions collect resource metrics through metrics-server, which reports memory, disk, CPU, and network usage for nodes and pods.
# Copy front-proxy-ca.crt from the master to all worker nodes (note: the worker nodes)
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt

scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt
# Install metrics-server
[root@k8s-master ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
[root@k8s-master01 ~]# vim components.yaml   # switch to a domestic mirror image and relax kubelet TLS verification; skipping this step causes x509 errors
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.1
#Added section: relax kubelet TLS verification, otherwise metrics-server fails with x509 errors
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        - --kubelet-insecure-tls
#End of added section
        imagePullPolicy: IfNotPresent
[root@k8s-master01 ~]# kubectl create -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
# Check the status
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   18h   v1.23.4
k8s-node01   Ready    <none>                 18h   v1.23.4
k8s-node02   Ready    <none>                 18h   v1.23.4
[root@k8s-master ~]# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master   185m         9%     1407Mi          36%       
k8s-node01   200m         20%    867Mi           46%       
k8s-node02   34m          3%     864Mi           45%  

12. Install the dashboard

The dashboard visualizes the resources in the cluster.

Official GitHub repository: https://github.com/kubernetes/dashboard
#1. Install the dashboard
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
#2. Check the dashboard status
[root@k8s-master ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-799d786dbf-qbz47   1/1     Running   0          2m32s
kubernetes-dashboard-6b6b86c4c5-45j9v        1/1     Running   0          2m32s
#3. Change the dashboard Service type to NodePort:
[root@k8s-master01 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
...........
  type: NodePort
#4. Find the NodePort assigned to the dashboard
[root@k8s-master ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.244.185   <none>        8000/TCP        4m50s
kubernetes-dashboard        NodePort    10.96.212.43    <none>        443:32744/TCP   4m50s
# Log in over HTTPS
https://192.168.10.64:32744/
# Paste the token retrieved below into the dashboard login page
#5. Retrieve the token
[root@k8s-master ~]# kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token
Name:         namespace-controller-token-x9qjg
Type:  kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkJZOHlaMVRFd19WZDhmaVlDek5UaWRZcHI0UzhZUUVvbm1RbVM5bkZra1kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlci10b2tlbi14OXFqZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjVhZDUyOTZhLWU0MzEtNDE3Ni1hNGZjLWQ5MDRlNjg5ODRlNiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY29udHJvbGxlciJ9.JeiOSSiudLCMaRglWSC9IJ0K-HyNIPwVpLvG_sCHqJlsaTolP75gOwS_oZfNsQqnvYqK0y7R5K5mRlRC82ZxHLPZA6AfPIE0Kf0-IquPma2KRXYV5I7DkHBxhzdlg91FZ3obLNsUpaFyLGjiTd0Y5ca7CjYHSCPPYABG0kXO8pUz_bmvZ9zMui4Ac2UfQbfj_PkpVb6Rk35id8svTSCBFBaJva4m2uDmvhjJQsqCOHtaMfjUYNS9UuM_Gbkq655OCQFrCiv41lU6lC5q83z3ripkBY89ymoV2z2tQBctzHyjpY0wsIOEbjzbC--XWRTZWeB9fq0G2n8tDAnBl5WAgA
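The token above is a JWT: three base64url-encoded segments separated by dots. Decoding the first (header) segment of a hypothetical sample token shows the signing algorithm; the `kid` value below is made up for illustration:

```shell
# Hypothetical sample JWT; only the header segment is decoded
sample='eyJhbGciOiJSUzI1NiIsImtpZCI6InNhbXBsZSJ9.e30.sig'
echo "${sample%%.*}" | base64 -d; echo
# → {"alg":"RS256","kid":"sample"}
```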


13. Renew kubeadm certificates before they expire

#1. Check certificate validity (note that most certificates are valid for only one year)
[root@k8s-master01 ~]# cd /etc/kubernetes/pki/
[root@k8s-master01 pki]# for i in $(ls *.crt); do echo "===== $i ====="; openssl x509 -in $i -text -noout | grep -A 3 'Validity' ; done
===== apiserver.crt =====
        Validity
            Not Before: Mar 15 02:56:07 2022 GMT
            Not After : Mar 15 02:56:08 2023 GMT
        Subject: CN=kube-apiserver
===== apiserver-etcd-client.crt =====
        Validity
            Not Before: Mar 15 02:56:08 2022 GMT
            Not After : Mar 15 02:56:09 2023 GMT
        Subject: O=system:masters, CN=kube-apiserver-etcd-client
===== apiserver-kubelet-client.crt =====
        Validity
            Not Before: Mar 15 02:56:07 2022 GMT
            Not After : Mar 15 02:56:08 2023 GMT
        Subject: O=system:masters, CN=kube-apiserver-kubelet-client
===== ca.crt =====
        Validity
            Not Before: Mar 15 02:56:07 2022 GMT
            Not After : Mar 12 02:56:07 2032 GMT
        Subject: CN=kubernetes
===== front-proxy-ca.crt =====
        Validity
            Not Before: Mar 15 02:56:08 2022 GMT
            Not After : Mar 12 02:56:08 2032 GMT
        Subject: CN=front-proxy-ca
===== front-proxy-client.crt =====
        Validity
            Not Before: Mar 15 02:56:08 2022 GMT
            Not After : Mar 15 02:56:08 2023 GMT
        Subject: CN=front-proxy-client

#2. Back up the certificates (or take a VM snapshot)
[root@k8s-master01 pki]# mkdir backup_key
[root@k8s-master01 pki]# cp -rp ./* backup_key/

#3. Extend certificate validity to 10 years
[root@k8s-master01 pki]# git clone https://github.com/yuyicai/update-kube-cert.git
Cloning into 'update-kube-cert'...
remote: Enumerating objects: 105, done.
remote: Counting objects: 100% (65/65), done.
remote: Compressing objects: 100% (43/43), done.
remote: Total 105 (delta 35), reused 50 (delta 22), pack-reused 40
Receiving objects: 100% (105/105), 34.08 KiB | 0 bytes/s, done.
Resolving deltas: 100% (49/49), done.

[root@k8s-master01 pki]# cd update-kube-cert/
[root@k8s-master01 update-kube-cert]# chmod 755 update-kubeadm-cert.sh 
[root@k8s-master01 update-kube-cert]# ./update-kubeadm-cert.sh all
CERTIFICATE                                       EXPIRES                       
/etc/kubernetes/controller-manager.config         Mar 15 02:56:10 2023 GMT      
/etc/kubernetes/scheduler.config                  Mar 15 02:56:11 2023 GMT      
/etc/kubernetes/admin.config                      Mar 15 02:56:10 2023 GMT      
/etc/kubernetes/pki/ca.crt                        Mar 12 02:56:07 2032 GMT      
/etc/kubernetes/pki/apiserver.crt                 Mar 15 02:56:08 2023 GMT      
/etc/kubernetes/pki/apiserver-kubelet-client.crt  Mar 15 02:56:08 2023 GMT      
/etc/kubernetes/pki/front-proxy-ca.crt            Mar 12 02:56:08 2032 GMT      
/etc/kubernetes/pki/front-proxy-client.crt        Mar 15 02:56:08 2023 GMT      
/etc/kubernetes/pki/etcd/ca.crt                   Mar 12 02:56:08 2032 GMT      
/etc/kubernetes/pki/etcd/server.crt               Mar 15 02:56:08 2023 GMT      
/etc/kubernetes/pki/etcd/peer.crt                 Mar 15 02:56:09 2023 GMT      
/etc/kubernetes/pki/etcd/healthcheck-client.crt   Mar 15 02:56:09 2023 GMT      
/etc/kubernetes/pki/apiserver-etcd-client.crt     Mar 15 02:56:09 2023 GMT      
[2022-03-15T15:27:32.74+0800][INFO] backup /etc/kubernetes to /etc/kubernetes.old-20220315
[2022-03-15T15:27:32.74+0800][INFO] updating...
[2022-03-15T15:27:32.86+0800][INFO] updated /etc/kubernetes/pki/etcd/server.conf
[2022-03-15T15:27:32.91+0800][INFO] updated /etc/kubernetes/pki/etcd/peer.conf
[2022-03-15T15:27:32.95+0800][INFO] updated /etc/kubernetes/pki/etcd/healthcheck-client.conf
[2022-03-15T15:27:32.99+0800][INFO] updated /etc/kubernetes/pki/apiserver-etcd-client.conf
[2022-03-15T15:27:33.52+0800][INFO] restarted etcd
[2022-03-15T15:27:33.58+0800][INFO] updated /etc/kubernetes/pki/apiserver.crt
[2022-03-15T15:27:33.62+0800][INFO] updated /etc/kubernetes/pki/apiserver-kubelet-client.crt
[2022-03-15T15:27:33.68+0800][INFO] updated /etc/kubernetes/controller-manager.conf
[2022-03-15T15:27:33.74+0800][INFO] updated /etc/kubernetes/scheduler.conf
[2022-03-15T15:27:33.79+0800][INFO] updated /etc/kubernetes/admin.conf
[2022-03-15T15:27:33.79+0800][INFO] backup /root/.kube/config to /root/.kube/config.old-20220315
[2022-03-15T15:27:33.79+0800][INFO] copy the admin.conf to /root/.kube/config
[2022-03-15T15:27:33.80+0800][INFO] does not need to update kubelet.conf
[2022-03-15T15:27:33.84+0800][INFO] updated /etc/kubernetes/pki/front-proxy-client.crt
[2022-03-15T15:27:44.25+0800][INFO] restarted apiserver
[2022-03-15T15:27:44.62+0800][INFO] restarted controller-manager
[2022-03-15T15:27:45.56+0800][INFO] restarted scheduler
[2022-03-15T15:27:45.79+0800][INFO] restarted kubelet
[2022-03-15T15:27:45.80+0800][INFO] done!!!
CERTIFICATE                                       EXPIRES                       
/etc/kubernetes/controller-manager.config         Mar 12 07:27:33 2032 GMT      
/etc/kubernetes/scheduler.config                  Mar 12 07:27:33 2032 GMT      
/etc/kubernetes/admin.config                      Mar 12 07:27:33 2032 GMT      
/etc/kubernetes/pki/ca.crt                        Mar 12 02:56:07 2032 GMT      
/etc/kubernetes/pki/apiserver.crt                 Mar 12 07:27:33 2032 GMT      
/etc/kubernetes/pki/apiserver-kubelet-client.crt  Mar 12 07:27:33 2032 GMT      
/etc/kubernetes/pki/front-proxy-ca.crt            Mar 12 02:56:08 2032 GMT      
/etc/kubernetes/pki/front-proxy-client.crt        Mar 12 07:27:33 2032 GMT      
/etc/kubernetes/pki/etcd/ca.crt                   Mar 12 02:56:08 2032 GMT      
/etc/kubernetes/pki/etcd/server.crt               Mar 12 07:27:32 2032 GMT      
/etc/kubernetes/pki/etcd/peer.crt                 Mar 12 07:27:32 2032 GMT      
/etc/kubernetes/pki/etcd/healthcheck-client.crt   Mar 12 07:27:32 2032 GMT      
/etc/kubernetes/pki/apiserver-etcd-client.crt     Mar 12 07:27:32 2032 GMT

#4. Check certificate validity again
[root@k8s-master01 update-kube-cert]# cd ../
[root@k8s-master01 pki]# for i in $(ls *.crt); do echo "===== $i ====="; openssl x509 -in $i -text -noout | grep -A 3 'Validity' ; done
===== apiserver.crt =====
        Validity
            Not Before: Mar 15 07:27:33 2022 GMT
            Not After : Mar 12 07:27:33 2032 GMT
        Subject: CN=kube-apiserver
===== apiserver-etcd-client.crt =====
        Validity
            Not Before: Mar 15 07:27:32 2022 GMT
            Not After : Mar 12 07:27:32 2032 GMT
        Subject: O=system:masters, CN=kube-apiserver-etcd-client
===== apiserver-kubelet-client.crt =====
        Validity
            Not Before: Mar 15 07:27:33 2022 GMT
            Not After : Mar 12 07:27:33 2032 GMT
        Subject: O=system:masters, CN=kube-apiserver-kubelet-client
===== ca.crt =====
        Validity
            Not Before: Mar 15 02:56:07 2022 GMT
            Not After : Mar 12 02:56:07 2032 GMT
        Subject: CN=kubernetes
===== front-proxy-ca.crt =====
        Validity
            Not Before: Mar 15 02:56:08 2022 GMT
            Not After : Mar 12 02:56:08 2032 GMT
        Subject: CN=front-proxy-ca
===== front-proxy-client.crt =====
        Validity
            Not Before: Mar 15 07:27:33 2022 GMT
            Not After : Mar 12 07:27:33 2032 GMT
        Subject: CN=front-proxy-client
# As shown above, the certificates are now valid for 10 years
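The same openssl checks work on any certificate. A self-contained sketch using a throwaway self-signed certificate (hypothetical paths), which also shows `-checkend`: it exits successfully only if the certificate is still valid N seconds from now, handy for automated expiry monitoring:

```shell
# Generate a throwaway self-signed cert to demonstrate reading expiry dates
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 3650 \
  -keyout "$tmp/key.pem" -out "$tmp/crt.pem" 2>/dev/null
# Print the expiry date, as done for the cluster certs above
openssl x509 -in "$tmp/crt.pem" -noout -enddate
# Succeeds only if the cert remains valid for at least another 30 days
openssl x509 -in "$tmp/crt.pem" -noout -checkend $((30*24*3600)) >/dev/null \
  && echo "valid for at least 30 more days"
```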