Chapter 6: KubeSphere 3.3.0 Installation and Deployment + Creating a Cluster with KubeKey 2.2.1 (kk)



Preface

KubeSphere is a distributed operating system for cloud-native applications built on top of Kubernetes. It is fully open source, supports multi-cloud and multi-cluster management, provides full-stack IT automation and operations capabilities, and simplifies enterprise DevOps workflows. Its architecture makes it easy to integrate third-party applications with cloud-native ecosystem components in a plug-and-play manner.
As a full-stack, multi-tenant container platform, KubeSphere provides an operations-friendly, wizard-style UI that helps enterprises quickly build a powerful, feature-rich container cloud platform. KubeSphere offers the features needed for an enterprise-grade Kubernetes environment, such as multi-cloud and multi-cluster management, Kubernetes resource management, DevOps, application lifecycle management, microservice governance (service mesh), log query and collection, services and networking, multi-tenant management, monitoring and alerting, event and audit queries, storage management, access control, GPU support, network policies, image registry management, and security management (from the official site). Official site: https://kubesphere.com.cn/. This article deploys KubeSphere 3.3.0 on multiple Linux nodes with KubeKey (written in Go), a one-click installation tool.


1. Prepare the Servers and Prerequisites

1.1 Open the required ports

Ports to open: 30000-32767
1. On a cloud server, simply open the ports in the security group in the management console.
2. On your own Linux server, open them as follows:


firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
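A minimal follow-up sketch (assuming firewalld is in use): reload the firewall so the permanent rule takes effect, then verify that the port range is listed.

firewall-cmd --reload
firewall-cmd --zone=public --list-ports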

1.2 Set the hostname


[root@k8s-master01 ~]# hostnamectl set-hostname k8s-master01
[root@k8s-master01 ~]# hostname 
k8s-master01

Note: the hostname must be set on every node in advance.
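For example, on the two worker nodes used later in this guide (the names match the cluster configuration below), run:

[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node1
[root@k8s-node2 ~]# hostnamectl set-hostname k8s-node2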


2. Prepare KubeKey

Note: for installation details, see the official documentation: https://kubesphere.com.cn/docs/v3.3/installing-on-linux/introduction/multioverview/

  1. First run the following command to make sure you download KubeKey from the correct region:
export KKZONE=cn
  2. Run the following command to download KubeKey:
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -

Note:
After downloading KubeKey, if you transfer it to a new machine where access to Googleapis is also restricted, be sure to run export KKZONE=cn again before carrying out the following steps.

  3. Result:
[root@k8s-master01 ~]# export KKZONE=cn
[root@k8s-master01 ~]# curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -

Downloading kubekey v2.2.1 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v2.2.1/kubekey-v2.2.1-linux-amd64.tar.gz ...


Kubekey v2.2.1 Download Complete!
[root@k8s-master01 ~]# ll
-rwxr-xr-x. 1 1001 docker 54767616 Jun 23 14:05 kk
-rw-r--r--. 1 root root   16997207 Aug  2 14:25 kubekey-v2.2.1-linux-amd64.tar.gz

If kk is not shown as executable (not green in the listing), run the following command to add execute permission:


[root@k8s-master01 ~]# chmod +x kk
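You can then do a quick sanity check of the binary (assuming this release of kk provides the version subcommand):

[root@k8s-master01 ~]# ./kk version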


3. Bootstrap the Cluster Installation with KubeKey

3.1 Create the sample configuration file


[root@k8s-master01 ~]# ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.23.7 -f config-sample.yaml
Generate KubeKey config file successfully
[root@k8s-master01 ~]# ll
-rw-r--r--. 1 root root       4772 Aug  2 14:34 config-sample.yaml

3.2 Modify the configuration file

roleGroups

  • etcd: the name(s) of the etcd node(s)
  • control-plane: the name(s) of the control-plane (master) node(s)
  • worker: the name(s) of the worker node(s)

This guide performs a minimal installation and leaves the ClusterConfiguration section unchanged. If you prefer not to enable components later, you can also change all the false values to true, and the corresponding components will be installed along with the cluster.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master01, address: 10.0.0.31, internalAddress: 10.0.0.31, user: root, password: "Qcloud@123"}
  - {name: k8s-node1, address: 10.0.0.33, internalAddress: 10.0.0.33, user: root, password: "Qcloud@123"}
  - {name: k8s-node2, address: 10.0.0.34, internalAddress: 10.0.0.34, user: root, password: "Qcloud@123"}
  roleGroups:
    etcd:
    - k8s-master01
    control-plane: 
    - k8s-master01
    worker:
    - k8s-node1
    - k8s-node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.7
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.0
spec:
  persistence:
    storageClass: ""  
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1200m
    jenkinsJavaOpts_Xmx: 1600m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600

3.3 Create the cluster

3.3.1 Start the creation

Refer to the container runtime and dependency requirements in the official documentation.
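Before running kk, you can optionally confirm that the dependencies checked in the pre-check table below are present on each node (a minimal shell sketch, not part of the official procedure):

# Check for the tools that KubeKey's pre-check looks for
for cmd in sudo curl openssl ebtables socat ipset ipvsadm conntrack docker; do
  command -v "$cmd" >/dev/null 2>&1 && echo "ok: $cmd" || echo "missing: $cmd"
done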

[root@k8s-master01 ~]# ./kk create cluster -f config-sample.yaml


 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

15:37:50 CST [GreetingsModule] Greetings
15:37:52 CST message: [k8s-node2]
Greetings, KubeKey!
15:37:52 CST message: [k8s-master01]
Greetings, KubeKey!
15:37:52 CST message: [k8s-node1]
Greetings, KubeKey!
15:37:52 CST success: [k8s-node2]
15:37:52 CST success: [k8s-master01]
15:37:52 CST success: [k8s-node1]
15:37:52 CST [NodePreCheckModule] A pre-check on nodes
15:37:57 CST success: [k8s-master01]
15:37:57 CST success: [k8s-node1]
15:37:57 CST success: [k8s-node2]
15:37:57 CST [ConfirmModule] Display confirmation form
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+
| name         | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker   | containerd | nfs client | ceph client | glusterfs client | time         |
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+
| k8s-master01 | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.17 | 1.6.6      | y          |             |                  | CST 15:37:53 |
| k8s-node1    | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.17 | 1.6.6      | y          |             |                  | CST 15:37:56 |
| k8s-node2    | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.17 | 1.6.6      | y          |             |                  | CST 15:37:57 |
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes

Note: the installation takes some time; please wait patiently.

During installation a friendly progress indicator is shown, with an arrow that keeps moving (a nice effect).


3.3.2 Check the installation progress

Run the following command with kubectl to check the installation progress:

[root@k8s-master01 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

3.3.3 KubeSphere installation complete

1. When the following output appears, the installation is complete:

16:11:34 CST success: [k8s-master01]
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://10.0.0.31:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2022-08-02 16:19:08
#####################################################
16:19:10 CST success: [k8s-master01]
16:19:10 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

	kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f

2. Check the cluster nodes:

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   56m   v1.23.7
k8s-node1      Ready    worker                 56m   v1.23.7
k8s-node2      Ready    worker                 56m   v1.23.7


3. Check that all pods are in the Running state; once they are, you can log in to the KubeSphere console:

[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE                      NAME                                             READY   STATUS    RESTARTS      AGE
kube-system                    calico-kube-controllers-785fcf8454-2r4z5         1/1     Running   0             11m
kube-system                    calico-node-5bxfd                                1/1     Running   0             11m
kube-system                    calico-node-8g9vm                                1/1     Running   0             11m
kube-system                    calico-node-h4bpn                                1/1     Running   0             11m
kube-system                    coredns-757cd945b-9f8z4                          1/1     Running   0             11m
kube-system                    coredns-757cd945b-vsg2j                          1/1     Running   0             11m
kube-system                    kube-apiserver-k8s-master01                      1/1     Running   0             12m
kube-system                    kube-controller-manager-k8s-master01             1/1     Running   0             12m
kube-system                    kube-proxy-78429                                 1/1     Running   0             11m
kube-system                    kube-proxy-88gvh                                 1/1     Running   0             11m
kube-system                    kube-proxy-vtvl6                                 1/1     Running   0             11m
kube-system                    kube-scheduler-k8s-master01                      1/1     Running   0             12m
kube-system                    nodelocaldns-2xmsg                               1/1     Running   0             11m
kube-system                    nodelocaldns-6fl4v                               1/1     Running   0             11m
kube-system                    nodelocaldns-7k54p                               1/1     Running   0             11m
kube-system                    openebs-localpv-provisioner-7bbb56d7dc-wr5qx     1/1     Running   1 (10m ago)   11m
kube-system                    snapshot-controller-0                            1/1     Running   0             10m
kubesphere-controls-system     default-http-backend-659cc67b6b-z5jrk            1/1     Running   0             8m40s
kubesphere-controls-system     kubectl-admin-7966644f4b-8r2mm                   1/1     Running   0             4m21s
kubesphere-monitoring-system   alertmanager-main-0                              2/2     Running   0             6m50s
kubesphere-monitoring-system   alertmanager-main-1                              2/2     Running   0             6m49s
kubesphere-monitoring-system   alertmanager-main-2                              2/2     Running   0             6m48s
kubesphere-monitoring-system   kube-state-metrics-75f7c75f86-5n6hl              3/3     Running   0             6m55s
kubesphere-monitoring-system   node-exporter-2cl5v                              2/2     Running   0             6m55s
kubesphere-monitoring-system   node-exporter-6bqj2                              2/2     Running   0             6m55s
kubesphere-monitoring-system   node-exporter-j9x5t                              2/2     Running   0             6m56s
kubesphere-monitoring-system   notification-manager-deployment-cdd656fd-5v6fr   2/2     Running   0             6m19s
kubesphere-monitoring-system   notification-manager-deployment-cdd656fd-wrk5l   2/2     Running   0             6m20s
kubesphere-monitoring-system   notification-manager-operator-7f7c564948-smcg2   2/2     Running   0             6m29s
kubesphere-monitoring-system   prometheus-k8s-0                                 2/2     Running   0             6m51s
kubesphere-monitoring-system   prometheus-k8s-1                                 2/2     Running   0             6m49s
kubesphere-monitoring-system   prometheus-operator-684988fc5c-gccj5             2/2     Running   0             6m57s
kubesphere-system              ks-apiserver-5bc97d4496-frldl                    1/1     Running   0             8m40s
kubesphere-system              ks-console-5ff9d8f9d-8h4sm                       1/1     Running   0             8m40s
kubesphere-system              ks-controller-manager-7647fb7bf-jpcdw            1/1     Running   0             8m40s
kubesphere-system              ks-installer-65bc964898-2pj5m                    1/1     Running   0             11m

3.4 Install prerequisite components

3.4.1 Install the NFS file system

3.4.1.1 Install nfs-server
# Install on every node
[root@k8s-master01 ~]# yum install -y nfs-utils


# Run the following command on the master
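# Export options: insecure = allow client ports above 1024, rw = read-write, sync = synchronous writes, no_root_squash = do not map root to an unprivileged user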
[root@k8s-master01 ~]# echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports


# Create the shared directory
[root@k8s-master01 ~]# mkdir -p /nfs/data


# On the master: enable and start the services
[root@k8s-master01 ~]# systemctl enable rpcbind
[root@k8s-master01 ~]# systemctl enable nfs-server
[root@k8s-master01 ~]# systemctl start rpcbind
[root@k8s-master01 ~]# systemctl start nfs-server

# Apply the export configuration
[root@k8s-master01 ~]# exportfs -r


# Check that the configuration took effect
[root@k8s-master01 ~]# exportfs

3.4.1.2 Configure the nfs-client (on the worker nodes)

Use the master's IP address:

[root@k8s-node1 ~]# showmount -e 10.0.0.31

[root@k8s-node1 ~]# mkdir -p /nfs/data

[root@k8s-node1 ~]# mount -t nfs 10.0.0.31:/nfs/data /nfs/data
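Optionally, to make the mount survive reboots, you can add an entry to /etc/fstab on each worker node (a hedged addition, not part of the original steps):

[root@k8s-node1 ~]# echo "10.0.0.31:/nfs/data /nfs/data nfs defaults 0 0" >> /etc/fstab
[root@k8s-node1 ~]# mount -a    # verify the entry mounts cleanly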

3.4.1.3 Configure the default StorageClass

[root@k8s-master01 ~]# vi sc.yaml 

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage ## you can change this to your own storage class name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  ## "true" makes this the default StorageClass, "false" does not
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV contents when a PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.0.0.31 ## set to your own NFS server address
            - name: NFS_PATH  
              value: /nfs/data  ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.31
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Run the create command:

[root@k8s-master01 ~]# kubectl apply -f sc.yaml 

Check that it was created successfully:

[root@k8s-master01 ~]# kubectl get sc

Create a PVC to check that the default StorageClass takes effect:

[root@k8s-master01 ~]# vi pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName:  nfs-storage   ## if this is omitted, the default StorageClass is used

Create the PVC and check it:

[root@k8s-master01 ~]# kubectl apply -f pvc.yaml 
[root@k8s-master01 ~]# kubectl get pvc 
[root@k8s-master01 ~]# kubectl get pv
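To further verify that the provisioner actually binds a volume, a minimal test pod such as the one below (a hedged example, not part of the original article) can mount the nginx-pvc claim; after applying it, kubectl get pvc should show the claim as Bound.

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test   # hypothetical name used only for this check
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html   # files written here land on the NFS share
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nginx-pvc   # the PVC created above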

3.4.2 Install metrics-server

This article provides metrics-server manifests for both v0.4.3 and v0.6.1; choose whichever version you need.

The YAML configuration file for v0.4.3 is as follows:

[root@k8s-master01 ~]# vi metrics-server.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

The YAML configuration file for v0.6.1 is as follows:

[root@k8s-master01 ~]# vi metrics-server.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: bitnami/metrics-server:0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Run the commands:

[root@k8s-master01 ~]# kubectl apply -f metrics-server.yaml
[root@k8s-master01 ~]# kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   393m         5%     3917Mi          13%       
k8s-node1      566m         7%     5458Mi          18%       
k8s-node2      1705m        22%    6885Mi          23%    
[root@k8s-master01 ~]# kubectl top pods -A
NAMESPACE                      NAME                                                       CPU(cores)   MEMORY(bytes)   
argocd                         devops-argocd-application-controller-0                     4m           16Mi            
argocd                         devops-argocd-applicationset-controller-7978f69d5f-rlxgv   1m           17Mi            
argocd                         devops-argocd-dex-server-6c8494d7c8-mzqf9                  1m           19Mi            
argocd                         devops-argocd-notifications-controller-76ffc9bfd7-spzcs    1m           18Mi            
argocd                         devops-argocd-redis-855b65f5c4-vnqmj                       2m           6Mi      

3.5 Install components

  • 1. Log in to the console as the admin user, click Platform in the upper-left corner, and select Cluster Management.

  • 2. Click CRDs, enter ClusterConfiguration in the search bar, and click the search result to open its detail page.


  • 3. In Custom Resources, click the operations button to the right of ks-installer and select Edit YAML.
  • 4. In the YAML file, modify the configuration as described below, then click OK in the lower right to save it. The components will start installing; this takes a while, so please be patient.

1. In the KubeSphere console: enable components one by one by changing false to true.
2. In the KubeSphere console: metrics_server does not need to be enabled, since we installed it beforehand (the image KubeSphere pulls from the official source may fail to download).
3. In the KubeSphere console: es does not need to be enabled either.
4. In the KubeSphere console: it is recommended to change the storage setting to storageClass: "nfs-storage"; otherwise a default StorageClass named local (default) is created.
5. In the KubeSphere console: change ippool: type: none to ippool: type: calico.

The modified YAML configuration file:


apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"installer.kubesphere.io/v1alpha1","kind":"ClusterConfiguration","metadata":{"annotations":{},"labels":{"version":"v3.3.0"},"name":"ks-installer","namespace":"kubesphere-system"},"spec":{"alerting":{"enabled":false},"auditing":{"enabled":false},"authentication":{"jwtSecret":""},"common":{"core":{"console":{"enableMultiLogin":true,"port":30880,"type":"NodePort"}},"es":{"basicAuth":{"enabled":false,"password":"","username":""},"elkPrefix":"logstash","externalElasticsearchHost":"","externalElasticsearchPort":"","logMaxAge":7},"gpu":{"kinds":[{"default":true,"resourceName":"nvidia.com/gpu","resourceType":"GPU"}]},"minio":{"volumeSize":"20Gi"},"monitoring":{"GPUMonitoring":{"enabled":false},"endpoint":"http://prometheus-operated.kubesphere-monitoring-system.svc:9090"},"openldap":{"enabled":false,"volumeSize":"2Gi"},"redis":{"enabled":false,"volumeSize":"2Gi"}},"devops":{"enabled":false,"jenkinsJavaOpts_MaxRAM":"2g","jenkinsJavaOpts_Xms":"1200m","jenkinsJavaOpts_Xmx":"1600m","jenkinsMemoryLim":"2Gi","jenkinsMemoryReq":"1500Mi","jenkinsVolumeSize":"8Gi"},"edgeruntime":{"enabled":false,"kubeedge":{"cloudCore":{"cloudHub":{"advertiseAddress":[""]},"service":{"cloudhubHttpsNodePort":"30002","cloudhubNodePort":"30000","cloudhubQuicNodePort":"30001","cloudstreamNodePort":"30003","tunnelNodePort":"30004"}},"enabled":false,"iptables-manager":{"enabled":true,"mode":"external"}}},"etcd":{"endpointIps":"10.0.0.31","monitoring":false,"port":2379,"tlsEnable":true},"events":{"enabled":false},"logging":{"enabled":false,"logsidecar":{"enabled":true,"replicas":2}},"metrics_server":{"enabled":false},"monitoring":{"gpu":{"nvidia_dcgm_exporter":{"enabled":false}},"node_exporter":{"port":9100},"storageClass":""},"multicluster":{"clusterRole":"none"},"network":{"ippool":{"type":"none"},"networkpolicy":{"enabled":false},"topology":{"type":"none"}},"openpitrix":{"store":{"enabled":false}},"persistence":{"storageClass":""},"servicemesh":{"enabled":false,"istio":{"components":{"cni":{"enabled":false},"ingressGateways":[{"enabled":false,"name":"istio-ingressgateway"}]}}},"terminal":{"timeout":600},"zone":"cn"}}
  labels:
    version: v3.3.0
  name: ks-installer
  namespace: kubesphere-system
spec:
  alerting:
    enabled: true
  auditing:
    enabled: true
  authentication:
    jwtSecret: ''
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    es:
      basicAuth:
        enabled: false
        password: ''
        username: ''
      elkPrefix: logstash
      externalElasticsearchHost: ''
      externalElasticsearchPort: ''
      logMaxAge: 7
    gpu:
      kinds:
        - default: true
          resourceName: nvidia.com/gpu
          resourceType: GPU
    minio:
      volumeSize: 20Gi
    monitoring:
      GPUMonitoring:
        enabled: true
      endpoint: 'http://prometheus-operated.kubesphere-monitoring-system.svc:9090'
    openldap:
      enabled: true
      volumeSize: 2Gi
    redis:
      enabled: true
      volumeSize: 2Gi
  devops:
    enabled: true
    jenkinsJavaOpts_MaxRAM: 2g
    jenkinsJavaOpts_Xms: 1200m
    jenkinsJavaOpts_Xmx: 1600m
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
  edgeruntime:
    enabled: true
    kubeedge:
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ''
        service:
          cloudhubHttpsNodePort: '30002'
          cloudhubNodePort: '30000'
          cloudhubQuicNodePort: '30001'
          cloudstreamNodePort: '30003'
          tunnelNodePort: '30004'
      enabled: true
      iptables-manager:
        enabled: true
        mode: external
  etcd:
    endpointIps: 10.0.0.31
    monitoring: true
    port: 2379
    tlsEnable: true
  events:
    enabled: true
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    gpu:
      nvidia_dcgm_exporter:
        enabled: true
    node_exporter:
      port: 9100
    storageClass: 'nfs-storage'
  multicluster:
    clusterRole: none
  network:
    ippool:
      type: calico
    networkpolicy:
      enabled: true
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  persistence:
    storageClass: 'nfs-storage'
  servicemesh:
    enabled: true
    istio:
      components:
        cni:
          enabled: true
        ingressGateways:
          - enabled: true
            name: istio-ingressgateway
  terminal:
    timeout: 600
  zone: cn


  • 5. Run the following command with kubectl to check the installation progress:

[root@k8s-master01 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

You can also watch the progress from the web console's kubectl terminal instead of an SSH client such as Xshell.


  • 6. You can also refresh the System Components page on the platform to check whether the installation has finished.

  • 7. Component installation complete.


4. Add and Remove Nodes

Adding nodes: see the official documentation.
Removing nodes: see the official documentation.

4.1 Retrieve cluster information with KubeKey

The following command creates the configuration file (sample.yaml):

[root@k8s-master01 ~]# ./kk create config --from-cluster
Notice: /root/sample.yaml has been created. Some parameters need to be filled in by yourself, please complete it.
[root@k8s-master01 ~]# ll
-rw-r--r--.  1 root root       1037 Aug  2 16:59 sample.yaml

4.2 Add new nodes

  1. Here I add a new node, k8s-node3:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts: 
  ##You should complete the ssh information of the hosts
  - {name: k8s-master01, address: 10.0.0.31, internalAddress: 10.0.0.31 , user: root, password: "Qcloud@123"}
  - {name: k8s-node1, address: 10.0.0.33, internalAddress: 10.0.0.33 , user: root, password: "Qcloud@123"}
  - {name: k8s-node2, address: 10.0.0.34, internalAddress: 10.0.0.34 , user: root, password: "Qcloud@123"}
  - {name: k8s-node3, address: 10.0.0.32, internalAddress: 10.0.0.32 , user: root, password: "Qcloud@123"}
  roleGroups:
    etcd:
    - k8s-master01
    master:
    - k8s-master01
    worker:
    - k8s-node1
    - k8s-node2
    - k8s-node3
  controlPlaneEndpoint:
    ##Internal loadbalancer for apiservers
    #internalLoadbalancer: haproxy

    ##If the external loadbalancer was used, 'address' should be set to loadbalancer's ip.
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.7
    clusterName: cluster.local
    proxyMode: ipvs
    masqueradeAll: false
    maxPods: 110
    nodeCidrMaskSize: 24
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    privateRegistry: ""

  2. Run the following commands:
[root@k8s-master01 ~]# export KKZONE=cn
[root@k8s-master01 ~]# ./kk add nodes -f sample.yaml


 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

17:29:28 CST [GreetingsModule] Greetings
17:29:29 CST message: [k8s-node3]
Greetings, KubeKey!
17:29:30 CST message: [k8s-node1]
Greetings, KubeKey!
17:29:30 CST message: [k8s-master01]
Greetings, KubeKey!
17:29:32 CST message: [k8s-node2]
Greetings, KubeKey!
17:29:32 CST success: [k8s-node3]
17:29:32 CST success: [k8s-node1]
17:29:32 CST success: [k8s-master01]
17:29:32 CST success: [k8s-node2]
17:29:32 CST [NodePreCheckModule] A pre-check on nodes
17:29:36 CST success: [k8s-master01]
17:29:36 CST success: [k8s-node3]
17:29:36 CST success: [k8s-node1]
17:29:36 CST success: [k8s-node2]
17:29:36 CST [ConfirmModule] Display confirmation form
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+
| name         | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker   | containerd | nfs client | ceph client | glusterfs client | time         |
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+
| k8s-master01 | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.17 | 1.6.6      | y          |             |                  | CST 17:29:33 |
| k8s-node1    | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.17 | 1.6.6      | y          |             |                  | CST 17:29:36 |
| k8s-node2    | y    | y    | y       | y        | y     | y     | y       | y         | y      | 20.10.17 | 1.6.6      | y          |             |                  | CST 17:29:36 |
| k8s-node3    | y    | y    | y       | y        | y     | y     | y       | y         | y      |          | 1.6.6      | y          |             |                  | CST 17:29:36 |
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+----------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
  3. Node added:
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   86m   v1.23.7
k8s-node1      Ready    worker                 86m   v1.23.7
k8s-node2      Ready    worker                 86m   v1.23.7
k8s-node3      Ready    worker                 74s   v1.23.7


4.3 Remove a node

1. Cordon the node in the console so it is unschedulable (the CLI equivalent is sketched below).
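The same can be done from the command line instead of the console (a hedged equivalent; the drain flags may need adjusting for your workloads):

# Mark the node unschedulable, then evict its pods before removal
kubectl cordon k8s-node3
kubectl drain k8s-node3 --ignore-daemonsets --delete-emptydir-data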

2. To delete a node, you first need the cluster configuration file (the one used when setting up the cluster). If you do not have it, use KubeKey to retrieve the cluster information (the file sample.yaml is created by default).


[root@k8s-master01 ~]# ./kk create config --from-cluster

3. Make sure all host information is provided in that configuration file, then run the following command to delete the node.

./kk delete node <nodeName> -f sample.yaml

[root@k8s-master01 ~]# ./kk delete node k8s-node3 -f sample.yaml


 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

17:43:13 CST [GreetingsModule] Greetings
17:43:19 CST message: [k8s-node3]
Greetings, KubeKey!
17:43:22 CST message: [k8s-node1]
Greetings, KubeKey!
17:43:24 CST message: [k8s-node2]
Greetings, KubeKey!
17:43:25 CST message: [k8s-master01]
Greetings, KubeKey!
17:43:25 CST success: [k8s-node3]
17:43:25 CST success: [k8s-node1]
17:43:25 CST success: [k8s-node2]
17:43:25 CST success: [k8s-master01]
17:43:25 CST [DeleteNodeConfirmModule] Display confirmation form
Are you sure to delete this node? [yes/no]: yes


4. The node has been removed:

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
k8s-master01   Ready    control-plane,master   107m   v1.23.7
k8s-node1      Ready    worker                 107m   v1.23.7
k8s-node2      Ready    worker                 107m   v1.23.7



5. Uninstall KubeSphere and Kubernetes

5.1 Uninstall KubeSphere

[root@k8s-master01 ~]# ./kk delete cluster -f config-sample.yaml


 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

15:58:28 CST [GreetingsModule] Greetings
15:58:28 CST message: [k8s-master01]
Greetings, KubeKey!
15:58:29 CST message: [k8s-node2]
Greetings, KubeKey!
15:58:29 CST message: [k8s-node1]
Greetings, KubeKey!
15:58:29 CST success: [k8s-master01]
15:58:29 CST success: [k8s-node2]
15:58:29 CST success: [k8s-node1]
15:58:29 CST [DeleteClusterConfirmModule] Display confirmation form
Are you sure to delete this cluster? [yes/no]: yes

....................
....................
....................
....................

10:12:39 CST stdout: [k8s-master01]
Cannot find device "nodelocaldns"
10:12:39 CST stdout: [k8s-node2]
Cannot find device "nodelocaldns"
10:12:39 CST stdout: [k8s-node1]
Cannot find device "nodelocaldns"
10:12:39 CST success: [k8s-master01]
10:12:39 CST success: [k8s-node2]
10:12:39 CST success: [k8s-node1]
10:12:39 CST [ClearOSModule] Uninstall etcd
10:12:40 CST success: [k8s-master01]
10:12:40 CST [ClearOSModule] Remove cluster files
10:12:42 CST success: [k8s-master01]
10:12:42 CST success: [k8s-node1]
10:12:42 CST success: [k8s-node2]
10:12:42 CST [ClearOSModule] Systemd daemon reload
10:12:58 CST success: [k8s-master01]
10:12:58 CST success: [k8s-node2]
10:12:58 CST success: [k8s-node1]
10:12:58 CST [UninstallAutoRenewCertsModule] UnInstall auto renew control-plane certs
10:12:58 CST success: [k8s-master01]
10:12:58 CST Pipeline[DeleteClusterPipeline] execute successfully

5.2 Uninstall and clean up Kubernetes

Run the following on every node:
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
rm -rf kubekey
umount /nfs/data
rm -rf /nfs      # this deletes the PVC data stored on the NFS share
yum -y remove kubectl* kubelet* docker*
reboot

[root@k8s-master01 ~]# modprobe -r ipip
[root@k8s-master01 ~]# lsmod
[root@k8s-master01 ~]# rm -rf ~/.kube/
[root@k8s-master01 ~]# rm -rf /etc/kubernetes/
[root@k8s-master01 ~]# rm -rf /etc/systemd/system/kubelet.service.d
[root@k8s-master01 ~]# rm -rf /etc/systemd/system/kubelet.service
[root@k8s-master01 ~]# rm -rf /usr/bin/kube*
[root@k8s-master01 ~]# rm -rf /etc/cni
[root@k8s-master01 ~]# rm -rf /opt/cni
[root@k8s-master01 ~]# rm -rf /var/lib/etcd
[root@k8s-master01 ~]# rm -rf /var/etcd
[root@k8s-master01 ~]# rm -rf kubekey
[root@k8s-master01 ~]# umount /nfs/data
[root@k8s-master01 ~]# rm -rf /nfs
[root@k8s-master01 ~]#  yum -y remove  kubectl* kubelet* docker*
[root@k8s-master01 ~]# reboot

6. Create an App Repository

Select a workspace and add an app repository.

Add the official Bitnami repository:

Name: bitnami
URL: https://charts.bitnami.com/bitnami
Sync interval: 24 hours

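For reference, the same repository can also be added from the command line with Helm (a hedged aside, assuming Helm is installed; the article itself uses the KubeSphere console):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update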


7. Key Supplement

Important: create the default role permissions. This step is easy to miss, so do not skip it.


[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/release-3.3/roles/ks-core/prepare/files/ks-init/role-templates.yaml


8. Summary

That wraps up setting up the KubeSphere 3.3.0 management platform. This article took real effort to put together, so don't forget to like, follow, and bookmark. See you next time...


9. Other Articles in This Series

Chapter 1: KubeSphere 3.3.0 + Seata 1.5.2 + Nacos 2.1.0 (Nacos cluster mode)
Chapter 2: KubeSphere 3.3.0 + Nacos 2.1.0 (cluster deployment)
Chapter 3: KubeSphere 3.3.0 + Sentinel 1.8.4 + Nacos 2.1.0 cluster deployment
Chapter 4: KubeSphere 3.3.0 + Redis 7.0.4 + Redis-Cluster cluster deployment
Chapter 5: KubeSphere 3.3.0 + MySQL 8.0.25 cluster deployment
Chapter 6: KubeSphere 3.3.0 installation and deployment + creating a cluster with KubeKey 2.2.1 (kk)
Chapter 7: KubeSphere 3.3.0 + MySQL 8.0 single-node deployment
Chapter 8: KubeSphere 3.3.0 + Redis 7.0.4 single-node deployment
Chapter 9: KubeSphere 3.3.0 + Nacos 2.1.0 single-node deployment
Chapter 10: KubeSphere 3.3.0 + FastDFS 6.0.8 deployment
Chapter 11: KubeSphere 3.4.1 + MinIO 2024.3.15 deployment
