Building a K8s Cluster with Orange Pi 4 and Raspberry Pi 4B, Part 3: KubeSphere

This article walks through installing KubeSphere on a Kubernetes cluster, using NFS as the backend for a dynamic storage provisioner. It covers installing the NFS service, configuring the export directory, installing the nfs-subdir-external-provisioner plugin, handling the problems and errors that may come up, and provides install/uninstall scripts and a command reference.

Contents

1. Overview

2. Pre-installation preparation

2.1 Install the NFS service, using /data0/nfs as the export directory

2.2 Install the dynamic-provisioning storage plugin (SC), using NFS as the backend

3. Installation

3.1 Steps

3.2 Uninstall script

4. Problems encountered

4.1 Tips

5. Useful commands

6. References


1. Overview

KubeSphere is an application-centric, multi-tenant container platform built on top of Kubernetes, providing full-stack IT automation and operations capabilities (see the official site). Its main features include: multi-cloud and multi-cluster management, Kubernetes resource management, DevOps, application lifecycle management, microservice governance (service mesh), log query and collection, services and networking, multi-tenant management, monitoring and alerting, event and audit queries, storage management, access control, GPU support, network policies, image registry management, and security management.

2. Pre-installation preparation

Before installing, check whether the cluster already has a StorageClass:
kubectl get sc
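If the output is empty or no entry is marked (default), the ks-installer will later stop with "Stopping if default StorageClass was not found" (see section 4), so set one up first (sections 2.1 and 2.2). A quick check, as a sketch (nfs-client is the class name created in section 2.2):

kubectl get sc                                           # the default class is marked "(default)"
kubectl describe sc nfs-client | grep is-default-class   # expect ...is-default-class=true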

2.1 Install the NFS service, using /data0/nfs as the export directory

# Install the NFS server
apt install nfs-kernel-server
# Install the NFS client tools (required on every node)
sudo apt install nfs-common

# NFS server configuration options live in /etc/default/nfs-kernel-server and /etc/default/nfs-common
mkdir -p /data0/nfs
echo "/data0/nfs *(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports

# Start the service and enable it at boot
systemctl start nfs-server
systemctl enable nfs-server.service

NFS-related commands:
sudo showmount -e localhost #list the directories exported by the local NFS server
sudo exportfs -rv #re-export all entries in /etc/exports (apply changes), verbose output
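Before moving on, it is worth confirming that the export is reachable from the worker nodes. A minimal sketch, assuming the NFS server runs on the master node at 192.168.0.106 (as used throughout this article):

# On any other node: check the export list, do a test mount, write a file, unmount
sudo showmount -e 192.168.0.106            # should list /data0/nfs
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 192.168.0.106:/data0/nfs /mnt/nfs-test
echo ok | sudo tee /mnt/nfs-test/test.txt
sudo umount /mnt/nfs-test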

2.2 Install the dynamic-provisioning storage plugin (SC), using NFS as the backend

Heads-up: most installation failures here are caused by the default image registries being unreachable from inside mainland China, and working around that can easily eat a third of your time and effort. Use a proxy or an alternative image source, otherwise you will keep hitting ImagePullBackOff errors.

Add the Helm repository:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner

helm repo update

First try whether this install command succeeds as-is; if not, pull the chart down and edit it:

helm upgrade --install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set storageClass.name=nfs-client \
    --set nfs.server=192.168.0.106 \
    --set nfs.path=/data0/nfs \
    --set image.repository=dyrnq/nfs-subdir-external-provisioner \
    --namespace default

If it still fails, use the following approach:

# Pull the chart and untar it
helm pull nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --untar

vi values.yaml
'''
image:
  #repository: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner
  repository: dyrnq/nfs-subdir-external-provisioner
  tag: v4.0.2
  pullPolicy: IfNotPresent
imagePullSecrets: []

nfs:
  server: 192.168.0.106
  path: /data0/nfs
  ...
'''

helm install nfs-subdir-external-provisioner ./nfs-subdir-external-provisioner

The image dyrnq/nfs-subdir-external-provisioner was found on hub.docker.com. Oddly, searching for "nfs-subdir-external-provisioner" there returns nothing (the keyword may be restricted); search for "nfs-subdir-external" instead and plenty of mirrors show up. All of the above are pitfalls to watch out for.
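Before installing KubeSphere it is also worth verifying that dynamic provisioning really works. A sketch that creates a throw-away PVC against the nfs-client StorageClass (the PVC name is arbitrary) and checks that it becomes Bound:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  storageClassName: nfs-client
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc nfs-test-pvc    # STATUS should become Bound, and a directory appears under /data0/nfs
kubectl delete pvc nfs-test-pvc # clean up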

3. Installation

3.1 Steps

To be safe, first check that the ks-installer image can be pulled (this also pre-caches it):

ctr images pull docker.io/kubesphere/ks-installer:v3.3.2

Go to Release v3.3.2 · kubesphere/ks-installer · GitHub, download the YAML files into a kubesphere directory, then run the following commands in order:

kubectl apply -f ./kubesphere-installer.yaml
kubectl apply -f ./cluster-configuration.yaml

# Then wait patiently for the related services and pods to be created automatically
# Follow the installation with the command below; if something goes wrong, run the uninstall script before installing again

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

# Result:
localhost                  : ok=30   changed=22   unreachable=0    failed=0    skipped=17   rescued=0    ignored=0
Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
task network status is successful  (1/4)
task openpitrix status is successful  (2/4)
task multicluster status is successful  (3/4)

Use kubectl get pod --all-namespaces to check whether all pods in the KubeSphere-related namespaces are running normally. If they are, check the console port (30880 by default) with the following command:

orangepi@k8s-master-1:/k8s_apps/kubesphere$ kubectl get svc/ks-console -n kubesphere-system
NAME         TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
ks-console   NodePort   10.107.5.8   <none>        80:30880/TCP   62m

A healthy deployment looks like this:

kubectl get ingress,services,pods,sc -A -owide
NAMESPACE   NAME                                        CLASS   HOSTS          ADDRESS   PORTS   AGE
default     ingress.networking.k8s.io/ia-web-service1   nginx   *.k8s-t1.com             80      11d

NAMESPACE                      NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE     SELECTOR
default                        service/kubernetes                           ClusterIP      10.96.0.1        <none>        443/TCP                        17d     <none>
default                        service/nginx                                NodePort       10.97.211.51     <none>        31080:30080/TCP                13d     app=nginx
ingress-nginx                  service/ingress-nginx-controller             LoadBalancer   10.104.65.8      <pending>     80:30030/TCP,443:31858/TCP     11d     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx                  service/ingress-nginx-controller-admission   ClusterIP      10.109.190.198   <none>        443/TCP                        11d     app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system                    service/kube-dns                             ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP         17d     k8s-app=kube-dns
kube-system                    service/kubelet                              ClusterIP      None             <none>        10250/TCP,10255/TCP,4194/TCP   9d      <none>
kubesphere-controls-system     service/default-http-backend                 ClusterIP      10.110.248.89    <none>        80/TCP                         5h12m   app=kubesphere,component=kubesphere-router
kubesphere-monitoring-system   service/kube-state-metrics                   ClusterIP      None             <none>        8443/TCP,9443/TCP              5h2m    app.kubernetes.io/component=exporter,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=kube-prometheus
kubesphere-monitoring-system   service/node-exporter                        ClusterIP      None             <none>        9100/TCP                       5h2m    app.kubernetes.io/component=exporter,app.kubernetes.io/name=node-exporter,app.kubernetes.io/part-of=kube-prometheus
kubesphere-monitoring-system   service/prometheus-k8s                       ClusterIP      10.101.180.101   <none>        9090/TCP,8080/TCP              5h2m    app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=k8s,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=kube-prometheus
kubesphere-monitoring-system   service/prometheus-operated                  ClusterIP      None             <none>        9090/TCP                       5h2m    app.kubernetes.io/name=prometheus
kubesphere-monitoring-system   service/prometheus-operator                  ClusterIP      None             <none>        8443/TCP                       5h3m    app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=kube-prometheus
kubesphere-system              service/ks-apiserver                         ClusterIP      10.96.242.29     <none>        80/TCP                         5h12m   app=ks-apiserver,tier=backend
kubesphere-system              service/ks-console                           NodePort       10.107.5.8       <none>        80:30880/TCP                   5h12m   app=ks-console,tier=frontend
kubesphere-system              service/ks-controller-manager                ClusterIP      10.100.123.130   <none>        443/TCP                        5h12m   app=ks-controller-manager,tier=backend

NAMESPACE                      NAME                                                   READY   STATUS    RESTARTS      AGE     IP              NODE           NOMINATED NODE   READINESS GATES
default                        pod/nfs-subdir-external-provisioner-74dd47b4bd-9vkrt   1/1     Running   3 (8h ago)    9d      10.244.2.76     k8s-node-1     <none>           <none>
default                        pod/nginx-77b4fdf86c-fwqjk                             1/1     Running   6 (9d ago)    14d     10.244.0.35     k8s-master-1   <none>           <none>
default                        pod/nginx-77b4fdf86c-rwzn7                             1/1     Running   6 (9d ago)    14d     10.244.2.78     k8s-node-1     <none>           <none>
ingress-nginx                  pod/ingress-nginx-controller-rbdnd                     1/1     Running   3             11d     192.168.0.106   k8s-master-1   <none>           <none>
kube-flannel                   pod/kube-flannel-ds-fkq8h                              1/1     Running   13 (9d ago)   17d     192.168.0.106   k8s-master-1   <none>           <none>
kube-flannel                   pod/kube-flannel-ds-g4csj                              1/1     Running   28 (9d ago)   15d     192.168.0.104   k8s-node-1     <none>           <none>
kube-system                    pod/coredns-7bdc4cb885-mfnvv                           1/1     Running   12 (9d ago)   17d     10.244.0.37     k8s-master-1   <none>           <none>
kube-system                    pod/coredns-7bdc4cb885-rz7zk                           1/1     Running   12 (9d ago)   17d     10.244.0.36     k8s-master-1   <none>           <none>
kube-system                    pod/etcd-k8s-master-1                                  1/1     Running   14 (9d ago)   17d     192.168.0.106   k8s-master-1   <none>           <none>
kube-system                    pod/kube-apiserver-k8s-master-1                        1/1     Running   14 (9d ago)   17d     192.168.0.106   k8s-master-1   <none>           <none>
kube-system                    pod/kube-controller-manager-k8s-master-1               1/1     Running   16 (8h ago)   17d     192.168.0.106   k8s-master-1   <none>           <none>
kube-system                    pod/kube-proxy-jqlw4                                   1/1     Running   13 (9d ago)   17d     192.168.0.106   k8s-master-1   <none>           <none>
kube-system                    pod/kube-proxy-nxr68                                   1/1     Running   24 (9d ago)   15d     192.168.0.104   k8s-node-1     <none>           <none>
kube-system                    pod/kube-scheduler-k8s-master-1                        1/1     Running   16 (8h ago)   17d     192.168.0.106   k8s-master-1   <none>           <none>
kube-system                    pod/snapshot-controller-0                              1/1     Running   0             5h19m   10.244.2.89     k8s-node-1     <none>           <none>
kubesphere-controls-system     pod/default-http-backend-685f67d874-zq8tx              1/1     Running   0             33s     10.244.2.97     k8s-node-1     <none>           <none>
kubesphere-controls-system     pod/k8s-node-1-shell-access                            1/1     Running   0             4m31s   192.168.0.104   k8s-node-1     <none>           <none>
kubesphere-controls-system     pod/kubectl-admin-685dbc7f7b-2m9bw                     1/1     Running   0             4h58m   10.244.2.95     k8s-node-1     <none>           <none>
kubesphere-monitoring-system   pod/kube-state-metrics-5895cbdfb4-qrb2t                3/3     Running   0             5h2m    10.244.0.40     k8s-master-1   <none>           <none>
kubesphere-monitoring-system   pod/node-exporter-knkzd                                2/2     Running   0             5h2m    192.168.0.106   k8s-master-1   <none>           <none>
kubesphere-monitoring-system   pod/node-exporter-pnb4n                                2/2     Running   0             5h2m    192.168.0.104   k8s-node-1     <none>           <none>
kubesphere-monitoring-system   pod/prometheus-k8s-0                                   2/2     Running   0             5h2m    10.244.0.41     k8s-master-1   <none>           <none>
kubesphere-monitoring-system   pod/prometheus-operator-56f97cc849-69mhm               2/2     Running   0             5h2m    10.244.2.92     k8s-node-1     <none>           <none>
kubesphere-system              pod/ks-apiserver-594575c468-pt8dw                      1/1     Running   0             5h12m   10.244.2.94     k8s-node-1     <none>           <none>
kubesphere-system              pod/ks-console-67dc85f5d-4tjbk                         1/1     Running   0             5h12m   10.244.2.90     k8s-node-1     <none>           <none>
kubesphere-system              pod/ks-controller-manager-6fd4c656bc-qpxlx             1/1     Running   0             5h12m   10.244.2.93     k8s-node-1     <none>           <none>
kubesphere-system              pod/ks-installer-855b986fdb-tf2k7                      1/1     Running   0             5h21m   10.244.2.88     k8s-node-1     <none>           <none>

NAMESPACE   NAME                                               PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
            storageclass.storage.k8s.io/nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   9d

As shown above, the console is exposed as a NodePort on 30880 and can be reached via the nodes' LAN IPs.

Make sure port 30880 is open in your security group or firewall, then access the web console via the NodePort (IP:30880) with the default account and password (admin/P@88w0rd).
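On bare-metal boards like these there is usually no cloud security group; if a host firewall is enabled on the nodes, open the port there instead. A sketch, assuming ufw on an Ubuntu/Debian-based image:

# Allow the console NodePort through the host firewall (only needed if ufw is active)
sudo ufw allow 30880/tcp
sudo ufw status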

In the admin console, if a pod is misbehaving you can edit its YAML there and recreate it directly, which is very convenient.

3.2 Uninstall script:

 https://github.com/kubesphere/ks-installer/blob/master/scripts/kubesphere-delete.sh

4. Problems encountered

"KubeSphere | Stopping if default StorageClass was not found"

Solution: uninstall the deployment, install a StorageClass (see section 2.2), and mark it as the default:
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
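After patching, a quick check (sketch) confirms that the class is now the default before retrying the installation:

kubectl get sc nfs-client    # the name should now be followed by "(default)"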
 

Another problem: a pod using the defaultbackend-amd64 image keeps crash-looping; kubectl describe shows:

    Image:          mirrorgooglecontainers/defaultbackend-amd64:1.4
    Image ID:       docker.io/mirrorgooglecontainers/defaultbackend-amd64@sha256:05cb942c5ff93ebb6c63d48737cd39d4fa1c6fa9dc7a4d53b2709f5b3c8333e8
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error

Solution:

Edit kubesphere-installer.yaml:

image: kubesphere/ks-installer:v3.3.2

Pull the image manually with the ctr command (see section 3.1).

Edit cluster-configuration.yaml:

endpointIps: localhost
=>
endpointIps: 192.168.0.106
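endpointIps sits in the etcd section of cluster-configuration.yaml (used by the installer for etcd monitoring). After editing, a quick sketch to confirm the change and re-apply the configuration:

grep -n 'endpointIps' cluster-configuration.yaml   # expect: endpointIps: 192.168.0.106
kubectl apply -f ./cluster-configuration.yaml      # let ks-installer reconcile the new value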
service \"ks-controller-manager\" not found"

Solution:

Download https://github.com/kubesphere/ks-installer/blob/master/scripts/kubesphere-delete.sh to the master node, run it to tear the deployment down, then reinstall.

(involved resources: kubesphere-controls-system, ks-controller-manager)

exec /server: exec format error

Cause: the default image targets the amd64 architecture, while arm64 is needed here.

Solution:

On https://hub.docker.com/r/kubesphere/ks-installer/tags, find the digest of the arm64 variant for your version (e.g. sha256:43d9edaab0bf66703c75fb85dc650da7557c62b2a8dce0161ddf73980fa2f31a), then modify kubesphere-installer.yaml:

image: kubesphere/ks-installer:v3.3.2@sha256:43d9edaab0bf66703c75fb85dc650da7557c62b2a8dce0161ddf73980fa2f31a
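To see which architectures a tag actually provides before pinning a digest, inspect its manifest list; a sketch, assuming docker is available on some machine (with containerd on the nodes, ctr can pull the arm64 variant explicitly):

# List the platforms published for the tag (look for "architecture": "arm64")
docker manifest inspect kubesphere/ks-installer:v3.3.2 | grep -B1 -A1 architecture
# With containerd, pull the arm64 variant explicitly
ctr images pull --platform linux/arm64 docker.io/kubesphere/ks-installer:v3.3.2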

4.1 Tips:

If some pods fail to start because of an architecture mismatch, don't panic: open the KubeSphere console, edit the workload's YAML and recreate it there, or do the same from the command line as sketched below.
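A sketch of the command-line route, where the namespace, deployment and container names are placeholders and the replacement image is assumed to have an arm64 build:

# Swap the failing workload's image for an arm64-capable one (all names are placeholders)
kubectl -n some-namespace set image deployment/some-deploy some-container=some-registry/some-image:arm64-tag
# Or keep the workload on arm64 nodes only
kubectl -n some-namespace patch deployment some-deploy \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"arm64"}}}}}'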

5. Useful commands

kubectl apply -f ./kubesphere-installer.yaml
kubectl apply -f ./cluster-configuration.yaml

kubectl delete -f ./cluster-configuration.yaml
kubectl delete -f ./kubesphere-installer.yaml
kubectl delete all  --all -n kubesphere-controls-system
kubectl delete all  --all -n kubesphere-monitoring-system

# Check installation progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

6. References

Managing your k8s cluster with KubeSphere (Root567, CSDN blog)

Minimal KubeSphere installation on Kubernetes (KubeSphere documentation)

Deploying Harbor on Kubernetes (赵承胜, 51CTO blog)

Installing KubeSphere on Kunpeng arm64 (beyond阿亮, CSDN blog)
