Problems encountered installing KubeSphere on k8s

Background

This post only covers the problems I ran into during installation; for the detailed installation steps, just follow the official documentation: https://kubesphere.io/zh/docs/quick-start/minimal-kubesphere-on-k8s/

StorageClass

After deploying the nfs-client-provisioner for the StorageClass, its logs showed an error:

[root@emr-header-1 nfs]# kubectl logs nfs-client-provisioner-6645cb5596-z4nlm
I0809 02:24:57.206834       1 leaderelection.go:185] attempting to acquire leader lease  default/qgg-nfs-storage...
E0809 02:25:14.631454       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"qgg-nfs-storage", GenerateName:"", Namespace:"default", SelfLink:"", UID:"09a3b2f7-215b-4033-ab05-33cf738085f9", ResourceVersion:"2435701", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764022967, loc:(*time.Location)(0x1956800)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"nfs-client-provisioner-6645cb5596-z4nlm_fd49a591-f8b8-11eb-b433-e605423bd389\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-09T02:25:14Z\",\"renewTime\":\"2021-08-09T02:25:14Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'nfs-client-provisioner-6645cb5596-z4nlm_fd49a591-f8b8-11eb-b433-e605423bd389 became leader'
I0809 02:25:14.631567       1 leaderelection.go:194] successfully acquired lease default/qgg-nfs-storage
I0809 02:25:14.631640       1 controller.go:631] Starting provisioner controller qgg-nfs-storage_nfs-client-provisioner-6645cb5596-z4nlm_fd49a591-f8b8-11eb-b433-e605423bd389!
I0809 02:25:14.731830       1 controller.go:680] Started provisioner controller qgg-nfs-storage_nfs-client-provisioner-6645cb5596-z4nlm_fd49a591-f8b8-11eb-b433-e605423bd389!
I0809 02:25:14.731943       1 controller.go:987] provision "default/test-claim" class "managed-nfs-storage": started
E0809 02:25:14.734594       1 controller.go:1004] provision "default/test-claim" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

Kubernetes 1.20 disabled selfLink, which is why provisioning fails with this error.
Since my Kubernetes cluster was installed from binaries, I could edit the kube-apiserver startup flags directly and add --feature-gates=RemoveSelfLink=false.
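A minimal sketch of where the flag goes, assuming the flags sit directly in the systemd unit file (the paths below are illustrative; adjust them to your install):

# /usr/lib/systemd/system/kube-apiserver.service  (path varies by install)
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --feature-gates=RemoveSelfLink=false \
  ...                                   # keep all existing flags unchanged

Note that the RemoveSelfLink feature gate was removed entirely in Kubernetes 1.24, so this is a stopgap; the longer-term fix is switching to the newer nfs-subdir-external-provisioner image, which no longer depends on selfLink.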

Then reload systemd and restart kube-apiserver.service:

systemctl daemon-reload && systemctl restart kube-apiserver

A small side note here: I used NFS as the backend for the StorageClass, so nfs-utils has to be installed on every node; otherwise you will run into mount errors.
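Installing the NFS client tools is a one-liner on each node; a sketch assuming CentOS/RHEL hosts (on Debian/Ubuntu the package is nfs-common):

yum install -y nfs-utils    # run on every node, not just the NFS server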

Conflict with an existing Prometheus Operator monitoring k8s

While installing KubeSphere, I tailed the installer logs and hit the next problem:

[root@emr-header-1 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
2021-08-09T11:08:35+08:00 INFO     : shell-operator latest
2021-08-09T11:08:35+08:00 INFO     : HTTP SERVER Listening on 0.0.0.0:9115
2021-08-09T11:08:35+08:00 INFO     : Use temporary dir: /tmp/shell-operator
2021-08-09T11:08:35+08:00 INFO     : Initialize hooks manager ...
2021-08-09T11:08:35+08:00 INFO     : Search and load hooks ...
2021-08-09T11:08:35+08:00 INFO     : Load hook config from '/hooks/kubesphere/installRunner.py'
2021-08-09T11:08:36+08:00 INFO     : Load hook config from '/hooks/kubesphere/schedule.sh'
2021-08-09T11:08:36+08:00 INFO     : Initializing schedule manager ...
2021-08-09T11:08:36+08:00 INFO     : KUBE Init Kubernetes client
2021-08-09T11:08:36+08:00 INFO     : KUBE-INIT Kubernetes client is configured successfully
2021-08-09T11:08:36+08:00 INFO     : MAIN: run main loop
2021-08-09T11:08:36+08:00 INFO     : MAIN: add onStartup tasks
2021-08-09T11:08:36+08:00 INFO     : MSTOR Create new metric shell_operator_live_ticks
2021-08-09T11:08:36+08:00 INFO     : MSTOR Create new metric shell_operator_tasks_queue_length
2021-08-09T11:08:36+08:00 INFO     : QUEUE add all HookRun@OnStartup
2021-08-09T11:08:36+08:00 INFO     : Running schedule manager ...
2021-08-09T11:08:36+08:00 ERROR    : error getting GVR for kind 'ClusterConfiguration': unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
2021-08-09T11:08:36+08:00 ERROR    : Enable kube events for hooks error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
2021-08-09T11:08:39+08:00 INFO     : TASK_RUN Exit: program halts.
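The "metrics.k8s.io/v1beta1: the server is currently unable to handle the request" part usually means there is an aggregated APIService whose backing service is unavailable; in my case it was presumably left behind by the pre-existing monitoring stack. A sketch of tracking such a leftover down (delete it only if whatever registered it is really gone):

kubectl get apiservice | grep False                     # aggregated APIs whose backends are down
kubectl get apiservice v1beta1.metrics.k8s.io -o yaml   # see which service is supposed to back it
kubectl delete apiservice v1beta1.metrics.k8s.io        # only if its backend no longer exists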

So I deleted the entire Prometheus Operator stack, and then re-ran the KubeSphere installation.
This time nothing looked wrong, so I continued with the next step:

[root@emr-header-1 ~]# kubectl apply -f cluster-configuration.yaml 
clusterconfiguration.installer.kubesphere.io/ks-installer created

Then another error appeared:

[root@emr-header-1 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
2021-08-09T11:14:10+08:00 INFO     : shell-operator latest
2021-08-09T11:14:10+08:00 INFO     : HTTP SERVER Listening on 0.0.0.0:9115
2021-08-09T11:14:10+08:00 INFO     : Use temporary dir: /tmp/shell-operator
2021-08-09T11:14:10+08:00 INFO     : Initialize hooks manager ...
2021-08-09T11:14:10+08:00 INFO     : Search and load hooks ...
2021-08-09T11:14:10+08:00 INFO     : Load hook config from '/hooks/kubesphere/installRunner.py'
2021-08-09T11:14:11+08:00 INFO     : Load hook config from '/hooks/kubesphere/schedule.sh'
2021-08-09T11:14:11+08:00 INFO     : Initializing schedule manager ...
2021-08-09T11:14:11+08:00 INFO     : KUBE Init Kubernetes client
2021-08-09T11:14:11+08:00 INFO     : KUBE-INIT Kubernetes client is configured successfully
2021-08-09T11:14:11+08:00 INFO     : MAIN: run main loop
2021-08-09T11:14:11+08:00 INFO     : MAIN: add onStartup tasks
2021-08-09T11:14:11+08:00 INFO     : QUEUE add all HookRun@OnStartup
2021-08-09T11:14:11+08:00 INFO     : MSTOR Create new metric shell_operator_live_ticks
2021-08-09T11:14:11+08:00 INFO     : MSTOR Create new metric shell_operator_tasks_queue_length
2021-08-09T11:14:11+08:00 INFO     : Running schedule manager ...
2021-08-09T11:14:11+08:00 INFO     : GVR for kind 'ClusterConfiguration' is installer.kubesphere.io/v1alpha1, Resource=clusterconfigurations
2021-08-09T11:15:29+08:00 INFO     : EVENT Kube event 'd6792aaa-5a68-4745-8d03-0474f5489c76'
2021-08-09T11:15:29+08:00 INFO     : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2021-08-09T11:15:29+08:00 INFO     : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2021-08-09T11:15:29+08:00 INFO     : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'

PLAY [localhost] ***************************************************************

TASK [download : include_tasks] ************************************************
skipping: [localhost]

TASK [download : Downloading items] ********************************************
skipping: [localhost]

TASK [download : Synchronizing container] **************************************
skipping: [localhost]

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] ***
skipping: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] *****************
ok: [localhost] => {
    "msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}

TASK [preinstall : KubeSphere | Checking Kubernetes version] *******************
changed: [localhost]

TASK [preinstall : KubeSphere | Initing Kubernetes version] ********************
ok: [localhost]

TASK [preinstall : KubeSphere | Stopping if Kubernetes version is nonsupport] ***
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [preinstall : KubeSphere | Checking StorageClass] *************************
changed: [localhost]

TASK [preinstall : KubeSphere | Stopping if StorageClass was not found] ********
skipping: [localhost]

TASK [preinstall : KubeSphere | Checking default StorageClass] *****************
changed: [localhost]

TASK [preinstall : KubeSphere | Stopping if default StorageClass was not found] ***
fatal: [localhost]: FAILED! => {
    "assertion": "\"(default)\" in default_storage_class_check.stdout",
    "changed": false,
    "evaluated_to": false,
    "msg": "Default StorageClass was not found !"
}

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=3    unreachable=0    failed=1    skipped=5    rescued=0    ignored=0   

Per the error, set a default StorageClass:

[root@emr-header-1 ~]# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/managed-nfs-storage patched
[root@emr-header-1 ~]# kubectl get sc
NAME                            PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default)   qgg-nfs-storage   Delete          Immediate           false                  33m
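Conversely, if some other StorageClass had already been marked as default, you would first unset it with the mirror-image patch (the class name here is hypothetical):

kubectl patch storageclass old-default-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'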

Then retry:

[root@emr-header-1 ~]# kubectl delete -f cluster-configuration.yaml 
clusterconfiguration.installer.kubesphere.io "ks-installer" deleted
[root@emr-header-1 ~]# kubectl apply -f cluster-configuration.yaml 
clusterconfiguration.installer.kubesphere.io/ks-installer created

Checking the logs again, there was yet another error:

Task 'monitoring' failed:
******************************************************************************************************************************************************
{
  "counter": 75,
  "created": "2021-08-09T03:24:48.025195",
  "end_line": 74,
  "event": "runner_on_failed",
  "event_data": {
    "duration": 1.516203,
    "end": "2021-08-09T03:24:48.024895",
    "event_loop": null,
    "host": "localhost",
    "ignore_errors": null,
    "play": "localhost",
    "play_pattern": "localhost",
    "play_uuid": "faad789b-75c9-b1f8-b094-000000000005",
    "playbook": "/kubesphere/playbooks/monitoring.yaml",
    "playbook_uuid": "f0824cc8-a8cc-487f-9922-73ece3ba017a",
    "remote_addr": "127.0.0.1",
    "res": {
      "_ansible_no_log": false,
      "changed": true,
      "cmd": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/node-exporter --force",
      "delta": "0:00:00.901010",
      "end": "2021-08-09 11:24:47.994531",
      "invocation": {
        "module_args": {
          "_raw_params": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/node-exporter --force",
          "_uses_shell": true,
          "argv": null,
          "chdir": null,
          "creates": null,
          "executable": null,
          "removes": null,
          "stdin": null,
          "stdin_add_newline": true,
          "strip_empty_ends": true,
          "warn": true
        }
      },
      "msg": "non-zero return code",
      "rc": 1,
      "start": "2021-08-09 11:24:47.093521",
      "stderr": "error: unable to recognize \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"",
      "stderr_lines": [
        "error: unable to recognize \"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\""
      ],
      "stdout": "clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter created\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter created\ndaemonset.apps/node-exporter created\nservice/node-exporter created\nserviceaccount/node-exporter created",
      "stdout_lines": [
        "clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter created",
        "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter created",
        "daemonset.apps/node-exporter created",
        "service/node-exporter created",
        "serviceaccount/node-exporter created"
      ]
    },
    "role": "ks-monitor",
    "start": "2021-08-09T03:24:46.508692",
    "task": "Monitoring | Installing node-exporter",
    "task_action": "command",
    "task_args": "",
    "task_path": "/kubesphere/installer/roles/ks-monitor/tasks/node-exporter.yaml:2",
    "task_uuid": "faad789b-75c9-b1f8-b094-000000000036",
    "uuid": "ba831a32-d1f8-43d3-a4e8-214b685ad3b8"
  },
  "parent_uuid": "faad789b-75c9-b1f8-b094-000000000036",
  "pid": 4551,
  "runner_ident": "monitoring",
  "start_line": 73,
  "stdout": "fatal: [localhost]: FAILED! => {\"changed\": true, \"cmd\": \"/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/node-exporter --force\", \"delta\": \"0:00:00.901010\", \"end\": \"2021-08-09 11:24:47.994531\", \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2021-08-09 11:24:47.093521\", \"stderr\": \"error: unable to recognize \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\\\": no matches for kind \\\"ServiceMonitor\\\" in version \\\"monitoring.coreos.com/v1\\\"\", \"stderr_lines\": [\"error: unable to recognize \\\"/kubesphere/kubesphere/prometheus/node-exporter/node-exporter-serviceMonitor.yaml\\\": no matches for kind \\\"ServiceMonitor\\\" in version \\\"monitoring.coreos.com/v1\\\"\"], \"stdout\": \"clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter created\\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter created\\ndaemonset.apps/node-exporter created\\nservice/node-exporter created\\nserviceaccount/node-exporter created\", \"stdout_lines\": [\"clusterrole.rbac.authorization.k8s.io/kubesphere-node-exporter created\", \"clusterrolebinding.rbac.authorization.k8s.io/kubesphere-node-exporter created\", \"daemonset.apps/node-exporter created\", \"service/node-exporter created\", \"serviceaccount/node-exporter created\"]}",
  "uuid": "ba831a32-d1f8-43d3-a4e8-214b685ad3b8"
}

From this output it is clearly a node_exporter problem. My blind guess was a conflict: node_exporter had previously been installed on these machines themselves, and the node-exporter DaemonSet presumably runs on the host network on the same default port (9100), so I simply removed node_exporter from every machine.
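A sketch of the cleanup, assuming the pre-existing node_exporter runs as a systemd service under that unit name and on the default port 9100 (both assumptions; adjust to however it was actually deployed):

ss -lntp | grep 9100                                              # confirm what is holding the node-exporter port
systemctl stop node_exporter && systemctl disable node_exporter   # on every machine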
Then I retried once more and the installation succeeded.