Notes on a recurring ks-apiserver startup failure in Kubernetes: invalid memory address or nil pointer dereference

The error output was as follows:

[root@master1 ~]# kubectl logs -f --tail 200 -n kubesphere-system   ks-apiserver-5db774f4f-vcmv5
W0821 00:57:54.169749       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0821 00:57:54.172005       1 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
W0821 00:57:54.179111       1 options.go:191] ks-apiserver starts without redis provided, it will use in memory cache. This may cause inconsistencies when running ks-apiserver with multiple replicas.
I0821 00:57:54.179199       1 interface.go:50] start helm repo informer
I0821 00:57:54.501010       1 apiserver.go:417] Start cache objects
E0821 00:57:55.074410       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 1042 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x35ef560, 0x5d6c200)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x35ef560, 0x5d6c200)
        /usr/local/go/src/runtime/panic.go:965 +0x1b9
kubesphere.io/kubesphere/pkg/simple/client/openpitrix/helmrepoindex.HelmVersionWrapper.GetName(...)
        /workspace/pkg/simple/client/openpitrix/helmrepoindex/load_package.go:48
kubesphere.io/kubesphere/pkg/utils/reposcache.(*cachedRepos).addRepo(0xc00069eaa0, 0xc0045bb300, 0x0, 0xc000e80420, 0x7f856c71f500)
        /workspace/pkg/utils/reposcache/repo_cahes.go:290 +0x47d
kubesphere.io/kubesphere/pkg/utils/reposcache.(*cachedRepos).AddRepo(0xc00069eaa0, 0xc0045bb300, 0x0, 0x0)
        /workspace/pkg/utils/reposcache/repo_cahes.go:212 +0x79
kubesphere.io/kubesphere/pkg/models/openpitrix.NewOpenpitrixOperator.func1(0x3b102a0, 0xc0045bb300)
        /workspace/pkg/models/openpitrix/interface.go:56 +0x4a
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
        /workspace/vendor/k8s.io/client-go/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
        /workspace/vendor/k8s.io/client-go/tools/cache/shared_informer.go:777 +0xc2
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc001ab1f60)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e65f60, 0x4164f00, 0xc002b4a000, 0x3554001, 0xc001620180)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001ab1f60, 0x3b9aca00, 0x0, 0xc003500001, 0xc001620180)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc000250480)
        /workspace/vendor/k8s.io/client-go/tools/cache/shared_informer.go:771 +0x95
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000fb4d80, 0xc003500030)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x215135d]

goroutine 1042 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109
panic(0x35ef560, 0x5d6c200)
        /usr/local/go/src/runtime/panic.go:965 +0x1b9
kubesphere.io/kubesphere/pkg/simple/client/openpitrix/helmrepoindex.HelmVersionWrapper.GetName(...)
        /workspace/pkg/simple/client/openpitrix/helmrepoindex/load_package.go:48
kubesphere.io/kubesphere/pkg/utils/reposcache.(*cachedRepos).addRepo(0xc00069eaa0, 0xc0045bb300, 0x0, 0xc000e80420, 0x7f856c71f500)
        /workspace/pkg/utils/reposcache/repo_cahes.go:290 +0x47d
kubesphere.io/kubesphere/pkg/utils/reposcache.(*cachedRepos).AddRepo(0xc00069eaa0, 0xc0045bb300, 0x0, 0x0)
        /workspace/pkg/utils/reposcache/repo_cahes.go:212 +0x79
kubesphere.io/kubesphere/pkg/models/openpitrix.NewOpenpitrixOperator.func1(0x3b102a0, 0xc0045bb300)
        /workspace/pkg/models/openpitrix/interface.go:56 +0x4a
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
        /workspace/vendor/k8s.io/client-go/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
        /workspace/vendor/k8s.io/client-go/tools/cache/shared_informer.go:777 +0xc2
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc001ab1f60)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e65f60, 0x4164f00, 0xc002b4a000, 0x3554001, 0xc001620180)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001ab1f60, 0x3b9aca00, 0x0, 0xc003500001, 0xc001620180)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc000250480)
        /workspace/vendor/k8s.io/client-go/tools/cache/shared_informer.go:771 +0x95
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000fb4d80, 0xc003500030)
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
        /workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65

The apiserver would not start, with no obvious cause. After digging through a lot of material, the root cause turned out to be a broken Helm repository source.

Solution:

Open a terminal on the master node and list all Helm repositories:
kubectl get helmrepos.application.kubesphere.io
(see the screenshot below)
Delete the broken repository:
kubectl delete helmrepos.application.kubesphere.io xxx (the name of the bitnami repo)
Then delete the ks-apiserver pod to trigger a rebuild. After that, the apiserver came up successfully.
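The cleanup above can be scripted. The sketch below is a non-authoritative helper: it assumes the `kubectl get helmrepos.application.kubesphere.io` table prints the repository name in the first column and the sync state in the last column, and that the pods carry an `app=ks-apiserver` label; both are assumptions, so verify them against your cluster before uncommenting the destructive lines.

```shell
#!/usr/bin/env bash
# Sketch only: the column layout of the helmrepos table and the
# app=ks-apiserver pod label are assumptions -- verify before running.
set -euo pipefail

# Print the names of repos whose last column reads "failed".
# Expects the `kubectl get helmrepos...` table on stdin.
failed_repos() {
  awk 'NR > 1 && $NF == "failed" { print $1 }'
}

# Uncomment to actually delete the failed repos and restart ks-apiserver:
# kubectl get helmrepos.application.kubesphere.io | failed_repos \
#   | xargs -r kubectl delete helmrepos.application.kubesphere.io
# kubectl delete pod -n kubesphere-system -l app=ks-apiserver
```

With the kubectl lines commented out, the script only defines the parsing helper, so it can be tested safely against a sample of your own `kubectl get` output first.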

As shown below, this is the repository list after my cleanup (before cleanup, two repositories were in the "failed" state):
[screenshot: helm repository list after cleanup]
