Problem: the K8s cluster was running short on resources, so I added a new node and restarted all services so the workload would spread evenly across the nodes. After that, kubectl top nodes returned: error: metrics not available yet.
After searching online, I found the cause was a misconfigured APIService; fixing it resolved the issue.
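Before touching the APIService, it is worth confirming that the metrics-server pods themselves are healthy. A quick check (assuming metrics-server is deployed in kube-system with the usual k8s-app=metrics-server label; adjust names to your install):

kubectl -n kube-system get deploy,pods -l k8s-app=metrics-server
kubectl -n kube-system logs deploy/metrics-server --tail=20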
Check the APIServices:
[root@master1 ~]# kubectl get APIService
NAME SERVICE AVAILABLE AGE
v1. Local True 131d
v1.admissionregistration.k8s.io Local True 131d
v1.apiextensions.k8s.io Local True 131d
v1.apps Local True 131d
v1.authentication.k8s.io Local True 131d
v1.authorization.k8s.io Local True 131d
v1.autoscaling Local True 131d
v1.batch Local True 131d
v1.certificates.k8s.io Local True 131d
v1.coordination.k8s.io Local True 131d
v1.discovery.k8s.io Local True 131d
v1.events.k8s.io Local True 131d
v1.kuboard.cn Local True 78d
v1.monitoring.coreos.com Local True 78d
v1.networking.k8s.io Local True 131d
v1.node.k8s.io Local True 131d
v1.policy Local True 131d
v1.rbac.authorization.k8s.io Local True 131d
v1.scheduling.k8s.io Local True 131d
v1.storage.k8s.io Local True 131d
v1alpha1.monitoring.coreos.com Local True 78d
v1beta1.batch Local True 131d
v1beta1.discovery.k8s.io Local True 131d
v1beta1.events.k8s.io Local True 131d
v1beta1.flowcontrol.apiserver.k8s.io Local True 131d
v1beta1.metrics.k8s.io kuboard/prometheus-adapter True 131d
v1beta1.node.k8s.io Local True 131d
v1beta1.policy Local True 131d
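The entry that kubectl top depends on is v1beta1.metrics.k8s.io, and in the output above it is backed by kuboard/prometheus-adapter rather than metrics-server. Before repointing it, confirm that a metrics-server Service actually exists to take over (assuming a standard install in kube-system; adjust the namespace if yours differs):

kubectl -n kube-system get svc metrics-server
kubectl -n kube-system get endpoints metrics-server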
Repoint v1beta1.metrics.k8s.io from kuboard/prometheus-adapter to kube-system/metrics-server by editing the APIService:

kubectl edit apiservices v1beta1.metrics.k8s.io

and changing spec.service to the metrics-server Service in kube-system. After the edit, the object looks like this:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: "2022-03-15T08:48:42Z"
  labels:
    app.kubernetes.io/component: metrics-adapter
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.9.0
  name: v1beta1.metrics.k8s.io
  resourceVersion: "24322247"
  uid: 724e4877-547e-4cff-a0c2-34ebdd9db999
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2022-07-22T02:47:01Z"
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
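The same change can also be applied non-interactively with a patch instead of kubectl edit (a sketch, assuming the Service is named metrics-server in kube-system and serves on port 443, as in the object above):

kubectl patch apiservice v1beta1.metrics.k8s.io --type merge -p '{"spec":{"service":{"name":"metrics-server","namespace":"kube-system","port":443}}}'

Either way, listing the APIServices again should now show the new backend: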
[root@master1 ~]# kubectl get APIService
NAME SERVICE AVAILABLE AGE
v1. Local True 131d
v1.admissionregistration.k8s.io Local True 131d
v1.apiextensions.k8s.io Local True 131d
v1.apps Local True 131d
v1.authentication.k8s.io Local True 131d
v1.authorization.k8s.io Local True 131d
v1.autoscaling Local True 131d
v1.batch Local True 131d
v1.certificates.k8s.io Local True 131d
v1.coordination.k8s.io Local True 131d
v1.discovery.k8s.io Local True 131d
v1.events.k8s.io Local True 131d
v1.kuboard.cn Local True 78d
v1.monitoring.coreos.com Local True 78d
v1.networking.k8s.io Local True 131d
v1.node.k8s.io Local True 131d
v1.policy Local True 131d
v1.rbac.authorization.k8s.io Local True 131d
v1.scheduling.k8s.io Local True 131d
v1.storage.k8s.io Local True 131d
v1alpha1.monitoring.coreos.com Local True 78d
v1beta1.batch Local True 131d
v1beta1.discovery.k8s.io Local True 131d
v1beta1.events.k8s.io Local True 131d
v1beta1.flowcontrol.apiserver.k8s.io Local True 131d
v1beta1.metrics.k8s.io kube-system/metrics-server True 131d
Restart metrics-server and run kubectl top nodes again; metrics are now reported normally.
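One way to do the restart (assuming metrics-server runs as a Deployment named metrics-server in kube-system; adjust to match your install):

kubectl -n kube-system rollout restart deployment metrics-server
kubectl -n kube-system rollout status deployment metrics-server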
[root@master1 ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
xxx 710m 2% 14488Mi 45%
xxxx 1115m 4% 14746Mi 46%
xxxx 599m 2% 23571Mi 74%
xxxx 201m 0% 5558Mi 4%