Note: this deployment walkthrough follows https://github.com/opsnull/follow-me-install-kubernetes-cluster — please consider giving the author a star.
Heapster is a collector: it aggregates the cAdvisor data from every Node and forwards it to a third-party backend such as InfluxDB.
Heapster obtains the cAdvisor metrics by calling the kubelet's HTTP API.
Because the kubelet accepts only HTTPS requests, on port 10250, the heapster Deployment configuration must be modified accordingly. In addition, the kube-system:heapster ServiceAccount must be granted permission to call the kubelet API.
Note: unless otherwise stated, all operations in this document are executed on the k8s-master1 node.
Download the heapster release
```
cd /opt/k8s/work
wget https://github.com/kubernetes/heapster/archive/v1.5.4.tar.gz
tar -xzvf v1.5.4.tar.gz
mv v1.5.4.tar.gz heapster-1.5.4.tar.gz
```
Directory of the official manifests: heapster-1.5.4/deploy/kube-config/influxdb
Modify the configuration
```
$ cd heapster-1.5.4/deploy/kube-config/influxdb
$ cp grafana.yaml{,.orig}
$ diff grafana.yaml.orig grafana.yaml
67c67
< # type: NodePort
---
> type: NodePort
```
- Expose the Service via NodePort;
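The same edit can also be applied non-interactively with sed. A minimal sketch, using a hypothetical stand-in file at /tmp/grafana-svc.yaml instead of the real grafana.yaml:

```shell
# Write a tiny stand-in for the Service section of grafana.yaml
# (the real file is heapster-1.5.4/deploy/kube-config/influxdb/grafana.yaml).
cat > /tmp/grafana-svc.yaml <<'EOF'
spec:
  # type: NodePort
  ports:
  - port: 80
EOF

# Uncomment the NodePort line, matching the 67c67 diff shown above.
sed -i 's/# type: NodePort/type: NodePort/' /tmp/grafana-svc.yaml

grep 'type: NodePort' /tmp/grafana-svc.yaml
```

Running the same sed expression against the real grafana.yaml produces exactly the diff above.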
```
$ cp heapster.yaml{,.orig}
$ diff heapster.yaml.orig heapster.yaml
< - --source=kubernetes:https://kubernetes.default
---
> - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250
```
- Since the kubelet listens for HTTPS only on port 10250, add the corresponding source parameters;
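This edit can be scripted as well. A sketch on a one-line stand-in file (/tmp/heapster-arg.txt is hypothetical; the real target is heapster.yaml):

```shell
# The original --source flag from heapster.yaml.
echo '- --source=kubernetes:https://kubernetes.default' > /tmp/heapster-arg.txt

# Append the kubelet HTTPS parameters. Note that '&' must be escaped
# as '\&' in the sed replacement, or it expands to the whole match.
sed -i 's|kubernetes.default$|kubernetes.default?kubeletHttps=true\&kubeletPort=10250|' /tmp/heapster-arg.txt

cat /tmp/heapster-arg.txt
```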
Download the images
```
images=(
  heapster-amd64:v1.5.3
  heapster-grafana-amd64:v4.4.3
  heapster-influxdb-amd64:v1.3.3
)
for imageName in ${images[@]}; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName gcr.io/google_containers/$imageName
done
```
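Before pulling, it can help to print the mirror-to-gcr.io mapping the loop will perform, to catch typos in the image names. A sketch reusing the same names and registries, without touching docker (bash):

```shell
# Same mirror, target registry, and image list as the pull loop above.
mirror=registry.cn-hangzhou.aliyuncs.com/google_containers
target=gcr.io/google_containers
images=(
  heapster-amd64:v1.5.3
  heapster-grafana-amd64:v4.4.3
  heapster-influxdb-amd64:v1.3.3
)

# Print each "pull -> tag" pair for review.
for i in "${images[@]}"; do
  printf '%s -> %s\n' "$mirror/$i" "$target/$i"
done
```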
Apply all manifest files
```
$ cd /opt/k8s/work/heapster-1.5.4/deploy/kube-config/influxdb
$ ls *.yaml
grafana.yaml  heapster.yaml  influxdb.yaml
$ kubectl create -f .
$ cd ../rbac/
$ cp heapster-rbac.yaml{,.orig}
$ vim heapster-rbac.yaml
```
```
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster-kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
```

```
$ kubectl create -f heapster-rbac.yaml
```
- This binds the ServiceAccount kube-system:heapster to the ClusterRole system:kubelet-api-admin, granting it permission to call the kubelet API;
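Before applying the binding, a quick grep can confirm the edited manifest references the intended role and subject. A minimal sketch over a stand-in copy (/tmp/heapster-rbac-check.yaml is hypothetical; run the same greps against the real heapster-rbac.yaml):

```shell
# Stand-in holding just the fields the check cares about.
f=/tmp/heapster-rbac-check.yaml
cat > "$f" <<'EOF'
roleRef:
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
EOF

# Both fields must be present for the binding to grant kubelet API access.
grep -q 'name: system:kubelet-api-admin' "$f" \
  && grep -q 'namespace: kube-system' "$f" \
  && echo 'binding fields look correct'
```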
Without this change, the default ClusterRole system:heapster does not have sufficient permissions, and heapster logs errors like:
```
E1128 10:00:05.010716       1 manager.go:101] Error in scraping containers from kubelet:172.27.128.150:10250: failed to get all container stats from Kubelet URL "https://172.27.128.150:10250/stats/container/": request failed - "403 Forbidden", response: "Forbidden (user=system:serviceaccount:kube-system:heapster, verb=create, resource=nodes, subresource=stats)"
E1128 10:00:05.018556       1 manager.go:101] Error in scraping containers from kubelet:172.27.128.149:10250: failed to get all container stats from Kubelet URL "https://172.27.128.149:10250/stats/container/": request failed - "403 Forbidden", response: "Forbidden (user=system:serviceaccount:kube-system:heapster, verb=create, resource=nodes, subresource=stats)"
E1128 10:00:05.022664       1 manager.go:101] Error in scraping containers from kubelet:172.27.128.148:10250: failed to get all container stats from Kubelet URL "https://172.27.128.148:10250/stats/container/": request failed - "403 Forbidden", response: "Forbidden (user=system:serviceaccount:kube-system:heapster, verb=create, resource=nodes, subresource=stats)"
W1128 10:00:25.000467       1 manager.go:152] Failed to get all responses in time (got 0/3)
```
Check the results
```
[root@k8s-master1 influxdb]# kubectl get pods -n kube-system | grep -E 'heapster|monitoring'
heapster-7b5d8fb59c-997p8              1/1     Running   0          10m
monitoring-grafana-59d85ddc6-ws7j9     1/1     Running   0          10m
monitoring-influxdb-5fffc746fd-m7bbb   1/1     Running   0          10m
```
The Kubernetes dashboard should now correctly display CPU, memory, and load statistics and charts for all Nodes and Pods:
Access Grafana
Access it through the kube-apiserver:
Get the monitoring-grafana service URL:
```
[root@k8s-master1 influxdb]# kubectl cluster-info
Kubernetes master is running at https://192.168.161.150:8443
Heapster is running at https://192.168.161.150:8443/api/v1/namespaces/kube-system/services/heapster/proxy
CoreDNS is running at https://192.168.161.150:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.161.150:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
monitoring-grafana is running at https://192.168.161.150:8443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://192.168.161.150:8443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
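For use in scripts, the grafana proxy URL can also be assembled from the apiserver address. A sketch using the 192.168.161.150:8443 address from the output above (requests through this proxy still need apiserver authentication, e.g. a client certificate or bearer token):

```shell
# Apiserver address taken from the kubectl cluster-info output above.
APISERVER=https://192.168.161.150:8443

# Service proxy path: /api/v1/namespaces/<ns>/services/<name>/proxy
GRAFANA_URL="$APISERVER/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy"
echo "$GRAFANA_URL"
```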