Installing kube-prometheus on Kubernetes 1.18

https://blog.csdn.net/guoxiaobo2010/article/details/106532357/

kube-prometheus overview

kube-prometheus is an open-source project from CoreOS for monitoring Kubernetes clusters.

Installing kube-prometheus

1. Install git

yum install git -y

2. Clone kube-prometheus

[root@localhost ~]# git clone https://github.com/coreos/kube-prometheus.git

Pick the branch that matches your cluster version, per the compatibility matrix in the project's Readme.md. This cluster runs Kubernetes 1.18, so the matching kube-prometheus branch is release-0.5:

[root@localhost manifests]# git checkout remotes/origin/release-0.5  -b  0.5
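The clone-and-branch flow can be exercised end to end against a throwaway local repository (a self-contained sketch: the local `origin-repo` stands in for the real GitHub project so no network is needed; names and the empty commit are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# stand-in for the upstream GitHub repository, with a release-0.5 branch
git init -q origin-repo
git -C origin-repo -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git -C origin-repo branch release-0.5

# clone, then create a local branch "0.5" from the remote-tracking
# release branch, exactly as done above with the real repo
git clone -q origin-repo kube-prometheus
cd kube-prometheus
git checkout -q remotes/origin/release-0.5 -b 0.5

git branch --show-current   # prints: 0.5
```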

 

3. Inspect the manifests

[root@localhost manifests]# ll
total 1664
-rw-r--r-- 1 root root     388 Feb 10 11:07 alertmanager-alertmanager.yaml
-rw-r--r-- 1 root root     973 Feb 10 11:07 alertmanager-secret.yaml
-rw-r--r-- 1 root root      96 Feb 10 11:07 alertmanager-serviceAccount.yaml
-rw-r--r-- 1 root root     254 Feb 10 11:07 alertmanager-serviceMonitor.yaml
-rw-r--r-- 1 root root     271 Feb 10 11:07 alertmanager-service.yaml
-rw-r--r-- 1 root root     550 Feb 10 11:07 grafana-dashboardDatasources.yaml
-rw-r--r-- 1 root root 1396092 Feb 10 11:07 grafana-dashboardDefinitions.yaml
-rw-r--r-- 1 root root     454 Feb 10 11:07 grafana-dashboardSources.yaml
-rw-r--r-- 1 root root    7539 Feb 10 11:07 grafana-deployment.yaml
-rw-r--r-- 1 root root      86 Feb 10 11:07 grafana-serviceAccount.yaml
-rw-r--r-- 1 root root     208 Feb 10 11:07 grafana-serviceMonitor.yaml
-rw-r--r-- 1 root root     201 Feb 10 11:07 grafana-service.yaml
-rw-r--r-- 1 root root     376 Feb 10 11:07 kube-state-metrics-clusterRoleBinding.yaml
-rw-r--r-- 1 root root    1651 Feb 10 11:07 kube-state-metrics-clusterRole.yaml
-rw-r--r-- 1 root root    1874 Feb 10 11:07 kube-state-metrics-deployment.yaml
-rw-r--r-- 1 root root     192 Feb 10 11:07 kube-state-metrics-serviceAccount.yaml
-rw-r--r-- 1 root root     829 Feb 10 11:07 kube-state-metrics-serviceMonitor.yaml
-rw-r--r-- 1 root root     403 Feb 10 11:07 kube-state-metrics-service.yaml
-rw-r--r-- 1 root root     266 Feb 10 11:07 node-exporter-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     283 Feb 10 11:07 node-exporter-clusterRole.yaml
-rw-r--r-- 1 root root    2741 Feb 10 11:07 node-exporter-daemonset.yaml
-rw-r--r-- 1 root root      92 Feb 10 11:07 node-exporter-serviceAccount.yaml
-rw-r--r-- 1 root root     711 Feb 10 11:07 node-exporter-serviceMonitor.yaml
-rw-r--r-- 1 root root     355 Feb 10 11:07 node-exporter-service.yaml
-rw-r--r-- 1 root root     292 Feb 10 11:07 prometheus-adapter-apiService.yaml
-rw-r--r-- 1 root root     396 Feb 10 11:07 prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml
-rw-r--r-- 1 root root     304 Feb 10 11:07 prometheus-adapter-clusterRoleBindingDelegator.yaml
-rw-r--r-- 1 root root     281 Feb 10 11:07 prometheus-adapter-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     188 Feb 10 11:07 prometheus-adapter-clusterRoleServerResources.yaml
-rw-r--r-- 1 root root     219 Feb 10 11:07 prometheus-adapter-clusterRole.yaml
-rw-r--r-- 1 root root    1378 Feb 10 11:07 prometheus-adapter-configMap.yaml
-rw-r--r-- 1 root root    1327 Feb 10 11:07 prometheus-adapter-deployment.yaml
-rw-r--r-- 1 root root     325 Feb 10 11:07 prometheus-adapter-roleBindingAuthReader.yaml
-rw-r--r-- 1 root root      97 Feb 10 11:07 prometheus-adapter-serviceAccount.yaml
-rw-r--r-- 1 root root     236 Feb 10 11:07 prometheus-adapter-service.yaml
-rw-r--r-- 1 root root     269 Feb 10 11:07 prometheus-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     216 Feb 10 11:07 prometheus-clusterRole.yaml
-rw-r--r-- 1 root root     621 Feb 10 11:07 prometheus-operator-serviceMonitor.yaml
-rw-r--r-- 1 root root     734 Feb 10 11:07 prometheus-prometheus.yaml
-rw-r--r-- 1 root root     293 Feb 10 11:07 prometheus-roleBindingConfig.yaml
-rw-r--r-- 1 root root     983 Feb 10 11:07 prometheus-roleBindingSpecificNamespaces.yaml
-rw-r--r-- 1 root root     188 Feb 10 11:07 prometheus-roleConfig.yaml
-rw-r--r-- 1 root root     820 Feb 10 11:07 prometheus-roleSpecificNamespaces.yaml
-rw-r--r-- 1 root root   80762 Feb 10 11:07 prometheus-rules.yaml
-rw-r--r-- 1 root root      93 Feb 10 11:07 prometheus-serviceAccount.yaml
-rw-r--r-- 1 root root    6829 Feb 10 11:07 prometheus-serviceMonitorApiserver.yaml
-rw-r--r-- 1 root root     395 Feb 10 11:07 prometheus-serviceMonitorCoreDNS.yaml
-rw-r--r-- 1 root root    6172 Feb 10 11:07 prometheus-serviceMonitorKubeControllerManager.yaml
-rw-r--r-- 1 root root    6778 Feb 10 11:07 prometheus-serviceMonitorKubelet.yaml
-rw-r--r-- 1 root root     347 Feb 10 11:07 prometheus-serviceMonitorKubeScheduler.yaml
-rw-r--r-- 1 root root     247 Feb 10 11:07 prometheus-serviceMonitor.yaml
-rw-r--r-- 1 root root     260 Feb 10 11:07 prometheus-service.yaml
drwxr-xr-x 2 root root    4096 Feb 10 11:12 setup

4. Switch to a mirror registry

Some of the upstream images cannot be pulled reliably from registries hosted abroad, so we point the prometheus-operator, prometheus, alertmanager, kube-state-metrics, node-exporter, and prometheus-adapter manifests at a mirror inside China. Here we use the USTC (University of Science and Technology of China) mirror:

sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' setup/prometheus-operator-deployment.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' prometheus-prometheus.yaml 
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' alertmanager-alertmanager.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' kube-state-metrics-deployment.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' node-exporter-daemonset.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' prometheus-adapter-deployment.yaml
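The same substitution can be tried on a scratch file first to confirm exactly what the sed expression rewrites (a self-contained sketch; the image name and tag below are illustrative):

```shell
# write a minimal manifest fragment that references quay.io
cat > /tmp/demo-deployment.yaml <<'EOF'
spec:
  containers:
  - name: prometheus-operator
    image: quay.io/coreos/prometheus-operator:v0.38.1
EOF

# same rewrite as applied to the real manifests above
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' /tmp/demo-deployment.yaml

# confirm the mirror host is in place and no quay.io reference remains
grep -c 'quay.mirrors.ustc.edu.cn' /tmp/demo-deployment.yaml   # prints: 1
! grep -q 'image: quay.io' /tmp/demo-deployment.yaml
```

Running the same `grep` over the real manifests after the rewrite is a quick way to catch any file the sed commands missed.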

5. Change the prometheus, alertmanager, and grafana Services to NodePort

To make Prometheus, Alertmanager, and Grafana reachable from outside the cluster, change the type of their Services to NodePort.

Edit the prometheus Service:

[root@k8s-master manifests]# cat prometheus-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort # added
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30090 # added
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP

Edit the alertmanager Service:

[root@k8s-master manifests]# cat alertmanager-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort # added
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 30093 # added
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP

Edit the grafana Service:

[root@k8s-master manifests]# cat grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort # added
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 32000 # added
  selector:
    app: grafana
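Note that a `nodePort` value must fall inside the cluster's service node port range, which defaults to 30000-32767 (configurable via the API server's `--service-node-port-range` flag). A quick sketch to sanity-check the ports chosen above:

```shell
# verify each chosen nodePort sits inside the default 30000-32767 range
for port in 30090 30093 32000; do
  if [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]; then
    echo "$port: ok"
  else
    echo "$port: outside default NodePort range" >&2
  fi
done
# prints:
# 30090: ok
# 30093: ok
# 32000: ok
```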

6. Install kube-prometheus and confirm its status

Install the CRDs and prometheus-operator:

[root@k8s-master manifests]# kubectl apply -f setup/
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created

Pulling the prometheus-operator image takes a few minutes. Wait until the prometheus-operator pod reaches the Running state:

[root@k8s-master manifests]# kubectl get pod -n monitoring
NAME                                  READY   STATUS    RESTARTS   AGE
prometheus-operator-c5c9679cd-4wwf7   2/2     Running   0          13s
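Instead of re-running `kubectl get` by hand, a small loop captures the "block until Running" step (a generic sketch: `fake_phase` below is a stand-in for a real phase query such as `kubectl get pod -n monitoring -l app.kubernetes.io/name=prometheus-operator -o jsonpath='{.items[0].status.phase}'`, where the label selector is an assumption to adjust for your release):

```shell
# Poll a phase-reporting command until it prints "Running", or give up
# after roughly $2 seconds.
wait_running() {
  phase_cmd=$1; timeout=$2; waited=0
  until [ "$($phase_cmd)" = "Running" ]; do
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out waiting for Running" >&2
      return 1
    fi
    sleep 1
  done
  echo "Running"
}

# stand-in phase command for demonstration; on a real cluster substitute
# the kubectl jsonpath query described above
fake_phase() { echo "Running"; }

wait_running fake_phase 30   # prints: Running
```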

Install prometheus, alertmanager, grafana, kube-state-metrics, node-exporter, and the remaining resources:

[root@k8s-master manifests]# kubectl apply -f .
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created

Pulling all the images takes a while; go make a cup of coffee and check back in half an hour or so. Then inspect the pods in the monitoring namespace. Once every pod there is Running, the installation is complete.

[root@k8s-master manifests]# kubectl get pod -n monitoring
NAME                                  READY   STATUS    RESTARTS   AGE
alertmanager-main-0                   2/2     Running   0          6m30s
alertmanager-main-1                   2/2     Running   0          6m30s
alertmanager-main-2                   2/2     Running   0          6m30s
grafana-5c55845445-x8s96              1/1     Running   0          6m30s
kube-state-metrics-5848b95f69-jwtkj   3/3     Running   0          6m30s
node-exporter-82zqh                   2/2     Running   0          6m30s
node-exporter-j97g4                   2/2     Running   0          6m30s
node-exporter-tg7c9                   2/2     Running   0          6m30s
prometheus-adapter-7d68d6f886-wjjx4   1/1     Running   0          6m30s
prometheus-k8s-0                      3/3     Running   1          6m29s
prometheus-k8s-1                      3/3     Running   1          6m29s
prometheus-operator-c5c9679cd-4wwf7   2/2     Running   0          88m

Accessing prometheus, alertmanager, and grafana

1. Access prometheus

Open http://192.168.92.201:30090 in a browser; 192.168.92.201 is the master node's IP (substitute your own).

2. Access alertmanager

Open http://192.168.92.201:30093 in a browser.

3. Access grafana

Open http://192.168.92.201:32000 in a browser.

Username/password: admin/admin
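The three UIs can also be checked from a script by assembling their URLs from a node address and the nodePorts configured earlier (a sketch; `NODE_IP` is an assumption to replace with any of your nodes, and the paths are the standard health endpoints: `/-/healthy` for Prometheus and Alertmanager, `/api/health` for Grafana):

```shell
NODE_IP=192.168.92.201   # assumption: substitute one of your node IPs
for entry in prometheus:30090:/-/healthy \
             alertmanager:30093:/-/healthy \
             grafana:32000:/api/health; do
  name=${entry%%:*}        # service name
  rest=${entry#*:}
  port=${rest%%:*}         # nodePort
  path=${rest#*:}          # health-check path
  echo "$name -> http://$NODE_IP:$port$path"
  # on a live cluster: curl -fsS "http://$NODE_IP:$port$path"
done
# prints, e.g.: prometheus -> http://192.168.92.201:30090/-/healthy
```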

References

https://github.com/coreos/kube-prometheus

https://www.jianshu.com/p/2fbbe767870d

 
