#Managing the Prometheus component's configuration file as a ConfigMap
1. Introduction to Prometheus
Note: the Prometheus project now recommends deploying directly with Helm, which is simpler. I am studying this further; a write-up on the one-command Helm deployment and monitoring configuration will follow soon, so stay tuned. -- 20210822
The node-exporter + Prometheus + Grafana monitoring solution
#Prometheus
Prometheus is an open-source monitoring and alerting system and time-series database (TSDB) developed at SoundCloud. Since 2012 many companies and organizations have adopted it, the project has a very active developer and user community, and it is now a standalone open-source project.
Prometheus joined the CNCF (Cloud Native Computing Foundation) in 2016 as the second project hosted by the foundation, after Kubernetes. Its design draws on Google's internal monitoring systems, so it pairs naturally with Kubernetes, which also originated at Google.
Compared with an InfluxDB-based solution, it performs better and ships with built-in alerting. Its pull-based collection model was designed for large clusters: implement a metrics endpoint in the application, register that endpoint with Prometheus, and data collection is done.
Key features of Prometheus:
1. A multi-dimensional data model (a time series is identified by a metric name plus a set of key/value labels)
2. A flexible query language over those dimensions (PromQL; see the query sketch after this list)
3. No reliance on distributed storage; single server nodes are autonomous
4. Time-series collection over HTTP using a pull model
5. Pushing time series is supported via an intermediary gateway
6. Targets are found via service discovery or static configuration
7. Multiple modes of visualization and dashboard support
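As a taste of item 2, PromQL can be issued through Prometheus's HTTP API as well as its web UI. A sketch of two instant queries, assuming the deployment built later in this post (Prometheus on NodePort 30003, node IP 192.168.192.129):
# is every scrape target up?
[root@manage01 prometheus]# curl -s 'http://192.168.192.129:30003/api/v1/query?query=up'
# 1-minute load average per node, available once node-exporter is being scraped
[root@manage01 prometheus]# curl -s 'http://192.168.192.129:30003/api/v1/query?query=node_load1'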
#What each component does
1. node-exporter collects metrics on each node and exposes them over HTTP for Prometheus to scrape
2. Prometheus scrapes and stores that data
3. Grafana presents the data to users as graphs in a web UI
Related components: the Prometheus ecosystem consists of multiple components, many of them optional:
1. The Prometheus server, which scrapes and stores time-series data
2. Client libraries for instrumenting application code or building exporters (Go, Java, Python, Ruby)
3. A push gateway to support short-lived jobs
4. Visualization dashboards (two options, PromDash and Grafana; Grafana is the mainstream choice today)
5. Special-purpose exporters for services such as HAProxy, StatsD, and Graphite
6. An experimental alert manager (Alertmanager, which handles alert aggregation, routing, and silencing as a separate service)
The Prometheus components are essentially all written in Go, which makes them easy to compile and deploy; they have no special dependencies and each runs standalone.
#Preparing local images
#Pull the required images on every node in the k8s cluster
[root@node02 ~]# docker pull prom/node-exporter
[root@node02 ~]# docker pull prom/prometheus:v2.0.0
[root@node02 ~]# docker pull grafana/grafana:4.2.0
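A quick check that the images landed on each node (a sketch; the tags must match the pulls above):
[root@node02 ~]# docker images | grep -E 'prom/|grafana'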
2. Deploying Node Exporter
[root@manage01 kubernetes]# mkdir /opt/kubernetes/prometheus
#Deploy the node-exporter component as a DaemonSet
The node-exporter pod and its Service both use port 9100, which is mapped to port 31672 on the host via NodePort.
[root@manage01 prometheus]# cat node-exporter.yaml
apiVersion: apps/v1 #FAQ02
kind: DaemonSet ##FAQ01
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector: ##FAQ03
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
FAQ01: Creating pods through a DaemonSet ensures that every node in the cluster (or every selected node) runs exactly one replica. If a new node joins the cluster, the DaemonSet's pod is scheduled onto it as well; deleting a DaemonSet cascades and removes all the pods it created. Typical DaemonSet use cases:
Run a cluster storage daemon on every node, e.g. glusterd or ceph.
Run a log collection daemon on every node, e.g. fluentd or logstash.
Run a node monitoring daemon on every node, e.g. Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.
FAQ02: The APIs changed in Kubernetes 1.16 and later; for details see
https://www.infoq.cn/article/rrJ5IGIAXy4V-X2Hb0QA
no matches for kind "DaemonSet" in version "extensions/v1beta1"
no matches for kind "Deployment" in version "extensions/v1beta1"
FAQ03: Under the newer apps/v1 API, selector is a required field in the manifest:
https://www.cnblogs.com/robinunix/p/11155860.html
error: error validating "prometheus-deployment.yaml": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
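The manifest is applied like every other one in this post; a sketch of creating it and confirming the DaemonSet landed on each node:
[root@manage01 prometheus]# kubectl create -f node-exporter.yaml
[root@manage01 prometheus]# kubectl get daemonset,pods -n kube-system -l k8s-app=node-exporter -o wide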
#Verify in a browser
Open http://192.168.192.129:31672/metrics in a browser.
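The same endpoint can be checked from a shell; node_load1 is a metric every node-exporter version exposes:
[root@manage01 prometheus]# curl -s http://192.168.192.129:31672/metrics | grep '^node_load1'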
3. Deploying the Prometheus component
#RBAC (Role-Based Access Control) was introduced in k8s v1.5, promoted to beta in v1.6, and made the default for kubeadm installations. Compared with other access-control modes, RBAC has the following advantages:
- Full coverage of both resource and non-resource permissions in the cluster
- The whole mechanism is expressed as a handful of API objects that, like any other object, can be manipulated with kubectl or the API
- Rules can be adjusted at runtime, with no API Server restart
To use RBAC authorization, the API Server must be started with --authorization-mode=RBAC; the check below confirms it is on.
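On a running cluster, one quick way to confirm the RBAC API group is being served:
[root@manage01 prometheus]# kubectl api-versions | grep rbac.authorization.k8s.io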
#RBAC access control
[root@manage01 prometheus]# cat rbac-setup.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
[root@manage01 prometheus]# kubectl create -f rbac-setup.yaml
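To confirm the binding took effect, kubectl can impersonate the new service account and ask whether a read is allowed; the following sketch should print yes:
[root@manage01 prometheus]# kubectl auth can-i list pods --as=system:serviceaccount:kube-system:prometheus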
#A ConfigMap, as the name suggests, holds configuration data as key/value pairs; it can store a single property or an entire configuration file. Secrets supply pods with sensitive data such as passwords, tokens, and private keys; for non-sensitive data such as application configuration, a ConfigMap is the right tool.
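As a minimal illustration of the concept (not needed for this deployment; demo-config and its greeting key are throwaway names), a ConfigMap can be created from a literal, inspected, and removed:
[root@manage01 prometheus]# kubectl create configmap demo-config --from-literal=greeting=hello
[root@manage01 prometheus]# kubectl get configmap demo-config -o yaml
[root@manage01 prometheus]# kubectl delete configmap demo-config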
#Manage the Prometheus component's configuration file as a ConfigMap
[root@manage01 prometheus]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
[root@manage01 prometheus]# kubectl create -f configmap.yaml
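The rendered prometheus.yml can be inspected with kubectl describe. Note that a running Prometheus pod will not reload edits to this ConfigMap by itself; the bluntest fix after a change is to recreate the pod (the label matches the Deployment below):
[root@manage01 prometheus]# kubectl describe configmap prometheus-config -n kube-system
[root@manage01 prometheus]# kubectl delete pod -n kube-system -l app=prometheus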
#Create the Prometheus Deployment
[root@manage01 prometheus]# cat prometheus.deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config
[root@manage01 prometheus]# kubectl create -f prometheus.deploy.yml
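A sketch of watching the rollout finish and peeking at the server log:
[root@manage01 prometheus]# kubectl rollout status deployment/prometheus -n kube-system
[root@manage01 prometheus]# kubectl logs -n kube-system -l app=prometheus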
#Create the Prometheus Service
[root@manage01 prometheus]# cat prometheus.svc.yml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus
[root@manage01 prometheus]# kubectl create -f prometheus.svc.yml
#Verify in a browser
http://[node ip]:30003/targets
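The same target list is available from the HTTP API; a sketch, reusing the node IP from the node-exporter test:
[root@manage01 prometheus]# curl -s 'http://192.168.192.129:30003/api/v1/targets'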
4. Deploying the Grafana component
#Create the Grafana Deployment
[root@manage01 prometheus]# cat grafana-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:4.2.0
        name: grafana-core
        imagePullPolicy: IfNotPresent
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        # The following env variables set up basic auth with the default admin user and admin password.
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "false"
        # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
        #   value: Admin
        # does not really work, because of template variables in exported dashboards:
        # - name: GF_DASHBOARDS_JSON_ENABLED
        #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var
      volumes:
      - name: grafana-persistent-storage
        emptyDir: {}
[root@manage01 prometheus]# kubectl create -f grafana-deploy.yaml
#Grafana Service manifest
[root@manage01 prometheus]# cat grafana-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
  - port: 3000
  selector:
    app: grafana
    component: core
[root@manage01 prometheus]# kubectl create -f grafana-svc.yaml
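Since the manifest pins no nodePort, Kubernetes assigns a random one (46945 in the final output below); look it up with:
[root@manage01 prometheus]# kubectl get svc grafana -n kube-system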
#Grafana Ingress manifest
[root@manage01 prometheus]# cat grafana-ing.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: kube-system
spec:
  rules:
  - host: k8s.grafana
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
[root@manage01 prometheus]# kubectl create -f grafana-ing.yaml
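As FAQ02 hints, the extensions/v1beta1 Ingress API was deprecated and is no longer served on Kubernetes 1.22+; on such clusters an equivalent manifest would be, as a sketch:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: kube-system
spec:
  rules:
  - host: k8s.grafana
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000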
#Final deployment state
[root@manage01 prometheus]# kubectl get all -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
pod/grafana-core-7789756d87-snc9h   1/1     Running   2          52m
pod/kube-dns-c56c79458-xj4p7        3/3     Running   0          11m
pod/node-exporter-plldm             1/1     Running   3          117m
pod/node-exporter-xngl9             1/1     Running   2          85m
pod/prometheus-759d85775b-9n5t5     1/1     Running   0          11m

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/grafana         NodePort    10.10.10.200   <none>        3000:46945/TCP   50m
service/kube-dns        ClusterIP   10.10.10.2     <none>        53/UDP,53/TCP    8d
service/node-exporter   NodePort    10.10.10.31    <none>        9100:31672/TCP   117m
service/prometheus      NodePort    10.10.10.23    <none>        9090:30003/TCP   98m

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-exporter   2         2         2       2            2           <none>          117m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana-core   1/1     1            1           52m
deployment.apps/kube-dns       1/1     1            1           8d
deployment.apps/prometheus     1/1     1            1           100m

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-core-7789756d87   1         1         1       52m
replicaset.apps/kube-dns-c56c79458        1         1         1       8d
replicaset.apps/prometheus-759d85775b     1         1         1       100m
#Accessing the web UI
http://192.168.192.130:46945/
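Grafana 4.2 ships with the default admin/admin login. After logging in, add a Prometheus data source pointing at the in-cluster Service (URL http://prometheus:9090 works, since both run in kube-system). The same step can be scripted against Grafana's HTTP API; a sketch, assuming the NodePort from the output above and the default credentials:
[root@manage01 prometheus]# curl -u admin:admin -H 'Content-Type: application/json' \
    -X POST http://192.168.192.130:46945/api/datasources \
    -d '{"name":"prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'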
This wraps up the k8s monitoring deployment for now; readers who need the source manifests can refer to the attachment.
Reposted from: https://blog.51cto.com/ylw6006/2084403