Prometheus is an open-source systems monitoring and alerting toolkit. Originally developed at SoundCloud, it later joined the CNCF (Cloud Native Computing Foundation) and has become one of the core projects of the cloud-native stack. Prometheus is designed for reliable, flexible monitoring and is widely used for distributed systems and microservice architectures.
Key features of Prometheus
Multi-dimensional data model
- Prometheus uses a multi-dimensional data model in which each time series is identified by a metric name and a set of key-value labels. This design allows flexible querying and filtering, making it easy to monitor and analyze data along specific dimensions.
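For example, a single time series is identified by its metric name plus its label set (the metric name and label values below are illustrative, not from this setup):

```
http_requests_total{method="POST", handler="/api/orders", status="500"}  1027
```

Two series that differ in even one label value are stored and queried as distinct time series.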
A powerful query language (PromQL)
- PromQL, the Prometheus query language, lets users query, compute over, and aggregate monitoring data in real time with a concise syntax. PromQL can be used to build custom dashboards and alerting rules.
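A couple of typical PromQL queries, as a sketch (the first assumes node_exporter metrics, as used later in this article; the `http_requests_total` metric name in the second is illustrative):

```promql
# Per-instance CPU usage over the last 5 minutes, derived from the idle counter
100 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100

# Total HTTP request rate per job, summed across all instances
sum by (job) (rate(http_requests_total[5m]))
```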
A standalone time series database (TSDB)
- Prometheus ships with a built-in time series database that stores and queries monitoring data efficiently. Unlike a general-purpose database, it is purpose-built for time series workloads, which gives it better performance and scalability for this use case.
Automatic service discovery
- Prometheus supports many service discovery mechanisms (Kubernetes, Consul, etcd, and more), so it can automatically discover and monitor services and instances as they come and go, without manual configuration.
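As a sketch of what Kubernetes service discovery looks like in prometheus.yml (the job name and annotation convention here are illustrative, not part of this article's setup):

```yaml
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod          # discover every pod in the cluster
    relabel_configs:
      # Only keep pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

Pods are then added and removed from the scrape target list automatically as the cluster changes.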
Flexible alert management
- Prometheus delegates alert management to Alertmanager, which fires alerts based on user-defined rules and delivers notifications through many channels (email, Slack, PagerDuty, and so on). Alertmanager also supports advanced features such as alert grouping, inhibition, and silencing.
Many ways to export data
- Prometheus collects data through exporters, which gather and translate metrics from different systems and applications, allowing Prometheus to monitor all kinds of targets (operating systems, databases, message queues, etc.).
A rich ecosystem
- Prometheus has a large ecosystem: a wide range of exporters, dashboard tools such as Grafana, integrations such as Thanos for horizontal scaling and high availability, and community-contributed plugins and extensions.
Typical use cases
Prometheus is a particularly good fit for:
- Cloud-native and containerized environments: Prometheus integrates tightly with container orchestrators such as Kubernetes, making it an ideal choice for monitoring cloud-native applications.
- Microservice architectures: thanks to its flexible label system and powerful query language, Prometheus can monitor distributed microservices effectively.
- Highly concurrent, dynamic environments: service discovery and automatic target management let Prometheus keep up with rapidly changing infrastructure and applications.
To monitor with Prometheus you need to download and configure a few key components. The main ones are:
Prometheus server
Official download page
https://prometheus.io/download/#prometheus
Download the package from the website; fetching it directly with wget can be very slow.
tar -zxvf prometheus-2.53.1.linux-amd64.tar.gz -C /usr/local/
cd /usr/local/
mv prometheus-2.53.1.linux-amd64/ prometheus
Try running the server
cd prometheus/
./prometheus --config.file=prometheus.yml
Validate the configuration file
./promtool check config ./prometheus.yml
Write a systemd unit for the service
vim /usr/lib/systemd/system/prometheus.service
[Unit]
Description=Prometheus Server
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/usr/local/prometheus
ExecStart=/usr/local/prometheus/prometheus --config.file=/usr/local/prometheus/prometheus.yml --web.enable-lifecycle
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start and enable the service
systemctl daemon-reload
systemctl start prometheus
systemctl status prometheus
systemctl enable prometheus
netstat -tunlp| grep prometheus
Browser test
http://172.16.208.12:9090
Prometheus node_exporter
Again, fetch the matching package from the official download page
tar -zxvf node_exporter-1.6.1.linux-amd64.tar.gz -C /usr/local/
cd /usr/local/
mv node_exporter-1.6.1.linux-amd64/ node_exporter
Create a dedicated user
groupadd prometheus
useradd -g prometheus -s /sbin/nologin prometheus
chown -R prometheus:prometheus /usr/local/node_exporter
# Configure the service for autostart; create the file if it does not exist
vim /usr/lib/systemd/system/node_exporter.service
[Unit]
Description=The node_exporter Server
After=network.target
[Service]
User=prometheus
Group=prometheus
ExecStart=/usr/local/node_exporter/node_exporter
Restart=on-failure
RestartSec=15s
SyslogIdentifier=node_exporter
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable node_exporter
systemctl restart node_exporter
Configure the targets on the Prometheus server
vim /usr/local/prometheus/prometheus.yml
scrape_configs:
  - job_name: "node_exporter"  # add this job under the existing scrape_configs section
    static_configs:
      - targets: ["192.168.8.207:9100","192.168.8.208:9100","192.168.8.209:9100","192.168.8.210:9100"]
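Listing every host statically gets tedious as the fleet grows. File-based service discovery, one of the mechanisms mentioned earlier, lets Prometheus pick up target changes from a file without a restart. A sketch, where the file path and job name are illustrative:

```yaml
# In prometheus.yml:
scrape_configs:
  - job_name: "node_exporter_file_sd"
    file_sd_configs:
      - files:
          - /usr/local/prometheus/targets/nodes.yml   # hypothetical path
        refresh_interval: 1m   # re-read the file every minute

# Contents of /usr/local/prometheus/targets/nodes.yml:
# - targets: ["192.168.8.207:9100", "192.168.8.208:9100"]
#   labels:
#     env: prod
```

Editing nodes.yml is then enough to add or remove scrape targets.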
Connectivity test: open Status → Targets in the Prometheus web UI and confirm the nodes show as UP.
alertmanager
Alert management in Prometheus is handled by the Alertmanager component. Grouping, inhibition, and silencing are its three key features for fine-grained control over alerts.
Grouping: similar alerts are bundled together to reduce alert volume and noise. When many alerts fire at the same time, Alertmanager can group them by labels or other attributes and send them as a single notification, instead of flooding recipients with many separate but similar alerts.
Inhibition: when certain alert conditions are met, related alerts are suppressed. This is useful when one problem triggers multiple alerts and one of them subsumes the others. For example, if a service's primary node is down, you may also get alerts that several replicas are unreachable, but the primary-down alert already points to the root cause, so the secondary alerts can be inhibited.
Silencing: a temporary mute that disables specific alert notifications within a given time window. It is typically used during planned maintenance or while a known issue is being handled, to avoid a flood of meaningless notifications.
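Silences are usually created from the Alertmanager web UI, but the release tarball also ships the amtool CLI. A sketch of creating one from the shell (the alert name, instance, and the Alertmanager address reuse values from this setup and are otherwise illustrative):

```shell
# Mute HighLoad alerts for one host during a 2-hour maintenance window
./amtool silence add alertname=HighLoad instance="192.168.8.207:9100" \
  --alertmanager.url=http://172.16.208.12:9093 \
  --duration=2h --comment="planned kernel upgrade" --author=ops

# List active silences
./amtool silence query --alertmanager.url=http://172.16.208.12:9093
```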
Fetch the matching package from the official download page
tar -zxvf alertmanager-0.27.0.linux-amd64.tar.gz -C /usr/local/
mv /usr/local/alertmanager-0.27.0.linux-amd64/ /usr/local/alertmanager
Configure the service unit
vim /usr/lib/systemd/system/alertmanager.service
[Unit]
Description=Prometheus Alertmanager
After=network.target
[Service]
ExecStart=/usr/local/alertmanager/alertmanager --config.file=/usr/local/alertmanager/alertmanager.yml
[Install]
WantedBy=multi-user.target
Start it
systemctl daemon-reload
systemctl start alertmanager.service
systemctl status alertmanager.service
systemctl enable alertmanager.service
Browser test (port 9093):
Prometheus-side configuration
cat /usr/local/prometheus/prometheus.yml
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
alerting:
  alertmanagers:
    - static_configs:
        - targets: # Alertmanager addresses; several can be listed, and Prometheus will send alerts to all of them
            - 172.16.208.12:9093
Configuring alerting
An Alertmanager configuration file (usually alertmanager.yml) contains the following key sections:
Global (global): global settings that can be reused throughout the file. Common options include the mail server (SMTP), Slack webhook URLs, and PagerDuty integration keys.
- resolve_timeout: how long to wait before a resolved alert is treated as such.
- smtp_smarthost: the SMTP server used to send mail.
- smtp_from: the sender address for alert emails.
- smtp_auth_username and smtp_auth_password: SMTP credentials.
Receivers (receivers): how alerts are delivered, i.e. which channel the notification goes through (email, DingTalk, and so on). Each receiver can use a different notification method.
- name: the receiver's name.
- email_configs, slack_configs, etc.: the channel-specific settings.
Route (route): how alerts are dispatched to receivers, based on labels, severity, and other attributes.
- group_by: how alerts are grouped.
- receiver: the default receiver.
- routes: finer-grained routing rules, for example sending alerts of different severities to different receivers.
Inhibit rules (inhibit_rules): when other alerts should be suppressed, typically to avoid sending secondary alerts while a primary alert is firing.
- source_match: the alert conditions that trigger inhibition.
- target_match: the alert conditions that get inhibited.
- equal: labels that must match for inhibition to apply, e.g. the same alertname, cluster, and service.
Silences: silences are usually created dynamically via the Alertmanager web UI or API, but some can also be defined statically to mute specific alerts during a given time window.
- matchers: which alerts the silence applies to.
- startsAt and endsAt: when the silence starts and ends.
- createdBy and comment: who created the silence and why.
Templates (templates): the format of notification content; messages can be customized to suit each receiver.
- templates: paths to template files that Alertmanager uses to render notifications.
Example configuration:
global:
  resolve_timeout: 5m
  smtp_smarthost: 'smtp.example.com:587'
  smtp_from: 'alertmanager@example.com'
  smtp_auth_username: 'alertmanager'
  smtp_auth_password: 'password'
  smtp_require_tls: true
route:
  group_by: ['alertname', 'cluster', 'service']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: 'email-receiver'
  routes:
    - match:
        severity: 'critical'
      receiver: 'slack-receiver'
    - match:
        severity: 'warning'
      receiver: 'email-receiver'
receivers:
  - name: 'email-receiver'
    email_configs:
      - to: 'team@example.com'
        send_resolved: true
  - name: 'slack-receiver'
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/XXXXX/XXXXX/XXXXX'
        channel: '#alerts'
        send_resolved: true
inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'cluster', 'service']
templates:
  - '/etc/alertmanager/template/*.tmpl'
Configuring email alerts
Mailbox setup
In your mail account, go to Settings → Account & Security, enable third-party client (SMTP) access, and generate an app-specific password.
vim /usr/local/alertmanager/alertmanager.yml
global:
  resolve_timeout: 5m
  smtp_smarthost: 'smtp.qiye.aliyun.com:25'
  smtp_from: 'user@aliyun.net'
  smtp_auth_username: 'user@aliyun.net'
  smtp_auth_password: 'password'
  smtp_hello: "qiye.aliyun.com"
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: 'email_receiver'
  routes:
receivers:
  - name: 'email_receiver'
    email_configs:
      - send_resolved: true
        to: 'user@aliyun.net'
        html: '{{ template "email.html" . }}'
inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'dev', 'instance']
Configuring DingTalk alerts
Pay special attention to the keyword in the robot's security settings: it must correspond to the grouping configured in alertmanager.yml, i.e. it must appear in the notifications that Alertmanager actually sends, or DingTalk will reject the messages.
wget https://github.com/timonwong/prometheus-webhook-dingtalk/releases/download/v2.1.0/prometheus-webhook-dingtalk-2.1.0.linux-amd64.tar.gz
tar -zxvf prometheus-webhook-dingtalk-2.1.0.linux-amd64.tar.gz -C /usr/local/
mv /usr/local/prometheus-webhook-dingtalk-2.1.0.linux-amd64 /usr/local/prometheus-webhook-dingtalk
vim /lib/systemd/system/prometheus-webhook-dingtalk.service
[Unit]
Description=Prometheus Webhook Dingtalk
After=network.target
[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/local/prometheus-webhook-dingtalk/prometheus-webhook-dingtalk --config.file=/usr/local/prometheus-webhook-dingtalk/config.yml
Restart=on-failure
[Install]
WantedBy=multi-user.target
cd /usr/local/prometheus-webhook-dingtalk
cp config.example.yml config.yml
vim /usr/local/prometheus-webhook-dingtalk/config.yml
targets:
  # url is the webhook address obtained when creating the group robot
  webhook1:
    url: https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxxxxxxxxxxxxx
systemctl daemon-reload
systemctl start prometheus-webhook-dingtalk.service
systemctl status prometheus-webhook-dingtalk.service
systemctl enable prometheus-webhook-dingtalk.service
Add a dingtalk-webhook receiver to the Alertmanager configuration
vim /usr/local/alertmanager/alertmanager.yml
global:
  resolve_timeout: 5m
  smtp_smarthost: 'smtp.qiye.aliyun.com:25'
  smtp_from: 'user@qq.net'
  smtp_auth_username: ' '
  smtp_auth_password: ' '
  smtp_hello: "qiye.aliyun.com"
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: 'email_receiver'
  routes:
receivers:
  - name: 'email_receiver'
    email_configs:
      - send_resolved: true
        to: 'user@qq.net'
  - name: 'dingtalk_webhook'
    webhook_configs:
      - url: 'http://172.16.208.12:8060/dingtalk/webhook1/send'
        send_resolved: true
inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'dev', 'instance']
Hot reload
curl -X POST http://127.0.0.1:9093/-/reload
Rules
Unlike Zabbix, Prometheus requires you to write your own alerting rules. This gives you a great deal of flexibility and reduces redundancy when configuring many similar items: related alerts such as disk usage and free disk space can share rules instead of generating duplicate alerts. Below is a rule template I have assembled from real-world use, covering common items such as CPU, memory, disk, and network; feel free to pick out what you need.
vim /usr/local/prometheus/rules/base.yml
groups:
  - name: node_exporter_alerts
    rules:
      - alert: NodeDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: Instance {{ $labels.instance }} is down
          description: Prometheus target {{ $labels.instance }} is down.
      - alert: HighLoad
        expr: node_load1 > 1.5
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High load on {{ $labels.instance }}
          description: Load average over last 1 minute is above 1.5.
      - alert: SystemTimeSkew
        expr: abs(node_timex_offset_seconds) > 0.1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: Instance {{ $labels.instance }} has a large time skew
          description: System clock is out of sync by more than 100ms.
      - alert: FileSystemFull
        expr: (node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 < 10
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: Filesystem on {{ $labels.instance }} is almost full
          description: Filesystem on {{ $labels.instance }} is below 10% free space.
      - alert: HostHighCpuLoad
        expr: sum by (instance) (node_load1) / count by (instance) (node_cpu_seconds_total{mode="system"}) * 100 > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: Instance {{ $labels.instance }} has high CPU load
          description: CPU load is over 80% for more than 5 minutes.
      - alert: HighCPUUsage
        expr: 100 - (avg by (instance)(irate(node_cpu_seconds_total{mode="idle"}[1m]))) * 100 > 90
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High CPU usage on {{ $labels.instance }}
          description: CPU usage is above 90%.
      - alert: CPUThrottling
        expr: rate(container_cpu_cfs_throttled_seconds_total[5m]) > 1
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: CPU throttling detected on {{ $labels.instance }}
          description: CPU throttling is above 1 second in the last 5 minutes.
      - alert: HighSystemCPUUsage
        expr: rate(node_cpu_seconds_total{mode="system"}[5m]) > 0.1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High system CPU usage on {{ $labels.instance }}
          description: System CPU usage is above 10%.
      - alert: HighUserCPUUsage
        expr: rate(node_cpu_seconds_total{mode="user"}[5m]) > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High user CPU usage on {{ $labels.instance }}
          description: User CPU usage is above 80%.
      - alert: HighIOWait
        expr: rate(node_cpu_seconds_total{mode="iowait"}[5m]) > 0.1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High IO wait on {{ $labels.instance }}
          description: IO wait is above 10%.
      - alert: HighMemoryUsage
        expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100 > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High memory usage on {{ $labels.instance }}
          description: Memory usage is above 80%.
      - alert: LowMemoryAvailability
        expr: (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) < 0.2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: Low memory availability on {{ $labels.instance }}
          description: Memory availability is below 20%.
      - alert: HighSwapUsage
        expr: node_memory_SwapFree_bytes / node_memory_SwapTotal_bytes * 100 < 20
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: Instance {{ $labels.instance }} swap usage is high
          description: Swap usage is more than 80%.
      - alert: HighSwapInRate
        expr: rate(node_vmstat_pswpin[5m]) > 100
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High swap in rate on {{ $labels.instance }}
          description: Swap in rate is above 100 per second.
      - alert: HighSwapOutRate
        expr: rate(node_vmstat_pswpout[5m]) > 100
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High swap out rate on {{ $labels.instance }}
          description: Swap out rate is above 100 per second.
      - alert: HostDiskFull
        expr: (node_filesystem_avail_bytes{fstype=~"ext4|xfs"} / node_filesystem_size_bytes{fstype=~"ext4|xfs"}) * 100 < 10
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: Instance {{ $labels.instance }} disk is almost full
          description: Disk is almost full (< 10% left) for more than 5 minutes.
      - alert: HighDiskUsage
        expr: (node_filesystem_size_bytes - node_filesystem_free_bytes) / node_filesystem_size_bytes * 100 > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High disk usage on {{ $labels.instance }}
          description: Disk usage is above 80%.
      - alert: LowDiskInodes
        expr: (node_filesystem_files_free / node_filesystem_files) < 0.1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: Low disk inodes on {{ $labels.instance }}
          description: Disk inodes are below 10%.
      - alert: HighDiskReadBytesRate
        expr: rate(node_disk_read_bytes_total[5m]) > 1000000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High disk read bytes rate on {{ $labels.instance }}
          description: Disk read bytes rate is above 1MB/s.
      - alert: HighDiskWriteBytesRate
        expr: rate(node_disk_written_bytes_total[5m]) > 1000000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High disk write bytes rate on {{ $labels.instance }}
          description: Disk write bytes rate is above 1MB/s.
      - alert: HighTcpTimeWait
        expr: node_sockstat_TCP_tw >= 5000
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "TCP TIME_WAIT socket count is above 5000 on {{ $labels.instance }}, current value: {{ $value }}"
      - alert: HighNetworkTraffic
        expr: rate(node_network_receive_bytes_total[5m]) > 10000000 or rate(node_network_transmit_bytes_total[5m]) > 10000000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High network traffic on {{ $labels.instance }}
          description: Network traffic is above 10MB/s.
      - alert: HighNetworkErrorsRate
        expr: rate(node_network_receive_errs_total[1m]) > 0 or rate(node_network_transmit_errs_total[1m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High network errors rate on {{ $labels.instance }}
          description: Network errors rate detected.
      - alert: HighNetworkPacketsRate
        expr: rate(node_network_receive_packets_total[5m]) > 1000 or rate(node_network_transmit_packets_total[5m]) > 1000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: High network packets rate on {{ $labels.instance }}
          description: Network packets rate is above 1000 per second.
      - alert: HighNetworkLatency
        expr: rate(node_network_transmit_errs_total[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: Possible network degradation on {{ $labels.instance }}
          description: Transmit errors detected; node_exporter has no direct latency metric, so transmit errors are used as a proxy.
Edit the Prometheus configuration to load the rule file
vim /usr/local/prometheus/prometheus.yml
rule_files:
  - '/usr/local/prometheus/rules/base.yml'
After changing rules you can validate them and hot-reload without a restart (the lifecycle endpoint was enabled earlier with --web.enable-lifecycle)
./promtool check rules /usr/local/prometheus/rules/base.yml
curl -X POST http://127.0.0.1:9090/-/reload
Test:
Alert routing and grouping
Routes in the Alertmanager configuration can be set up in several common ways. The rules above already classify alerts into critical and warning, so you can route by severity: for example, send critical alerts to DingTalk and warnings to email. If alerts still feel too frequent, you can instead deliver only critical alerts (to DingTalk or email) and drop warnings entirely.
Option 1: critical to DingTalk, warning to email
vim /usr/local/alertmanager/alertmanager.yml
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: 'email_receiver'
  routes:
    - match:
        severity: 'critical'
      receiver: 'dingtalk_webhook'
    - match:
        severity: 'warning'
      receiver: 'email_receiver'
receivers:
  - name: 'email_receiver'
    email_configs:
      - send_resolved: true
        to: 'user@aliyun.net'
  - name: 'dingtalk_webhook'
    webhook_configs:
      - url: 'http://172.16.208.12:8060/dingtalk/webhook1/send'
        send_resolved: true
Option 2: deliver only critical alerts
vim /usr/local/alertmanager/alertmanager.yml
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: 'email_receiver'
  routes:
    - match:
        severity: 'critical'
      receiver: 'email_receiver'
    - match:
        severity: 'warning'
      receiver: 'null_receiver'
receivers:
  - name: 'null_receiver'
  - name: 'email_receiver'
    email_configs:
      - send_resolved: true
        to: 'user@aliyun.net'
        html: '{{ template "email.html" . }}'
  - name: 'dingtalk_webhook'
    webhook_configs:
      - url: 'http://172.16.208.12:8060/dingtalk/webhook1/send'
        send_resolved: true
Two things are different here. First, you must define an empty receiver ('null_receiver') to absorb warning alerts; every route must reference an existing receiver, and omitting it causes a configuration error. Second, the top-level receiver: 'email_receiver' must be a receiver that can actually deliver notifications, so do not point it at null_receiver directly.
Alert templates
Adding templates makes notifications easier to read at a glance.
Email template
vim /usr/local/alertmanager/templates/base.yml
{{ define "email.html" }}
{{ range .Alerts }}
<pre>
========start==========
告警程序: prometheus_alert
告警级别: {{ .Labels.severity }}
告警类型: {{ .Labels.alertname }}
故障主机: {{ .Labels.instance }}
告警主题: {{ .Annotations.summary }}
告警详情: {{ .Annotations.description }}
触发时间: {{ (.StartsAt.Add 28800e9).Format "2006-01-02 15:04:05" }}
========end==========
</pre>
{{ end }}
{{ end }}
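For the email template above to take effect, alertmanager.yml must also declare where to find it; a sketch, assuming the template directory used above:

```yaml
templates:
  - '/usr/local/alertmanager/templates/*.yml'
```

Then hot-reload Alertmanager so the template is picked up.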
DingTalk template
vim /usr/local/alertmanager/templates/dingding.yml
{{ define "__subject" }}
[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}]
{{ end }}
{{ define "__alert_list" }}{{ range . }}
---
**告警类型**: {{ .Labels.alertname }}
**告警级别**: {{ .Labels.severity }}
**故障主机**: {{ .Labels.instance }}
**告警信息**: {{ .Annotations.description }}
**触发时间**: {{ (.StartsAt.Add 28800e9).Format "2006-01-02 15:04:05" }}
{{ end }}{{ end }}
{{ define "__resolved_list" }}{{ range . }}
---
**告警类型**: {{ .Labels.alertname }}
**告警级别**: {{ .Labels.severity }}
**故障主机**: {{ .Labels.instance }}
**触发时间**: {{ (.StartsAt.Add 28800e9).Format "2006-01-02 15:04:05" }}
**恢复时间**: {{ (.EndsAt.Add 28800e9).Format "2006-01-02 15:04:05" }}
{{ end }}{{ end }}
{{ define "ops.title" }}
{{ template "__subject" . }}
{{ end }}
{{ define "ops.content" }}
{{ if gt (len .Alerts.Firing) 0 }}
**====侦测到{{ .Alerts.Firing | len }}个故障====**
{{ template "__alert_list" .Alerts.Firing }}
---
{{ end }}
{{ if gt (len .Alerts.Resolved) 0 }}
**====恢复{{ .Alerts.Resolved | len }}个故障====**
{{ template "__resolved_list" .Alerts.Resolved }}
{{ end }}
{{ end }}
{{ define "ops.link.title" }}{{ template "ops.title" . }}{{ end }}
{{ define "ops.link.content" }}{{ template "ops.content" . }}{{ end }}
{{ template "ops.title" . }}
{{ template "ops.content" . }}
Integrating Grafana
Grafana is an open-source, multi-platform data visualization tool used to monitor and analyze the performance of applications and infrastructure. It can pull data from many sources (Prometheus, InfluxDB, MySQL, Elasticsearch, and more) and render it as charts, dashboards, and alerts. It is widely used in DevOps, monitoring, and data analysis, helping teams better understand and manage system performance and health. Since Prometheus has no rich charting UI of its own, it is usually paired with Grafana for visualization.
Download and install
wget https://dl.grafana.com/oss/release/grafana-8.0.3-1.x86_64.rpm
yum install grafana-8.0.3-1.x86_64.rpm
systemctl start grafana-server
systemctl status grafana-server
systemctl enable grafana-server
Default username/password: admin/admin
Open http://172.16.208.12:3000 in a browser
Importing dashboards
After logging in, you can import dashboard templates. If you have a specific template ID, import it as follows:
- In the left-hand menu, click the "+" icon, then choose "Import".
- On the Import page, enter the dashboard ID you want and click "Load".
Commonly used dashboards
11209, 4921, 1860 (1860 is the popular "Node Exporter Full" dashboard)
You can also browse the dashboards site and pick an ID yourself:
https://grafana.com/grafana/dashboards/
Result: (screenshot omitted)
Summary
Prometheus's strengths are its powerful multi-dimensional data model, the flexible PromQL query language, automatic service discovery, and a design well suited to cloud-native environments. Compared with Zabbix, Prometheus is more modern and particularly good at monitoring dynamic, cloud-native, containerized environments. Zabbix is a mature monitoring tool with a user-friendly interface, strong alert management, automated discovery, and a rich template library, and it is especially well suited to traditional IT infrastructure. However, Zabbix's data handling is comparatively static, and it falls short of Prometheus in scalability and real-time behavior. For monitoring large numbers of dynamic, distributed services, Prometheus has the edge, while Zabbix excels at traditional servers and network devices.
If you are also interested in Zabbix, see my other article:
https://blog.csdn.net/sozee910/article/details/141722082