Prometheus Alertmanager WeChat Work alerting and local-time configuration: fixing the UTC-to-local-time problem

I never imagined that setting up WeChat alerting would cost me a whole day. Everything below can be copied and used as-is; I have tested it myself, and it is the result of working things out step by step. If it helps you, please give it a like! Please credit the source when reposting.

First, a word on why I don't use Grafana for alerting: Grafana's alerting is fairly limited, and it does not allow alerts on panels that use template variables, so in the end I went straight to Alertmanager under Prometheus and gave up on Grafana.


1. Registering a WeChat Work account

Registering a WeChat Work account is simple, so I won't go over it here.
Then add an application (mine is called Grafana) in the WeChat Work admin console:
(screenshot) Fill in the remaining fields yourself; I really don't want to spend too much time on this part.
(screenshot) The fields you will need later:
to_user:
(screenshot) agent_id and api_secret:
(screenshot)

2. Downloading Alertmanager

Alertmanager download link

Be sure to download the build that matches your platform, otherwise it won't even start. Don't ask me how I know; just thinking about it is maddening!
(screenshot)
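If you prefer the command line, the same 0.21.0 Linux amd64 build used in this article can be fetched directly; the URL below follows the standard GitHub releases pattern for prometheus/alertmanager, so adjust the version and platform to match your own environment:

wget https://github.com/prometheus/alertmanager/releases/download/v0.21.0/alertmanager-0.21.0.linux-amd64.tar.gz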

3. Configuring Alertmanager

This is done in three steps.

Step 1: extract the archive: tar -zxvf alertmanager-0.21.0.linux-amd64.tar.gz

Step 2: edit the configuration files.

The first file is alertmanager.yml, under /opt/software/prometheus/alertmanager/alertmanager. Delete everything that is in it by default and paste in the configuration below:

global:
  resolve_timeout: 5m
receivers:
- name: wechat
  wechat_configs:
  - agent_id: 'xx2' # the AgentId of the WeChat Work application
    api_secret: 'xxxxxxY' # the Secret of the WeChat Work application
    corp_id: 'wxx4' # the corp ID of your WeChat Work organization
    send_resolved: true
    to_user: 'JiangWanLin' # the WeChat Work account that receives the alerts
route:
  group_by:
  - job
  group_interval: 5m
  group_wait: 30s
  receiver: wechat
  repeat_interval: 12h
  routes:
  - match:
      alertname: Watchdog
    receiver: wechat
templates:
- '/opt/software/prometheus/alertmanager/alertmanager/wechat.tmpl' # the template file that formats the alert messages

Parameter notes:

  • corp_id: the unique ID of your WeChat Work organization; you can find it under "My Company" in the admin console.
  • to_party: the department (group) to send to.
  • agent_id: the ID of the enterprise application you created; shown on the application's detail page.
  • api_secret: the secret of the enterprise application you created; also shown on the application's detail page.
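Before starting Alertmanager, it is worth validating this file. The release tarball ships with the amtool utility, so a quick sanity check (run from the directory that holds alertmanager.yml, assuming the layout above) looks like this:

./amtool check-config alertmanager.yml

If the file parses cleanly, amtool reports SUCCESS along with the receivers and templates it found.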

The second file, also under /opt/software/prometheus/alertmanager/alertmanager, is wechat.tmpl.

Note: this template sends a message when an alert fires and another when it resolves, uses local time, and reads the level label defined in the rules file later in this article. The result looks like this:
(screenshot) Full content:

{{ define "wechat.default.message" }}
{{- if gt (len .Alerts.Firing) 0 -}}
{{- range $index, $alert := .Alerts.Firing -}}
{{- if eq $index 0 -}}
**********Alert notification**********
Alert type: {{ $alert.Labels.alertname }}
Alert level: {{ $alert.Labels.level }}
{{- end }}
=====================
Alert summary: {{ $alert.Annotations.summary }}
Alert details: {{ $alert.Annotations.description }}
Start time: {{ $alert.StartsAt.Local }}
{{ if gt (len $alert.Labels.instance) 0 -}}Instance: {{ $alert.Labels.instance }}{{- end -}}
{{- end }}
{{- end }}

{{- if gt (len .Alerts.Resolved) 0 -}}
{{- range $index, $alert := .Alerts.Resolved -}}
{{- if eq $index 0 -}}
**********Recovery notification**********
Alert type: {{ $alert.Labels.alertname }}
Alert level: {{ $alert.Labels.level }}
{{- end }}
=====================
Alert summary: {{ $alert.Annotations.summary }}
Alert details: {{ $alert.Annotations.description }}
Start time: {{ $alert.StartsAt.Local }}
Resolved time: {{ $alert.EndsAt.Local }}
{{ if gt (len $alert.Labels.instance) 0 -}}Instance: {{ $alert.Labels.instance }}{{- end -}}
{{- end }}
{{- end }}
{{- end }}

Pay close attention here! This conclusion cost me two hours!
Alertmanager works in UTC, so the timestamps in the alert messages all came out wrong. After much digging I finally found how to use local time: write the timestamp as alert.StartsAt.Local, as in the template above. The root cause was my own inexperience, but at least it's solved!
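If you want a tidier timestamp than Go's default rendering, the local time can also be formatted explicitly in the template. A minimal sketch using Go's reference-time layout (adjust the layout string to taste):

Start time: {{ $alert.StartsAt.Local.Format "2006-01-02 15:04:05" }}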

The third file to change is on the Prometheus side, where the alerting rules are defined:

(screenshot) Add the alerting rules; here I wrote a file called test.rules:

groups:
- name: monitor_base
  rules:
  - alert: CpuUsageAlert_warning
    expr: sum(avg(irate(node_cpu_seconds_total{mode!='idle'}[5m])) without (cpu)) by (instance) > 0.60
    for: 2m
    labels:
      level: warning
    annotations:
      summary: "Instance {{ $labels.instance }} CPU usage high"
      description: "{{ $labels.instance }} CPU usage above 60% (current value: {{ $value }})"
  - alert: CpuUsageAlert_serious
    #expr: sum(avg(irate(node_cpu_seconds_total{mode!='idle'}[5m])) without (cpu)) by (instance) > 0.85
    expr: (100 - (avg by (instance) (irate(node_cpu_seconds_total{job=~".*",mode="idle"}[5m])) * 100)) > 85
    for: 3m
    labels:
      level: serious
    annotations:
      summary: "Instance {{ $labels.instance }} CPU usage high"
      description: "{{ $labels.instance }} CPU usage above 85% (current value: {{ $value }})"
  - alert: MemUsageAlert_warning
    expr: avg by(instance) ((1 - (node_memory_MemFree_bytes + node_memory_Buffers_bytes + node_memory_Cached_bytes) / node_memory_MemTotal_bytes) * 100) > 70
    for: 2m
    labels:
      level: warning
    annotations:
      summary: "Instance {{ $labels.instance }} MEM usage high"
      description: "{{$labels.instance}}: MEM usage is above 70% (current value is: {{ $value }})"
  - alert: MemUsageAlert_serious
    expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes)/node_memory_MemTotal_bytes > 0.90
    for: 3m
    labels:
      level: serious
    annotations:
      summary: "Instance {{ $labels.instance }} MEM usage high"
      description: "{{ $labels.instance }} MEM usage above 90% (current value: {{ $value }})"
  - alert: DiskUsageAlert_warning
    expr: (1 - node_filesystem_free_bytes{fstype!="rootfs",mountpoint!="",mountpoint!~"/(run|var|sys|dev).*"} / node_filesystem_size_bytes) * 100 > 80
    for: 2m
    labels:
      level: warning
    annotations:
      summary: "Instance {{ $labels.instance }} Disk usage high"
      description: "{{$labels.instance}}: Disk usage is above 80% (current value is: {{ $value }})"
  - alert: DiskUsageAlert_serious
    expr: (1 - node_filesystem_free_bytes{fstype!="rootfs",mountpoint!="",mountpoint!~"/(run|var|sys|dev).*"} / node_filesystem_size_bytes) * 100 > 90
    for: 3m
    labels:
      level: serious
    annotations:
      summary: "Instance {{ $labels.instance }} Disk usage high"
      description: "{{$labels.instance}}: Disk usage is above 90% (current value is: {{ $value }})"
  - alert: NodeFileDescriptorUsage
    expr: avg by (instance) (node_filefd_allocated{} / node_filefd_maximum{}) * 100 > 60
    for: 2m
    labels:
      level: warning
    annotations:
      summary: "Instance {{ $labels.instance }} File Descriptor usage high"
      description: "{{$labels.instance}}: File Descriptor usage is above 60% (current value is: {{ $value }})"
  - alert: NodeLoad15
    expr: avg by (instance) (node_load15{}) > 80
    for: 2m
    labels:
      level: warning
    annotations:
      summary: "Instance {{ $labels.instance }} Load15 usage high"
      description: "{{$labels.instance}}: Load15 is above 80 (current value is: {{ $value }})"
  - alert: NodeAgentStatus
    expr: avg by (instance) (up{}) == 0
    for: 2m
    labels:
      level: warning
    annotations:
      summary: "{{$labels.instance}}: has been down"
      description: "{{$labels.instance}}: Node_Exporter Agent is down (current value is: {{ $value }})"
  - alert: NodeProcsBlocked
    expr: avg by (instance) (node_procs_blocked{}) > 10
    for: 2m
    labels:
      level: warning
    annotations:
      summary: "Instance {{ $labels.instance }}  Process Blocked usage high"
      description: "{{$labels.instance}}: Node Blocked Procs detected! above 10 (current value is: {{ $value }})"
  - alert: NetworkTransmitRate
    #expr:  avg by (instance) (floor(irate(node_network_transmit_bytes_total{device="ens192"}[2m]) / 1024 / 1024)) > 50
    expr:  avg by (instance) (floor(irate(node_network_transmit_bytes_total{}[2m]) / 1024 / 1024 * 8 )) > 40
    for: 1m
    labels:
      level: warning
    annotations:
      summary: "Instance {{ $labels.instance }} Network Transmit Rate usage high"
      description: "{{$labels.instance}}: Node Transmit Rate (Upload) is above 40Mbps/s (current value is: {{ $value }}Mbps/s)"
  - alert: NetworkReceiveRate
    #expr:  avg by (instance) (floor(irate(node_network_receive_bytes_total{device="ens192"}[2m]) / 1024 / 1024)) > 50
    expr:  avg by (instance) (floor(irate(node_network_receive_bytes_total{}[2m]) / 1024 / 1024 * 8 )) > 40
    for: 1m
    labels:
      level: warning
    annotations:
      summary: "Instance {{ $labels.instance }} Network Receive Rate usage high"
      description: "{{$labels.instance}}: Node Receive Rate (Download) is above 40Mbps/s (current value is: {{ $value }}Mbps/s)"
  - alert: DiskReadRate
    expr: avg by (instance) (floor(irate(node_disk_read_bytes_total{}[2m]) / 1024 )) > 200
    for: 2m
    labels:
      level: warning
    annotations:
      summary: "Instance {{ $labels.instance }} Disk Read Rate usage high"
      description: "{{$labels.instance}}: Node Disk Read Rate is above 200KB/s (current value is: {{ $value }}KB/s)"
  - alert: DiskWriteRate
    expr: avg by (instance) (floor(irate(node_disk_written_bytes_total{}[2m]) / 1024 / 1024 )) > 20
    for: 2m
    labels:
      level: warning
    annotations:
      summary: "Instance {{ $labels.instance }} Disk Write Rate usage high"
      description: "{{$labels.instance}}: Node Disk Write Rate is above 20MB/s (current value is: {{ $value }}MB/s)"

Just copy all of it in, as shown below:
(screenshot) Note: Prometheus needs to be restarted after this.
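For completeness, Prometheus itself also needs to know where Alertmanager is and where the rule file lives. Below is a minimal sketch of the relevant prometheus.yml excerpt; the Alertmanager address and the test.rules path are assumptions based on this article's setup, so adjust them to your environment:

alerting:
  alertmanagers:
  - static_configs:
    - targets: ['localhost:9093']  # the Alertmanager started in the next step
rule_files:
  - 'test.rules'  # path to the rules file above, relative to prometheus.yml

You can also validate the rule file with promtool, which ships with Prometheus, before restarting: ./promtool check rules test.rules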

Step 3: start Alertmanager with this configuration file:

nohup ./alertmanager --config.file=./alertmanager.yml --storage.path=/opt/software/prometheus/alertmanager/alertmanager/data/ --log.level=debug &

Once started, open hostname:9093 in a browser; if the following page appears, Alertmanager started successfully:
(screenshot) Background log:
(screenshot)
In Prometheus, at ip:9090, the alerting rules are now visible; if the following pages appear, the rules were configured successfully:
(screenshots)
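If you prefer to check from the shell instead of the browser, Alertmanager exposes simple health endpoints. Assuming it listens on the default port 9093 on the same host:

curl http://localhost:9093/-/healthy
curl http://localhost:9093/-/ready

Both should return HTTP 200 once the process is up and ready.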

4. Testing the alerting

I deliberately took a node down; WeChat Work then showed the alert below, and after the node came back it showed the recovery message:
(screenshot)
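If you don't want to take a real node down, you can also push a hand-crafted alert into Alertmanager's v2 API to exercise the WeChat path end to end. A sketch, assuming Alertmanager on localhost:9093 and made-up label values:

curl -XPOST http://localhost:9093/api/v2/alerts -H 'Content-Type: application/json' -d '[{"labels":{"alertname":"ManualTest","level":"warning","instance":"test-host"},"annotations":{"summary":"Manual test alert","description":"Sent by hand to verify the WeChat notification path"}}]'

The alert appears at hostname:9093 immediately, and a WeChat message should follow once group_wait has elapsed.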

This article drew on the following blog posts:
https://www.cnblogs.com/sanduzxcvbnm/p/13724172.html
https://blog.csdn.net/wanchaopeng/article/details/83857130
https://blog.51cto.com/10874766/2530127?source=dra
https://www.fxkjnj.com/?p=2488
Thanks to the authors of those posts!

Alerting rule templates:
https://awesome-prometheus-alerts.grep.to/rules
