Prometheus with cAdvisor, AlertManager, and node-exporter: monitoring containers with email alerting
Monitoring containers with Prometheus
Prometheus is an open-source monitoring tool for cloud-native applications. As the first monitoring project to graduate from the CNCF, it carries high expectations from developers. In the Kubernetes community, many consider Prometheus the first-choice monitoring solution for container environments and the de facto setter of container monitoring standards.
What is cAdvisor?
cAdvisor (Container Advisor) is an open-source container monitoring tool from Google that tracks the resource usage and performance of containers. It runs as a daemon that collects, aggregates, processes, and exports information about running containers: for each container it records the resource-isolation parameters, historical resource usage, histograms of the complete usage history, and network statistics.
cAdvisor supports Docker containers natively and aims to support other container runtimes as far as possible, striving for compatibility with all container types.
As the above suggests, cAdvisor monitors the container engine. Because it is so useful, Kubernetes integrates it into the Kubelet by default, so there is no need to deploy cAdvisor separately to expose the container metrics of a node; the metrics endpoint provided by the Kubelet can be used directly.
In this setup, cAdvisor does the collecting, Prometheus serves as the data source, and Grafana handles the display.
Environment:
Hostname | IP | Software |
---|---|---|
master | 192.168.58.110 | docker-ce |
client | 192.168.58.20 | docker-ce |
Install Docker on both the master and client hosts and configure a registry mirror
Installing Docker
Configure a network repo (RHEL systems):
[root@master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-8.repo
Configure the docker-ce repo:
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# curl -o docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
Install docker-ce along with its dependencies and tools:
[root@master ~]# dnf -y install yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum -y install docker-ce --allowerasing
After installation, view the version information with docker version:
[root@master ~]# docker version
Client: Docker Engine - Community
Version: 20.10.11
API version: 1.41
Go version: go1.16.9
Git commit: dea9396
Built: Thu Nov 18 00:36:58 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Configuring a Docker registry mirror
To obtain a personal accelerator URL, see the post on Docker basics.
[root@master ~]# mkdir -p /etc/docker
[root@master ~]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://a74l47xi.mirror.aliyuncs.com"] //此处的网址是个人账户分配的
}
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
Once Docker is deployed on both hosts, pull the official prom/prometheus image on the master host:
[root@master ~]# docker pull prom/prometheus
Using default tag: latest
latest: Pulling from prom/prometheus
97518928ae5f: Pull complete
5b58818b7f48: Pull complete
d9a64d9fd162: Pull complete
4e368e1b924c: Pull complete
867f7fdd92d9: Pull complete
387c55415012: Pull complete
07f94c8f51cd: Pull complete
ce8cf00ff6aa: Pull complete
e44858b5f948: Pull complete
4000fdbdd2a3: Pull complete
Digest: sha256:18d94ae734accd66bccf22daed7bdb20c6b99aa0f2c687eea3ce4275fe275062
Status: Downloaded newer image for prom/prometheus:latest
docker.io/prom/prometheus:latest
[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
prom/prometheus latest a3d385fc29f9 11 days ago 201MB
Fetching the prometheus.yml configuration file on the client host
(from the official Prometheus release)
# Upload the Prometheus tarball to the host, extract it, and copy prometheus.yml to /opt on the master host
[root@client ~]# ls
anaconda-ks.cfg prometheus-2.31.1.linux-amd64.tar.gz
[root@client ~]# tar xf prometheus-2.31.1.linux-amd64.tar.gz
[root@client ~]# cd prometheus-2.31.1.linux-amd64
[root@client prometheus-2.31.1.linux-amd64]# scp /root/prometheus-2.31.1.linux-amd64/prometheus.yml 192.168.58.110:/opt/prometheus.yml
root@192.168.58.110's password:
prometheus.yml 100% 934 29.3KB/s 00:00
On the master host, run a Prometheus container from the official image, mapping the port and the config file
# View the config file
[root@master ~]# vi /opt/prometheus.yml
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ["localhost:9090"]
# List local images
[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
prom/prometheus latest a3d385fc29f9 11 days ago 201MB
# Map the port and config file onto the host; --restart always starts the container whenever Docker starts
[root@master opt]# docker run -d --name prometheus --restart always -p 9090:9090 -v /opt/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
cb748d375af075241ea835c14a00896a8d94a3e05f911f8b88c155be9ae35980
[root@master opt]# docker ps | grep prometheus
cb748d375af0 prom/prometheus "/bin/prometheus --c…" 7 seconds ago Up 7 seconds 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp prometheus
# Check the container status
[root@master ~]# docker ps | grep prometheus
933b88601ed6 prom/prometheus "/bin/prometheus --c…" 10 minutes ago Up 10 minutes 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp prometheus
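A side note on the two intervals in the config above: with scrape_interval: 15s, every target is scraped four times a minute. A quick shell sketch of the resulting sample count per target per day:

```shell
# Seconds per day divided by the 15-second scrape interval gives the
# number of samples Prometheus stores per target per day.
echo $(( 86400 / 15 ))   # → 5760
```

Shortening the interval increases resolution at the cost of proportionally more storage and scrape load.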
Browse to http://<IP>:9090.
View the targets Prometheus is currently monitoring (only itself so far).
Deploying cAdvisor on the client
Pull the official google/cadvisor image on the client host:
[root@client ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@client ~]# docker pull google/cadvisor
Using default tag: latest
latest: Pulling from google/cadvisor
ff3a5c916c92: Pull complete
44a45bb65cdf: Pull complete
0bbe1a2fe2a6: Pull complete
Digest: sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04
Status: Downloaded newer image for google/cadvisor:latest
docker.io/google/cadvisor:latest
Run a cadvisor container on the client host from the official image, mapping the required directories and the port:
docker run \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/dev/disk/:/dev/disk:ro \
--publish=8080:8080 \
--detach=true \
--name=cadvisor \
--privileged \
--device=/dev/kmsg \
google/cadvisor
Run cAdvisor with the above command:
[root@client ~]# docker run \
> --volume=/:/rootfs:ro \
> --volume=/var/run:/var/run:ro \
> --volume=/sys:/sys:ro \
> --volume=/var/lib/docker/:/var/lib/docker:ro \
> --volume=/dev/disk/:/dev/disk:ro \
> --publish=8080:8080 \
> --detach=true \
> --name=cadvisor \
> --privileged \
> --device=/dev/kmsg \
> google/cadvisor
7d1c33918e92b406965224d4383ca4bb6520f8171073cb28d4bf872904f6edb1
# Check the container status
[root@client ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7d1c33918e92 google/cadvisor "/usr/bin/cadvisor -…" 14 seconds ago Up 7 seconds 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp cadvisor
Browse to http://<IP>:8080.
Scroll down to see the usage charts.
Detailed information about the Docker containers can also be viewed.
Configuring prometheus.yml on the master host
This lets Prometheus receive the metrics cAdvisor collects, thereby monitoring the host where cAdvisor runs:
[root@master ~ ]# vi /opt/prometheus.yml
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ["localhost:9090"]
- job_name: "Rong Qi" //添加此处
static_configs: /添加此处
- targets: ["192.168.58.20:8080"] //添加此处
Restart the Prometheus container to reload the configuration file:
[root@master opt]# docker restart prometheus
prometheus
# Check the container status
[root@master opt]# docker ps | grep prometheus
cb748d375af0 prom/prometheus "/bin/prometheus --c…" 26 hours ago Up 5 minutes 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp prometheus
Open the web UI to check the monitoring status.
Installing Grafana on the master
Grafana fetches Prometheus's monitoring data so it can be visualized.
Pull the official grafana/grafana image:
[root@master ~]# docker pull grafana/grafana
Using default tag: latest
latest: Pulling from grafana/grafana
97518928ae5f: Pull complete
5b58818b7f48: Pull complete
d9a64d9fd162: Pull complete
4e368e1b924c: Pull complete
867f7fdd92d9: Pull complete
387c55415012: Pull complete
07f94c8f51cd: Pull complete
ce8cf00ff6aa: Pull complete
e44858b5f948: Pull complete
4000fdbdd2a3: Pull complete
Digest: sha256:18d94ae734accd66bccf22daed7bdb20c6b99aa0f2c687eea3ce4275fe275062
Status: Downloaded newer image for grafana/grafana:latest
docker.io/grafana/grafana:latest
Run a grafana container from the image, mapping the port to provide the service:
[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
prom/prometheus latest a3d385fc29f9 11 days ago 201MB
grafana/grafana latest 9b957e098315 2 weeks ago 275MB
[root@master ~]# docker run -dit --name grafan -p 3000:3000 grafana/grafana
2a068867c04d57aa67ece4d35f28e2a77f188c248de6a43bc071a9bb21aae417
[root@master ~]# docker ps | grep grafan
2a068867c04d grafana/grafana "/run.sh" 11 seconds ago Up 8 seconds 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp grafan
Open the home page (http://<IP>:3000).
On first login you are required to change the password.
After changing it, you land on the home page.
Add a Prometheus data source (this is simply the Prometheus access URL).
After filling in the form, scroll down and click Save & Test.
With the data source added, import a dashboard template.
Enter template ID 11600.
Select the data source.
A second template: ID 315.
Prometheus with AlertManager and node-exporter: monitoring containers with email alerting
Pull the prom/node-exporter image on the client host:
[root@client ~]# docker pull prom/node-exporter
Using default tag: latest
latest: Pulling from prom/node-exporter
aa2a8d90b84c: Pull complete
b45d31ee2d7f: Pull complete
b5db1e299295: Pull complete
Digest: sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd
Status: Downloaded newer image for prom/node-exporter:latest
docker.io/prom/node-exporter:latest
[root@client ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
prom/prometheus latest a3d385fc29f9 12 days ago 201MB
prom/node-exporter latest 1dbe0e931976 3 weeks ago 20.9MB
google/cadvisor latest eb1210707573 3 years ago 69.6MB
Run the node-exporter container on the client host, mapping the port:
[root@client system]# docker run --name node-exporter -d -p 9100:9100 prom/node-exporter
5ce6fbc393dca3a13196386abbc1977631416326ee8632358ab2036810a1114e
# Check the container status
[root@client ~]# docker ps | grep node_exporter
5ce6fbc393dc prom/node-exporter "/bin/node_exporter" 2 minutes ago Up 2 minutes 0.0.0.0:9100->9100/tcp, :::9100->9100/tcp node-exporter
Browse to http://<IP>:9100.
Next, node-exporter must be registered in Prometheus so that Prometheus periodically pulls the metrics the exporter collects. On the master, edit the prometheus.yml configuration file and add a new job under scrape_configs, as follows:
[root@master ~]# vi /opt/prometheus.yml
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ["localhost:9090"]
- job_name: "Rong Qi"
static_configs:
- targets: ["192.168.58.20:8080"]
- job_name: "Linux Server" //添加此处
static_configs: //添加此处
- targets: ["192.168.58.20:9100"] //添加此处
# Restart the prometheus container
[root@master ~]# docker restart prometheus
prometheus
# Check the prometheus container status
[root@master ~]# docker ps | grep prometheus
cb748d375af0 prom/prometheus "/bin/prometheus --c…" 27 hours ago Up About a minute 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp prometheus
Check the status of the monitored targets.
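Each target on that status page is backed by an `up` time series that Prometheus records per scrape: 1 when the last scrape succeeded, 0 when it failed. A minimal sketch of what these series look like and how a down target can be picked out (the sample values below are illustrative stand-ins, not real query output):

```shell
# Sample 'up' series as Prometheus stores them: 1 = target reachable, 0 = down.
# Label values mirror this article's setup; the 0 is invented for illustration.
cat <<'EOF' > /tmp/up_series.txt
up{job="prometheus",instance="localhost:9090"} 1
up{job="Rong Qi",instance="192.168.58.20:8080"} 1
up{job="Linux Server",instance="192.168.58.20:9100"} 0
EOF

# Print the series whose current value is 0; these are the targets an
# 'up == 0' alert expression would fire on.
awk '$NF == 0' /tmp/up_series.txt
# → up{job="Linux Server",instance="192.168.58.20:9100"} 0
```

The alert rule configured later in this article is built on exactly this up == 0 condition.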
Deploying AlertManager on the client host
Pull the official prom/alertmanager image:
[root@client ~]# docker pull prom/alertmanager
Using default tag: latest
latest: Pulling from prom/alertmanager
aa2a8d90b84c: Already exists
b45d31ee2d7f: Already exists
e64c3c57ffe7: Pull complete
7665a4a59238: Pull complete
9a345be9cdfe: Pull complete
aa42aae1183b: Pull complete
Digest: sha256:9ab73a421b65b80be072f96a88df756fc5b52a1bc8d983537b8ec5be8b624c5a
Status: Downloaded newer image for prom/alertmanager:latest
docker.io/prom/alertmanager:latest
[root@client ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
prom/prometheus latest a3d385fc29f9 12 days ago 201MB
prom/node-exporter latest 1dbe0e931976 3 weeks ago 20.9MB
prom/alertmanager latest ba2b418f427c 4 months ago 57.5MB
google/cadvisor latest eb1210707573 3 years ago 69.6MB
Run an alertmanager container on the client host, mapping the port:
[root@client ~]# docker run --name alertmanager -d -p 9093:9093 prom/alertmanager
b65bc10bb6f184e3a4b3bdc8b9ee27b099d25385c188f678124375e5b938b3d1
# Check the container status
[root@client ~]# docker ps | grep alertmanager
b65bc10bb6f1 prom/alertmanager "/bin/alertmanager -…" 23 seconds ago Up 15 seconds 0.0.0.0:9093->9093/tcp, :::9093->9093/tcp alertmanager
AlertManager listens on port 9093 by default. Once it is up, browse to http://<IP>:9093 to see the default UI. No alerts appear yet, because no alerting rules have been configured to trigger any.
Configuring email alerts in AlertManager
AlertManager's default configuration file is alertmanager.yml, located inside the container at /etc/alertmanager/alertmanager.yml. The default contents are:
global:
resolve_timeout: 5m
route:
group_by: ['alertname']
group_wait: 10s
group_interval: 10s
repeat_interval: 1h
receiver: 'web.hook'
receivers:
- name: 'web.hook'
webhook_configs:
- url: 'http://127.0.0.1:5001/'
inhibit_rules:
- source_match:
severity: 'critical'
target_match:
severity: 'warning'
equal: ['alertname', 'dev', 'instance']
A brief overview of the main sections:
- global: global settings, including the timeout after which a resolved alert is declared, SMTP settings, API addresses for the various notification channels, and so on.
- route: the alert routing policy. It is a tree, matched depth-first from left to right.
- receivers: the alert recipients, covering common channels such as email, wechat, slack, and webhook.
- inhibit_rules: inhibition rules; when alerts matching the source set are present, alerts matching the target set are suppressed.
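To illustrate the tree structure of route, here is a sketch with one child route; the severity matcher and the 'critical-email' receiver are hypothetical additions for demonstration, not part of this article's setup:

```yaml
route:
  receiver: 'email'            # default receiver when no child route matches
  group_by: ['alertname']
  routes:                      # child routes, evaluated depth-first, left to right
    - match:
        severity: 'critical'   # alerts labeled severity=critical ...
      receiver: 'critical-email'  # ... go to this (hypothetical) receiver instead
```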
Now let's configure alert notification by email, using a 163 mailbox as the sender:
[root@client ~]# vi alertmanager.yml
global:
resolve_timeout: 5m
smtp_from: 'xmfile00@163.com'
smtp_smarthost: 'smtp.163.com:465'
smtp_auth_username: 'xmfile00@163.com'
smtp_auth_password: 'UBVOAADJIPYTIGDM'
smtp_require_tls: false
smtp_hello: 'qq.com'
route:
group_by: ['alertname']
group_wait: 5s
group_interval: 5s
repeat_interval: 5m
receiver: 'email'
receivers:
- name: 'email'
email_configs:
- to: '2031314675@qq.com'
send_resolved: true
inhibit_rules:
- source_match:
severity: 'critical'
target_match:
severity: 'warning'
equal: ['alertname', 'dev', 'instance']
- smtp_smarthost: the SMTP server address. For QQ Mail the official address is smtp.qq.com on port 465 or 587; here the sender is a 163 mailbox, so smtp.163.com:465 is used. The POP3/SMTP service must be enabled on the account.
- smtp_auth_password: the authorization code issued for third-party SMTP access, not the account login password (using the login password will fail). The mailbox shows this code when you enable the POP3/SMTP service in its settings.
- smtp_require_tls: whether to use TLS; enable or disable it depending on your environment. If you see the error email.loginAuth failed: 530 Must issue a STARTTLS command first, set it to true. Note that with TLS enabled, the error starttls failed: x509: certificate signed by unknown authority means you must additionally configure insecure_skip_verify: true under email_configs to skip TLS verification.
Copy the default configuration file out of the alertmanager container onto the client host:
[root@client opt]# docker cp alertmanager:/etc/alertmanager/alertmanager.yml /root/alertmanager.yml
[root@client opt]# cd
[root@client ~]# ls
alertmanager.yml anaconda-ks.cfg node_exporter-1.3.0.linux-amd64.tar.gz prometheus-2.31.1.linux-amd64.tar.gz prometheus.yml
[root@client ~]# mkdir prometheus
[root@client ~]# mv alertmanager.yml prometheus/
[root@client ~]# cd prometheus/
[root@client prometheus]# ls
alertmanager.yml
Modify the AlertManager startup command to mount the local alertmanager.yml into the container at the expected path. Since the container name alertmanager is reused, remove the old container first (docker rm -f alertmanager):
[root@client ~]# docker run -d --name alertmanager -p 9093:9093 -v /root/prometheus/alertmanager.yml:/etc/alertmanager/alertmanager.yml prom/alertmanager
a5269c1bd3d8bfd0b3c586312d2b424b3da381f718b02095d95a2a82215c28c8
[root@client ~]# docker ps | grep alertmanager
a5269c1bd3d8 prom/alertmanager "/bin/alertmanager -…" 9 seconds ago Up 7 seconds 0.0.0.0:9093->9093/tcp, :::9093->9093/tcp alertmanager
Configuring AlertManager and alerting rules in Prometheus (master side)
Next, configure the AlertManager service address and the alerting rules in Prometheus. Create a new rules file, node-up.rules, as follows:
$ mkdir -p /opt/prometheus/rules && cd /opt/prometheus/rules/
$ vim node-up.rules
groups:
- name: node-up
rules:
- alert: node-up
expr: up{job="node-exporter"} == 0
for: 15s
labels:
severity: 1
team: node
annotations:
summary: "192.168.58.20 已停止运行超过 15s!"
A note on this rule: its purpose is to detect whether the node is alive. expr is a PromQL expression that tests whether the target labeled job="node-exporter" is up, and for: 15s means the alert sits in the Pending state for 15s before turning Firing, at which point it is sent to AlertManager. One caveat: PromQL label matching is exact, and the scrape jobs defined earlier in prometheus.yml are named "prometheus", "Rong Qi", and "Linux Server"; none is called "node-exporter", so as written the expression matches no series and the alert can never fire. For this setup the expression should be up{job="Linux Server"} == 0.
Then, edit prometheus.yml to add the AlertManager address and the rules file.
[root@master rules]# vi /opt/prometheus.yml
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
- 192.168.58.20:9093 // add this
rule_files: // add this
- "/usr/local/prometheus/rules/*.rules" // add this
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ["localhost:9090"]
- job_name: "Rong Qi"
static_configs:
- targets: ["192.168.58.20:8080"]
- job_name: "Linux Server"
static_configs:
- targets: ["192.168.58.20:9100"]
Note: the rule_files path here is an in-container path, so the local node-up.rules file must be mounted to that path inside the container. Remove the previous prometheus container first (the name is reused), then modify the Prometheus startup command as follows and restart the service.
[root@master rules]# docker run --name prometheus -d -p 9090:9090 \
> -v /opt/prometheus.yml:/etc/prometheus/prometheus.yml:ro \
> -v /opt/prometheus/rules/:/usr/local/prometheus/rules/ \
> prom/prometheus
f876b1759f86f1991c314333a6256c8a8e0da426511285e9b3a41ebe1207b48c
# The service fails to start
[root@master rules]# docker ps -a| grep prometheus
f876b1759f86 prom/prometheus "/bin/prometheus --c…" 45 seconds ago Exited (2) 44 seconds ago prometheus
Cause of the error:
[root@master opt]# docker logs --details prometheus
ts=2021-12-30T12:14:36.148Z caller=main.go:437 level=error msg="Error loading config (--config.file=/etc/prometheus/prometheus.yml)" err="parsing YAML file /etc/prometheus/prometheus.yml: yaml: unmarshal errors:\n line 18: field rule_files already set in type config.plain"
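The message "field rule_files already set" means that rule_files appears twice as a top-level key in the YAML file. A self-contained sketch that reproduces the shape of the mistake on a trimmed-down config and counts the offending keys:

```shell
# A trimmed config reproducing the mistake: 'rule_files:' appears twice
# at the top level, which Prometheus rejects as a duplicate field.
cat <<'EOF' > /tmp/broken-prometheus.yml
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - 192.168.58.20:9093
rule_files:
  - "/usr/local/prometheus/rules/*.rules"
rule_files:
  # - "first_rules.yml"
scrape_configs:
  - job_name: "prometheus"
EOF

# Count top-level 'rule_files:' keys; anything above 1 is the bug.
grep -c '^rule_files:' /tmp/broken-prometheus.yml   # → 2
```

Any count above 1 produces exactly this unmarshal error when Prometheus parses the file.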
First hypothesis: the prometheus.yml YAML format is wrong, or a path is wrong.
[root@master opt]# docker run --name prometheus -d -p 9090:9090 prom/prometheus
515d4340dc79756cdb9b5e623aa876cb27dc67457cbe016df772bd34f9238b50
[root@master opt]# docker exec -it prometheus /bin/sh
/prometheus $ cd /usr/lo
/bin/sh: cd: can't cd to /usr/lo: No such file or directory
/prometheus $ cd /usr/
/usr $ ls
sbin share
There is no local directory under /usr inside the container.
After modifying the config file:
[root@master opt]# vi prometheus.yml
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
- 192.168.58.20:9093
rule_files:
- 'rules/*.yml'
[root@master opt]# docker run --name prometheus -d -p 9090:9090 -v /opt/prometheus.yml:/etc/prometheus/prometheus.yml:ro prom/prometheus
1c53172de6124d65dbe991820b45f4e76524d070b7ce52b87a9d726ba58363b7
[root@master opt]# docker ps -a| grep prometheus
1c53172de612 prom/prometheus "/bin/prometheus --c…" 8 seconds ago Exited (2) 6 seconds ago prometheus
[root@master opt]# docker logs --details prometheus
ts=2021-12-30T12:21:46.577Z caller=main.go:437 level=error msg="Error loading config (--config.file=/etc/prometheus/prometheus.yml)" err="parsing YAML file /etc/prometheus/prometheus.yml: yaml: unmarshal errors:\n line 18: field rule_files already set in type config.plain"
The actual cause: the error "field rule_files already set" means the file contains two top-level rule_files keys, the one added next to the alerting block and the default rule_files: key further down (the key itself is not commented out, only its example entries are). Prometheus's strict YAML parsing rejects a duplicate field, so the config fails to load no matter what path the first key points to. The fix is to keep a single rule_files block and delete the other.
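With the duplicate key removed, a working /opt/prometheus.yml for this article's setup might look like the sketch below: a single rule_files entry pointing at the in-container path that the host rules directory is bind-mounted onto.

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - 192.168.58.20:9093

# Exactly one top-level rule_files key; the path is the in-container
# path that the host rules directory is mounted onto.
rule_files:
  - "/usr/local/prometheus/rules/*.rules"

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "Rong Qi"
    static_configs:
      - targets: ["192.168.58.20:8080"]
  - job_name: "Linux Server"
    static_configs:
      - targets: ["192.168.58.20:9100"]
```

Recreate the prometheus container with both the config file and the rules directory mounted, then check docker ps to confirm it stays Up.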