I. Installing Prometheus
1. Pull the image
docker pull prom/prometheus
2. Start the container
docker run -d -p 9091:9090 \
  -v /data/prometheus:/etc/prometheus \
  -v /data/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
  --name prometheus prom/prometheus
-d runs the container in the background.
-p 9091:9090 maps host port 9091 to container port 9090; port 9090 was already in use on my machine, so I changed the host side to 9091.
-v /data/prometheus:/etc/prometheus mounts the configuration directory; create /data/prometheus on the host before running the command.
-v /data/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml mounts the configuration file, which makes editing prometheus.yml later easier.
--name prometheus names the container "prometheus".
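The host-side preparation described above can be sketched as follows. On the host the base directory is /data/prometheus (which usually needs root); BASE defaults to a scratch directory here so the commands can be tried safely:

```shell
# Sketch: prepare the bind-mount paths before running `docker run`.
BASE="${BASE:-./data/prometheus}"
mkdir -p "$BASE"
# Create the config file up front: if the -v source path does not
# exist, Docker creates it as a *directory*, and Prometheus will then
# fail to read /etc/prometheus/prometheus.yml inside the container.
touch "$BASE/prometheus.yml"
ls -l "$BASE"
```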
3. Open the Prometheus web UI
Finally, you can reach the Prometheus web UI from a browser. Enter "http://localhost:9091" (9091 because of the port mapping above) and you will see the Prometheus console.
With the steps above, you have installed and started Prometheus with Docker. You can now configure Prometheus to monitor your applications.
4. Enter the Prometheus container:
docker exec -it b2df256f2e10 /bin/sh
b2df256f2e10 is my container id; replace it with your own.
5. Check for the prometheus.yml file under /data/prometheus on the host
If it is missing, create a prometheus.yml yourself.
Contents of prometheus.yml:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
II. Installing node-exporter
1. For the node-exporter image installation you can also refer to the CSDN post "Docker部署Prometheus+Grafana+node-exporter".
2. Pull the image:
docker pull prom/node-exporter
The pull failed.
The registry mirrors configured earlier were probably no longer usable, so add a new mirror:
"https://docker.m.daocloud.io"
After adding the mirror, restart Docker:
sudo systemctl restart docker
Pull the image again; this time it succeeds.
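A sketch of the mirror change, assuming it goes into Docker's daemon.json (normally /etc/docker/daemon.json, written with sudo). A scratch file is used here so the snippet can be tried without root:

```shell
# Write the registry-mirrors setting; on a real host, target
# /etc/docker/daemon.json (merging with any existing keys).
cat > daemon.json <<'EOF'
{
  "registry-mirrors": ["https://docker.m.daocloud.io"]
}
EOF
# Validate that the file is well-formed JSON before restarting Docker;
# a broken daemon.json prevents the Docker daemon from starting.
python3 -m json.tool daemon.json
```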
3. Start the container
docker run --name exporter -p 9102:9100 -d prom/node-exporter
-p 9102:9100 maps host port 9102 to the exporter's port 9100; 9100 was already taken on my machine, so I changed it to 9102. If yours is free, you can still use -p 9100:9100.
Open in a browser: http://ip:9102/
III. Configuring node-exporter in Prometheus
1. Edit the prometheus.yml file
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "node-exporter"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["your-own-ip:9102"]
In other words, append the following to the original prometheus.yml:
  - job_name: "node-exporter"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["your-own-ip:9102"]
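The append can also be done in one step from the shell. CFG would be /data/prometheus/prometheus.yml on the host; a scratch file is used here, and YOUR_HOST_IP is a placeholder for your own IP:

```shell
# Sketch: append the node-exporter scrape job under scrape_configs.
CFG="${CFG:-./prometheus.yml}"
cat >> "$CFG" <<'EOF'
  - job_name: "node-exporter"
    static_configs:
      - targets: ["YOUR_HOST_IP:9102"]
EOF
# Show the appended lines with their line numbers.
grep -n "node-exporter" "$CFG"
```

Note the two-space indentation: the new job must sit at the same level as the existing "prometheus" job under scrape_configs, or Prometheus will reject the file.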
2. Reload the prometheus.yml file in Prometheus
Run:
sudo docker exec -it prometheus /bin/sh -c 'kill -HUP $(pidof prometheus)'
This command sends SIGHUP to the Prometheus process, which makes it reload prometheus.yml directly, without restarting the container.