1. Download page: https://prometheus.io/download/
2. Upload and extract the archive
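The upload-and-extract step can be sketched as below. The version number (1.8.2) and the install path are assumptions; substitute the tarball you actually downloaded. The demo builds a stand-in tarball in a temp directory so the commands can be run end to end.

```shell
#!/bin/sh
# Sketch of the upload-and-extract step. The version number and install
# path are assumptions; a stand-in tarball is built in a temp dir so the
# commands are runnable as-is.
set -e
work=$(mktemp -d)

# --- stand-in for the uploaded tarball (replace with your real file) ---
mkdir -p "$work/node_exporter-1.8.2.linux-amd64"
printf 'stub binary\n' > "$work/node_exporter-1.8.2.linux-amd64/node_exporter"
tar -czf "$work/node_exporter-1.8.2.linux-amd64.tar.gz" -C "$work" node_exporter-1.8.2.linux-amd64
rm -rf "$work/node_exporter-1.8.2.linux-amd64"

# --- the actual steps: extract, then rename to a stable, version-free path ---
tar -xzf "$work/node_exporter-1.8.2.linux-amd64.tar.gz" -C "$work"
mv "$work/node_exporter-1.8.2.linux-amd64" "$work/node_exporter"
ls "$work/node_exporter"
```

Renaming the versioned directory to a stable path (here /home/node_exporter on a real host) keeps the start script and the systemd unit independent of the version you installed.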
3. Background start/stop scripts

Start script:
#!/bin/bash
/home/node_exporter/node_exporter >> /applog/node_exporter/node_exporter.log 2>&1 &

Stop script:
#!/bin/bash
pkill node_exporter
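A more robust alternative to the ad-hoc scripts above is to run node_exporter under systemd, which handles restarts and logging for you. A minimal unit sketch, assuming the binary sits at /home/node_exporter/node_exporter (the unit file path is the conventional one, not from the original notes):

```ini
# /etc/systemd/system/node_exporter.service (assumed path)
[Unit]
Description=Prometheus node_exporter
After=network.target

[Service]
ExecStart=/home/node_exporter/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now node_exporter` instead of running the start/stop scripts by hand.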
4. Open the monitored host in a browser to see the metrics node_exporter is collecting on it:
http://10.0.0.115:9100/metrics
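The /metrics page serves plain text in the Prometheus exposition format, one sample per line: metric name plus labels, then the value. A small sketch of pulling a value out of such a line; the sample line here is illustrative, and in practice you would feed it from `curl -s http://10.0.0.115:9100/metrics`:

```shell
#!/bin/sh
# One line as served by /metrics: metric name + labels, then the value.
sample='node_cpu_seconds_total{cpu="0",mode="idle"} 123.45'
# The value is the last whitespace-separated field.
value=$(printf '%s\n' "$sample" | awk '{print $NF}')
echo "$value"   # prints 123.45
```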
5. Once node_exporter is set up, go back to the Prometheus server's configuration file and add a section for the monitored machine.

Append the following three lines to the end of the main configuration file:

vim /home/prometheus/prometheus.yml

- job_name: 'prometheus' # pick a job name to represent the monitored machine
  static_configs:
    - targets: ['localhost:9090'] # change this to the monitored machine's IP, with port 9100

After editing the configuration file, restart the service.
6. Back in the Prometheus web UI, go to Status -> Targets; one more monitoring target now appears.
7. How to write the Prometheus server configuration file
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['10.0.0.115:9090','10.0.0.115:9100','10.0.0.115:3000']
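The entries under targets are plain host:port strings, which makes a quick sanity check easy to script before restarting the service. A minimal sketch in Python (stdlib only), with the list copied from the config above; the helper name valid_target is mine, not part of Prometheus:

```python
# Quick sanity check of scrape target strings (host:port), stdlib only.
import re

# Target list copied from the scrape_configs section above.
TARGETS = ['10.0.0.115:9090', '10.0.0.115:9100', '10.0.0.115:3000']

def valid_target(t: str) -> bool:
    """True if t looks like host:port with a port in the range 1-65535."""
    m = re.fullmatch(r'(?P<host>[\w.\-]+):(?P<port>\d{1,5})', t)
    return bool(m) and 1 <= int(m.group('port')) <= 65535

for t in TARGETS:
    print(t, 'ok' if valid_target(t) else 'BAD')
```

This catches typos like a missing port or a stray space before Prometheus rejects the config at startup; for a full check of the whole file, Prometheus also ships a promtool binary with a config-check subcommand.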