1. Introduction to the Federation Mechanism
Federation allows one Prometheus server to scrape selected time series from another Prometheus server. It also makes it possible to use Prometheus to monitor Prometheus itself, following one of two patterns:
- Cross-monitoring: within the same data center, each Prometheus server monitors the other Prometheus servers.
- Hierarchical: a higher-level Prometheus server scrapes the data-center-level Prometheus servers.
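Concretely, the lower-level server exposes the selected series on its /federate HTTP endpoint, and the upper-level server scrapes that endpoint like any other target. A minimal sketch of the request shape (10.21.70.101:9090 is the federated server used in the configuration later in this article; each match[] parameter is a series selector):

```shell
# Sketch of the /federate request a federating server issues; this only
# prints the URL shape, the actual scrape is done by the upper-level server.
FEDERATE='http://10.21.70.101:9090/federate'
QUERY='match[]={job="prometheus"}&match[]={__name__=~"job:.*"}'
echo "GET ${FEDERATE}?${QUERY}"
# Equivalent interactive check (requires the server to be reachable):
#   curl -G "$FEDERATE" --data-urlencode 'match[]={job="prometheus"}'
```

Series matching any of the match[] selectors are returned in the standard exposition format, so the upper-level server ingests them as ordinary scraped samples.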
2. Container-based Installation
For the installation of the Prometheus server itself, see: https://lixinkuan.blog.csdn.net/article/details/113631550
The configuration then needs to be modified: vi prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
        - '10.21.70.101:9090'
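The second match[] selector, '{__name__=~"job:.*"}', targets aggregated series that recording rules on the lower-level server typically produce under the 'job:' naming convention. A hedged illustration of such a rule file (the rule and metric names are assumptions for illustration, not part of the setup above):

```yaml
groups:
  - name: federation-aggregates
    rules:
      # Illustrative: pre-aggregate per-job request totals so the upper-level
      # server only needs to federate the compact 'job:*' series.
      - record: job:http_requests_total:sum
        expr: sum by (job) (http_requests_total)
```

Federating pre-aggregated series instead of raw ones keeps the load on the upper-level server proportional to the number of jobs, not the number of instances.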
3. Installation from the Release Tarball
Installation host: 10.21.70.85
Download page: https://prometheus.io/download/
Download prometheus-2.20.0.linux-amd64.tar.gz (the latest release at the time of writing).
3.1 Installation Commands
Extract the archive: tar -zxvf prometheus-2.20.0.linux-amd64.tar.gz
Create a start script: vi prometheus-start.sh
#!/bin/bash
# Start Prometheus in the background from the script's own directory.
cd "$(dirname "$0")"
./prometheus --config.file=./prometheus.yml &
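On hosts with systemd, a unit file is a more robust alternative to the ad-hoc background script, since the service survives logout and restarts on failure. A sketch, assuming the tarball was extracted to /opt/prometheus (adjust the paths to your environment):

```ini
[Unit]
Description=Prometheus server
After=network-online.target

[Service]
# Assumed extraction path; match it to where you unpacked the tarball.
ExecStart=/opt/prometheus/prometheus --config.file=/opt/prometheus/prometheus.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```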
3.2 Configuration File
Edit the configuration: vi prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
        - '10.21.70.101:9090'
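Before starting the service, the edited file can be validated with promtool, which ships in the same tarball next to the prometheus binary. A guarded sketch (assumes you are in the extracted directory; the guard only makes the snippet degrade gracefully when promtool is absent):

```shell
# Validate prometheus.yml before starting the server; promtool sits next
# to the prometheus binary in the extracted tarball.
if [ -x ./promtool ]; then
  ./promtool check config ./prometheus.yml
else
  echo "promtool not found in the current directory"
fi
```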
3.3 Starting the Prometheus Service
Run the start script: ./prometheus-start.sh
3.4 Stopping the Prometheus Service
Stop the service: pkill prometheus (pkill sends SIGTERM, on which Prometheus shuts down cleanly).
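pkill matches by process name; to confirm what is (or is no longer) running, listing the matching processes with their full command lines is handy:

```shell
# List any running prometheus processes with their full command lines;
# print a note instead when none are left.
pgrep -af prometheus || echo "no prometheus process running"
```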
3.5 Verifying the Setup
Open http://10.21.70.85:9090/targets ; when both the local and the federated targets show as UP, the monitoring data is being aggregated successfully.
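Beyond the web UI, target health can also be checked from the HTTP API. A hedged sketch against the address used in this article (it falls back to a message when the server is unreachable, so it is safe to run anywhere):

```shell
# Query the targets API of the federating server; --max-time keeps the
# check from hanging when the host is unreachable.
URL='http://10.21.70.85:9090/api/v1/targets'
resp=$(curl -s --max-time 3 "$URL" || echo "Prometheus at $URL not reachable")
echo "$resp"
```

On a healthy setup the JSON response lists each scrape target, including the 'federate' job, with its health field set to "up".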