References:
https://www.codercto.com/a/35819.html
https://blog.51cto.com/lvsir666/2409052
https://www.cnblogs.com/Dev0ps/p/10668116.html
Dashboard configuration reference: https://github.com/percona/grafana-dashboards
Install Docker first; if you are not familiar with it, see my Docker-related articles.
Run Prometheus:
docker run -d -p 9090:9090 \
-v /docker/prometheus/:/etc/prometheus/ \
prom/prometheus
Prometheus reads a configuration file, specified with the --config.file flag, in YAML format. Let's open the default prometheus.yml and look at its contents:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['localhost:9090']
The default Prometheus configuration file has four main blocks:
global: Prometheus's global configuration, e.g. scrape_interval controls how often Prometheus scrapes metrics, and evaluation_interval controls how often alerting rules are evaluated;
alerting: configuration for Alertmanager, which we will cover later;
rule_files: alerting rules, also covered later;
scrape_configs: the targets Prometheus scrapes. A job named prometheus is configured by default, because Prometheus exposes its own metrics over HTTP at startup, so it effectively monitors itself. This is of little use in production, but it is a handy example for learning; visit http://localhost:9090/metrics to see which metrics Prometheus exposes.
Visit:
http://localhost:9090/
http://localhost:9090/metrics
You will see the web UI and a large number of metrics.
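The /metrics endpoint serves plain text in the Prometheus exposition format. As a rough sketch of what that format looks like, here is a minimal Python parser over a hand-written sample payload (the metric values below are illustrative, not captured from a real server):

```python
# Minimal parser for the Prometheus text exposition format (illustrative only).
# Real /metrics payloads follow the same shape: "# HELP"/"# TYPE" comment
# lines, then "metric_name{labels} value" sample lines.
sample = """\
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 35
# HELP prometheus_http_requests_total Counter of HTTP requests.
# TYPE prometheus_http_requests_total counter
prometheus_http_requests_total{code="200",handler="/metrics"} 12
"""

def parse_metrics(text):
    """Return {metric_name_with_labels: float_value}, skipping metadata lines."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comment lines
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)
    return samples

print(parse_metrics(sample))
```

Prometheus itself scrapes and parses this format on every scrape_interval tick; the sketch only covers the simple case (no spaces inside label values).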
Download node_exporter. Deploying it in Docker is not recommended, because some host-level metrics may be inaccurate when collected from inside a container.
wget https://github.com/prometheus/node_exporter/releases/download/v0.16.0/node_exporter-0.16.0.linux-amd64.tar.gz
tar xvfz node_exporter-0.16.0.linux-amd64.tar.gz
cd node_exporter-0.16.0.linux-amd64
nohup ./node_exporter &
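nohup is fine for a quick test, but for anything longer-lived a process supervisor is a better fit. A minimal systemd unit might look like this (the install path and user name are assumptions; adjust to your layout):

```ini
# /etc/systemd/system/node_exporter.service (sketch; path and User are assumptions)
[Unit]
Description=Prometheus node_exporter
After=network.target

[Service]
User=prometheus
ExecStart=/opt/node_exporter-0.16.0.linux-amd64/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then enable it with systemctl daemon-reload && systemctl enable --now node_exporter.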
Installation with Docker:
docker run -d \
--net="host" \
--pid="host" \
--name=node-exporter \
-v "/:/host:ro,rslave" \
quay.io/prometheus/node-exporter \
--path.rootfs /host
Install Grafana
docker run -d -p 3000:3000 grafana/grafana
Visit localhost:3000.
Log in with the default credentials admin / admin.
Configure a data source: choose Prometheus, fill in the Prometheus address, and save.
Import the two dashboard JSON files.
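Instead of clicking through the UI, the data source can also be provisioned from a file, which is handy when the Grafana container gets recreated. A sketch using Grafana's datasource provisioning format (the URL assumes the Prometheus address used later in this article; mount the file under /etc/grafana/provisioning/datasources/ in the container):

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml (sketch; url is an assumption)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://10.9.44.12:9090
    isDefault: true
```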
Install the MySQL exporter
wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.11.0/mysqld_exporter-0.11.0.linux-amd64.tar.gz
tar xvfz mysqld_exporter-0.11.0.linux-amd64.tar.gz
cd mysqld_exporter-0.11.0.linux-amd64/
nohup ./mysqld_exporter --config.my-cnf="my.cnf" &
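mysqld_exporter reads its MySQL connection credentials from the file passed via --config.my-cnf. A minimal sketch (user and password are placeholders; the exporter's README recommends a dedicated MySQL account with only the PROCESS, REPLICATION CLIENT, and SELECT privileges rather than root):

```ini
# my.cnf (sketch; replace with a dedicated, least-privilege exporter account)
[client]
host=127.0.0.1
port=3306
user=exporter
password=your_password
```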
Then update prometheus.yml to add scrape jobs for all the exporters, and restart Prometheus (or send the process a SIGHUP) so it reloads the configuration:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['10.9.44.12:9090']
  - job_name: 'server1'
    static_configs:
    - targets: ['10.9.44.12:9100']
  - job_name: 'server2'
    static_configs:
    - targets: ['10.9.44.11:9100']
  - job_name: 'server3'
    static_configs:
    - targets: ['10.9.44.13:9100']
  - job_name: mysql
    static_configs:
    - targets: ['10.9.44.12:9104']
      labels:
        instance: db1
  - job_name: redis
    static_configs:
    - targets: ['10.9.44.12:9121']
      labels:
        instance: redis
Download the Redis exporter:
wget https://github.com/oliver006/redis_exporter/releases/download/v0.15.0/redis_exporter-v0.15.0.linux-amd64.tar.gz
Start it:
nohup ./redis_exporter -redis.addr redis://10.9.44.12:6379 -web.listen-address 0.0.0.0:9121 &
Download the dashboard template:
wget https://grafana.com/api/dashboards/763/revisions/1/download
Official HTTP API documentation: https://prometheus.io/docs/prometheus/latest/querying/api/
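An instant query goes to /api/v1/query?query=<PromQL> and returns JSON. As a sketch of how to build the request URL and unpack the documented response shape in Python (the payload below is a hand-written illustration, not real data; the host is this article's Prometheus address):

```python
import json
from urllib.parse import urlencode

# Build the instant-query URL (host/port taken from this article's setup).
base = "http://10.9.44.12:9090/api/v1/query"
url = base + "?" + urlencode({"query": 'up{job="mysql"}'})

# Illustrative response in the documented shape: data.result is a list of
# {"metric": {...labels...}, "value": [timestamp, "value-as-string"]}.
payload = json.loads("""
{"status": "success",
 "data": {"resultType": "vector",
          "result": [{"metric": {"job": "mysql", "instance": "db1"},
                      "value": [1555555555.0, "1"]}]}}
""")

def series_up(resp):
    """Map instance label -> sample value for an instant 'up' query."""
    return {r["metric"]["instance"]: float(r["value"][1])
            for r in resp["data"]["result"]}

print(url)
print(series_up(payload))
```

In a live setup you would fetch the URL (e.g. with urllib.request or curl) instead of using the inline sample; a value of 1 for up means the target's last scrape succeeded.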
Install the Kafka exporter with Docker
docker run -ti -d --restart="always" --net="host" -p 9308:9308 danielqsj/kafka-exporter --kafka.server=10.128.18.27:9092 --kafka.server=10.128.18.28:9092
To monitor multiple brokers, repeat the --kafka.server flag once per broker, as above.
Add the corresponding scrape job to prometheus.yml:
- job_name: kafka
  static_configs:
  - targets: ['10.128.18.29:9308']
    labels:
      instance: kafka@10.128.18.29
Dashboard JSON files: https://share.weiyun.com/50QGgGP
Running the MySQL exporter with Docker:
docker run -d \
-p 9104:9104 \
--net="host" \
-e DATA_SOURCE_NAME="root:123456@(10.128.18.23:3306)/bdf" \
prom/mysqld-exporter