Monitoring HBase with Grafana + Prometheus

First, a full screenshot of the Grafana HBase monitoring dashboard.

1. Preparation (HBase 1.2.0)

The required files are shared on Baidu Netdisk:

Link: https://pan.baidu.com/s/1B2PWimrpCQ9MqOedPvXdaA?pwd=YYDS
Extraction code: YYDS

Place the files in the conf and lib directories of the HBase installation:

$HBASE_HOME/lib/jmx_prometheus_javaagent-0.16.1.jar

$HBASE_HOME/conf/jmx_hbase.yaml
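
The jar is the standard Prometheus JMX exporter Java agent, and jmx_hbase.yaml is its rule file. If you cannot reach the netdisk, a minimal sketch of such a config is shown below; the single pass-through rule is an illustrative assumption, not the exact file from the link.

# Minimal JMX exporter config sketch -- assumption, not the exact jmx_hbase.yaml from the netdisk
cat > $HBASE_HOME/conf/jmx_hbase.yaml <<'EOF'
lowercaseOutputName: true
lowercaseOutputLabelNames: true
rules:
  # pass every HBase/Hadoop MBean attribute through unchanged
  - pattern: ".*"
EOF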

vi $HBASE_HOME/conf/hbase-env.sh

export HBASE_MASTER_JMX="-javaagent:$HBASE_HOME/lib/jmx_prometheus_javaagent-0.16.1.jar=0.0.0.0:26010:$HBASE_HOME/conf/jmx_hbase.yaml"
export HBASE_REGIONSERVER_JMX="-javaagent:$HBASE_HOME/lib/jmx_prometheus_javaagent-0.16.1.jar=0.0.0.0:26030:$HBASE_HOME/conf/jmx_hbase.yaml"
export HBASE_MASTER_OPTS="-Xmx2g -Xms2g -Xmn512m -XX:PermSize=128m -XX:MaxPermSize=256m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-$(hostname)-hbase.log $HBASE_MASTER_OPTS $HBASE_MASTER_JMX"
export HBASE_REGIONSERVER_OPTS="-Xmx12g -Xms12g -Xmn512m -XX:PermSize=128m -XX:MaxPermSize=256m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-$(hostname)-hbase.log $HBASE_REGIONSERVER_OPTS $HBASE_REGIONSERVER_JMX"

If you need details on other HBase configuration options, feel free to leave a comment.
scp the three files above (the agent jar, jmx_hbase.yaml, and the modified hbase-env.sh) to all the other master and RegionServer nodes.

Restart HBase.
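
A sketch of distributing the files and restarting (the hostnames are the master/RegionServer nodes from the Prometheus config below; adjust them to your cluster, and prefer a rolling restart if you cannot afford downtime):

# copy the agent jar, jmx_hbase.yaml and the edited hbase-env.sh to every other HBase node
for host in hd.m2 hd-n31 hd-n32 hd-n33 hd-n34 hd-n35 hd-n36 hd-n52 hd-n53 hd-n54; do
  scp $HBASE_HOME/lib/jmx_prometheus_javaagent-0.16.1.jar $host:$HBASE_HOME/lib/
  scp $HBASE_HOME/conf/jmx_hbase.yaml $HBASE_HOME/conf/hbase-env.sh $host:$HBASE_HOME/conf/
done

# full restart of the cluster
$HBASE_HOME/bin/stop-hbase.sh
$HBASE_HOME/bin/start-hbase.sh

# verify the agent answers on the master (26010) and on a RegionServer (26030)
curl -s http://hd.m1:26010/metrics | head
curl -s http://hd-n31:26030/metrics | head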

2. Add the HBase jobs to the Prometheus configuration

sudo vi /usr/local/prometheus/prometheus.yml

# my global config
global:
  scrape_interval:     30s # Set the scrape interval to every 30 seconds. Default is every 1 minute.
  evaluation_interval: 30s # Evaluate rules every 30 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
       - hd-n55:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
  #-  "/usr/local/prometheus/rule_files/node.yml"
  #-  "/usr/local/prometheus/rule_files/jvm.yml"
  -  "/usr/local/prometheus/rule_files/*.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    static_configs:
    - targets: ['hd-n55:7070']
      labels:
        service: prometheus

  - job_name: 'clickhouse'
    static_configs:
    - targets: ['hd.n34:9363','hd.n35:9363','hd.n36:9363']
      labels:
        service: clickhouse


  - job_name: 'kafka'
    static_configs:
    - targets: ['hd.n132:9308','hd.n133:9308','hd.n134:9308']
      labels:
        service: kafka


  - job_name: 'node'
    static_configs:
    - targets: ['hd.m1:9100','hd.m2:9100','hd.n1:9100','hd.n2:9100','hd.n3:9100','hd.n13:9100','hd.n14:9100','hd.n15:9100','hd.n16:9100','hd-n54:9100','hd.n106:9100','hd-n31:9100','hd-n32:9100','hd-n33:9100','hd-n34:9100','hd-n35:9100','hd-n36:9100','hd-n41:9100','hd-n48:9100','hd-n49:9100','hd-n52:9100','hd-n53:9100','hd-n55:9100','hd-es-1:9100','hd.n31:9100','hd.n32:9100','hd.n33:9100','hd.n34:9100','hd.n35:9100','hd.n36:9100','hd.n128:9100','hd.n129:9100','hd.n130:9100','hd.n131:9100','hd.n132:9100','hd.n133:9100','hd.n134:9100','tomcat-26:9100','hd.n12:9100','nginx-238:9100','10.0.0.170:9100']
      labels:
        service: linux

  - job_name: 'elasticsearch'
    static_configs:
    - targets: ['hd.n31:9114','hd.n32:9114','hd.n33:9114']
      labels:
        service: elasticsearch

  - job_name: 'canal'
    static_configs:
    - targets: ['hd-es-1:11112']
      labels:
        service: canal

  - job_name: 'redis'
    static_configs:
    - targets: ['hd-es-1:9121']
      labels:
        service: redis


  - job_name: 'pushgateway'
    honor_labels: true
    static_configs:
      - targets: ['hd-n55:9091']
        labels:
          service: flink

  - job_name: 'tomcat'
    static_configs:
    - targets: ['hd-n48:38081','hd-n48:38082','hd.n15:38081','hd.n15:38082','tomcat-26:38081','tomcat-26:38082','hd.n12:38081','hd.n12:38082','hd.n13:38081','hd.n13:38082','hd.n14:38081','hd.n14:38082','hd.n16:38081','hd.n16:38082','hd-n49:38081','hd-n49:38082','hd.n106:38081','hd.n106:38082','hd.n106:38083']
      labels:
        service: apache

  - job_name: 'mysqld_exporter'
    static_configs:
    - targets: ['10.0.0.170:9104']
      labels:
        service: mysql


  - job_name: 'alertmanager'
    static_configs:
    - targets: ['hd-n55:9093']
      labels:
        service: alertmanager

  - job_name: 'hbase-master'
    static_configs:
    - targets: ['hd.m1:26010','hd.m2:26010']
      labels:
        type: master
        service: hbase

  - job_name: 'hbase-region'
    static_configs:
    - targets: ['hd-n31:26030','hd-n32:26030','hd-n33:26030','hd-n34:26030','hd-n35:26030','hd-n36:26030','hd-n52:26030','hd-n53:26030','hd-n54:26030']
      labels:
        type: regionserver
        service: hbase
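
Before reloading, it is worth validating the edited file with promtool, which ships alongside the prometheus binary:

# check the config for syntax errors; a typo here would make the reload fail
cd /usr/local/prometheus
./promtool check config prometheus.yml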

Reload Prometheus:
curl -XPOST http://10.0.0.*:7070/-/reload

Note that the reload endpoint only works if Prometheus was started with --web.enable-lifecycle:
nohup ./prometheus --config.file=prometheus.yml --web.listen-address=:7070 --web.enable-lifecycle > ./prometheus.log &
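
After the reload, you can confirm the new hbase-master and hbase-region jobs were picked up through the targets API (hd-n55:7070 is the Prometheus address used above):

# count the active scrape targets belonging to the HBase jobs
curl -s http://hd-n55:7070/api/v1/targets | grep -o '"job":"hbase[^"]*"' | sort | uniq -c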

3. In Grafana, import the HBase dashboard (ID: 12722)

HBase 1.x | Grafana Labs

That completes the HBase monitoring setup.
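
If the dashboard panels stay empty, a quick sanity check is the up metric filtered by the service label set in the scrape config; every HBase target should report a value of 1:

# query Prometheus for the scrape health of all HBase targets
curl -sG 'http://hd-n55:7070/api/v1/query' --data-urlencode 'query=up{service="hbase"}'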

  
