Monitoring a Hadoop Cluster with Nagios + Ganglia



  Compared with tools such as Cacti, Nagios, and Zabbix, Ganglia focuses on the performance and availability of the cluster as a whole, which makes it well suited to cluster-wide performance monitoring, analysis, and tuning.

   Ganglia is an open-source monitoring project started at UC Berkeley, designed to scale to thousands of nodes. It primarily tracks cluster performance metrics such as CPU, memory, disk utilization, I/O load, and network traffic, and it can also collect custom metrics. The graphs Ganglia produces make each node's working state easy to read at a glance, which helps with tuning, allocating system resources, and improving overall system performance. The gmond agent imposes very little load, so it can run on every machine in the cluster without noticeably affecting users.

Ganglia architecture:

  1. Each monitored node or cluster runs a gmond process, which collects, aggregates, and forwards monitoring data. gmond can act both as a sender (collecting local data) and as a receiver (aggregating data from multiple nodes).

  2. There is usually a single gmetad process for the whole monitoring setup. It periodically polls all the gmonds, pulls their data, and stores it in RRD files.

  3. ganglia-web is a web interface written in PHP that renders the data stored in the RRDs as charts. It normally runs on the same host as gmetad.

Here, RRDtool (Round Robin Database tool) is a set of APIs for working with RRD data, including graphing. An RRD is a ring-buffer style database that holds only a fixed number of data points; new data overwrites the oldest.
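For intuition, here is a minimal rrdtool session (the data source name, step, and row count are purely illustrative, not taken from Ganglia's own files) showing the fixed-size, wrap-around storage:

[root@server34 ~]# rrdtool create demo.rrd --step 15 DS:load:GAUGE:30:0:U RRA:AVERAGE:0.5:1:10
# one GAUGE data source sampled every 15s; the RRA keeps only the
# 10 most recent averaged rows, so the 11th update overwrites the oldest
[root@server34 ~]# rrdtool update demo.rrd N:0.42
[root@server34 ~]# rrdtool fetch demo.rrd AVERAGE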

Planning the Ganglia deployment:

Before deploying Ganglia, sketch out the monitoring topology. Two questions matter most:

  1. Single cluster or multiple clusters?

    With few nodes, a single cluster is easier to configure; with many nodes, splitting into multiple clusters avoids broadcast storms. Each cluster then needs its own multicast channel (distinguished by port), and gmetad must be configured to listen on all of these channels (see the sketch after this list).

  2. Multicast mode or unicast mode?

    Multicast is Ganglia's default mode: the gmond daemons within a cluster exchange data with one another, and any one or more nodes of the cluster can be listed as a "data_source" in gmetad.

    Multicast can cause network "jitter"; keeping the nodes' clocks synchronized reportedly mitigates this. If the network does not support multicast at all (for example, Amazon's AWS EC2), use unicast mode instead. In unicast mode, set deaf = yes in the globals section of gmond.conf on most nodes; those nodes then only send data, never receive data from other nodes, and likewise cannot serve as a gmetad "data_source".

    Unicast mode also requires setting "send_metadata_interval" (for example, to 30 seconds) so that metric metadata is periodically resent.

     Ganglia calls everything covered by a single gmetad (all its clusters and nodes) a grid; its name is set via gridname in /etc/ganglia/gmetad.conf. The data of multiple grids can in turn be aggregated into a higher-level gmetad.
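To make both choices concrete, here is a minimal sketch (hostnames and ports are illustrative, not from this lab): a gmetad.conf watching two clusters on separate ports, and the gmond.conf fragment for a unicast-only sender.

# /etc/ganglia/gmetad.conf -- one data_source per cluster, ports tell them apart
data_source "cluster-a" node-a1.example.com:8649
data_source "cluster-b" node-b1.example.com:8650

# /etc/ganglia/gmond.conf on a unicast sender (not usable as a data_source)
globals {
  deaf = yes                     # send only; do not aggregate other nodes' data
  send_metadata_interval = 30    # re-announce metric metadata every 30s
}
udp_send_channel {
  host = node-a1.example.com     # the designated receiver for this cluster
  port = 8649
}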

Installation and configuration:

Ganglia is a distributed monitoring system consisting of two daemons, the client-side Ganglia Monitoring Daemon (gmond) and the server-side Ganglia Meta Daemon (gmetad), plus the Ganglia PHP Web Frontend (a dynamic, web-based view). Together they make up a graphical performance-monitoring suite for Linux with a rich, polished interface and powerful features.



Software download: http://ganglia.sourceforge.net/


Environment: RHEL 6.3 x86_64, SELinux disabled or permissive, iptables turned off.


######################################################


Software can be installed via yum, via rpm, or from source. In our LANMP stack we built nginx, mysql, and php from source; this time we install Ganglia from rpm packages. Since the downloads are tarballs rather than rpms, we first have to build rpm packages from them.


######################################################



Download the packages:


get ganglia-3.6.0.tar.gz ganglia-web-3.5.2.tar.gz libconfuse-devel-2.6-3.el6.x86_64.rpm libconfuse-2.6-3.el6.x86_64.rpm



Install the rpm-building tool:


[root@server34 ~]# yum install rpm-build -y

Build the server-side Ganglia rpm packages. The build will complain about missing build dependencies; install them as prompted.

[root@server34 ~]# rpmbuild -tb ganglia-3.6.0.tar.gz

error: Failed build dependencies:

libart_lgpl-devel is needed by ganglia-3.6.0-1.x86_64

gcc-c++ is needed by ganglia-3.6.0-1.x86_64

python-devel is needed by ganglia-3.6.0-1.x86_64

libconfuse-devel is needed by ganglia-3.6.0-1.x86_64

pcre-devel is needed by ganglia-3.6.0-1.x86_64

expat-devel is needed by ganglia-3.6.0-1.x86_64

rrdtool-devel is needed by ganglia-3.6.0-1.x86_64

apr-devel > 1 is needed by ganglia-3.6.0-1.x86_64


Fix: install the missing dependencies:

[root@server34 ~]# yum install libart_lgpl-devel gcc-c++ python-devel libconfuse-devel expat-devel apr-devel pcre-devel -y

[root@server34 ~]# rpmbuild -tb ganglia-3.6.0.tar.gz

error: Failed build dependencies:

libconfuse-devel is needed by ganglia-3.6.0-1.x86_64

rrdtool-devel is needed by ganglia-3.6.0-1.x86_64

Fix: install the missing dependencies:

[root@server34 ~]# rpm -ivh libconfuse-*

warning: libconfuse-2.6-3.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY

Preparing...                ########################################### [100%]

  1:libconfuse             ########################################### [ 50%]

  2:libconfuse-devel       ########################################### [100%]


[root@server34 ~]# rpmbuild -tb ganglia-3.6.0.tar.gz

error: Failed build dependencies:

rrdtool-devel is needed by ganglia-3.6.0-1.x86_64

Fix:

Download the package rrdtool-devel-1.3.8-6.el6.x86_64.rpm (the remaining build dependency) and install it:

[root@server34 ~]# rpm -ivh rrdtool-devel-1.3.8-6.el6.x86_64.rpm

warning: rrdtool-devel-1.3.8-6.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID c105b9de: NOKEY

Preparing...                ########################################### [100%]

  1:rrdtool-devel          ########################################### [100%]

[root@server34 ~]# rpmbuild -tb ganglia-3.6.0.tar.gz


Build the rpm package for the Ganglia web frontend:

[root@server34 ~]# rpmbuild -tb ganglia-web-3.5.2.tar.gz

[root@server34 x86_64]# ls

ganglia-devel-3.6.0-1.x86_64.rpm   ganglia-gmond-modules-python-3.6.0-1.x86_64.rpm

ganglia-gmetad-3.6.0-1.x86_64.rpm  libganglia-3.6.0-1.x86_64.rpm

ganglia-gmond-3.6.0-1.x86_64.rpm

These are the rpm packages produced by the build (the ganglia-web package is noarch and lands in a separate directory, shown below).

Install them:

[root@server34 x86_64]# rpm -ivh *

Preparing...                ########################################### [100%]

  1:libganglia             ########################################### [ 20%]

  2:ganglia-gmond          ########################################### [ 40%]

  3:ganglia-gmond-modules-p########################################### [ 60%]

  4:ganglia-devel          ########################################### [ 80%]

  5:ganglia-gmetad         ########################################### [100%]

[root@server34 noarch]# pwd

/root/rpmbuild/RPMS/noarch

[root@server34 noarch]# rpm -ivh ganglia-web-3.5.2-1.noarch.rpm

Preparing...                ########################################### [100%]

  1:ganglia-web            ########################################### [100%]

########## That completes building the rpm packages ##########

To make Ganglia actually monitor, edit its configuration file:

[root@server34 rpmbuild]# vim /etc/ganglia/gmond.conf

cluster {

 name = "my cluster"    ####集群名

 owner = "unspecified"

 latlong = "unspecified"

 url = "unspecified"

}


Change the port in the udp_send_channel section (only the modified part is shown):

mcast_join = 239.2.11.71

 port = 8756    ###### listening port

 ttl = 1

}


/* You can specify as many udp_recv_channels as you like as well. */

udp_recv_channel {

 mcast_join = 239.2.11.71

 port = 8756

 bind = 239.2.11.71

 retry_bind = true

 # Size of the UDP buffer. If you are handling lots of metrics you really

 # should bump it up to e.g. 10MB or even higher.

 # buffer = 10485760

}



/* You can specify as many tcp_accept_channels as you like to share

  an xml description of the state of the cluster */

tcp_accept_channel {

 port = 8756
}


#### To keep unrelated hosts from joining, change the default port: in this lab the gmond port was changed from the default 8649 to 8756.



[root@server34 rpmbuild]# vim /etc/ganglia/gmetad.conf

data_source "my cluster" 192.168.0.34:8756
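gmetad tries the hosts listed for a data_source in order, so once the second monitored node from later in this walkthrough (192.168.0.17) is running gmond, it could be listed as a fallback (a sketch, not part of the original setup):

data_source "my cluster" 192.168.0.34:8756 192.168.0.17:8756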

[root@server34 rpmbuild]# /etc/init.d/gmond start

Starting GANGLIA gmond:                                    [  OK  ]

[root@server34 rpmbuild]# /etc/init.d/gmetad start

Starting GANGLIA gmetad:                                   [  OK  ]
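Before opening the web UI, you can sanity-check the daemons from the shell (a quick sketch, assuming nc is installed): gmond serves the cluster state as XML on its tcp_accept_channel, and netstat confirms the non-default port took effect.

[root@server34 rpmbuild]# netstat -tunlp | grep gmond    # should show 8756, not the default 8649
[root@server34 rpmbuild]# nc 192.168.0.34 8756 | head    # dumps the cluster state as XML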



[root@server34 rrds]# cd /var/lib/ganglia/rrd/my\ cluster/

[root@server34 my cluster]# ls

server34.example.com  __SummaryInfo__


[root@server34 my cluster]# cd server34.example.com/

[root@server34 server34.example.com]# ls

boottime.rrd                          mem_writeback.rrd

bytes_in.rrd                          part_max_used.rrd

bytes_out.rrd                         pkts_in.rrd

cpu_aidle.rrd                         pkts_out.rrd

cpu_idle.rrd                          proc_run.rrd

cpu_intr.rrd                          procstat_gmond_cpu.rrd

cpu_nice.rrd                          procstat_gmond_mem.rrd

cpu_num.rrd                           proc_total.rrd

cpu_sintr.rrd                         rx_bytes_eth0.rrd

cpu_speed.rrd                         rx_bytes_lo.rrd

cpu_steal.rrd                         rx_drops_eth0.rrd

cpu_system.rrd                        rx_drops_lo.rrd

cpu_user.rrd                          rx_errs_eth0.rrd

cpu_wio.rrd                           rx_errs_lo.rrd

disk_free_absolute_rootfs.rrd         rx_pkts_eth0.rrd

disk_free_percent_rootfs.rrd          rx_pkts_lo.rrd

disk_free.rrd                         swap_free.rrd

diskstat_sda_io_time.rrd              swap_total.rrd

diskstat_sda_percent_io_time.rrd      tcp_attemptfails.rrd

diskstat_sda_read_bytes_per_sec.rrd   tcpext_listendrops.rrd

diskstat_sda_reads_merged.rrd         tcpext_tcploss_percentage.rrd

diskstat_sda_reads.rrd                tcp_insegs.rrd

diskstat_sda_read_time.rrd            tcp_outsegs.rrd

diskstat_sda_weighted_io_time.rrd     tcp_retrans_percentage.rrd

diskstat_sda_write_bytes_per_sec.rrd  tx_bytes_eth0.rrd

diskstat_sda_writes_merged.rrd        tx_bytes_lo.rrd

diskstat_sda_writes.rrd               tx_drops_eth0.rrd

diskstat_sda_write_time.rrd           tx_drops_lo.rrd

disk_total.rrd                        tx_errs_eth0.rrd

entropy_avail.rrd                     tx_errs_lo.rrd

load_fifteen.rrd                      tx_pkts_eth0.rrd

load_five.rrd                         tx_pkts_lo.rrd

load_one.rrd                          udp_indatagrams.rrd

mem_buffers.rrd                       udp_inerrors.rrd

mem_cached.rrd                        udp_outdatagrams.rrd

mem_dirty.rrd                         udp_rcvbuferrors.rrd

mem_free.rrd                          vm_pgmajfault.rrd

mem_hardware_corrupted.rrd            vm_pgpgin.rrd

mem_mapped.rrd                        vm_pgpgout.rrd

mem_shared.rrd                        vm_vmeff.rrd

mem_total.rrd
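Each of these files is a self-contained round-robin database, so you can inspect one directly with rrdtool (illustrative):

[root@server34 server34.example.com]# rrdtool lastupdate load_one.rrd    # timestamp and value of the latest sample
[root@server34 server34.example.com]# rrdtool info load_one.rrd | head   # step size, data source, and RRA layout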


To grow this into a cluster, copy the client-side rpm packages to another host that should be monitored:


[root@server34 x86_64]# scp ganglia-gmond-modules-python-3.6.0-1.x86_64.rpm ganglia-gmond-3.6.0-1.x86_64.rpm libganglia-3.6.0-1.x86_64.rpm 192.168.0.17:



Client configuration:


Download the packages:


libconfuse-2.7-4.el6.x86_64.rpm                

libconfuse-devel-2.7-4.el6.x86_64.rpm  

[root@server17 ~]# yum localinstall ganglia-gmond-3.6.0-1.x86_64.rpm ganglia-gmond-modules-python-3.6.0-1.x86_64.rpm libconfuse-2.7-4.el6.x86_64.rpm  libconfuse-devel-2.7-4.el6.x86_64.rpm libganglia-3.6.0-1.x86_64.rpm -y

[root@server17 ~]# vim /etc/ganglia/gmond.conf

cluster {

 name = "my cluster"

 owner = "unspecified"

 latlong = "unspecified"

 url = "unspecified"


}


Change the port in the udp_send_channel section (only the modified part is shown):


mcast_join = 239.2.11.71

 port = 8756

 ttl = 1

}



/* You can specify as many udp_recv_channels as you like as well. */


udp_recv_channel {

 mcast_join = 239.2.11.71

 port = 8756

 bind = 239.2.11.71

 retry_bind = true

 # Size of the UDP buffer. If you are handling lots of metrics you really

 # should bump it up to e.g. 10MB or even higher.

 # buffer = 10485760

}



/* You can specify as many tcp_accept_channels as you like to share

  an xml description of the state of the cluster */

tcp_accept_channel {

 port = 8756
}

Every node running Hadoop also needs hadoop-metrics2.properties configured, as follows:

# syntax: [prefix].[source|sink|jmx].[instance].[options]
# See package.html for org.apache.hadoop.metrics2 for details

*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink

#namenode.sink.file.filename=namenode-metrics.out

#datanode.sink.file.filename=datanode-metrics.out

#jobtracker.sink.file.filename=jobtracker-metrics.out

#tasktracker.sink.file.filename=tasktracker-metrics.out

#maptask.sink.file.filename=maptask-metrics.out

#reducetask.sink.file.filename=reducetask-metrics.out
# Below are for sending metrics to Ganglia
#
# for Ganglia 3.0 support
# *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30
#
# for Ganglia 3.1 support
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31

*.sink.ganglia.period=10

# default for supportsparse is false
*.sink.ganglia.supportsparse=true

*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40

namenode.sink.ganglia.servers=10.82.58.211:8649

datanode.sink.ganglia.servers=10.82.58.211:8649

jobtracker.sink.ganglia.servers=10.82.58.211:8649

tasktracker.sink.ganglia.servers=10.82.58.211:8649

maptask.sink.ganglia.servers=10.82.58.211:8649
reducetask.sink.ganglia.servers=10.82.58.211:8649
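The metrics configuration is read only at daemon startup, so restart the Hadoop daemons after editing it (a sketch assuming a Hadoop 1.x layout with $HADOOP_HOME set; adjust to your installation):

$HADOOP_HOME/bin/stop-all.sh     # stops the HDFS and MapReduce daemons
$HADOOP_HOME/bin/start-all.sh    # starts them with the new metrics sinks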


Similarly, every node running HBase needs its hadoop-metrics.properties configured, as follows:


# HBase-specific configuration to reset long-running stats (e.g. compactions)
# If this variable is left out, then the default is no expiration.
hbase.extendedperiod = 3600

# Configuration of the "hbase" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
hbase.period=10
hbase.servers=10.82.58.211:8649

# Configuration of the "jvm" context for null
jvm.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
jvm.period=10

# Configuration of the "jvm" context for file
# jvm.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
# jvm.fileName=/tmp/metrics_jvm.log

# Configuration of the "jvm" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=10.82.58.211:8649

# Configuration of the "rpc" context for null
rpc.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
rpc.period=10

# Configuration of the "rpc" context for file
# rpc.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
# rpc.fileName=/tmp/metrics_rpc.log

# Configuration of the "rpc" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
rpc.period=10
rpc.servers=10.82.58.211:8649

# Configuration of the "rest" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# rest.class=org.apache.hadoop.metrics.ganglia.GangliaContext
rest.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
rest.period=10
rest.servers=10.82.58.211:8649
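Likewise, HBase reads this file at startup, so restart it afterwards (a sketch assuming a standard HBase install with $HBASE_HOME set):

$HBASE_HOME/bin/stop-hbase.sh
$HBASE_HOME/bin/start-hbase.sh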







[root@server17 ~]# /etc/init.d/gmond start

Verify:

Open in a browser:

http://192.168.0.34/ganglia

Integrating Ganglia with Nagios alerting


[root@server34 ~]# cp ganglia-3.6.0/contrib/check_ganglia.py  /usr/local/nagios/libexec/

check_ganglia.py must run as the nagios user:

[root@server34 libexec]# chown nagios.nagios check_ganglia.py

[root@server34 libexec]# vim check_ganglia.py

ganglia_port = 8756    # must match the tcp_accept_channel port in gmond.conf

[root@server34 libexec]# vim check_ganglia.py

# When critical > warning, the metric alarms high (e.g. load):
if critical > warning:
    if value >= critical:
        print "CHECKGANGLIA CRITICAL: %s is %.2f" % (metric, value)
        sys.exit(2)
    elif value >= warning:
        print "CHECKGANGLIA WARNING: %s is %.2f" % (metric, value)
        sys.exit(1)
    else:
        print "CHECKGANGLIA OK: %s is %.2f" % (metric, value)
        sys.exit(0)
# Otherwise the metric alarms low (e.g. percent of disk free):
else:
    if critical >= value:
        print "CHECKGANGLIA CRITICAL: %s is %.2f" % (metric, value)
        sys.exit(2)
    elif warning >= value:
        print "CHECKGANGLIA WARNING: %s is %.2f" % (metric, value)
        sys.exit(1)
    else:
        print "CHECKGANGLIA OK: %s is %.2f" % (metric, value)
        sys.exit(0)

Test it:

[root@server34 libexec]# /usr/local/nagios/libexec/check_ganglia.py -h server34.example.com -m disk_free_percent_rootfs -w 30 -c 10

CHECKGANGLIA OK: disk_free_percent_rootfs is 86.33
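Here -w 30 -c 10 are descending thresholds (less free disk is worse), so the script takes the second branch above. For an ascending metric such as the one-minute load, the thresholds are inverted (illustrative, matching the service definition below):

[root@server34 libexec]# /usr/local/nagios/libexec/check_ganglia.py -h server34.example.com -m load_one -w 4 -c 5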

Add a command that runs the Ganglia check; the metric name and the warning/critical thresholds are passed as $ARG1$, $ARG2$, and $ARG3$:


[root@server34 objects]# vim commands.cfg

# 'check_ganglia' command definition

define command{

       command_name    check_ganglia

       command_line    $USER1$/check_ganglia.py -h $HOSTADDRESS$ -m $ARG1$ -w $ARG2$ -c $ARG3$

       }

Add a service template for the Ganglia checks (note: if nagios -v complains about this block, add "register 0" to mark it as a template only):

[root@server34 objects]# vim templates.cfg

define service {

use generic-service

name ganglia-service

hostgroup_name ganglia-servers

service_groups ganglia-metrics

}

[root@server34 objects]# vim hosts.cfg

define hostgroup{

       hostgroup_name  linux-servers ; The name of the hostgroup

       alias           Linux Servers ; Long name of the group

       members         *     ; Comma separated list of hosts that belong to this group

       }

define hostgroup {

hostgroup_name ganglia-servers

alias ganglia-servers

members *

}

[root@server34 objects]# vim services.cfg

##################################check_ganglia###################

define servicegroup {

servicegroup_name ganglia-metrics

alias Ganglia Metrics

}


define service{

       use                             ganglia-service

       service_description             Root partition free space

       check_command                   check_ganglia!disk_free_percent_rootfs!20!10

       }

define service{

       use                             ganglia-service         ;

       service_description             System load

       check_command                   check_ganglia!load_one!4!5

       }

Verify that the configuration is correct, then reload Nagios:


[root@server34 objects]# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

[root@server34 objects]# /etc/init.d/nagios reload

Check the result in a browser:

http://192.168.0.34/nagios

If everything is working, you should see the Ganglia data now under Nagios's watch. Using Ganglia and Nagios together, you can monitor almost anything: the whole cloud is under your control!
