Installing and Configuring Ganglia

1. Install ganglia-webfrontend and ganglia-monitor on the master node

  sudo apt-get install ganglia-webfrontend ganglia-monitor

The master node needs both ganglia-webfrontend and ganglia-monitor; the other monitored nodes only need ganglia-monitor (see step 2).
Link the Ganglia web files into Apache's default document root:

  sudo ln -s /usr/share/ganglia-webfrontend /var/www/ganglia
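After creating the symlink, Apache usually has to be reloaded before http://master/ganglia resolves. A minimal sketch, assuming the apache2 package that ganglia-webfrontend pulls in on Ubuntu:

  # Reload Apache so it serves the newly linked /var/www/ganglia directory
  sudo service apache2 restart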

2. Install ganglia-monitor
On the other monitored nodes, only ganglia-monitor needs to be installed:

  sudo apt-get install ganglia-monitor
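If passwordless SSH between the master and the slaves is already set up (as is typical for a Hadoop cluster), the install can be scripted. A minimal sketch; the slave hostnames below are placeholders for your own:

  # Install ganglia-monitor on every slave in one pass
  for h in slave1 slave2 slave3; do
    ssh "$h" sudo apt-get -y install ganglia-monitor
  done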

3. Ganglia configuration
gmond.conf
/etc/ganglia/gmond.conf must be configured on every node, and the file is identical on all of them:

  sudo vim /etc/ganglia/gmond.conf

The modified /etc/ganglia/gmond.conf:
  globals {
    daemonize = yes               # run as a daemon
    setuid = yes
    user = ganglia                # user that runs Ganglia
    debug_level = 0
    max_udp_msg_len = 1472
    mute = no
    deaf = no
    host_dmax = 0 /* secs */
    cleanup_threshold = 300 /* secs */
    gexec = no
    send_metadata_interval = 10   # interval for sending metadata
  }

  /* If a cluster attribute is specified, then all gmond hosts are wrapped inside
   * of a <CLUSTER> tag.  If you do not specify a cluster tag, then all <HOSTS> will
   * NOT be wrapped inside of a <CLUSTER> tag. */
  cluster {
    name = "hadoop-cluster"       # cluster name
    owner = "ganglia"             # user that runs Ganglia
    latlong = "unspecified"
    url = "unspecified"
  }

  /* The host section describes attributes of the host, like the location */
  host {
    location = "unspecified"
  }

  /* Feel free to specify as many udp_send_channels as you like.  Gmond
     used to only support having a single channel */
  udp_send_channel {
    #mcast_join = 239.2.11.71    # multicast disabled; unicast is used instead
    host = master                # send to the machine running gmetad
    port = 8649                  # listening port
    ttl = 1
  }

  /* You can specify as many udp_recv_channels as you like as well. */
  udp_recv_channel {
    #mcast_join = 239.2.11.71    # multicast disabled
    port = 8649
    #bind = 239.2.11.71
  }

  /* You can specify as many tcp_accept_channels as you like to share
     an xml description of the state of the cluster */
  tcp_accept_channel {
    port = 8649
  }
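Since the file is identical everywhere, it can be pushed out from the master once it is finished. A minimal sketch, again with placeholder hostnames:

  # Copy the finished gmond.conf to every node
  # (the services are started later, in step 7)
  for h in slave1 slave2 slave3; do
    scp /etc/ganglia/gmond.conf "$h":/tmp/gmond.conf
    ssh "$h" sudo mv /tmp/gmond.conf /etc/ganglia/gmond.conf
  done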

gmetad.conf
On the master node, /etc/ganglia/gmetad.conf must also be configured; the name hadoop-cluster in it must match the name in gmond.conf above.
/etc/ganglia/gmetad.conf

  sudo vim /etc/ganglia/gmetad.conf

Change it to the following:
  data_source "hadoop-cluster" 10 master:8649 slave:8649
  setuid_username "nobody"
  rrd_rootdir "/var/lib/ganglia/rrds"
  gridname "hadoop-cluster"

Note: master:8649 and slave:8649 are the hosts and ports gmetad polls, and hadoop-cluster in data_source must match name in gmond.conf.
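gmetad writes its round-robin databases under rrd_rootdir as the setuid_username user, so that directory must be writable by it. A quick check, assuming the paths and user above:

  # The RRD directory should exist and be owned by the gmetad user ("nobody" here)
  ls -ld /var/lib/ganglia/rrds
  # If the ownership is wrong, fix it:
  sudo chown -R nobody /var/lib/ganglia/rrds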


4. Hadoop configuration
hadoop-metrics2.properties must be configured on every node running Hadoop, as follows:
  #   Licensed to the Apache Software Foundation (ASF) under one or more
  #   contributor license agreements.  See the NOTICE file distributed with
  #   this work for additional information regarding copyright ownership.
  #   The ASF licenses this file to You under the Apache License, Version 2.0
  #   (the "License"); you may not use this file except in compliance with
  #   the License.  You may obtain a copy of the License at
  #
  #       http://www.apache.org/licenses/LICENSE-2.0
  #
  #   Unless required by applicable law or agreed to in writing, software
  #   distributed under the License is distributed on an "AS IS" BASIS,
  #   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  #   See the License for the specific language governing permissions and
  #   limitations under the License.
  #

  # syntax: [prefix].[source|sink].[instance].[options]
  # See javadoc of package-info.java for org.apache.hadoop.metrics2 for details

  # Comment out the previous default configuration

  #*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
  # default sampling period, in seconds
  #*.period=10

  # The namenode-metrics.out will contain metrics from all context
  #namenode.sink.file.filename=namenode-metrics.out
  # Specifying a special sampling period for namenode:
  #namenode.sink.*.period=8

  #datanode.sink.file.filename=datanode-metrics.out

  # the following example split metrics of different
  # context to different sinks (in this case files)
  #jobtracker.sink.file_jvm.context=jvm
  #jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
  #jobtracker.sink.file_mapred.context=mapred
  #jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out

  #tasktracker.sink.file.filename=tasktracker-metrics.out

  #maptask.sink.file.filename=maptask-metrics.out

  #reducetask.sink.file.filename=reducetask-metrics.out

  *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
  *.sink.ganglia.period=10

  *.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
  *.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40

  namenode.sink.ganglia.servers=master:8649
  resourcemanager.sink.ganglia.servers=master:8649

  datanode.sink.ganglia.servers=master:8649
  nodemanager.sink.ganglia.servers=master:8649

  maptask.sink.ganglia.servers=master:8649
  reducetask.sink.ganglia.servers=master:8649
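This file lives in Hadoop's configuration directory and must be identical on every Hadoop node. A minimal sketch for pushing it out, assuming the usual Hadoop 2.x layout under $HADOOP_HOME/etc/hadoop and placeholder hostnames:

  # Push the metrics config to every Hadoop node
  for h in slave1 slave2 slave3; do
    scp "$HADOOP_HOME/etc/hadoop/hadoop-metrics2.properties" \
        "$h:$HADOOP_HOME/etc/hadoop/"
  done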


5. HBase configuration
hadoop-metrics2-hbase.properties must be configured on every HBase node, as follows:
  # syntax: [prefix].[source|sink].[instance].[options]
  # See javadoc of package-info.java for org.apache.hadoop.metrics2 for details

  #*.sink.file*.class=org.apache.hadoop.metrics2.sink.FileSink
  # default sampling period
  #*.period=10

  # Below are some examples of sinks that could be used
  # to monitor different hbase daemons.

  # hbase.sink.file-all.class=org.apache.hadoop.metrics2.sink.FileSink
  # hbase.sink.file-all.filename=all.metrics

  # hbase.sink.file0.class=org.apache.hadoop.metrics2.sink.FileSink
  # hbase.sink.file0.context=hmaster
  # hbase.sink.file0.filename=master.metrics

  # hbase.sink.file1.class=org.apache.hadoop.metrics2.sink.FileSink
  # hbase.sink.file1.context=thrift-one
  # hbase.sink.file1.filename=thrift-one.metrics

  # hbase.sink.file2.class=org.apache.hadoop.metrics2.sink.FileSink
  # hbase.sink.file2.context=thrift-two
  # hbase.sink.file2.filename=thrift-two.metrics

  # hbase.sink.file3.class=org.apache.hadoop.metrics2.sink.FileSink
  # hbase.sink.file3.context=rest
  # hbase.sink.file3.filename=rest.metrics

  *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
  *.sink.ganglia.period=10

  hbase.sink.ganglia.period=10
  hbase.sink.ganglia.servers=master:8649
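As with the Hadoop file, this one has to reach every HBase node. A one-loop sketch, assuming the conf directory is $HBASE_HOME/conf and the hostnames are placeholders:

  # Copy the HBase metrics config to every HBase node
  for h in slave1 slave2 slave3; do
    scp "$HBASE_HOME/conf/hadoop-metrics2-hbase.properties" "$h:$HBASE_HOME/conf/"
  done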


6. Start the Hadoop and HBase clusters

  start-dfs.sh
  start-yarn.sh
  start-hbase.sh
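Before moving on, it is worth confirming that the daemons actually came up. Running jps on each node should show roughly the following, though the exact list depends on which roles each node hosts:

  # Master node: expect NameNode, ResourceManager, HMaster
  # Slave nodes: expect DataNode, NodeManager, HRegionServer
  jps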


7. Start Ganglia
Restart Hadoop and HBase first so they pick up the new metrics configuration. Then start the gmond service on every node; the master node additionally needs the gmetad service.
With Ganglia installed via apt-get, both can be started directly with service:

  sudo service ganglia-monitor start    # on every machine
  sudo service gmetad start             # on the machine where ganglia-webfrontend is installed
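To verify that the daemons are actually listening, check the ports. A quick sketch using netstat (ss works just as well):

  # gmond listens on 8649 (per tcp_accept_channel above);
  # gmetad serves its aggregated XML on 8651 by default
  sudo netstat -tlnp | grep -E '8649|8651'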


8. Verification

Open http://master/ganglia in a browser. If the page cannot be found, make sure the Ganglia web directory is linked under Apache's document root on the web host (the symlink from step 1) and reload Apache.

If "Hosts up" equals the number of nodes in your cluster (9 in this example), the installation succeeded.

If it does not work, a few debugging commands are useful:
Run gmetad in debug mode: gmetad -d 9
Dump the cluster XML served by gmond: telnet master 8649 (gmetad serves its own aggregated XML on port 8651)
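A non-interactive way to grab the same XML and see which hosts are reporting, as a sketch (requires netcat; "master" is the gmond host from the configs above):

  # Dump gmond's cluster XML and list the hosts it knows about
  nc master 8649 | grep -o 'HOST NAME="[^"]*"'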


9. Screenshots

[Screenshots of the Ganglia web interface]

