6 Flume Data Flow Monitoring
Ganglia consists of three components: gmond, gmetad, and gweb.
gmond (Ganglia Monitoring Daemon) is a lightweight service installed on every node whose metrics you want to collect. With gmond you can easily gather many system metrics, such as CPU, memory, disk, network, and active-process data.
gmetad (Ganglia Meta Daemon) is a service that aggregates all of this information and stores it on disk in RRD format.
gweb (Ganglia Web) is Ganglia's visualization tool: a PHP front end that displays the data stored by gmetad in the browser. The web interface presents the various metrics collected from the running cluster as charts.
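For a feel for what flows between these daemons: gmond answers TCP connections with an XML dump of the cluster state, which gmetad polls and stores. The snippet below parses a small, made-up excerpt of that XML with Python's standard library; the hostnames, metric names, and values are illustrative only, and a real dump contains many more hosts, metrics, and attributes.

```python
import xml.etree.ElementTree as ET

# Illustrative (made-up) excerpt of the XML a gmond node returns over TCP.
sample = """
<GANGLIA_XML VERSION="3.7.2" SOURCE="gmond">
  <CLUSTER NAME="my cluster" OWNER="unspecified">
    <HOST NAME="hadoop102" IP="192.168.1.102">
      <METRIC NAME="cpu_idle" VAL="97.3" TYPE="float" UNITS="%"/>
      <METRIC NAME="mem_free" VAL="1048576" TYPE="float" UNITS="KB"/>
    </HOST>
  </CLUSTER>
</GANGLIA_XML>
"""

root = ET.fromstring(sample)
for host in root.iter("HOST"):
    for metric in host.iter("METRIC"):
        # Each METRIC element carries its name, current value, and units.
        print(host.get("NAME"), metric.get("NAME"), metric.get("VAL"), metric.get("UNITS"))
```

gweb renders charts from the RRD files gmetad builds out of exactly this kind of per-host, per-metric data.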
6.1 Installing Ganglia
Step 1: Plan the deployment
hadoop102: gweb, gmetad, gmond
hadoop103: gmond
hadoop104: gmond
Step 2: Install epel-release on hadoop102, hadoop103, and hadoop104
[atguigu@hadoop102 flume]$ sudo yum -y install epel-release
Step 3: Install on hadoop102
[atguigu@hadoop102 flume]$ sudo yum -y install ganglia-gmetad
[atguigu@hadoop102 flume]$ sudo yum -y install ganglia-web
[atguigu@hadoop102 flume]$ sudo yum -y install ganglia-gmond
Step 4: Install gmond on hadoop103 and hadoop104
[atguigu@hadoop103 flume]$ sudo yum -y install ganglia-gmond
6.2 Configuring Ganglia
Step 1: On hadoop102, edit the configuration file /etc/httpd/conf.d/ganglia.conf
# Ganglia monitoring system php web frontend
#
Alias /ganglia /usr/share/ganglia
<Location /ganglia>
# Require local
# To access Ganglia from Windows, allow the Windows host's IP address here
Require ip 192.168.1.1
# Require ip 10.1.2.3
# Require host example.org
</Location>
Note: because Ganglia will be accessed from a Windows browser, check the IP address of the VMnet8 adapter on the Windows side and use it in the Require ip line above.
The other Require lines can stay commented out.
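If more than one Windows-side address needs access, Apache 2.4's Require directive also accepts a CIDR network. Assuming (check your own setup) the VMnet8 NAT subnet is 192.168.1.0/24, the single-address line could instead read:

```apache
# Allow the whole VMnet8 subnet instead of a single host (Apache 2.4 syntax)
Require ip 192.168.1.0/24
```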
Step 2: On hadoop102, edit the configuration file /etc/ganglia/gmetad.conf
[atguigu@hadoop102 flume]$ sudo vim /etc/ganglia/gmetad.conf
Change it to:
data_source "my cluster" hadoop102
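For reference, the data_source line can also take an optional polling interval in seconds before the host list (the stock gmetad.conf comments suggest the default is 15 seconds when omitted). A hypothetical variant with an explicit interval:

```
# Poll hadoop102's gmond every 30 seconds instead of the default
data_source "my cluster" 30 hadoop102
```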
Step 3: On hadoop102, hadoop103, and hadoop104, edit the configuration file /etc/ganglia/gmond.conf
After configuring it on hadoop102, you can distribute the file to hadoop103 and hadoop104.
[atguigu@hadoop102 flume]$ sudo vim /etc/ganglia/gmond.conf
Change it to:
cluster {
name = "my cluster"
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}
udp_send_channel {
#bind_hostname = yes # Highly recommended, soon to be default.
# This option tells gmond to use a source address
# that resolves to the machine's hostname. Without
# this, the metrics may appear to come from any
# interface and the DNS names associated with
# those IPs will be used to create the RRDs.
# mcast_join = 239.2.11.71
# send data to hadoop102
host = hadoop102
port = 8649
ttl = 1
}
udp_recv_channel {
# mcast_join = 239.2.11.71
port = 8649
# accept data from any address
bind = 0.0.0.0
retry_bind = true
# Size of the UDP buffer. If you are handling lots of metrics you really
# should bump it up to e.g. 10MB or even higher.
# buffer = 10485760
}
Step 4: On hadoop102, edit the configuration file /etc/selinux/config
[atguigu@hadoop102 flume]$ sudo vim /etc/selinux/config
Change it to:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
**Tip:** disabling SELinux this way only takes effect after a reboot. If you don't want to reboot right now, you can put it into permissive mode temporarily:
[atguigu@hadoop102 flume]$ sudo setenforce 0
6.3 Starting Ganglia
Start gmond on hadoop102, hadoop103, and hadoop104:
[atguigu@hadoop102 flume]$ sudo systemctl start gmond
Start httpd and gmetad on hadoop102:
[atguigu@hadoop102 flume]$ sudo systemctl start httpd
[atguigu@hadoop102 flume]$ sudo systemctl start gmetad
Open the Ganglia page in a web browser:
http://hadoop102/ganglia
**Tip:** if you still get a permission-denied error after the steps above, change the permissions on the /var/lib/ganglia directory:
[atguigu@hadoop102 flume]$ sudo chmod -R 777 /var/lib/ganglia
With the configuration above, you can now view memory, CPU, network, and other metrics in the web UI. However, there is no Flume information yet!
6.4 Testing Flume Monitoring
Step 1: Edit the flume-env.sh configuration in /opt/module/flume/conf so that Flume reports its metrics to Ganglia:
JAVA_OPTS="-Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=hadoop102:8649 -Xms100m -Xmx200m"
Step 2: Start the Flume job
Two extra options are required: -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=hadoop102:8649
[atguigu@hadoop102 job]$ flume-ng agent -c ../conf/ -f flume-netcat-logger.conf -n a1 -Dflume.root.logger=INFO,console -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=hadoop102:8649
Step 3: Send data and observe the Ganglia monitoring graphs
Legend:
| Field (chart name) | Meaning |
| --- | --- |
| EventPutAttemptCount | Total number of events the source attempted to put into the channel |
| EventPutSuccessCount | Total number of events successfully put into the channel and committed |
| EventTakeAttemptCount | Total number of attempts the sink made to take events from the channel |
| EventTakeSuccessCount | Total number of events the sink successfully took |
| StartTime | Time the channel started (milliseconds) |
| StopTime | Time the channel stopped (milliseconds) |
| ChannelSize | Current number of events in the channel |
| ChannelFillPercentage | Percentage of the channel's capacity in use |
| ChannelCapacity | Capacity of the channel |
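These counters are related in a simple way: for a memory channel, the events currently queued are roughly the successful puts minus the successful takes, and ChannelFillPercentage is ChannelSize over ChannelCapacity. A small sketch of that arithmetic, using made-up counter values:

```python
# Made-up counter values, illustrating how Flume's channel metrics relate.
event_put_success_count = 10_000   # events committed into the channel
event_take_success_count = 9_400   # events the sink removed from the channel
channel_capacity = 1_000           # max events the channel can hold

# Events still sitting in the channel: puts that have not been taken yet.
channel_size = event_put_success_count - event_take_success_count

# How full the channel is, as a percentage of its capacity.
channel_fill_percentage = 100.0 * channel_size / channel_capacity

print(channel_size)             # 600
print(channel_fill_percentage)  # 60.0
```

In the Ganglia graphs, a ChannelFillPercentage that keeps climbing toward 100% means the sink is not draining the channel as fast as the source is filling it.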