1. Introduction
- Flume provides a distributed, reliable service for efficiently collecting, aggregating, and moving large volumes of log data. Flume runs only in Unix-like environments.
- Flume is built on a streaming architecture and is fault-tolerant, flexible, and simple.
- Flume and Kafka are used for real-time data collection, Spark and Storm for real-time processing, and Impala for real-time querying.
2. Flume Roles
Source
Collects data. The Source is where the data stream enters the agent; it passes the events it produces on to the Channel (somewhat like a Channel in Java NIO).
Channel
Bridges the Source and the Sink; it behaves like a queue.
Sink
Pulls data from the Channel and writes it to the target (which may be the Source of the next agent, or a store such as HDFS or HBase).
Event
The transfer unit: the Event is Flume's basic unit of data transfer, and data travels from source to destination as events.
3. Flume Data Transfer Process
The Source monitors a file or data stream. When the data source produces new data, the Source wraps it in an Event, puts it into the Channel, and commits the transaction. The Channel is a first-in, first-out queue; the Sink pulls events from the Channel and writes them to the destination, for example HDFS.
4. Deployment and Usage
<1>. Configuration files
Edit flume-env.sh in the Flume directory and set JAVA_HOME:
export JAVA_HOME=/opt/software/jdk1.8.0_121
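If your Flume distribution only ships the template (flume-env.sh.template and the flume-ng script are part of the standard Apache Flume layout; adjust the paths to your own install), copy it first and sanity-check the result:
$ cp conf/flume-env.sh.template conf/flume-env.sh
$ vi conf/flume-env.sh      # add the export JAVA_HOME line shown above
$ bin/flume-ng version      # prints the Flume version if the JDK path is valid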
<2>. Examples
(1) Monitoring port data
Goal: Flume monitors a console on one end; the other end sends messages from its console, and the monitored end displays the sent data in real time.
1. First install the telnet tools
$ sudo rpm -ivh xinetd-2.3.14-40.el6.x86_64.rpm
$ sudo rpm -ivh telnet-0.17-48.el6.x86_64.rpm
$ sudo rpm -ivh telnet-server-0.17-48.el6.x86_64.rpm
2. Create the Flume agent configuration file flume-telnet.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
3. Before starting, make sure port 44444 is not already in use
$ netstat -tunlp | grep 44444
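If netstat is not available, the same check can be done with lsof (assuming lsof is installed):
$ sudo lsof -i :44444       # no output means the port is free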
4. Start the Flume agent listening on the port
$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/flume-telnet.conf -Dflume.root.logger=INFO,console
5. Use telnet to send messages to port 44444 on the local machine
$ telnet localhost 44444
Result: the text typed into the telnet session is printed on the agent's console by the logger sink.
(2) Reading a local file into HDFS in real time
Goal: monitor the Hive log in real time and upload it to HDFS
1. Copy the Hadoop-related jars into Flume's lib directory (locate the jars according to your own installation paths and versions)
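The exact jar names depend on your Hadoop installation; as a rough sketch (the Hadoop path /opt/modules/hadoop-2.7.2 and the version numbers below are assumptions, substitute your own), the HDFS sink typically needs:
$ cd /opt/modules/hadoop-2.7.2
$ cp share/hadoop/common/hadoop-common-2.7.2.jar \
     share/hadoop/common/lib/hadoop-auth-2.7.2.jar \
     share/hadoop/common/lib/commons-configuration-1.6.jar \
     share/hadoop/common/lib/commons-io-2.4.jar \
     share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar \
     share/hadoop/hdfs/hadoop-hdfs-2.7.2.jar \
     /opt/modules/apache-flume-1.7.0-bin/lib/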
2. Create the Flume agent configuration file flume-hdfs.conf
# Name the components on this agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2
# Describe/configure the source
a2.sources.r2.type = exec
a2.sources.r2.command = tail -F /opt/modules/apache-hive-1.2.2-bin/hive.log
a2.sources.r2.shell = /bin/bash -c
# Describe the sink
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://hadoop1:8020/flume/%Y%m%d/%H
#prefix of the files uploaded to HDFS
a2.sinks.k2.hdfs.filePrefix = logs-
#whether to round down the event timestamp when creating folders
a2.sinks.k2.hdfs.round = true
#number of time units per folder
a2.sinks.k2.hdfs.roundValue = 1
#the time unit used for rounding
a2.sinks.k2.hdfs.roundUnit = hour
#use the local timestamp instead of a timestamp header
a2.sinks.k2.hdfs.useLocalTimeStamp = true
#number of events to buffer before flushing to HDFS
a2.sinks.k2.hdfs.batchSize = 1000
#file type; compressed formats are also supported
a2.sinks.k2.hdfs.fileType = DataStream
#roll to a new file after this many seconds
a2.sinks.k2.hdfs.rollInterval = 600
#roll to a new file after this many bytes (roughly 128 MB)
a2.sinks.k2.hdfs.rollSize = 134217700
#do not roll files based on the number of events
a2.sinks.k2.hdfs.rollCount = 0
#minimum number of block replicas
a2.sinks.k2.hdfs.minBlockReplicas = 1
# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2
3. Run the agent with this configuration
$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/flume-hdfs.conf
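To verify the pipeline (a sketch: it assumes the hdfs client is on the PATH and uses the paths configured above), generate some Hive activity in another shell and then list the target directory:
$ bin/hive                  # run a few statements so that hive.log grows
$ hdfs dfs -ls -R /flume    # date/hour subdirectories containing logs-* files should appear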
(3) Reading files from a directory into HDFS in real time
Goal: use Flume to monitor an entire directory for new files
1. Create the Flume agent configuration file flume-dir.conf
a3.sources = r3
a3.sinks = k3
a3.channels = c3
# Describe/configure the source
a3.sources.r3.type = spooldir
a3.sources.r3.spoolDir = /opt/modules/apache-flume-1.7.0-bin/upload
a3.sources.r3.fileSuffix = .COMPLETED
a3.sources.r3.fileHeader = true
#ignore (do not upload) files ending in .tmp
a3.sources.r3.ignorePattern = ([^ ]*\.tmp)
# Describe the sink
a3.sinks.k3.type = hdfs
a3.sinks.k3.hdfs.path = hdfs://hadoop1:8020/flume/upload/%Y%m%d/%H
#prefix of the files uploaded to HDFS
a3.sinks.k3.hdfs.filePrefix = upload-
#whether to round down the event timestamp when creating folders
a3.sinks.k3.hdfs.round = true
#number of time units per folder
a3.sinks.k3.hdfs.roundValue = 1
#the time unit used for rounding
a3.sinks.k3.hdfs.roundUnit = hour
#use the local timestamp instead of a timestamp header
a3.sinks.k3.hdfs.useLocalTimeStamp = true
#number of events to buffer before flushing to HDFS
a3.sinks.k3.hdfs.batchSize = 100
#file type; compressed formats are also supported
a3.sinks.k3.hdfs.fileType = DataStream
#roll to a new file after this many seconds
a3.sinks.k3.hdfs.rollInterval = 600
#roll to a new file after this many bytes (roughly 128 MB)
a3.sinks.k3.hdfs.rollSize = 134217700
#do not roll files based on the number of events
a3.sinks.k3.hdfs.rollCount = 0
#minimum number of block replicas
a3.sinks.k3.hdfs.minBlockReplicas = 1
# Use a channel which buffers events in memory
a3.channels.c3.type = memory
a3.channels.c3.capacity = 1000
a3.channels.c3.transactionCapacity = 100
# Bind the source and sink to the channel
a3.sources.r3.channels = c3
a3.sinks.k3.channel = c3
2. Test: after starting the agent with the command below, add files to the upload directory (a short example follows the notes below)
$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/flume-dir.conf
When using the Spooling Directory Source:
1. Do not create files in the monitored directory and then keep modifying them
2. Files that have been fully uploaded are renamed with a .COMPLETED suffix
3. The monitored directory is scanned for new files roughly every 500 ms by default (the pollDelay setting)
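A minimal way to exercise this case (a sketch, using the upload directory and HDFS path configured above):
$ mkdir -p /opt/modules/apache-flume-1.7.0-bin/upload
$ cp /etc/hosts /opt/modules/apache-flume-1.7.0-bin/upload/hosts.txt
$ ls /opt/modules/apache-flume-1.7.0-bin/upload    # the file is renamed to hosts.txt.COMPLETED once ingested
$ hdfs dfs -ls -R /flume/upload                    # an upload-* file should appear under the date/hour directory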
(4) Passing data between Flume agents: a single Flume agent with multiple Channels and Sinks
Goal: flume-1 monitors a file for changes and passes the new content to flume-2, which stores it in HDFS; at the same time flume-1 passes the same content to flume-3, which writes it to the local filesystem.
1. Create flume-1.conf to monitor changes to hive.log; it uses two Channels and two Sinks to feed flume-2 and flume-3 respectively
# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# replicate the data flow to multiple channels
a1.sources.r1.selector.type = replicating
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/modules/apache-hive-1.2.2-bin/hive.log
a1.sources.r1.shell = /bin/bash -c
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop1
a1.sinks.k1.port = 4141
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop1
a1.sinks.k2.port = 4142
# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
2. Create flume-2.conf to receive events from flume-1, with one channel and one sink that delivers the data to HDFS
# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1
# Describe/configure the source
a2.sources.r1.type = avro
a2.sources.r1.bind = hadoop1
a2.sources.r1.port = 4141
# Describe the sink
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://hadoop1:8020/flume2/%Y%m%d/%H
#prefix of the files uploaded to HDFS
a2.sinks.k1.hdfs.filePrefix = flume2-
#whether to round down the event timestamp when creating folders
a2.sinks.k1.hdfs.round = true
#number of time units per folder
a2.sinks.k1.hdfs.roundValue = 1
#the time unit used for rounding
a2.sinks.k1.hdfs.roundUnit = hour
#use the local timestamp instead of a timestamp header
a2.sinks.k1.hdfs.useLocalTimeStamp = true
#number of events to buffer before flushing to HDFS
a2.sinks.k1.hdfs.batchSize = 100
#file type; compressed formats are also supported
a2.sinks.k1.hdfs.fileType = DataStream
#roll to a new file after this many seconds
a2.sinks.k1.hdfs.rollInterval = 600
#roll to a new file after this many bytes (roughly 128 MB)
a2.sinks.k1.hdfs.rollSize = 134217700
#do not roll files based on the number of events
a2.sinks.k1.hdfs.rollCount = 0
#minimum number of block replicas
a2.sinks.k1.hdfs.minBlockReplicas = 1
# Describe the channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
3. Create flume-3.conf to receive events from flume-1, with one channel and one sink that writes the data to a local directory
# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1
# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop1
a3.sources.r1.port = 4142
# Describe the sink
a3.sinks.k1.type = file_roll
a3.sinks.k1.sink.directory = /opt/flume1
# Describe the channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1
==Note:== the local output directory must already exist; if it does not, the file_roll sink will not create it.
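So create it before starting flume-3:
$ mkdir -p /opt/flume1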
4. Test: start the Flume jobs separately (flume-3 first, then flume-2, then flume-1), then cause changes to the monitored file and observe the results
$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group-job/flume-3.conf
$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group-job/flume-2.conf
$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group-job/flume-1.conf
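Once the three agents are up (a sketch based on the paths configured above), generate some Hive log activity and check both outputs:
$ bin/hive                   # run a few statements in another shell so hive.log grows
$ hdfs dfs -ls -R /flume2    # output written by flume-2
$ ls /opt/flume1             # output written by flume-3; the file_roll sink rolls a new file every 30 s by default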
(5) Passing data between Flume agents: multiple Flume agents aggregating data into one
Goal: flume-1 monitors the file hive.log and flume-2 monitors the data stream on a network port; both send their data to flume-3, which writes the combined data to HDFS.
1. Create flume-1.conf to monitor hive.log and sink the data to flume-3
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/modules/apache-hive-1.2.2-bin/hive.log
a1.sources.r1.shell = /bin/bash -c
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop1
a1.sinks.k1.port = 4141
# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
2. Create flume-2.conf to monitor the data stream on port 44444 and sink the data to flume-3:
# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1
# Describe/configure the source
a2.sources.r1.type = netcat
a2.sources.r1.bind = hadoop1
a2.sources.r1.port = 44444
# Describe the sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = hadoop1
a2.sinks.k1.port = 4141
# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
3. Create flume-3.conf to receive the streams sent by flume-1 and flume-2 and sink the merged data to HDFS:
# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1
# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop1
a3.sources.r1.port = 4141
# Describe the sink
a3.sinks.k1.type = hdfs
a3.sinks.k1.hdfs.path = hdfs://hadoop1:8020/flume3/%Y%m%d/%H
#prefix of the files uploaded to HDFS
a3.sinks.k1.hdfs.filePrefix = flume3-
#whether to round down the event timestamp when creating folders
a3.sinks.k1.hdfs.round = true
#number of time units per folder
a3.sinks.k1.hdfs.roundValue = 1
#the time unit used for rounding
a3.sinks.k1.hdfs.roundUnit = hour
#use the local timestamp instead of a timestamp header
a3.sinks.k1.hdfs.useLocalTimeStamp = true
#number of events to buffer before flushing to HDFS
a3.sinks.k1.hdfs.batchSize = 100
#file type; compressed formats are also supported
a3.sinks.k1.hdfs.fileType = DataStream
#roll to a new file after this many seconds
a3.sinks.k1.hdfs.rollInterval = 600
#roll to a new file after this many bytes (roughly 128 MB)
a3.sinks.k1.hdfs.rollSize = 134217700
#do not roll files based on the number of events
a3.sinks.k1.hdfs.rollCount = 0
#minimum number of block replicas
a3.sinks.k1.hdfs.minBlockReplicas = 1
# Describe the channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1
4. Run the test: start the corresponding Flume jobs separately (flume-3 first, then flume-2, then flume-1), then cause changes on the monitored sources and observe the results:
$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group-job2/flume-3.conf
$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group-job2/flume-2.conf
$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group-job2/flume-1.conf
During the test, remember to start Hive so that some log entries are produced, and also use telnet to send messages to port 44444, e.g.
$ bin/hive
$ telnet hadoop1 44444
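After events have arrived from both sources, the merged output can be checked with (a sketch, using the HDFS path configured above):
$ hdfs dfs -ls -R /flume3    # the flume3-* files contain both the hive.log lines and the telnet messages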
5. Monitoring Flume with Ganglia
I. Installing and deploying Ganglia
1. Install the httpd service and PHP
yum -y install httpd php
2. Install the other related dependencies
yum -y install rrdtool perl-rrdtool rrdtool-devel
yum -y install apr-devel
3. Install Ganglia
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# yum -y install ganglia-gmetad
# yum -y install ganglia-web
# yum install -y ganglia-gmond
4. Modify the related configuration files
File ganglia.conf: add Allow from all inside the Location block
File gmetad.conf:
# vi /etc/ganglia/gmetad.conf and change the data source to:
data_source "linux" 192.168.xx.xxx
File gmond.conf:
inside the cluster block add
name = "linux"
inside the udp_send_channel block add
host = 192.168.xx.xxx
inside the udp_recv_channel block add
bind = 192.168.xx.xxx
File /etc/selinux/config: set SELINUX=disabled
Disabling SELinux this way only takes effect after a reboot; to avoid rebooting, it can be turned off temporarily:
sudo setenforce 0
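To confirm the change took effect, getenforce (part of the standard SELinux tools) should now report Permissive, or Disabled after a reboot:
$ getenforce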
5. Start Ganglia
$ sudo service httpd start
$ sudo service gmetad start
$ sudo service gmond start
6. Open the page http://192.168.xx.xxx/ganglia in a browser
If a permission-denied error is shown, run:
sudo chmod -R 777 /var/lib/ganglia
II. Running Flume to test the monitoring
1. Modify the flume-env.sh configuration
JAVA_OPTS="-Dflume.monitoring.type=ganglia
-Dflume.monitoring.hosts=192.168.xx.xxx:8649
-Xms100m
-Xmx200m"
2. Start the Flume job
$ bin/flume-ng agent \
--conf conf/ \
--name a1 \
--conf-file job/group-job0/flume-telnet.conf \
-Dflume.root.logger=INFO,console \
-Dflume.monitoring.type=ganglia \
-Dflume.monitoring.hosts=192.168.xx.xxx:8649
3. Send data and watch the Ganglia monitoring graphs
telnet localhost 44444
Legend:
Field (chart name) | Meaning |
---|---|
EventPutAttemptCount | Total number of events the source attempted to put into the channel |
EventPutSuccessCount | Total number of events successfully put into the channel and committed |
EventTakeAttemptCount | Total number of times the sink attempted to take events from the channel. This does not mean an event was returned each time, because the channel may have been empty when the sink polled it. |
EventTakeSuccessCount | Total number of events the sink successfully took |
StartTime | Time at which the channel started, in milliseconds |
StopTime | Time at which the channel stopped, in milliseconds |
ChannelSize | Current number of events in the channel |
ChannelFillPercentage | Percentage of the channel's capacity in use |
ChannelCapacity | Capacity of the channel |