Case requirements:
Flume-1 monitors changes to a file and passes the changed content to Flume-2, which stores it on HDFS. Flume-1 also passes the same content to Flume-3, which writes it to the local file system.
Implementation steps:
1. Preparation
Create a group1 folder under /opt/module/flume/job:
cd /opt/module/flume/job
mkdir group1
cd group1/
Create a flume3 folder under /opt/module/datas/:
cd /opt/module/datas/
mkdir flume3
2. Create flume-file-flume.conf
(Configure one source that reads the log file, plus two channels and two sinks that feed flume-flume-hdfs and flume-flume-dir respectively.)
touch flume-file-flume.conf
vim flume-file-flume.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# Replicate the data flow to every channel
a1.sources.r1.selector.type = replicating
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log
a1.sources.r1.shell = /bin/bash -c
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop102
a1.sinks.k1.port = 4141
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop102
a1.sinks.k2.port = 4142
# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
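Because a1's two avro sinks point at hadoop102:4141 and hadoop102:4142, agents a2 and a3 must already be listening on those ports before a1 starts; otherwise the avro sinks will log connection errors and keep retrying. A quick sanity check on hadoop102, assuming the netstat utility is installed there:
netstat -nlp | grep -E '4141|4142'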
3. Create flume-flume-hdfs.conf
(Configure an avro source that receives from the upstream Flume agent, with a sink that writes to HDFS.)
touch flume-flume-hdfs.conf
vim flume-flume-hdfs.conf
# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1
# Describe/configure the source
a2.sources.r1.type = avro
a2.sources.r1.bind = hadoop102
a2.sources.r1.port = 4141
# Describe the sink
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://hadoop102:9000/flume2/%Y%m%d/%H
# Prefix for uploaded files
a2.sinks.k1.hdfs.filePrefix = flume2-
# Whether to round down the event time for the folder path
a2.sinks.k1.hdfs.round = true
# Create a new folder every this many time units
a2.sinks.k1.hdfs.roundValue = 1
# The time unit used for rounding
a2.sinks.k1.hdfs.roundUnit = hour
# Use the local time for the %Y%m%d/%H escapes; needed here because the exec source adds no timestamp header
a2.sinks.k1.hdfs.useLocalTimeStamp = true
# Number of events to batch before flushing to HDFS
a2.sinks.k1.hdfs.batchSize = 100
# File type; DataStream writes plain text (compressed output is also supported)
a2.sinks.k1.hdfs.fileType = DataStream
# Roll a new file every 600 seconds (10 minutes)
a2.sinks.k1.hdfs.rollInterval = 600
# Roll a new file at roughly 128 MB (134217700 bytes, just under the 128 MiB HDFS block size)
a2.sinks.k1.hdfs.rollSize = 134217700
# Never roll based on the number of events
a2.sinks.k1.hdfs.rollCount = 0
# Minimum number of HDFS block replicas (1 avoids spurious file rolls)
a2.sinks.k1.hdfs.minBlockReplicas = 1
# Describe the channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
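Once a2 is running and events are flowing, files prefixed flume2- should appear under the dated path; a file still being written carries a .tmp suffix until it rolls. A minimal check, assuming the Hadoop client is on the PATH:
hdfs dfs -ls -R /flume2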
4. Create flume-flume-dir.conf
(Configure an avro source that receives from the upstream Flume agent, with a sink that writes to a local directory.)
touch flume-flume-dir.conf
vim flume-flume-dir.conf
# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c2
# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop102
a3.sources.r1.port = 4142
# Describe the sink
a3.sinks.k1.type = file_roll
a3.sinks.k1.sink.directory = /opt/module/datas/flume3
# Describe the channel
a3.channels.c2.type = memory
a3.channels.c2.capacity = 1000
a3.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a3.sources.r1.channels = c2
a3.sinks.k1.channel = c2
Note: the local output directory must already exist; the file_roll sink will not create it if it is missing.
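Since file_roll will not create the directory itself, it does no harm to recreate it right before starting a3 (repeating the step 1 preparation with a guard):
mkdir -p /opt/module/datas/flume3
Once a3 is running, rolled output files appear in that directory. The file_roll sink starts a new file every 30 seconds by default, so a growing number of small files (some possibly empty) is normal here:
ls -l /opt/module/datas/flume3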
5. Run the configuration files
(Start the agents in reverse order of the data flow: first flume-flume-dir, then flume-flume-hdfs, and finally flume-file-flume, so that the avro listeners are up before the upstream agent connects.)
bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group1/flume-flume-dir.conf
bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group1/flume-flume-hdfs.conf
bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group1/flume-file-flume.conf
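During a first run it helps to keep each agent's log in the foreground; flume-ng accepts the standard log4j override for this, for example for a3:
bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group1/flume-flume-dir.conf -Dflume.root.logger=INFO,console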
6. Start Hadoop and Hive
sbin/start-dfs.sh (hadoop102)
sbin/start-yarn.sh (hadoop103)
bin/hive (hadoop102)
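To verify the pipeline end to end, trigger some Hive log output (any Hive statement writes to hive.log), or append a test line by hand; the echo below is only a hypothetical shortcut standing in for real Hive activity:
echo "flume replication test" >> /opt/module/hive/logs/hive.log
hdfs dfs -ls -R /flume2
ls -l /opt/module/datas/flume3
The same content should show up in both places, since the replicating selector copies every event to both channels.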