Flume Development -- Replication and Multiplexing

1. Requirements

Use Flume-1 to monitor a file for changes. Flume-1 passes the appended content to Flume-2, which stores it in HDFS, and at the same time passes it to Flume-3, which writes it to the local filesystem.

2. Flow Analysis

[Figure: agent topology. Flume-1 (exec source tailing hive.log, replicating channel selector, two memory channels, two avro sinks) fans out to Flume-2 (avro source, HDFS sink) and Flume-3 (avro source, file_roll sink).]

3. Implementation Steps

3.1 Preparation

1. Create a group1 directory under /opt/module/flume/job

[test@hadoop151 job]$ mkdir group1

2. Create a flume3 directory under /opt/module/datas/

[test@hadoop151 datas]$ mkdir flume3

3.2 Create flume-file-flume.conf

Configure one source that reads the log file, plus two channels and two sinks that forward the data to flume-flume-hdfs and flume-flume-dir respectively.

1. Edit the configuration file

[test@hadoop151 job]$ cd group1/
[test@hadoop151 group1]$ ll
total 0
[test@hadoop151 group1]$ vim flume-file-flume.conf

2. Add the following content

# Name the components on this agent 
a1.sources = r1 
a1.sinks = k1 k2 
a1.channels = c1 c2 
 
# Replicate the data flow to all channels
a1.sources.r1.selector.type = replicating 
 
# Describe/configure the source 
a1.sources.r1.type = exec 
a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log 
a1.sources.r1.shell = /bin/bash -c 
 
# Describe the sink 
# An avro sink acts as a data sender
a1.sinks.k1.type = avro 
a1.sinks.k1.hostname = hadoop151
a1.sinks.k1.port = 4141 
 
a1.sinks.k2.type = avro 
a1.sinks.k2.hostname = hadoop151 
a1.sinks.k2.port = 4142 
 
# Describe the channel 
a1.channels.c1.type = memory 
a1.channels.c1.capacity = 1000 
a1.channels.c1.transactionCapacity = 100 
 
a1.channels.c2.type = memory 
a1.channels.c2.capacity = 1000 
a1.channels.c2.transactionCapacity = 100 
 
# Bind the source and sink to the channel 
a1.sources.r1.channels = c1 c2 
a1.sinks.k1.channel = c1 
a1.sinks.k2.channel = c2 
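
This exercise uses the replicating selector, which copies every event to all channels. The multiplexing selector from the title instead routes each event to a channel based on the value of an event header. A minimal sketch of what the selector section of a1 could look like is shown below; the header name "state" and the values CZ/US are illustrative assumptions, and the exec source used here sets no headers, so in practice this would be paired with a source or interceptor that adds them.

# Route events by the value of the "state" header instead of copying them
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = state
# Events with state=CZ go to c1, events with state=US go to c2
a1.sources.r1.selector.mapping.CZ = c1
a1.sources.r1.selector.mapping.US = c2
# Events whose header matches no mapping go to the default channel
a1.sources.r1.selector.default = c1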

3.3 Create flume-flume-hdfs.conf

Configure a source that receives the output of the upstream Flume agent, and a sink that writes to HDFS.

1. Edit the configuration file

[test@hadoop151 group1]$ vim flume-flume-hdfs.conf

2. Add the following content

# Name the components on this agent 
a2.sources = r1 
a2.sinks = k1 
a2.channels = c1 
 
# Describe/configure the source 
# An avro source acts as a data-receiving service
a2.sources.r1.type = avro 
a2.sources.r1.bind = hadoop151 
a2.sources.r1.port = 4141 
 
# Describe the sink 
a2.sinks.k1.type = hdfs 
a2.sinks.k1.hdfs.path = hdfs://hadoop151:9000/flume2/%Y%m%d/%H 
# Prefix of the uploaded files
a2.sinks.k1.hdfs.filePrefix = flume2-
# Whether to roll directories based on time (round the timestamp down)
a2.sinks.k1.hdfs.round = true
# How many time units before creating a new directory
a2.sinks.k1.hdfs.roundValue = 1
# The time unit used for rounding
a2.sinks.k1.hdfs.roundUnit = hour
# Whether to use the local timestamp
a2.sinks.k1.hdfs.useLocalTimeStamp = true
# How many events to accumulate before flushing to HDFS
a2.sinks.k1.hdfs.batchSize = 100
# File type; compression is supported
a2.sinks.k1.hdfs.fileType = DataStream
# How often (in seconds) to roll a new file
a2.sinks.k1.hdfs.rollInterval = 600
# Roll each file at roughly 128 MB
a2.sinks.k1.hdfs.rollSize = 134217700
# File rolling is independent of the number of events
a2.sinks.k1.hdfs.rollCount = 0
 
# Describe the channel 
a2.channels.c1.type = memory 
a2.channels.c1.capacity = 1000 
a2.channels.c1.transactionCapacity = 100 
 
# Bind the source and sink to the channel 
a2.sources.r1.channels = c1 
a2.sinks.k1.channel = c1 

3.4 Create flume-flume-dir.conf

Configure a source that receives the output of the upstream Flume agent, and a sink that writes to a local directory.

1. Edit the configuration file

[test@hadoop151 group1]$ vim flume-flume-dir.conf

2. Add the following content

# Name the components on this agent 
a3.sources = r1 
a3.sinks = k1 
a3.channels = c2 
 
# Describe/configure the source 
a3.sources.r1.type = avro 
a3.sources.r1.bind = hadoop151 
a3.sources.r1.port = 4142 
 
# Describe the sink 
a3.sinks.k1.type = file_roll 
a3.sinks.k1.sink.directory = /opt/module/datas/flume3 
 
# Describe the channel 
a3.channels.c2.type = memory 
a3.channels.c2.capacity = 1000 
a3.channels.c2.transactionCapacity = 100 
 
# Bind the source and sink to the channel 
a3.sources.r1.channels = c2 
a3.sinks.k1.channel = c2 

Note: the local output directory must already exist. If it does not, the file_roll sink will not create it.

3.5 Run the configuration files

Start the corresponding Flume agents, downstream agents first so that their avro sources are already listening when flume-file-flume's avro sinks connect: flume-flume-dir, flume-flume-hdfs, then flume-file-flume.

[test@hadoop151 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group1/flume-flume-dir.conf
[test@hadoop151 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group1/flume-flume-hdfs.conf
[test@hadoop151 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group1/flume-file-flume.conf
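
When debugging, it helps to see each agent's log on the console. Assuming the standard flume-ng launcher, appending a log4j override such as the following should work (shown for a3; the same flag applies to a2 and a1):

[test@hadoop151 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group1/flume-flume-dir.conf -Dflume.root.logger=INFO,console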

3.6 Start Hadoop and Hive

[test@hadoop151 ~]$ start-dfs.sh 
Starting namenodes on [hadoop151]
hadoop151: starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-namenode-hadoop151.out
hadoop151: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop151.out
hadoop153: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop153.out
hadoop152: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop152.out
Starting secondary namenodes [hadoop153]
hadoop153: starting secondarynamenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-secondarynamenode-hadoop153.out
[test@hadoop152 ~]$ start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-test-resourcemanager-hadoop152.out
hadoop151: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-test-nodemanager-hadoop151.out
hadoop153: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-test-nodemanager-hadoop153.out
hadoop152: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-test-nodemanager-hadoop152.out
[test@hadoop151 ~]$ hive
Logging initialized using configuration in file:/opt/module/hive/conf/hive-log4j.properties
hive (default)> 
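
Flume-1 tails /opt/module/hive/logs/hive.log, so data only flows once Hive appends to that log. Running any statement in the Hive CLI, for example the one below, should be enough to generate log output and push events through all three agents (this assumes Hive's log location matches the path configured in flume-file-flume.conf):

hive (default)> show databases;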

3.7 Check the data on HDFS

[Screenshot: the flume2 output directory on HDFS]
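
The output can also be listed from the command line. Given the configured path and the local-timestamp bucketing, a recursive listing such as the following should show hourly directories containing files prefixed with flume2- (the exact date/hour directories depend on when the agents ran):

[test@hadoop151 ~]$ hdfs dfs -ls -R /flume2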

3.8 Check the data in the /opt/module/datas/flume3 directory

[Screenshots: rolled output files in /opt/module/datas/flume3]
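
The directory can be checked from the shell as well. Note that the file_roll sink rolls a new file at a fixed interval even when no new events arrive, so some of the listed files may be empty:

[test@hadoop151 ~]$ ll /opt/module/datas/flume3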
