1. Replicating and Multiplexing Topology
- Single source, multiple channels and sinks
- Flume can send an event flow to one or more destinations. In this mode the same data can be replicated to multiple channels, or different data can be routed to different channels, and each sink can then deliver to a different destination.
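The case below uses the replicating selector, where every event is copied to every channel. For the multiplexing variant mentioned above, the selector routes each event by the value of an event header instead; the following is only a sketch (the header name "state" and the channel mappings are illustrative and not part of this case):
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = state
a1.sources.r1.selector.mapping.CZ = c1
a1.sources.r1.selector.mapping.US = c2
a1.sources.r1.selector.default = c1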
2. Replicating and Multiplexing: a Case Study
1) Case requirements
- Use Flume-1 to monitor a file for changes. Flume-1 passes the appended content to Flume-2, which stores it on HDFS; at the same time Flume-1 passes the same content to Flume-3, which writes it to the local file system.
2) Requirement analysis:
- Data flow: Flume-1 (agent a1: exec source tailing hive.log, replicating channel selector, two avro sinks) -> Flume-2 (agent a2: avro source on port 4141, hdfs sink) and Flume-3 (agent a3: avro source on port 4142, file_roll sink).
(1) Preparation
- Create a group1 folder under /opt/module/flume-1.9.0/jobs
[xiaoxq@hadoop105 jobs]$ pwd
/opt/module/flume-1.9.0/jobs
[xiaoxq@hadoop105 jobs]$ mkdir group1
- Create a flume3 folder under /opt/module/datas/
[xiaoxq@hadoop105 datas]$ pwd
/opt/module/datas
[xiaoxq@hadoop105 datas]$ mkdir flume3
(2) Create flume-file-flume.conf
- Configure one source that reads the log file, two channels, and two sinks that send to flume-flume-hdfs and flume-flume-dir respectively.
- Edit the configuration file
[xiaoxq@hadoop105 jobs]$ pwd
/opt/module/flume-1.9.0/jobs
[xiaoxq@hadoop105 jobs]$ cd group1/
[xiaoxq@hadoop105 group1]$ vim flume-file-flume.conf
- Add the following content
# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# Replicate the data flow to all channels
a1.sources.r1.selector.type = replicating
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/module/hive-3.1.2/logs/hive.log
a1.sources.r1.shell = /bin/bash -c
# Describe the sink
# The avro sink acts as a data sender
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop105
a1.sinks.k1.port = 4141
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop105
a1.sinks.k2.port = 4142
# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
(3) Create flume-flume-hdfs.conf
- Configure a source that receives the output of the upstream Flume agent and a sink that writes to HDFS.
- Edit the configuration file
[xiaoxq@hadoop105 group1]$ vim flume-flume-hdfs.conf
- Add the following content
# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1
# Describe/configure the source
# The avro source acts as a data-receiving service
a2.sources.r1.type = avro
a2.sources.r1.bind = hadoop105
a2.sources.r1.port = 4141
# Describe the sink
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://hadoop105:9820/flume2/%Y%m%d/%H
# Prefix of the uploaded files
a2.sinks.k1.hdfs.filePrefix = flume2-
# Whether to roll folders based on time
a2.sinks.k1.hdfs.round = true
# Number of time units before creating a new folder
a2.sinks.k1.hdfs.roundValue = 1
# Time unit used for the rounding
a2.sinks.k1.hdfs.roundUnit = hour
# Whether to use the local timestamp
a2.sinks.k1.hdfs.useLocalTimeStamp = true
# Number of events to accumulate before flushing to HDFS
a2.sinks.k1.hdfs.batchSize = 100
# File type; compressed types are also supported
a2.sinks.k1.hdfs.fileType = DataStream
# How long (in seconds) before rolling a new file
a2.sinks.k1.hdfs.rollInterval = 600
# Roll the file when it reaches roughly 128 MB
a2.sinks.k1.hdfs.rollSize = 134217700
# File rolling is independent of the number of events
a2.sinks.k1.hdfs.rollCount = 0
# Describe the channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
(4) Create flume-flume-dir.conf
- Configure a source that receives the output of the upstream Flume agent and a sink that writes to a local directory.
- Edit the configuration file
[xiaoxq@hadoop105 group1]$ vim flume-flume-dir.conf
- Add the following content
# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c2
# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop105
a3.sources.r1.port = 4142
# Describe the sink
a3.sinks.k1.type = file_roll
a3.sinks.k1.sink.directory = /opt/module/datas/flume3
# Describe the channel
a3.channels.c2.type = memory
a3.channels.c2.capacity = 1000
a3.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a3.sources.r1.channels = c2
a3.sinks.k1.channel = c2
Note: the local output directory must already exist; the file_roll sink will not create it if it is missing.
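In this walkthrough the directory was already created in step (1); when setting up on a fresh node, the same path (assumed from the preparation step) can be created with:
[xiaoxq@hadoop105 ~]$ mkdir -p /opt/module/datas/flume3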
(5) Run the configuration files
- Start the corresponding Flume agents, in this order: flume-flume-dir, flume-flume-hdfs, flume-file-flume.
[xiaoxq@hadoop105 flume-1.9.0]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file jobs/group1/flume-flume-dir.conf
[xiaoxq@hadoop105 flume-1.9.0]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file jobs/group1/flume-flume-hdfs.conf
[xiaoxq@hadoop105 flume-1.9.0]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file jobs/group1/flume-file-flume.conf
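Before starting a1, it can be worth confirming that a2 and a3 are up and listening on their avro ports; a quick check, assuming jps and netstat are available on the node:
[xiaoxq@hadoop105 flume-1.9.0]$ jps -l | grep org.apache.flume
[xiaoxq@hadoop105 flume-1.9.0]$ netstat -nltp 2>/dev/null | grep -E '4141|4142'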
(6) Start Hadoop and Hive
[xiaoxq@hadoop105 hadoop-2.7.2]$ sbin/start-dfs.sh
[xiaoxq@hadoop106 hadoop-2.7.2]$ sbin/start-yarn.sh
[xiaoxq@hadoop105 hive]$ bin/hive
hive (default)>
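Any Hive statement will append to hive.log and push data through the pipeline; if Hive is unavailable, log activity can also be simulated by appending to the file directly (path assumed to match the exec source configured in flume-file-flume.conf):
[xiaoxq@hadoop105 ~]$ echo "test log line" >> /opt/module/hive-3.1.2/logs/hive.log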
(7) Check the data on HDFS.
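For example, the target directory of the hdfs sink can be listed from the command line (path as configured in flume-flume-hdfs.conf):
[xiaoxq@hadoop105 ~]$ hdfs dfs -ls -R /flume2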
(8) Check the data in the /opt/module/datas/flume3 directory
[xiaoxq@hadoop105 flume3]$ ll
-rw-rw-r--. 1 xiaoxq xiaoxq 6431 Aug 18 16:58 1597740567242-18
[xiaoxq@hadoop105 flume3]$ cat 1597740567242-18
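Note: the file_roll sink rolls to a new output file on a fixed interval (30 seconds by default in Flume 1.9), so the directory may accumulate many small or even empty files. If that is not desired, the interval can be raised in flume-flume-dir.conf, e.g.:
a3.sinks.k1.sink.rollInterval = 600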