Fan-out: Flume-to-Flume data transfer (one Flume agent, multiple channels and sinks)
Goal: flume1 monitors a file for changes and replicates each change event to both flume-2 and flume-3. flume-2 prints the events to the logger, while flume-3 writes them to a local directory.
a1.sources=s1
a1.channels=c1 c2
a1.sinks=k1 k2
# Replicate the data flow to both channels
a1.sources.s1.selector.type=replicating
# Configure the source
a1.sources.s1.type=exec
# tail -F (capital F) keeps following the file even if it is rotated or recreated
a1.sources.s1.command=tail -F /opt/test/exec.txt
# Configure channel 1
a1.channels.c1.type=memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Configure channel 2
a1.channels.c2.type=memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
# Configure sink 1 (avro to flume-2 on hadoop131)
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop131
a1.sinks.k1.port = 4141
# Configure sink 2 (avro to flume-3 on hadoop130)
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop130
a1.sinks.k2.port = 4141
# Wire sources and sinks to channels
a1.sources.s1.channels=c1 c2
a1.sinks.k1.channel=c1
a1.sinks.k2.channel=c2
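Conceptually, the replicating selector hands an independent copy of every event to each channel attached to the source (unlike the multiplexing selector, which routes events by header value). A minimal Python sketch of that semantics, purely for illustration and not Flume's actual implementation:

```python
# Sketch of Flume's replicating channel selector semantics (illustration only).
def replicate(event, channels):
    """Deliver an independent copy of `event` to every channel."""
    for ch in channels:
        ch.append(dict(event))  # each channel gets its own copy

c1, c2 = [], []
replicate({"body": "line appended to exec.txt"}, [c1, c2])
# Both channels now hold an identical but independent copy of the event,
# so the two sinks can drain them at different speeds without interfering.
```

Because each channel holds its own copy, a slow sink on one channel (e.g. the local file writer) cannot stall or corrupt delivery on the other.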
flume-2 receives flume1's events through an avro source, with one channel and one sink, and writes the data to the logger:
a2.sources = r1
a2.sinks = k1
a2.channels = c1
# Configure the source
a2.sources.r1.type = avro
a2.sources.r1.bind = hadoop131
a2.sources.r1.port = 4141
# Configure the sink
a2.sinks.k1.type = logger
# Configure the channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
# Wire the components together
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
flume-3 receives flume1's events through an avro source, with one channel and one sink, and writes the data to a local directory:
a3.sources = r1
a3.sinks = k1
a3.channels = c1
# Configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop130
a3.sources.r1.port = 4141
# Configure the sink
a3.sinks.k1.type = file_roll
# Note: this directory must exist before the agent starts; file_roll does not create it
a3.sinks.k1.sink.directory = /opt/flume3
# Configure the channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100
# Wire the components together
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1
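An avro sink retries until its peer's avro source is listening, so the downstream agents (a2, a3) should be started before flume1 to avoid connection errors on startup. A sketch of the startup order and a smoke test, assuming the three configurations are saved as flume1.conf, flume2.conf, and flume3.conf under a conf/ directory (the file names are assumptions, not from the original notes):

```shell
# 1. On hadoop130: start flume-3 (avro source -> local directory)
flume-ng agent --conf conf --conf-file conf/flume3.conf --name a3

# 2. On hadoop131: start flume-2 (avro source -> logger, printed to the console)
flume-ng agent --conf conf --conf-file conf/flume2.conf --name a2 \
  -Dflume.root.logger=INFO,console

# 3. Start flume1 (exec source -> replicating selector -> two avro sinks)
flume-ng agent --conf conf --conf-file conf/flume1.conf --name a1

# 4. Smoke test: append a line to the monitored file ...
echo "hello fan-out" >> /opt/test/exec.txt
# ... then check a2's console output and the rolled files in /opt/flume3
ls /opt/flume3
```

If /opt/flume3 stays empty, check that the directory existed before a3 started and that port 4141 is reachable from the flume1 host.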