Flume Failover

1. Failover Architecture Diagram

[Figure: Flume failover architecture diagram]

2. How It Works

If either agent_3 or agent_4 fails, the upstream agent automatically fails over and switches to the other downstream agent, so events keep flowing to the next tier.
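
The switchover is handled by Flume's failover sink processor. The lines below are just the core of that mechanism, excerpted from the agent_1.conf / agent_2.conf shown in section 3, with explanatory comments added here:

# group the two sinks and let the failover processor choose between them
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
# the healthy sink with the highest priority receives the events;
# k2 takes over only when k1 fails
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
# upper limit (in milliseconds) of the back-off applied to a failed sink
a1.sinkgroups.g1.processor.maxpenalty = 10000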

3. Example Configuration
  • 1. agent_1.conf and agent_2.conf are identical
# name the components on this agent
a1.sources = r1
a1.channels = c1
a1.sinks = k1 k2

#set sink group
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2

# configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/data/flume_data/avrodir/test.log
a1.sources.r1.fileHeader = true
a1.sources.r1.channels = c1

# configure the channel (file channel)
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /home/hadoop/data/flume_checkpoint
a1.channels.c1.dataDirs = /home/hadoop/data/flume_data

# configure sink k1 (primary)
a1.sinks.k1.channel = c1
a1.sinks.k1.type = avro
# hostname (e.g. localhost) or IP address of the downstream agent
a1.sinks.k1.hostname = 192.168.200.22
a1.sinks.k1.port = 4141

# configure sink k2 (backup)
a1.sinks.k2.channel = c1
a1.sinks.k2.type = avro
# hostname (e.g. localhost) or IP address of the downstream agent
a1.sinks.k2.hostname = 192.168.200.23
a1.sinks.k2.port = 4141

#set failover
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
a1.sinkgroups.g1.processor.maxpenalty = 10000
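
Before starting agent_1 / agent_2, it can help to pre-create the directories and the log file referenced above (a minimal sketch; the paths are taken from the config and should be adjusted to your environment):

mkdir -p /home/hadoop/data/flume_checkpoint           # file channel checkpointDir
mkdir -p /home/hadoop/data/flume_data/avrodir         # directory holding test.log
touch /home/hadoop/data/flume_data/avrodir/test.log   # file tailed by the exec source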
  • 2. agent_3.conf and agent_4.conf are identical
# name the components on this agent
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141
a1.sources.r1.fileHeader = true
a1.sources.r1.channels = c1

# configure the channel (memory channel)
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100

# configure the interceptors
a1.sources.r1.interceptors = i1 i2
a1.sources.r1.interceptors.i1.type = timestamp
a1.sources.r1.interceptors.i2.type = host
a1.sources.r1.interceptors.i2.hostHeader = hostname

# configure the HDFS sink
a1.sinks.k1.channel = c1
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://node1:9000/flume_data/avro_2_hdfs/files/%{hostname}
a1.sinks.k1.hdfs.filePrefix = %Y-%m-%d
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.rollInterval = 60
a1.sinks.k1.hdfs.rollSize = 50
a1.sinks.k1.hdfs.rollCount = 10
#a1.sinks.k1.hdfs.useLocalTimeStamp=true
a1.sinks.k1.hdfs.fileType = DataStream
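
With these settings, %Y-%m-%d in the file prefix is resolved from the timestamp header added by the timestamp interceptor, and %{hostname} in the path comes from the header added by the host interceptor (by default the agent's own IP address). Purely as an illustration, assuming agent_3 runs on 192.168.200.22, its files would land under a path like:

hdfs://node1:9000/flume_data/avro_2_hdfs/files/192.168.200.22/2019-01-01.<counter appended by the HDFS sink>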
4. Starting the Agents

Note the startup order: start the agents one by one in reverse order of the data flow, i.e. the downstream agents first.

  • 1. First start agent_3 and agent_4
flume-ng agent -n a1 -c /home/hadoop/my_conf -f /home/hadoop/my_conf/agent_3.conf -Dflume.root.logger=INFO,console
  • 2. Then start agent_1 and agent_2
flume-ng agent -n a1 -c /home/hadoop/my_conf -f /home/hadoop/my_conf/agent_1.conf -Dflume.root.logger=INFO,console
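
To verify the pipeline and the failover behavior, a simple check (assuming the paths from the configurations above) is to append a few events to the tailed file, look for new files in HDFS, then stop agent_3 and confirm that events still arrive via agent_4:

# generate a few events into the file tailed by agent_1 / agent_2
echo "failover test $(date)" >> /home/hadoop/data/flume_data/avrodir/test.log

# check that the events reached HDFS (path from the HDFS sink above)
hdfs dfs -ls -R /flume_data/avro_2_hdfs/files/

# simulate a failure: stop agent_3 (e.g. Ctrl+C its flume-ng process),
# append more events, and confirm new files still appear, written via agent_4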