
Flume configuration for production: Taildir Source to Kafka Sink


To collect logs produced by backend systems with Flume in production and write them to a Kafka cluster, you can refer to the configuration below.

clog.sources = source_log
clog.channels = channel_log
clog.sinks = sink_log1 sink_log2 sink_log3 

clog.sources.source_log.type = TAILDIR
clog.sources.source_log.filegroups = f1
##### File paths matched by regex #####
clog.sources.source_log.filegroups.f1 = /home/data/log/.*\.log
clog.sources.source_log.skipToEnd = true
clog.sources.source_log.positionFile = /home/data/taildir_position.json
clog.sources.source_log.batchSize = 1000
clog.sources.source_log.channels = channel_log

clog.sinks.sink_log1.type = org.apache.flume.sink.kafka.KafkaSink
clog.sinks.sink_log1.kafka.topic = haproxy
clog.sinks.sink_log1.kafka.bootstrap.servers = kafka1:9001,kafka2:9001,kafka3:9001
clog.sinks.sink_log1.flumeBatchSize = 2000
clog.sinks.sink_log1.kafka.producer.acks = 1
clog.sinks.sink_log1.channel = channel_log

clog.sinks.sink_log2.type = org.apache.flume.sink.kafka.KafkaSink
clog.sinks.sink_log2.kafka.topic = haproxy
clog.sinks.sink_log2.kafka.bootstrap.servers = kafka1:9001,kafka2:9001,kafka3:9001
clog.sinks.sink_log2.flumeBatchSize = 2000
clog.sinks.sink_log2.kafka.producer.acks = 1
clog.sinks.sink_log2.channel = channel_log


clog.sinks.sink_log3.type = org.apache.flume.sink.kafka.KafkaSink
clog.sinks.sink_log3.kafka.topic = haproxy
clog.sinks.sink_log3.kafka.bootstrap.servers = kafka1:9001,kafka2:9001,kafka3:9001
clog.sinks.sink_log3.flumeBatchSize = 2000
clog.sinks.sink_log3.kafka.producer.acks = 1
clog.sinks.sink_log3.channel = channel_log


clog.channels.channel_log.type = memory
clog.channels.channel_log.capacity = 100000
clog.channels.channel_log.transactionCapacity = 10000
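
The three Kafka sinks all attach to the same memory channel, so they drain it in parallel (each event is delivered by exactly one sink), which raises throughput to the haproxy topic without needing a sink group. Below is a quick sketch of how to start the agent and verify the pipeline; the install paths and the config file name clog.properties are assumptions, adjust them to your environment.

# Start the agent; --name must match the config prefix "clog"
bin/flume-ng agent \
  --conf conf \
  --conf-file conf/clog.properties \
  --name clog \
  -Dflume.root.logger=INFO,console

# Verify events are arriving by consuming the topic from one of the brokers
bin/kafka-console-consumer.sh \
  --bootstrap-server kafka1:9001 \
  --topic haproxy \
  --from-beginning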

