Notes:
1) Requirement:
Configure two flows in Flume on CDH. The two flows watch different spool directories, ship the collected files to different Kafka topics, and delete each file once it has been fully read.
http://flume.apache.org/FlumeUserGuide.html#adding-multiple-flows-in-an-agent
2) Key point: give c2 its own dataDirs. The agent runs two file channels, and a file channel locks its checkpoint and data directories, so c1 can keep the default (~/.flume/file-channel/data) while c2 must point somewhere else:
a1.channels.c2.dataDirs = /home/flume/ssp/data
| Property Name | Default | Description |
| --- | --- | --- |
| dataDirs | ~/.flume/file-channel/data | Comma-separated list of directories for storing log files. Using multiple directories on separate disks can improve file channel performance |
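As the table notes, spreading dataDirs across physically separate disks improves file channel throughput; a minimal sketch (the /disk1 and /disk2 mount points are hypothetical):

a1.channels.c2.dataDirs = /disk1/flume/ssp/data,/disk2/flume/ssp/data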
3) Set permissions on the directories:
chmod 777 /home/flume/ssp/checkpoint
chmod 777 /data/flumeSpool
chmod 777 /home/flume/ssp/checkpoint_ssp
chmod 777 /home/flume/ssp/data
chmod 777 /data/flumeSpool_ssp
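The directories also have to exist before the agent starts; a sketch that creates them first (assuming the agent runs as the flume user; adjust the owner to match your CDH setup, after which 777 can usually be narrowed):

mkdir -p /home/flume/ssp/checkpoint /home/flume/ssp/checkpoint_ssp /home/flume/ssp/data
mkdir -p /data/flumeSpool /data/flumeSpool_ssp
chown -R flume:flume /home/flume/ssp /data/flumeSpool /data/flumeSpool_ssp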
4) Full configuration:
a1.sources.src.interceptors.i2.type=org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder
This setting gives each event a random UUID as its Kafka key: the interceptor writes the UUID into the event's key header, the Kafka sink uses that header as the message key, and the messages therefore spread across partitions instead of piling into one, avoiding data skew in Kafka.
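To verify that the keys really vary, consume the topic with keys printed; a sketch (on very old Kafka versions the tool takes --zookeeper instead of --bootstrap-server):

kafka-console-consumer --bootstrap-server hadoop11:9092 --topic nginx_log --from-beginning --property print.key=true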
# Define a file channel
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /home/flume/ssp/checkpoint
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000
# kafka.consumer.* settings belong to the Kafka channel type and are ignored by the file channel
a1.channels.c1.kafka.consumer.timeout.ms=60000
# Define a spooldir source
a1.sources.src.type = spooldir
a1.sources.src.deletePolicy=immediate
a1.sources.src.spoolDir = /data/flumeSpool
a1.sources.src.interceptors = i2
a1.sources.src.interceptors.i2.type=org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder
a1.sources.src.interceptors.i2.headerName=key
a1.sources.src.interceptors.i2.preserveExisting=false
# Define a kafka sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = nginx_log
a1.sinks.k1.brokerList = hadoop11:9092,hadoop12:9092,hadoop13:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
# ssp data -- start --
a1.channels.c2.type = file
a1.channels.c2.checkpointDir = /home/flume/ssp/checkpoint_ssp
a1.channels.c2.dataDirs = /home/flume/ssp/data
a1.channels.c2.capacity = 10000
a1.channels.c2.transactionCapacity = 1000
# as above, this Kafka channel property is ignored by the file channel
a1.channels.c2.kafka.consumer.timeout.ms=60000
# Define a spooldir source
a1.sources.src2.type = spooldir
a1.sources.src2.deletePolicy=immediate
a1.sources.src2.spoolDir = /data/flumeSpool_ssp
a1.sources.src2.interceptors = i2
a1.sources.src2.interceptors.i2.type=org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder
a1.sources.src2.interceptors.i2.headerName=key
a1.sources.src2.interceptors.i2.preserveExisting=false
# Define a kafka sink
a1.sinks.k2.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k2.topic = ssp_nginx_log
a1.sinks.k2.brokerList = hadoop11:9092,hadoop12:9092,hadoop13:9092
a1.sinks.k2.requiredAcks = 1
a1.sinks.k2.batchSize = 20
# ssp data -- end --
# Finally, now that we've defined all of our components, tell a1 which ones we want to activate
a1.channels = c1 c2
a1.sources = src src2
a1.sinks = k1 k2
# Bind the source and sink to the channel
a1.sources.src.channels = c1
a1.sinks.k1.channel = c1
a1.sources.src2.channels = c2
a1.sinks.k2.channel = c2
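To test end to end, drop a finished file into one of the spool directories. The spooldir source requires files to be complete and immutable once they land there, so write the file elsewhere first and mv it in; a sketch:

echo "test $(date +%s)" > /tmp/probe.log
mv /tmp/probe.log /data/flumeSpool_ssp/
# with deletePolicy=immediate the file is removed after ingestion,
# and the event should appear on the ssp_nginx_log topic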