Configuration is as follows:
Startup is the same as in 1.5.0.
Flume 1.6.0 adds a built-in Kafka sink, so there is no longer any need to write your own; here we test how to use it.
The source here is an exec source running tail, reading directly from a file. Kafka is already set up and is not covered here.
Agent configuration:
[root@app-sz-68-7 conf]# cat sendoa.conf
sendoa.sources = s1
sendoa.channels = c1
sendoa.sinks = r1
#source section
sendoa.sources.s1.type = exec
sendoa.sources.s1.command = tail -F /usr/local/nginx/logs/oa.com.log
sendoa.sources.s1.channels = c1
# Each sink's type must be defined
sendoa.sinks.r1.type = org.apache.flume.sink.kafka.KafkaSink
sendoa.sinks.r1.topic = flume_1.6_test
sendoa.sinks.r1.brokerList = 192.168.2.151:9092
sendoa.sinks.r1.requiredAcks = 1
sendoa.sinks.r1.batchSize = 20
#Specify the channel the sink should use
sendoa.sinks.r1.channel = c1
#channel section
sendoa.channels.c1.type = memory
sendoa.channels.c1.capacity = 10000
sendoa.channels.c1.transactionCapacity = 10000
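Before starting the agent, it can help to make sure the target topic exists (unless the broker has auto-creation enabled). A sketch using the Kafka 0.8.x-era CLI; the install path and the ZooKeeper address are assumptions for this environment:

```shell
# Create the topic the sink will write to.
# /usr/local/kafka and the ZooKeeper address 192.168.2.151:2181 are assumed.
/usr/local/kafka/bin/kafka-topics.sh --create \
  --zookeeper 192.168.2.151:2181 \
  --replication-factor 1 --partitions 1 \
  --topic flume_1.6_test
```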
Start Flume:
./bin/flume-ng agent -c conf/ -f conf/sendoa.conf -n sendoa -Dflume.monitoring.type=http -Dflume.monitoring.port=34545 &
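With `-Dflume.monitoring.type=http`, Flume serves its component counters as JSON at `http://<host>:34545/metrics`, which is handy for checking whether the memory channel is backing up. A minimal sketch of reading that output; the sample payload below is illustrative (the real response has more counters, and all values are serialized as strings):

```python
import json

# Illustrative /metrics payload in the shape of Flume's HTTP JSON reporting:
# one object per component, all counter values as strings.
sample = '''
{
  "CHANNEL.c1": {
    "Type": "CHANNEL",
    "ChannelCapacity": "10000",
    "ChannelSize": "20",
    "EventPutSuccessCount": "120",
    "EventTakeSuccessCount": "100"
  }
}
'''

def channel_backlog(metrics_json):
    """Return events currently sitting in each channel (puts - takes)."""
    metrics = json.loads(metrics_json)
    backlog = {}
    for name, counters in metrics.items():
        if counters.get("Type") == "CHANNEL":
            puts = int(counters["EventPutSuccessCount"])
            takes = int(counters["EventTakeSuccessCount"])
            backlog[name] = puts - takes
    return backlog

print(channel_backlog(sample))  # {'CHANNEL.c1': 20}
```

Against a live agent the JSON would come from `urllib.request.urlopen("http://localhost:34545/metrics")` instead of the hardcoded sample.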
Verified that Kafka has received the data: OK.
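One way to do that check is to read the topic back with the console consumer. The script path is an assumption, and `--zookeeper` matches the 0.8.x-era consumer this setup implies:

```shell
# Consume the topic from the beginning to confirm the sink delivered events.
# /usr/local/kafka and the ZooKeeper address are assumptions.
/usr/local/kafka/bin/kafka-console-consumer.sh \
  --zookeeper 192.168.2.151:2181 \
  --topic flume_1.6_test --from-beginning
```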