1. Install Flume.
2. Install Kafka.
3. Verify that a Kafka topic can pass messages end to end.
4. Once everything is working, wire up the KafkaChannel.
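Step 3 can be checked with the console tools that ship with Kafka. A sketch, assuming the same broker/ZooKeeper addresses and topic name used in the channel configs below (the commands use the old ZooKeeper-based consumer, which matches the brokerList/zookeeperConnect era of Flume's KafkaChannel):

```shell
# Create the test topic
./bin/kafka-topics.sh --create \
  --zookeeper 172.16.37.107:2181,172.16.37.108:2181,172.16.37.223:2181 \
  --replication-factor 1 --partitions 1 --topic FLUME_TEST_TOPIC

# Terminal 1: start a console consumer on the topic
./bin/kafka-console-consumer.sh \
  --zookeeper 172.16.37.107:2181,172.16.37.108:2181,172.16.37.223:2181 \
  --topic FLUME_TEST_TOPIC --from-beginning

# Terminal 2: start a console producer; anything typed here
# should appear in terminal 1 if the topic is healthy
./bin/kafka-console-producer.sh \
  --broker-list 172.16.37.223:9092 --topic FLUME_TEST_TOPIC
```

If the consumer echoes what the producer sends, the topic is working and the Flume wiring can begin.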
1) KafkaChannel with no sink
# Define the agent name and its source/channel names
# (no sink is defined: the KafkaChannel itself delivers events to Kafka)
a0.sources = r1
a0.channels = c1
# Configure the source
a0.sources.r1.type = exec
a0.sources.r1.command = tail -F /data/logs.txt
# Configure the Kafka channel
a0.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a0.channels.c1.brokerList = 172.16.37.223:9092
a0.channels.c1.zookeeperConnect = 172.16.37.107:2181,172.16.37.108:2181,172.16.37.223:2181
a0.channels.c1.topic = FLUME_TEST_TOPIC
# false writes plain text into Kafka; true (the default) writes
# serialized Flume events, which look garbled when consumed directly
a0.channels.c1.parseAsFlumeEvent = false
a0.sources.r1.channels = c1
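The agent above can be launched with flume-ng; the config file name and paths here are placeholders, the agent name must match the `a0` used in the config:

```shell
# -n: agent name from the config, -f: the properties file above,
# -c: Flume's conf directory; log to console for easy debugging
./bin/flume-ng agent -n a0 \
  -f conf/kafka-channel-nosink.conf \
  -c conf -Dflume.root.logger=INFO,console
```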
After starting Flume, open a Kafka consumer on the channel's topic:
./bin/kafka-console-consumer.sh --zookeeper 172.16.37.107:2181,172.16.37.108:2181,172.16.37.223:2181 --topic FLUME_TEST_TOPIC --from-beginning
Append lines to the file Flume is tailing; the consumer will print each line you append.
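To generate test input, simply append to the tailed file (the path /data/logs.txt comes from the exec source config above):

```shell
# Each appended line flows exec source -> KafkaChannel -> topic,
# so it should show up in the console consumer
echo "hello kafka channel" >> /data/logs.txt
```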
2) KafkaChannel with no source
agent.channels = kafka-channel
agent.sources = no-source
agent.sinks = k1
agent.channels.kafka-channel.type = org.apache.flume.channel.kafka.KafkaChannel
agent.channels.kafka-channel.brokerList = 172.16.37.223:9092
agent.channels.kafka-channel.zookeeperConnect = 172.16.37.107:2181,172.16.37.108:2181,172.16.37.223:2181
agent.channels.kafka-channel.topic = FL