Integrating Flume with Kafka
1. Configure the Flume agent
1.1 kafka.conf
# define
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop
a1.sources.r1.port = 44444
# sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = hadoop:9092,hadoop101:9092,hadoop102:9092
a1.sinks.k1.kafka.topic = first
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# bind
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
1.2 Start Flume
bin/flume-ng agent -c conf/ -n a1 -f job/kafka_conf/kafka.conf
(-c points to Flume's configuration directory, -n names the agent, -f gives the job file.)
2. Start a Kafka consumer and test
bin/kafka-console-consumer.sh --zookeeper hadoop:2181 --topic first
Note: on newer Kafka releases the --zookeeper option has been removed; use --bootstrap-server hadoop:9092 instead.
2.1 Test
nc hadoop 44444
>hello atguigu
Each line typed into netcat should then appear in the Kafka console consumer.

This setup can be combined with a Flume interceptor and a multiplexing channel selector to classify events and route them to different channels (and from there to different Kafka topics).
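A minimal sketch of that idea in Flume's configuration format, not a tested job file: it assumes a hypothetical custom interceptor class (com.example.TypeInterceptor is a placeholder name) that stamps each event with a "type" header, which the multiplexing selector then uses to pick a channel.

```
# Sketch: route events to different channels by the "type" header.
# com.example.TypeInterceptor$Builder is a placeholder for a custom
# interceptor that sets the "type" header on each event.
a1.sources = r1
a1.channels = c1 c2

a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.example.TypeInterceptor$Builder

# Multiplexing selector: choose the channel by the "type" header value.
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = type
a1.sources.r1.selector.mapping.letter = c1
a1.sources.r1.selector.mapping.number = c2
a1.sources.r1.selector.default = c1

a1.sources.r1.channels = c1 c2
```

Each channel could then feed its own KafkaSink, with kafka.topic set per sink, so classified data lands in separate topics.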