A Getting-Started Example of Flume + Kafka Integration
Flume + Kafka Architecture Overview
When Flume and Kafka are integrated, Flume is responsible for filtering and collecting the data, while Kafka stores it. By analogy, Flume is the producer, Kafka is the warehouse, and the downstream systems that consume Kafka messages are the consumers; together they form a complete information flow from production to consumption.
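The producer/warehouse/consumer split described above can be sketched conceptually in a few lines of Python. This is only an illustration of the information flow, using an in-memory queue as a stand-in for the Kafka topic; the `collect` and `consume` functions are hypothetical names, not Flume or Kafka APIs.

```python
# Conceptual sketch of the Flume -> Kafka -> consumer information flow.
# The queue below stands in for the Kafka topic; nothing here touches
# real Flume or Kafka.
from queue import Queue

topic = Queue()  # stand-in for the Kafka topic "flume-kafka-test"

def collect(lines):
    """Producer side: Flume filters and forwards log lines."""
    for line in lines:
        if line.strip():              # a simple filtering step
            topic.put(line.strip())   # hand the event to the "warehouse"

def consume():
    """Consumer side: a downstream system drains the topic."""
    messages = []
    while not topic.empty():
        messages.append(topic.get())
    return messages

collect(["user login ok", "   ", "payment failed"])
print(consume())  # the blank line is filtered out by the producer
```

The point of the sketch is only the direction of data flow: the producer pushes filtered events into the store, and consumers pull them out independently of the producer.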
Integration Steps
Configure flume-kafka.properties
agent.sources = s1
agent.channels = c1
agent.sinks = sk1
agent.sources.s1.type = exec
agent.sources.s1.command = tail -f /usr/flume-log/log3.txt
agent.sources.s1.channels = c1
# Kafka sink
agent.sinks.sk1.type = org.apache.flume.sink.kafka.KafkaSink
# Kafka broker address and port
agent.sinks.sk1.brokerList = <broker IP>:9092
# topic
agent.sinks.sk1.topic = flume-kafka-test
# Serialization class
agent.sinks.sk1.serializer.class = kafka.serializer.StringEncoder
agent.sinks.sk1.channel = c1
agent.channels.c1.type = memory
agent.channels.c1.capa
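With the properties file in place, the agent can be started and the pipeline verified end to end. The commands below are a sketch: paths, the config directory, and the broker address are assumptions you should adapt to your own installation (`flume-ng` and `kafka-console-consumer.sh` must be on your PATH; older Kafka releases use `--zookeeper` instead of `--bootstrap-server`).

```shell
# Start the Flume agent; the --name value must match the "agent" prefix
# used throughout flume-kafka.properties.
flume-ng agent \
  --conf conf \
  --conf-file conf/flume-kafka.properties \
  --name agent \
  -Dflume.root.logger=INFO,console

# In a second terminal, watch the topic to verify events arrive:
kafka-console-consumer.sh \
  --bootstrap-server <broker IP>:9092 \
  --topic flume-kafka-test \
  --from-beginning

# In a third terminal, append a line to the tailed log file to
# generate an event:
echo "hello flume-kafka" >> /usr/flume-log/log3.txt
```

If everything is wired correctly, the appended line should appear in the console consumer within a few seconds.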