Case 1: syslog-memory-kafka
Land the data collected by Flume in Kafka, i.e. the sink is Kafka (Flume acts as a producer).
vim syslog-mem-kafka.conf
# Name the components of this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Source properties
a1.sources.r1.type = syslogtcp
a1.sources.r1.host=mypc01
a1.sources.r1.port=10086
# Channel properties
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Sink properties
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = mypc01:9092,mypc02:9092,mypc03:9092
# The topic must already exist (see the creation command after this config)
a1.sinks.k1.kafka.topic = pet
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
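The comment above notes that the pet topic has to exist before the sink can write to it. A minimal creation sketch, assuming Kafka 2.2+ where kafka-topics.sh accepts --bootstrap-server (older releases use --zookeeper <zk-host:2181> instead); the partition and replica counts here are only illustrative:
# Create the 'pet' topic up front so the Kafka sink has somewhere to write.
kafka-topics.sh --create \
--bootstrap-server mypc01:9092,mypc02:9092,mypc03:9092 \
--topic pet \
--partitions 3 \
--replication-factor 3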
Start Flume
#!/bin/bash
/usr/local/flume/bin/flume-ng agent -c /usr/local/flume/conf \
-f /usr/local/flume/flumeconf/syslog-mem-kafka.conf \
-n a1 -Dflume.root.logger=INFO,console -Dflume.monitoring.type=http -Dflume.monitoring.port=31002
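Because the agent is started with -Dflume.monitoring.type=http on port 31002, its channel and sink counters can be inspected while it runs. A quick check, assuming the agent runs on mypc01:
# Flume's HTTP monitoring service returns the counters as JSON.
curl http://mypc01:31002/metrics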
Start a console consumer first so it is ready to receive messages:
kafka-console-consumer.sh \
--bootstrap-server mypc01:9092,mypc02:9092,mypc03:9092 \
--topic pet
Test:
echo "aaaaa" | nc mypc01 10086
Case 2: kafka-memory-hdfs
A Kafka source reads data from the Kafka cluster (Flume acts as a consumer), wraps the records into events, and lands them on HDFS.
vim kafka-mem-hdfs.conf
# Name the components of this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Source properties
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = mypc01:9092,mypc02:9092,mypc03:9092
a1.sources.r1.kafka.consumer.group.id=g1
a1.sources.r1.kafka.topics=pet
# Channel properties
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Sink properties
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path=hdfs://mypc01:8020/kafka/pet/%Y%m%d
a1.sinks.k1.hdfs.filePrefix=FlumeData
a1.sinks.k1.hdfs.fileSuffix = .kafka
a1.sinks.k1.hdfs.rollSize=102400
a1.sinks.k1.hdfs.rollCount = 0
# Unit: seconds
a1.sinks.k1.hdfs.rollInterval=60
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Start Flume
#!/bin/bash
/usr/local/flume/bin/flume-ng agent -c /usr/local/flume/conf \
-f /usr/local/flume/flumeconf/kafka-mem-hdfs.conf \
-n a1 -Dflume.root.logger=INFO,console -Dflume.monitoring.type=http -Dflume.monitoring.port=31002
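Once the agent is up, it should show as a member of consumer group g1. An optional check with the Kafka CLI, assuming the tools are on the PATH and the brokers accept --bootstrap-server for this command:
# Describe group g1 to confirm the Flume Kafka source has joined and to see its lag.
kafka-consumer-groups.sh \
--bootstrap-server mypc01:9092,mypc02:9092,mypc03:9092 \
--describe --group g1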
Start a console producer and use it to send messages:
kafka-console-producer.sh \
--broker-list mypc01:9092,mypc02:9092,mypc03:9092 \
--topic pet
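Instead of typing messages interactively, a batch of test lines can be piped into the console producer so the HDFS sink has some data to roll into a file (a sketch; the message text is arbitrary):
# Send 10 test messages non-interactively.
for i in $(seq 1 10); do echo "pet-msg-$i"; done | kafka-console-producer.sh \
--broker-list mypc01:9092,mypc02:9092,mypc03:9092 \
--topic pet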
After that, the generated files can be seen on HDFS.
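A quick way to confirm the output, assuming the HDFS client is on the PATH and using the date-partitioned path from the sink config above:
# List today's output directory written by the HDFS sink.
hdfs dfs -ls /kafka/pet/$(date +%Y%m%d)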
Summary
- Kafka can act as a Flume source as well as a Flume sink