1. exec-flume-kafka
flume-kafka.properties
a1.sources = s1
a1.channels = c1
a1.sinks = k1
# exec source: tail the application log
a1.sources.s1.type = exec
a1.sources.s1.command = tail -F /usr/local/soft/logs/debug.log
# memory channel buffering up to 10000 events
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100
# Kafka sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
# Kafka broker address and port
a1.sinks.k1.kafka.bootstrap.servers = hadoop100:9092
# Kafka topic to write to
a1.sinks.k1.kafka.topic = test
# producer ack setting (1 = wait for the leader's acknowledgment)
a1.sinks.k1.kafka.producer.acks = 1
a1.sources.s1.channels=c1
a1.sinks.k1.channel=c1
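Before starting the agent, make sure the test topic exists (unless the broker auto-creates topics). A minimal sketch, assuming the Kafka installation at /usr/local/soft/kafka_2.11-2.3.0 used later in this section and a single-broker cluster:
cd /usr/local/soft/kafka_2.11-2.3.0
bin/kafka-topics.sh --create --bootstrap-server hadoop100:9092 --partitions 1 --replication-factor 1 --topic test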
Start Flume:
bin/flume-ng agent -c conf -f agents/flume-kafka.properties -n a1 -Dflume.root.logger=INFO,console
Append some data to /usr/local/soft/logs/debug.log, for example:
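A simple loop to generate test lines (the message text is arbitrary):
for i in $(seq 1 10); do
  echo "DEBUG test message $i" >> /usr/local/soft/logs/debug.log
done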
Start a consumer and receive the messages. In a new terminal window:
cd /usr/local/soft/kafka_2.11-2.3.0
bin/kafka-console-consumer.sh --bootstrap-server hadoop100:9092 --topic test --from-beginning
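Each appended line should now appear in the consumer output. As an optional sanity check, the topic itself can be inspected:
bin/kafka-topics.sh --describe --bootstrap-server hadoop100:9092 --topic test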
2. kafka-flume-hdfs
kafka-flume-hdfs.properties
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Kafka source: consume the test topic using the new-client configuration
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = hadoop100:9092
a1.sources.r1.kafka.topics = test
a1.sources.r1.kafka.consumer.group.id = flume
# maximum time to wait before writing a batch to the channel
a1.sources.r1.batchDurationMillis = 100
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://hadoop100:9000/flume/shellfile/
# prefix for files uploaded to HDFS
a1.sinks.k1.hdfs.filePrefix = upload-
# round down event timestamps (only takes effect when hdfs.path contains time escapes)
a1.sinks.k1.hdfs.round = true
# create a new time-based directory every 1 time unit
a1.sinks.k1.hdfs.roundValue = 1
# the time unit used for rounding
a1.sinks.k1.hdfs.roundUnit = hour
# use the local time instead of the timestamp from the event header
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# number of events to accumulate before flushing to HDFS
a1.sinks.k1.hdfs.batchSize = 100
# file format (DataStream = plain text; CompressedStream enables compression)
a1.sinks.k1.hdfs.fileType = DataStream
# roll a new file every 600 seconds
a1.sinks.k1.hdfs.rollInterval = 600
# roll when a file reaches roughly 128 MB
a1.sinks.k1.hdfs.rollSize = 134217700
# never roll based on the number of events
a1.sinks.k1.hdfs.rollCount = 0
# minimum number of HDFS block replicas; 1 prevents premature rolling
a1.sinks.k1.hdfs.minBlockReplicas = 1
# buffer events in memory on the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 1000
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
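Note that the round settings above have no effect with the static path used here; they only apply when hdfs.path contains time escape sequences. A sketch of a time-partitioned variant (this path layout is an assumption, not part of the original setup):
# hypothetical: one directory per hour, using the local timestamp enabled above
a1.sinks.k1.hdfs.path = hdfs://hadoop100:9000/flume/shellfile/%Y%m%d/%H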
From the Flume root directory, run:
bin/flume-ng agent -c conf -f conf/kafka-flume-hdfs.properties -n a1
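To keep the agent running after the terminal closes, it can also be started in the background (the log file name is arbitrary):
nohup bin/flume-ng agent -c conf -f conf/kafka-flume-hdfs.properties -n a1 > kafka-flume-hdfs.log 2>&1 &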
Start a producer and send messages (from the Kafka root directory):
bin/kafka-console-producer.sh --broker-list hadoop100:9092 --topic test
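Instead of typing messages interactively, input can be piped in, e.g. to send 100 numbered test messages:
seq 1 100 | bin/kafka-console-producer.sh --broker-list hadoop100:9092 --topic test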
Inspect the output directory with HDFS commands:
hadoop fs -cat /flume/shellfile/*
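Listing the directory shows the rolled files; a file still being written carries a .tmp suffix until it is rolled:
hadoop fs -ls /flume/shellfile/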