Flume, Kafka, and HDFS Integration

1. Flume collects data and sinks it to Kafka

Write a configuration file for the Flume agent:

# Name the components on this agent
ag1.sources = r1
ag1.sinks = k1
ag1.channels = c1
# Describe/configure the source
ag1.sources.r1.type = exec
ag1.sources.r1.command = tail -F /usr/local/nginx/logs/log.frame.access.log

# Describe the sink
ag1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
ag1.sinks.k1.kafka.topic=xin
ag1.sinks.k1.kafka.bootstrap.servers=hdp-1:9092,hdp-2:9092,hdp-3:9092
# Use a channel which buffers events in memory
ag1.channels.c1.type = memory
ag1.channels.c1.capacity = 20000
ag1.channels.c1.transactionCapacity = 10000
# Bind the source and sink to the channel
ag1.sources.r1.channels = c1
ag1.sinks.k1.channel = c1
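With the file saved (the file name conf/exec-to-kafka.conf below is only an assumption; adjust paths, topic and broker addresses to your cluster), a typical way to try it out is to create the topic, start the agent, and watch the topic with a console consumer:

# create the topic "xin" (on Kafka 2.2+ use --bootstrap-server instead of --zookeeper)
bin/kafka-topics.sh --create --zookeeper hdp-1:2181 --replication-factor 2 --partitions 3 --topic xin

# start the agent defined above; note that its name is ag1
bin/flume-ng agent --conf conf --conf-file conf/exec-to-kafka.conf --name ag1 -Dflume.root.logger=INFO,console

# in another shell, confirm that events appended to the nginx log reach the topic
bin/kafka-console-consumer.sh --bootstrap-server hdp-1:9092 --topic xin --from-beginning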

2. Flume consumes data from Kafka and sinks it to HDFS

# The configuration file needs to define the sources, 
# the channels and the sinks.
# Sources, channels and sinks are defined per agent, 
# in this case called 'agent'

agent.sources = kafkaSource
agent.channels = memoryChannel
agent.sinks = hdfsSink


# Define the Kafka source and bind it to the channel.
agent.sources.kafkaSource.channels = memoryChannel
agent.sources.kafkaSource.type=org.apache.flume.source.kafka.KafkaSource
agent.sources.kafkaSource.zookeeperConnect=127.0.0.1:2181
agent.sources.kafkaSource.topic=flume-data
#agent.sources.kafkaSource.groupId=flume
agent.sources.kafkaSource.kafka.consumer.timeout.ms=100
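# Note: zookeeperConnect, topic and groupId are the legacy KafkaSource property names;
# on Flume 1.7+ the equivalents are kafka.bootstrap.servers, kafka.topics and
# kafka.consumer.group.id.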

agent.channels.memoryChannel.type=memory
agent.channels.memoryChannel.capacity=1000
agent.channels.memoryChannel.transactionCapacity=100


# Define the HDFS sink
agent.sinks.hdfsSink.type=hdfs
agent.sinks.hdfsSink.channel = memoryChannel
agent.sinks.hdfsSink.hdfs.path=hdfs://master:9000/usr/feiy/flume-data
agent.sinks.hdfsSink.hdfs.writeFormat=Text
agent.sinks.hdfsSink.hdfs.fileType=DataStream
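Assuming the file is saved as conf/kafka-to-hdfs.conf (an arbitrary name), start the agent, produce a few test messages into the flume-data topic (the broker address 127.0.0.1:9092 is an assumption, matching the local ZooKeeper in the config), and check that files show up under the configured HDFS path:

# start the agent; its name in this file is "agent"
bin/flume-ng agent --conf conf --conf-file conf/kafka-to-hdfs.conf --name agent -Dflume.root.logger=INFO,console

# push some test messages into the topic
bin/kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic flume-data

# inspect the HDFS output (files use the default FlumeData prefix unless hdfs.filePrefix is set)
hdfs dfs -ls /usr/feiy/flume-data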

3. Flume collects data and sinks it to both Kafka and HDFS

## Explanation
# One source feeds two memory channels; each channel drives its own sink, so the data is written to Kafka and to HDFS in parallel


# Name the components on this agent
agent.sources = r1
agent.sinks = k1 k2
agent.channels = c1 c2

# Describe/configuration the source
agent.sources.r1.type = exec
agent.sources.r1.command = tail -f /root/test.log
agent.sources.r1.shell = /bin/bash -c 

## kafka
#Describe the sink
agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.k1.topic = kafkatest
agent.sinks.k1.brokerList = master:9092
agent.sinks.k1.requiredAcks = 1
agent.sinks.k1.batchSize = 2
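# Note: topic, brokerList, requiredAcks and batchSize are the legacy KafkaSink property
# names; on Flume 1.7+ use kafka.topic, kafka.bootstrap.servers, kafka.producer.acks
# and flumeBatchSize instead, as in the first example.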

# Use a channel which buffers events in memory 
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000
#agent.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
agent.sources.r1.channels = c1 c2
agent.sinks.k1.channel = c1

## hdfs
#Describe the sink
agent.sinks.k2.type = hdfs
agent.sinks.k2.hdfs.path = hdfs://master:9000/data/flume/tail
agent.sinks.k2.hdfs.fileType=DataStream
agent.sinks.k2.hdfs.writeFormat=Text
#agent.sinks.k2.hdfs.rollInterval = 0
#agent.sinks.k2.hdfs.rollSize = 134217728
#agent.sinks.k2.hdfs.rollCount = 1000000
agent.sinks.k2.hdfs.batchSize=10

## Use a channel which buffers events in memory 
agent.channels.c2.type = memory
#agent.channels.c2.capacity = 1000
#agent.channels.c2.transactionCapacity = 100

## Bind the source and sink to the channel
#agent.sources.r1.channels = c2
agent.sinks.k2.channel = c2
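As before, the file name below (conf/tail-to-kafka-hdfs.conf) is only an assumption. Start the agent, append a few lines to the tailed file, then check that the same events arrive in both the Kafka topic and HDFS:

# start the dual-sink agent
bin/flume-ng agent --conf conf --conf-file conf/tail-to-kafka-hdfs.conf --name agent -Dflume.root.logger=INFO,console

# generate some test input
echo "hello flume" >> /root/test.log

# check the Kafka copy ...
bin/kafka-console-consumer.sh --bootstrap-server master:9092 --topic kafkatest --from-beginning

# ... and the HDFS copy
hdfs dfs -ls /data/flume/tail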

 
