Kafka (3): A Flume and Kafka Integration Example


1. Environment

Flume 1.6.0 + kafka_2.10-0.8.2.1 + ZooKeeper 3.4.5

2. Flume Configuration

(1) Flume receives events on hadoop:44444 through an Avro source and forwards them to Kafka.

Configuration file: avro-memory-kafka.conf

avro-memory-kafka.sources = avro-source
avro-memory-kafka.sinks = kafka-sink
avro-memory-kafka.channels = memory-channel

avro-memory-kafka.sources.avro-source.type = avro
avro-memory-kafka.sources.avro-source.bind = hadoop
avro-memory-kafka.sources.avro-source.port = 44444

avro-memory-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
avro-memory-kafka.sinks.kafka-sink.brokerList = hadoop:9092
avro-memory-kafka.sinks.kafka-sink.topic = hello_topic
avro-memory-kafka.sinks.kafka-sink.batchSize = 5
avro-memory-kafka.sinks.kafka-sink.requiredAcks = 1

avro-memory-kafka.channels.memory-channel.type = memory

avro-memory-kafka.sources.avro-source.channels = memory-channel
avro-memory-kafka.sinks.kafka-sink.channel = memory-channel
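With batchSize = 5, the Kafka sink takes up to five events from the memory channel per transaction and sends them to the broker as one batch. A rough Python sketch of that draining logic (illustrative only, not Flume's actual code):

```python
def drain_in_batches(events, batch_size=5):
    """Group a stream of events into batches of at most `batch_size`,
    the way the Kafka sink drains the channel before each producer send."""
    batch = []
    for ev in events:
        batch.append(ev)
        if len(batch) == batch_size:
            yield batch          # one producer send per full batch
            batch = []
    if batch:                    # partial batch left when the channel runs dry
        yield batch
```

Smaller batches lower latency; larger ones improve throughput, which is the trade-off the batchSize setting controls.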

(2) Flume monitors /opt/datas/access.log and sends each appended line to hadoop:44444.

Configuration file: exec-memory-avro.conf
exec-memory-avro.sources = exec-source
exec-memory-avro.sinks = avro-sink
exec-memory-avro.channels = memory-channel

exec-memory-avro.sources.exec-source.type = exec
exec-memory-avro.sources.exec-source.command = tail -F /opt/datas/access.log 
exec-memory-avro.sources.exec-source.shell = /bin/sh -c

exec-memory-avro.sinks.avro-sink.type = avro
exec-memory-avro.sinks.avro-sink.hostname = hadoop
exec-memory-avro.sinks.avro-sink.port = 44444

exec-memory-avro.channels.memory-channel.type = memory
exec-memory-avro.sources.exec-source.channels = memory-channel
exec-memory-avro.sinks.avro-sink.channel = memory-channel

3. Kafka Configuration

Single-node, single-broker setup. Key settings in server.properties:

broker.id=0

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=hadoop

log.dirs=/opt/modules/kafka_2.10-0.8.2.1/data/0

# root directory for all kafka znodes.
zookeeper.connect=hadoop:2181/kafka08

4. Startup

(1) ZooKeeper
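Start ZooKeeper first, since both the Kafka broker and the console consumer depend on it. A minimal sketch, assuming you run it from the ZooKeeper 3.4.5 install directory (the path is an assumption; adjust to your layout):

```shell
# from the ZooKeeper install directory (path assumed)
bin/zkServer.sh start

# verify it came up and is serving
bin/zkServer.sh status
```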

(2) Flume

Start the avro-memory-kafka agent first, so that port 44444 is already listening when the second agent connects:

flume-ng agent \
--name avro-memory-kafka  \
--conf $FLUME_HOME/conf  \
--conf-file $FLUME_HOME/conf/avro-memory-kafka.conf \
-Dflume.root.logger=INFO,console

Then start the agent that monitors access.log:

bin/flume-ng agent \
--name exec-memory-avro  \
--conf conf  \
--conf-file conf/exec-memory-avro.conf \
-Dflume.root.logger=INFO,console

(3) Kafka

Start the single-node Kafka broker:

bin/kafka-server-start.sh -daemon config/server.properties
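Kafka 0.8 auto-creates topics by default, but if auto.create.topics.enable is off you need to create hello_topic yourself before Flume writes to it. The 0.8-era tooling takes the same ZooKeeper chroot used above:

```shell
# create the topic the Flume sink writes to (only needed when
# auto topic creation is disabled on the broker)
bin/kafka-topics.sh --create \
  --zookeeper hadoop:2181/kafka08 \
  --replication-factor 1 \
  --partitions 1 \
  --topic hello_topic

# confirm the topic exists
bin/kafka-topics.sh --list --zookeeper hadoop:2181/kafka08
```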

Start a console consumer:

bin/kafka-console-consumer.sh --zookeeper hadoop:2181/kafka08 --topic hello_topic --from-beginning

5. Test

Append a few lines to the monitored file:

echo hellospark1 >> /opt/datas/access.log
echo hellospark2 >> /opt/datas/access.log
echo hellospark3 >> /opt/datas/access.log

The Kafka consumer prints the new data (the earlier lines were written to the topic in previous tests and show up because of --from-beginning):

    hello hive
    liuming gerry tom
    liuming gerry tom
    liuming gerry tom
    liuming gerry tom
    liuming gerry tom
    liuming gerry tom
    hellospark1
    hellospark2
    hellospark3

Success!

 

 
