Integrating Flume with Kafka

Both Flume (the Kafka sink used here) and Kafka depend on ZooKeeper, so install ZooKeeper first.
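If ZooKeeper is not set up yet, a minimal three-node ensemble config looks roughly like this (the dataDir path is an example; ports are the usual defaults):

# conf/zoo.cfg, identical on all three nodes
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/data/zookeeper
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

# on each node, write its id (1, 2, or 3) into dataDir/myid, e.g. on node1:
echo 1 > /opt/data/zookeeper/myid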

1 Set up the Kafka cluster (for the installation itself, see the separate Kafka setup guide)

Note:

1. Each node's broker.id must be unique; I suggest assigning them in order: 0, 1, 2 (I have three nodes). A per-node sketch follows this list.

2. host.name = node1, port = 9092 (host.name is each node's own hostname).

3. zookeeper.connect = node1:2181,node2:2181,node3:2181 (the same on every node).
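For example, a minimal sketch of node2's server.properties, showing only the lines that differ per node (everything else stays at the defaults):

# node1 would use broker.id=0 / host.name=node1; node3 broker.id=2 / host.name=node3
broker.id=1
host.name=node2
port=9092
zookeeper.connect=node1:2181,node2:2181,node3:2181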

2 Start ZooKeeper on every node

zkServer.sh start
zkServer.sh status    # check status; one node should report Mode: leader, the rest Mode: follower

3 Start Kafka on every node

From the Kafka installation's root directory: ./bin/kafka-server-start.sh config/server.properties
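This runs the broker in the foreground. To run it in the background instead, kafka-server-start.sh also accepts a -daemon flag:

./bin/kafka-server-start.sh -daemon config/server.properties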

4 Create the topic and assign partitions

The topic name here must match the one the producer, consumer, and Flume sink use below (testTopic):

./bin/kafka-topics.sh --create --zookeeper node1:2181,node2:2181,node3:2181 --topic testTopic --replication-factor 2 --partitions 5
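To verify the partition and replica assignment afterwards, describe the topic:

./bin/kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181 --describe --topic testTopic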

5 Start a console producer (on the master node)

./bin/kafka-console-producer.sh --broker-list node1:9092,node2:9092,node3:9092 --topic testTopic  

List the existing topics:

./bin/kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181 --list

6 Start a console consumer (on each node)

./bin/kafka-console-consumer.sh --zookeeper node1:2181,node2:2181,node3:2181 --from-beginning --topic testTopic

7 Verify that the consumers receive messages

Type some lines into the producer window and check that they show up on the consumer side.
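For example (illustrative input and output, not captured from the original run):

# producer window
> hello kafka
> flume test

# each consumer window should print
hello kafka
flume test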

Appendix: the Kafka configuration file (config/server.properties)

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=node1

# A comma separated list of directories under which to store log files
# (this is also where you can later inspect the data Kafka has written)
log.dirs=/tmp/kafka-logs

zookeeper.connect=node1:2181,node2:2181,node3:2181

Only the settings above were changed from the defaults.


Setting up Flume

I chose a single-node agent that watches a spooling file directory.

1 For the Flume installation itself, see the separate Flume setup guide.

2 The configuration file (conf/flume-conf.properties):

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source: watch a spooling directory for new files
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /opt/data/flumedata
a1.sources.r1.fileHeader = true
a1.sources.r1.deserializer.maxLineLength = 102400
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = timestamp

# Describe the sink: publish events to the Kafka topic created above
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = testTopic
a1.sinks.k1.brokerList = node1:9092,node2:9092,node3:9092

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000000
a1.channels.c1.transactionCapacity = 10000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
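The memory channel favors throughput but loses buffered events if the agent dies. If durability matters more, a file channel could be swapped in; a minimal sketch (the checkpoint and data paths are examples, not from the original setup):

a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /opt/data/flume/checkpoint
a1.channels.c1.dataDirs = /opt/data/flume/data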

 

3 Start the Flume agent

./bin/flume-ng agent -c ./conf/ -f conf/flume-conf.properties -Dflume.root.logger=DEBUG,console -n a1
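This runs the agent in the foreground with debug logging, which is handy for a first test. Once it works, a common pattern is to push it into the background, for example:

nohup ./bin/flume-ng agent -c ./conf/ -f conf/flume-conf.properties -n a1 > /dev/null 2>&1 &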


4 Write data files into the monitored directory

cp /opt/soft/liuliqiao/* /opt/data/flumedata/
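Once Flume has fully ingested a file, the spooldir source renames it with a .COMPLETED suffix (the default fileSuffix), so a quick listing confirms ingestion (file names below are illustrative):

ls /opt/data/flumedata/
# e.g. data1.txt.COMPLETED  data2.txt.COMPLETED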


5 Meanwhile, watch the consumer windows

The file contents should arrive there as messages.



You can also confirm on the Kafka side by inspecting the broker's data directory, i.e. the log.dirs location:

log.dirs=/tmp/kafka-logs
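Each partition appears there as its own topic-partition directory; on one broker the listing might look like this (which partitions land on which broker varies):

ls /tmp/kafka-logs
# e.g. testTopic-0  testTopic-2  testTopic-4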

The integration works.




