Integrating Flume with Kafka

I. Flume installation and configuration

1. Flume deployment: upload the archive (e.g. with rz) and extract it: tar zxvf apache-flume-1.7.0-bin.tar.gz

II. Kafka configuration for Flume

1. Configure the Kafka sink properties

    Create kafka-conf.properties and place it in the apache-flume-1.7.0-bin/conf directory under the Flume installation, with the following contents:

#client
agent.channels=ch1
agent.sources=src1
agent.sinks=sk1

#define source: monitor a log file
agent.sources.src1.type=exec
agent.sources.src1.command=tail -F /data/jsp/log_producer/logs/producer.log
agent.sources.src1.channels=ch1

agent.channels.ch1.type=memory
agent.channels.ch1.capacity=10000
agent.channels.ch1.transactionCapacity=100

#define kafka sink
agent.sinks.sk1.type=org.apache.flume.sink.kafka.KafkaSink
agent.sinks.sk1.brokerList=ip1:9092,ip2:9092,ip3:9092
agent.sinks.sk1.topic=kafkatest
agent.sinks.sk1.serializer.class=kafka.serializer.StringEncoder
agent.sinks.sk1.channel=ch1
agent.sinks.sk1.batchSize=20

Note: type the property names carefully; a mistyped key simply does not take effect.
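One caveat worth knowing: starting with Flume 1.6 the Kafka sink's property names were renamed, and in Flume 1.7 the older brokerList/topic keys used above still work but log deprecation warnings. If you prefer the current key names, the sink block can be written as follows (same brokers, topic, and batch size as above; this is a sketch, so verify the exact keys against the Flume 1.7 user guide for your build):

```properties
agent.sinks.sk1.type=org.apache.flume.sink.kafka.KafkaSink
agent.sinks.sk1.kafka.bootstrap.servers=ip1:9092,ip2:9092,ip3:9092
agent.sinks.sk1.kafka.topic=kafkatest
agent.sinks.sk1.flumeBatchSize=20
agent.sinks.sk1.channel=ch1
```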

Three settings deserve particular attention:
agent.sources.src1.command=tail -F /data/jsp/log_producer/logs/producer.log (tail follows the log file)
agent.sinks.sk1.brokerList=ip1:9092,ip2:9092,ip3:9092 (the Kafka cluster broker list)
agent.sinks.sk1.topic=kafkatest (Flume pushes data to the kafkatest topic)
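The exec source's behavior can be sanity-checked without Flume or Kafka: tail -F follows the file and emits each appended line, which is what the source hands to the channel. A minimal local simulation (the local paths producer.log and captured.txt are stand-ins used only for this sketch):

```shell
#!/bin/sh
# Simulate the exec source: tail -F follows the log file and emits each
# appended line. Local stand-in paths; the article's source reads
# /data/jsp/log_producer/logs/producer.log.
LOG=./producer.log
: > "$LOG"                        # start with an empty log
tail -F "$LOG" > captured.txt 2>/dev/null &
TAIL_PID=$!
sleep 1                           # give tail a moment to attach
for i in 0 1 2; do
  echo "kafka_test-$i" >> "$LOG"
done
sleep 1                           # let tail flush the new lines
kill "$TAIL_PID"
wc -l < captured.txt              # expect 3 captured lines
```

Because -F (rather than -f) is used, tail also survives log rotation: if producer.log is moved and recreated, tail reopens it by name, so the Flume source keeps receiving events.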

2. Configure Java

    Edit flume-env.sh in the conf directory; the main parameters to set are JAVA_HOME and JAVA_OPTS. The contents are as follows:

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# If this file is placed at FLUME_CONF_DIR/flume-env.sh, it will be sourced
# during Flume startup.

# Enviroment variables can be set here.
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera

# Give Flume more memory and pre-allocate, enable remote monitoring via JMX
# export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"

# Heap sizing and GC tuning used for this deployment: fixed 1 GB heap,
# CMS collector with a parallel young generation.
export JAVA_OPTS="-Xms1024m -Xmx1024m -Xss256k -Xmn512m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit"

# Note that the Flume conf directory is always included in the classpath.
#FLUME_CLASSPATH=""
#set JAVA_HOME

This completes the Kafka-side configuration of Flume.

3. Start Flume

Start Flume with:
./flume-ng agent --conf-file /opt/apache-flume-1.7.0-bin/conf/kafka-conf.properties -c /opt/apache-flume-1.7.0-bin/conf/ --name agent -Dflume.root.logger=DEBUG,console
Note: the configuration-file and conf-directory arguments in the command are absolute paths.

4. Flume test script

(1) The following test script simulates production of log records. Save it as producer_log.sh, grant execute permission with chmod, and run ./producer_log.sh; it appends log lines to producer.log. The script is:
---------------------------------------------------------------------------------------------------------------
for ((i=0;i<=1000;i++));
do echo "kafka_test-$i" >> /data/jsp/log_producer/logs/producer.log ;
done
--------------------------------------------------------------------------------------------------------------
Sample log line: kafka_test-1
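A slightly more flexible variant of the loop above can help during testing: the line count and log path become parameters, and each record keeps its sequence number so lost or duplicated events are easy to spot on the consumer side. This is a sketch, not the article's exact script; the defaults below are local test values, so point LOG_FILE at the real producer.log when running against Flume:

```shell
#!/bin/sh
# Parameterized log producer (sketch). LOG_FILE and COUNT default to
# local test values; override them via the environment.
LOG_FILE=${LOG_FILE:-./producer.log}
COUNT=${COUNT:-100}
: > "$LOG_FILE"                  # truncate so each run starts clean
i=0
while [ "$i" -lt "$COUNT" ]; do
  echo "kafka_test-$i" >> "$LOG_FILE"
  i=$((i + 1))
done
echo "wrote $COUNT lines to $LOG_FILE"
```

Note that this variant truncates the file on each run, unlike the original, which only appends; when testing the full pipeline, appending is what exercises tail -F.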

5. Flume and Kafka integration test

    Start ZooKeeper and Kafka, then start a consumer on the kafkatest topic (see "Kafka Installation and Deployment" for details).
Create the topic: ./kafka-topics.sh --create --zookeeper ip:2181 --replication-factor 1 --partitions 1 --topic kafkatest
Start the consumer: ./kafka-console-consumer.sh --zookeeper ip1:2181,ip2:2181,ip3:2181 --topic kafkatest --from-beginning
The consumer console will print the log lines.
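The topic-creation and consumer commands above can be collected into a small smoke-test script. KAFKA_HOME and the ZooKeeper addresses below are placeholders for this sketch; the script only assembles and prints the commands (saving them to kafka_smoke_cmds.txt) so they can be reviewed before running against a live cluster:

```shell
#!/bin/sh
# Assemble the Kafka-side smoke-test commands. KAFKA_HOME and the ZK
# quorum are assumptions; override them via the environment.
KAFKA_HOME=${KAFKA_HOME:-/opt/kafka}
ZK=${ZK:-ip1:2181,ip2:2181,ip3:2181}
TOPIC=${TOPIC:-kafkatest}

create="$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper $ZK \
--replication-factor 1 --partitions 1 --topic $TOPIC"
consume="$KAFKA_HOME/bin/kafka-console-consumer.sh --zookeeper $ZK \
--topic $TOPIC --from-beginning"

# Print for review; pipe through sh (or use eval) to execute for real.
printf '%s\n' "$create" "$consume" | tee kafka_smoke_cmds.txt
```

Once both sides are running, executing the producer script from step 4 should make the same kafka_test-N lines appear on the consumer console, confirming the tail source, memory channel, and Kafka sink are all wired correctly.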

