Flume Installation

 

Create a file named file-flume-kafka.conf under /opt/module/flume/conf. Note that this agent has only a source and two Kafka channels, no sink: the Kafka channel writes events directly to Kafka.

a1.sources=r1 
a1.channels=c1 c2 

# configure source 
a1.sources.r1.type = TAILDIR 
a1.sources.r1.positionFile = /opt/module/flume/test/log_position.json 
a1.sources.r1.filegroups = f1 
a1.sources.r1.filegroups.f1 = /tmp/logs/app.+ 
a1.sources.r1.fileHeader = true 
a1.sources.r1.channels = c1 c2 

#interceptor 
a1.sources.r1.interceptors = i1 i2 
a1.sources.r1.interceptors.i1.type = com.atguigu.flume.interceptor.LogETLInterceptor$Builder 
a1.sources.r1.interceptors.i2.type = com.atguigu.flume.interceptor.LogTypeInterceptor$Builder 
a1.sources.r1.selector.type = multiplexing 
a1.sources.r1.selector.header = topic 
a1.sources.r1.selector.mapping.topic_start = c1 
a1.sources.r1.selector.mapping.topic_event = c2 

# configure channel 
a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel 
a1.channels.c1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092 
a1.channels.c1.kafka.topic = topic_start 
a1.channels.c1.parseAsFlumeEvent = false 
a1.channels.c1.kafka.consumer.group.id = flume-consumer 

a1.channels.c2.type = org.apache.flume.channel.kafka.KafkaChannel 
a1.channels.c2.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092 
a1.channels.c2.kafka.topic = topic_event 
a1.channels.c2.parseAsFlumeEvent = false 
a1.channels.c2.kafka.consumer.group.id = flume-consumer 
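The multiplexing selector above routes each event by the value of its `topic` header: `topic_start` events go to channel c1 (and thus the `topic_start` Kafka topic), `topic_event` events go to c2. Conceptually, the routing works like the plain-Java sketch below; this is only an illustration of what Flume's multiplexing channel selector does internally, not code you need to write.

```java
import java.util.Map;

// Sketch of how the multiplexing selector in the config above picks a channel.
// Illustration only -- Flume's ChannelSelector performs this internally.
public class SelectorSketch {
    // Mirrors a1.sources.r1.selector.mapping.* from the config
    static final Map<String, String> MAPPING =
            Map.of("topic_start", "c1", "topic_event", "c2");

    // Looks up the channel for an event, keyed on the "topic" header
    // (a1.sources.r1.selector.header = topic)
    static String channelFor(Map<String, String> headers) {
        return MAPPING.get(headers.get("topic"));
    }

    public static void main(String[] args) {
        System.out.println(channelFor(Map.of("topic", "topic_start"))); // c1
        System.out.println(channelFor(Map.of("topic", "topic_event"))); // c2
    }
}
```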

Interceptor steps:

 

Flume's ETL and log-type interceptors
This project defines two custom interceptors: an ETL interceptor and a log-type interceptor.

First, place the packaged jar into the /opt/module/flume/lib directory on hadoop102.

Download link:

https://wws.lanzous.com/iJjmodxy4ti
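The interceptor classes themselves are not listed in the post; they ship inside the jar above. As a rough sketch of what their core logic might look like (the JSON validation rule and the `start`-substring classification are assumptions for illustration only; the real classes implement `org.apache.flume.interceptor.Interceptor` and expose a static inner `Builder`, which is what the `$Builder` suffix in the config refers to):

```java
// Plain-Java sketch of the two interceptors' core logic. Illustration only:
// the real classes implement org.apache.flume.interceptor.Interceptor and
// a static Builder, and their exact rules are not shown in this post.
public class InterceptorSketch {

    // ETL check: keep only lines that look like a complete JSON object.
    // (Assumed rule -- the actual LogETLInterceptor criteria may differ.)
    static boolean isValidLog(String body) {
        if (body == null) return false;
        String t = body.trim();
        return t.startsWith("{") && t.endsWith("}");
    }

    // Type classification: choose the "topic" header value that the
    // multiplexing selector later routes on. (Assumed classification rule.)
    static String topicFor(String body) {
        return body.contains("start") ? "topic_start" : "topic_event";
    }

    public static void main(String[] args) {
        System.out.println(isValidLog("{\"action\":\"start\"}"));  // true
        System.out.println(topicFor("{\"action\":\"start\"}"));    // topic_start
        System.out.println(topicFor("{\"action\":\"click\"}"));    // topic_event
    }
}
```

The key point is the division of labor: the ETL interceptor drops malformed events, and the type interceptor tags each surviving event with the header the selector uses for routing.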

Distribute Flume to hadoop103 and hadoop104:

xsync flume/ 

Start the agent:

bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/file-flume-kafka.conf &

Log-collection Flume start/stop script
Create a script f1.sh:

#!/bin/bash

case $1 in
"start"){
    for i in hadoop102 hadoop103
    do
        echo " -------- starting collection Flume on $i --------"
        ssh $i "nohup /opt/module/flume/bin/flume-ng agent --conf-file /opt/module/flume/conf/file-flume-kafka.conf --name a1 -Dflume.root.logger=INFO,LOGFILE >/opt/module/flume/test1 2>&1 &"
    done
};;
"stop"){
    for i in hadoop102 hadoop103
    do
        echo " -------- stopping collection Flume on $i --------"
        ssh $i "ps -ef | grep file-flume-kafka | grep -v grep | awk '{print \$2}' | xargs kill"
    done
};;
esac
chmod 777 f1.sh
f1.sh start
f1.sh stop

 
