Flume Configuration Examples (2): HDFS

Official documentation (Chinese)

Flume 1.9 User Guide, Chinese translation — probably the most complete translation currently available

Official documentation (English)

Flume 1.9.0 User Guide — Apache Flume

1. taildir-HDFS

Create a file named flume-taildir-hdfs.properties under /usr/local/soft/apache-flume-1.9.0-bin/conf/ with the following content:

# Name the components on this agent
a1.sources = c1
a1.sinks = s1
a1.channels = n1

# Define the source: Taildir Source
a1.sources.c1.type = TAILDIR
a1.sources.c1.positionFile = /usr/local/soft/apache-flume-1.9.0-bin/log/taildir_position.json
a1.sources.c1.filegroups = f1
a1.sources.c1.filegroups.f1 = /usr/local/data/log/.*log
a1.sources.c1.batchSize = 1000
a1.sources.c1.bufferMaxLineLength = 100000
a1.sources.c1.processSubdirectories = true
a1.sources.c1.deserializer.maxLineLength = 8192
a1.sources.c1.interceptors = i1
a1.sources.c1.interceptors.i1.type = timestamp

# Define the sink: HDFS Sink
a1.sinks.s1.type = hdfs
a1.sinks.s1.hdfs.path = hdfs://hadoop100:9000/logs/flume/%y-%m-%d
a1.sinks.s1.hdfs.filePrefix = events-
a1.sinks.s1.hdfs.fileType = DataStream
a1.sinks.s1.hdfs.writeFormat = Text
a1.sinks.s1.hdfs.rollInterval = 3600
a1.sinks.s1.hdfs.rollSize = 0
a1.sinks.s1.hdfs.rollCount = 0
a1.sinks.s1.hdfs.batchSize = 1000


# Define the channel: Memory Channel
a1.channels.n1.type = memory
a1.channels.n1.capacity = 10000
a1.channels.n1.transactionCapacity = 1000

# Bind the source and sink to the channel
a1.sources.c1.channels = n1
a1.sinks.s1.channel = n1

Run

Create a log directory under /usr/local/soft/apache-flume-1.9.0-bin/ and, inside it, a taildir_position.json file. This file records the read offset of each tailed file, so that when the agent restarts it resumes where it left off instead of re-reading content it has already consumed. With the agent running, any lines appended to the monitored file /usr/local/data/log/demo.log are picked up and uploaded to HDFS.
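These preparation steps can be scripted as below (a sketch; the paths follow the configuration above and assume the HDFS cluster at hadoop100:9000 is already running):

# Directory holding the Taildir position file
mkdir -p /usr/local/soft/apache-flume-1.9.0-bin/log
touch /usr/local/soft/apache-flume-1.9.0-bin/log/taildir_position.json
# Directory and file that the Taildir source monitors
mkdir -p /usr/local/data/log
touch /usr/local/data/log/demo.log
# Target directory on HDFS (the sink also creates it on demand)
hadoop fs -mkdir -p /logs/flume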

cd /usr/local/soft/apache-flume-1.9.0-bin/conf/
flume-ng agent -n a1 -c conf -f flume-taildir-hdfs.properties -Dflume.root.logger=DEBUG,console
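Once the agent is up, appending lines to the monitored file should produce data under /logs/flume/ on HDFS. A quick check (a sketch; 1000 lines are written because hdfs.batchSize = 1000, so fewer events may sit unflushed in the open .tmp file):

for i in $(seq 1 1000); do echo "taildir test line $i" >> /usr/local/data/log/demo.log; done
hadoop fs -ls /logs/flume/
hadoop fs -cat /logs/flume/*/events-*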

2. spooldir-HDFS (monitoring a directory)

Create a file named flume-spoolDir-hdfs.properties under /usr/local/soft/apache-flume-1.9.0-bin/conf/ with the following content:

a1.sources = r3 
a1.sinks = k3 
a1.channels = c3 
 
# Describe/configure the source
# spooldir: watch the given directory; when a new file appears, collect and send it immediately
a1.sources.r3.type = spooldir
# Directory to monitor
a1.sources.r3.spoolDir = /usr/local/data/logs
# Suffix appended to a file name once it has been fully ingested: .COMPLETED
a1.sources.r3.fileSuffix = .COMPLETED
a1.sources.r3.fileHeader = true
# Ignore (do not upload) files ending in .tmp
a1.sources.r3.ignorePattern = ([^ ]*\.tmp)

# Describe the sink
a1.sinks.k3.type = hdfs
a1.sinks.k3.hdfs.path = hdfs://hadoop100:9000/logs/dirhdfs/
# Prefix for files written to HDFS
a1.sinks.k3.hdfs.filePrefix = upload-
# Whether to round down the event timestamp (time-based bucketing of directories)
a1.sinks.k3.hdfs.round = true
# How many units of time to round down to
a1.sinks.k3.hdfs.roundValue = 1
# Unit of time used for rounding
a1.sinks.k3.hdfs.roundUnit = hour
# Use the local time instead of a timestamp from the event header
a1.sinks.k3.hdfs.useLocalTimeStamp = true
# Number of events to accumulate before flushing to HDFS
a1.sinks.k3.hdfs.batchSize = 100
# File type; compression is supported (DataStream = uncompressed)
a1.sinks.k3.hdfs.fileType = DataStream
# Roll to a new file every 600 seconds
a1.sinks.k3.hdfs.rollInterval = 600
# Roll to a new file at roughly 128 MB (value in bytes)
a1.sinks.k3.hdfs.rollSize = 134217700
# Do not roll based on the number of events
a1.sinks.k3.hdfs.rollCount = 0
# Minimum number of HDFS block replicas
a1.sinks.k3.hdfs.minBlockReplicas = 1
 
# Use a channel which buffers events in memory 
a1.channels.c3.type = memory 
a1.channels.c3.capacity = 1000 
a1.channels.c3.transactionCapacity = 100 
 
# Bind the source and sink to the channel 
a1.sources.r3.channels = c3 
a1.sinks.k3.channel = c3

Run

cd /usr/local/soft/apache-flume-1.9.0-bin/conf/
flume-ng agent -n a1 -c conf -f flume-spoolDir-hdfs.properties -Dflume.root.logger=DEBUG,console

Notes on the Spooling Directory Source:

  1. It only picks up files newly added to the directory; modifications to files that already exist are not tracked.
  2. Files that have been fully ingested are renamed with the .COMPLETED suffix.
  3. The monitored directory is scanned for changes every 500 milliseconds.

After the Flume agent starts, a temporary file is created on HDFS only once a new file shows up in the monitored directory; a quick test is shown below.
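A sketch of such a test, assuming the agent is running and using the directories from the configuration above (/tmp/test.log is just an example file):

mkdir -p /usr/local/data/logs
# Write the file elsewhere first, then move it in, so the source never sees a half-written file
echo "hello spooldir" > /tmp/test.log
mv /tmp/test.log /usr/local/data/logs/
# After ingestion the file is renamed test.log.COMPLETED
ls /usr/local/data/logs/
hadoop fs -ls /logs/dirhdfs/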

3. exec-HDFS

The configuration for this agent (a2) is as follows:

# Name the components on this agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2

# Describe/configure the source
# exec: run a shell command and treat its output as the event stream
a2.sources.r2.type = exec
# tail -F: follow the file in real time
a2.sources.r2.command = tail -F /opt/module/hive/logs/hive.log
# Run the command through bash; -c takes the command string to execute
a2.sources.r2.shell = /bin/bash -c

# Describe the sink
a2.sinks.k2.type = hdfs
# %Y%m%d/%H lays the directories out as year-month-day/hour
a2.sinks.k2.hdfs.path = hdfs://linux01:8020/flume/%Y%m%d/%H
# Prefix for files written to HDFS
a2.sinks.k2.hdfs.filePrefix = logs-
# Whether to round down the event timestamp (time-based bucketing of directories)
a2.sinks.k2.hdfs.round = true
# How many units of time to round down to (unit set by roundUnit)
a2.sinks.k2.hdfs.roundValue = 1
# Unit of time used for rounding
a2.sinks.k2.hdfs.roundUnit = hour
# Use the local time instead of a timestamp from the event header
a2.sinks.k2.hdfs.useLocalTimeStamp = true
# Number of events to accumulate before flushing to HDFS
a2.sinks.k2.hdfs.batchSize = 1000
# File type; compression is supported (DataStream = uncompressed)
a2.sinks.k2.hdfs.fileType = DataStream
# Roll to a new file every 600 seconds
a2.sinks.k2.hdfs.rollInterval = 600
# Roll to a new file at roughly 128 MB (value in bytes)
a2.sinks.k2.hdfs.rollSize = 134217700
# Do not roll based on the number of events
a2.sinks.k2.hdfs.rollCount = 0
# Minimum number of HDFS block replicas
a2.sinks.k2.hdfs.minBlockReplicas = 1

# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2
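To launch this agent, the same pattern as the earlier sections applies. A sketch, assuming the configuration above is saved under a hypothetical name flume-exec-hdfs.properties in the conf directory:

cd /usr/local/soft/apache-flume-1.9.0-bin/conf/
flume-ng agent -n a2 -c conf -f flume-exec-hdfs.properties -Dflume.root.logger=DEBUG,console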

4. kafka-HDFS

Create flume_kafka-hdfs.properties with the following content:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Kafka Source: consume events from the app-crawler topic
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = hadoop100:9092
a1.sources.r1.kafka.topics = app-crawler

# HDFS Sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://hadoop100:9000/flume
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.fileType = DataStream
# Roll to a new file every 10 seconds; size- and count-based rolling disabled
a1.sinks.k1.hdfs.rollInterval = 10
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.filePrefix = %Y-%m-%d-%H-%M-%S
a1.sinks.k1.hdfs.useLocalTimeStamp = true

# Memory Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 15000
a1.channels.c1.transactionCapacity = 15000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the pipeline: bring up Hadoop, create the target directory on HDFS, start a console producer to feed test data into the topic, and launch the agent (the producer and the agent each need their own terminal):

start-all.sh
hadoop fs -mkdir /flume
kafka-console-producer.sh --broker-list hadoop100:9092 --topic app-crawler
cd /usr/local/soft/apache-flume-1.9.0-bin/conf/
flume-ng agent -n a1 -c conf -f flume_kafka-hdfs.properties -Dflume.root.logger=DEBUG,console
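Once everything is up, messages written to the topic end up on HDFS; with rollInterval = 10 the output files are closed out within about ten seconds. A quick check (a sketch) from another terminal:

echo "hello kafka sink" | kafka-console-producer.sh --broker-list hadoop100:9092 --topic app-crawler
hadoop fs -ls /flume
hadoop fs -cat /flume/*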