Flume: Writing Log Files into HDFS

00 Choosing the Source

There are two candidates:

  • exec Source
  • Spooling Directory Source
    With exec, log rotation can cause parts of a log file to be skipped, silently dropping data.
    So we go with the second option, spooldir. Note that spooldir expects files to be complete and immutable once they land in the directory.
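The behavior that makes spooldir safe can be sketched in a few lines of shell (the directory and file names are made up for illustration): each file is ingested only after it is complete, then renamed with a `.COMPLETED` suffix so it is never re-read.

```shell
# One pass of a spooling-directory source, sketched in shell.
spool=$(mktemp -d)
echo "i am test log file." > "$spool/test-spool-1.log"

for f in "$spool"/*.log; do
  cat "$f"                 # ingest the finished file in one go
  mv "$f" "$f.COMPLETED"   # mark it consumed so it is never re-read
done

ls "$spool"                # prints test-spool-1.log.COMPLETED
```

On the next pass the glob finds no `*.log` files, so nothing is re-ingested, which is exactly why half-written, still-rotating files must not be dropped into the spool directory.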

01 Choosing the Channel

A memory channel. It is fast, but buffered events are lost if the agent dies; that is acceptable for this exercise.

02 Choosing the Sink

Since the data has to land in HDFS, use the hdfs sink.

03 The conf File

# Component definitions
# agent name : agent
# sources    : s1
# channels   : c1
# sinks      : k1
agent.sources = s1
agent.channels = c1
agent.sinks = k1

# Use the Spooling Directory Source
agent.sources.s1.type = spooldir
# Bind source s1 to channel c1
agent.sources.s1.channels = c1
# Directory to watch for new files
agent.sources.s1.spoolDir = /var/log/flumeSpool
# Put each file's absolute path into the event header
agent.sources.s1.fileHeader = true
# Only pick up files whose names end in .log
agent.sources.s1.includePattern = ^(.)*\\.log$

# Use a memory channel
agent.channels.c1.type = memory

# Use the hdfs sink
agent.sinks.k1.type = hdfs
# Bind sink k1 to channel c1
agent.sinks.k1.channel = c1
# Target directory in HDFS (time escape sequences)
agent.sinks.k1.hdfs.path = /flume/spool/%y-%m-%d/%H
# File name prefix
agent.sinks.k1.hdfs.filePrefix = spool
# File name suffix
agent.sinks.k1.hdfs.fileSuffix = .log
# Round the timestamp used in the path escapes...
agent.sinks.k1.hdfs.round = true
# ...down to a multiple of 1 hour
agent.sinks.k1.hdfs.roundValue = 1
agent.sinks.k1.hdfs.roundUnit = hour
# Take the timestamp from the local clock instead of an event header
agent.sinks.k1.hdfs.useLocalTimeStamp = true
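With the properties above saved to a file, the agent can be launched with flume-ng; the conf-file path and conf directory below are assumptions, while the agent name `agent` matches the property prefixes above:

```shell
# start the agent; the conf-file location is an assumed path
flume-ng agent \
  --name agent \
  --conf /opt/flume/conf \
  --conf-file /opt/flume/conf/spool-hdfs.conf \
  -Dflume.root.logger=INFO,console
```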

04 Other Preparation

On the Flume node

Create the spool directory: mkdir -p /var/log/flumeSpool

On HDFS

Create the target directory: hadoop fs -mkdir -p /flume/spool/

# hadoop fs -ls /flume
Found 1 items
drwxr-xr-x   - root supergroup          0 2021-04-21 16:30 /flume/spool

Sample data

echo "i am test log file." >>/var/log/flumeSpool/test-spool-1.log
echo "i am test log file." >>/var/log/flumeSpool/test-spool-2.log

05 Test Run

step 1

Generate test data

# echo "i am test log file." >>/var/log/flumeSpool/test-spool-1.log

Check the spool directory

# ll /var/log/flumeSpool/
total 4
-rw-r----- 1 root root 20 Apr 22 10:06 test-spool-1.log.COMPLETED

Check the HDFS directory

# hadoop fs -ls /flume/spool/*/*/*
-rw-r--r--   3 root supergroup        154 2021-04-22 10:06 /flume/spool/21-04-22/10/spool.1619057188656.log.tmp

Flume log

2021-04-22 10:06:26,976 (pool-5-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:384)] 
Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2021-04-22 10:06:26,976 (pool-5-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:497)] 
Preparing to move file /var/log/flumeSpool/test-spool-1.log to /var/log/flumeSpool/test-spool-1.log.COMPLETED
2021-04-22 10:06:28,655 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSSequenceFile.configure(HDFSSequenceFile.java:63)] 
writeFormat = Writable, UseRawLocalFileSystem = false
2021-04-22 10:06:28,673 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:246)] 
Creating /flume/spool/21-04-22/10/spool.1619057188656.log.tmp

step 2

Wait about 30 s. The hdfs sink rolls (closes and renames) the open .tmp file after hdfs.rollInterval, which defaults to 30 seconds.

step 3

Flume log

2021-04-22 10:06:58,722 (hdfs-k1-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink$1.run(HDFSEventSink.java:393)] 
Writer callback called.
2021-04-22 10:06:58,722 (hdfs-k1-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:438)] 
Closing /flume/spool/21-04-22/10/spool.1619057188656.log.tmp
2021-04-22 10:06:58,774 (hdfs-k1-call-runner-7) [INFO - org.apache.flume.sink.hdfs.BucketWriter$7.call(BucketWriter.java:681)] 
Renaming /flume/spool/21-04-22/10/spool.1619057188656.log.tmp to /flume/spool/21-04-22/10/spool.1619057188656.log

Check the HDFS directory again

# hadoop fs -ls /flume/spool/*/*/*
-rw-r--r-- 3 root supergroup 154 2021-04-22 10:06 /flume/spool/21-04-22/10/spool.1619057188656.log

step 4

Repeat the steps above with a second file; the directories then look like this:

# ll /var/log/flumeSpool/
total 8
-rw-r----- 1 root root 20 Apr 22 10:06 test-spool-1.log.COMPLETED
-rw-r----- 1 root root 20 Apr 22 10:15 test-spool-2.log.COMPLETED
# hadoop fs -ls /flume/spool/*/*/*
-rw-r--r-- 3 root supergroup 154 2021-04-22 10:06 /flume/spool/21-04-22/10/spool.1619057188656.log
-rw-r--r-- 3 root supergroup 154 2021-04-22 10:16 /flume/spool/21-04-22/10/spool.1619057757821.log

Because of the %y-%m-%d/%H escapes in hdfs.path, combined with the three hdfs.round* settings, output directories are created per hour: /flume/spool/21-04-22/10/
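As a sanity check, the bucket path can be reproduced from the epoch embedded in the generated file name, spool.1619057188656.log (milliseconds since the epoch). This uses GNU date, and the timezone is an assumption matching the local clock in the transcript (UTC+8):

```shell
# 1619057188656 ms -> 1619057188 s; format it the way hdfs.path does
TZ=Asia/Shanghai date -d "@1619057188" +"/flume/spool/%y-%m-%d/%H"
# prints /flume/spool/21-04-22/10
```

hdfs.round only adjusts the timestamp used for these escapes; since %H is the finest unit in the path, rounding down to the hour simply keeps every event from a 10:xx window in the same /10 directory.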
