00 Choosing a Source
There are two candidates here:
exec Source
Spooling Directory Source
With the exec approach, log file rotation can cause parts of log files to be skipped, so data may be silently dropped.
We therefore choose the second option, the spooldir approach (a contrast sketch of the exec source follows below).
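For contrast, a minimal exec source sketch, assuming a hypothetical /var/log/app/app.log being tailed. The exec source does not track file position, so events around a rotation can be missed and nothing is replayed after an agent restart:
agent.sources.s1.type = exec
agent.sources.s1.command = tail -F /var/log/app/app.log
agent.sources.s1.channels = c1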
01 Choosing a Channel
memory channel: events are buffered in RAM, which is fast, but they are lost if the agent process dies; acceptable for this test.
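Optionally the channel can be tuned; a sketch with illustrative values (the Flume defaults for both parameters are 100, which is already enough for this test):
#max number of events buffered in the channel (default 100)
agent.channels.c1.capacity = 1000
#max number of events per source/sink transaction (default 100)
agent.channels.c1.transactionCapacity = 100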
02 Choosing a Sink
Since the data must land in HDFS, we choose the hdfs sink.
03 conf file configuration
#definitions
#agent: agent
#sources: s1
#channels: c1
#sinks: k1
agent.sources = s1
agent.channels = c1
agent.sinks = k1
#use the Spooling Directory Source
agent.sources.s1.type = spooldir
#bind source s1 to channel c1
agent.sources.s1.channels = c1
#directory to watch for new files
agent.sources.s1.spoolDir = /var/log/flumeSpool
#add the absolute path of each file to the event headers
agent.sources.s1.fileHeader = true
#only pick up files whose names end in .log
agent.sources.s1.includePattern = ^(.)*\\.log$
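#note: the backslash is doubled ("\\.") because Flume reads this file with the
#Java properties loader, where a single backslash starts an escape sequence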
#use the memory channel
agent.channels.c1.type = memory
#use the hdfs sink
agent.sinks.k1.type = hdfs
#bind sink k1 to channel c1
agent.sinks.k1.channel = c1
#target directory in HDFS; %y-%m-%d/%H are escape sequences filled from the event timestamp
agent.sinks.k1.hdfs.path = /flume/spool/%y-%m-%d/%H
#file name prefix
agent.sinks.k1.hdfs.filePrefix = spool
#file name suffix
agent.sinks.k1.hdfs.fileSuffix = .log
#round the timestamp down (controls the granularity of the time escapes)
agent.sinks.k1.hdfs.round = true
agent.sinks.k1.hdfs.roundValue = 1
agent.sinks.k1.hdfs.roundUnit = hour
#use the local time (rather than a timestamp event header) for the escape sequences
agent.sinks.k1.hdfs.useLocalTimeStamp = true
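With the configuration saved, the agent can be started. A typical invocation (the file name spool-hdfs.conf is an assumption; --name must match the agent prefix used in the config above):
flume-ng agent \
  --conf ./conf \
  --conf-file ./conf/spool-hdfs.conf \
  --name agent \
  -Dflume.root.logger=INFO,console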
04 Other Preparation
On the Flume node
Create the spool directory: mkdir -p /var/log/flumeSpool
On HDFS
Create the target directory: hadoop fs -mkdir -p /flume/spool/
# hadoop fs -ls /flume
Found 1 items
drwxr-xr-x - root supergroup 0 2021-04-21 16:30 /flume/spool
Sample data (these commands are used in the test below)
echo "i am test log file." >>/var/log/flumeSpool/test-spool-1.log
echo "i am test log file." >>/var/log/flumeSpool/test-spool-2.log
05 Test Procedure
step 1
Generate test data
# echo "i am test log file." >>/var/log/flumeSpool/test-spool-1.log
Check the log directory: the spooldir source has renamed the fully consumed file with its default .COMPLETED suffix
# ll /var/log/flumeSpool/
total 4
-rw-r----- 1 root root 20 Apr 22 10:06 test-spool-1.log.COMPLETED
Check the HDFS directory
# hadoop fs -ls /flume/spool/*/*/*
-rw-r--r-- 3 root supergroup 154 2021-04-22 10:06 /flume/spool/21-04-22/10/spool.1619057188656.log.tmp
Flume log
2021-04-22 10:06:26,976 (pool-5-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:384)]
Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2021-04-22 10:06:26,976 (pool-5-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:497)]
Preparing to move file /var/log/flumeSpool/test-spool-1.log to /var/log/flumeSpool/test-spool-1.log.COMPLETED
2021-04-22 10:06:28,655 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSSequenceFile.configure(HDFSSequenceFile.java:63)]
writeFormat = Writable, UseRawLocalFileSystem = false
2021-04-22 10:06:28,673 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:246)]
Creating /flume/spool/21-04-22/10/spool.1619057188656.log.tmp
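Note that the 20-byte test line becomes a 154-byte file in HDFS: as the writeFormat = Writable log line above shows, the sink defaults to hdfs.fileType = SequenceFile. If plain text output is wanted instead, one option (not used in this walkthrough) is:
#write raw event bodies instead of a SequenceFile
agent.sinks.k1.hdfs.fileType = DataStream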
step 2
Wait 30 s. The HDFS sink's default hdfs.rollInterval is 30 seconds, after which the open .tmp file is closed and renamed (see the roll-setting sketch below).
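For reference, the three triggers that close and rename a .tmp file, shown here set explicitly to their default values (a sketch; any one of them firing rolls the file):
#roll after 30 seconds (0 disables time-based rolling)
agent.sinks.k1.hdfs.rollInterval = 30
#roll after 1024 bytes (0 disables size-based rolling)
agent.sinks.k1.hdfs.rollSize = 1024
#roll after 10 events (0 disables count-based rolling)
agent.sinks.k1.hdfs.rollCount = 10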
step 3
Flume log
2021-04-22 10:06:58,722 (hdfs-k1-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink$1.run(HDFSEventSink.java:393)]
Writer callback called.
2021-04-22 10:06:58,722 (hdfs-k1-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:438)]
Closing /flume/spool/21-04-22/10/spool.1619057188656.log.tmp
2021-04-22 10:06:58,774 (hdfs-k1-call-runner-7) [INFO - org.apache.flume.sink.hdfs.BucketWriter$7.call(BucketWriter.java:681)]
Renaming /flume/spool/21-04-22/10/spool.1619057188656.log.tmp to /flume/spool/21-04-22/10/spool.1619057188656.log
Check the HDFS directory again
# hadoop fs -ls /flume/spool/*/*/*
-rw-r--r-- 3 root supergroup 154 2021-04-22 10:06 /flume/spool/21-04-22/10/spool.1619057188656.log
step 4
Repeat the above steps once more, this time with test-spool-2.log; the directories then look like:
# ll /var/log/flumeSpool/
total 8
-rw-r----- 1 root root 20 Apr 22 10:06 test-spool-1.log.COMPLETED
-rw-r----- 1 root root 20 Apr 22 10:15 test-spool-2.log.COMPLETED
# hadoop fs -ls /flume/spool/*/*/*
-rw-r--r-- 3 root supergroup 154 2021-04-22 10:06 /flume/spool/21-04-22/10/spool.1619057188656.log
-rw-r--r-- 3 root supergroup 154 2021-04-22 10:16 /flume/spool/21-04-22/10/spool.1619057757821.log
Because of the three hdfs.round* settings, directories are created at one-hour granularity: both files land in /flume/spool/21-04-22/10/ even though they were written about ten minutes apart.
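The same mechanism supports finer buckets. A sketch (not used above) that rounds timestamps down to 10-minute boundaries, so an event stamped 10:16 would land under a .../1010 directory:
agent.sinks.k1.hdfs.path = /flume/spool/%y-%m-%d/%H%M
agent.sinks.k1.hdfs.round = true
agent.sinks.k1.hdfs.roundValue = 10
agent.sinks.k1.hdfs.roundUnit = minute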