1. Prepare the environment
CentOS 7, JDK 1.7, Hadoop 2.6.1, apache-flume-1.6.0-bin.tar.gz
2. Write the configuration file
Create the configuration file (hdfs-logger.conf, the name used in the run command below) under /home/flume/conf:
# Name the three main components
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1
# Configure the source component
agent1.sources.source1.type = spooldir
agent1.sources.source1.spoolDir = /home/data
agent1.sources.source1.fileHeader = false
# Configure the interceptor
agent1.sources.source1.interceptors = i1
agent1.sources.source1.interceptors.i1.type = host
agent1.sources.source1.interceptors.i1.hostHeader = hostname
# Configure the sink component
agent1.sinks.sink1.type = hdfs
# Output directories are bucketed by time via the escape sequences in the path
# (a comment must be on its own line in a Flume config; appended to a value it
# would become part of the path)
agent1.sinks.sink1.hdfs.path = hdfs://server1:9000/flume/collection/%y-%m-%d/%H-%M
agent1.sinks.sink1.hdfs.filePrefix = access_log
agent1.sinks.sink1.hdfs.maxOpenFiles = 5000
agent1.sinks.sink1.hdfs.batchSize= 100
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text
agent1.sinks.sink1.hdfs.rollSize = 102400
agent1.sinks.sink1.hdfs.rollCount = 1000000
agent1.sinks.sink1.hdfs.rollInterval = 60
agent1.sinks.sink1.hdfs.useLocalTimeStamp = true
# Configure the channel component
agent1.channels.channel1.type = memory
agent1.channels.channel1.keep-alive = 120
agent1.channels.channel1.capacity = 500000
agent1.channels.channel1.transactionCapacity = 600
# Wire the components together
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
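The %y-%m-%d/%H-%M escapes in hdfs.path follow strftime-style conventions, so a new HDFS directory is created each minute; with useLocalTimeStamp = true the values come from the agent's local clock. As a quick illustration (not part of the setup itself), date uses the same format specifiers and can preview the directory a file written right now would land in:

```shell
# Preview the time-bucketed directory that hdfs.path would expand to at this
# moment. date shares the %y/%m/%d/%H/%M specifiers with Flume's path escapes,
# so the printed path matches what the HDFS sink would create.
date +"flume/collection/%y-%m-%d/%H-%M"
```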
Create the data directory that the source watches:
mkdir -p /home/data
3. Run the agent
From the /home/flume directory, start the agent:
bin/flume-ng agent -c conf -f conf/hdfs-logger.conf -n agent1 -Dflume.root.logger=INFO,console
Here -c points to the configuration directory, -f to the agent configuration file, and -n to the agent name defined inside that file (agent1).
Once the agent is running, drop .txt files into /home/data.
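The spooldir source expects each file to be complete and immutable by the time it appears in the directory, so the safe pattern is to write the file elsewhere and then move it in. A minimal sketch (the /tmp paths and file contents here are made up for illustration and stand in for /home/data):

```shell
# Stand-in for the real spool directory /home/data, so the sketch is self-contained
SPOOL_DIR="${SPOOL_DIR:-/tmp/flume-spool-demo}"
mkdir -p "$SPOOL_DIR"

# Write the file outside the spool directory first, then move it in, so the
# spooldir source never observes a half-written file
echo "test log line 1" > /tmp/sample.txt
mv /tmp/sample.txt "$SPOOL_DIR/sample.txt"
ls "$SPOOL_DIR"
```

After Flume has ingested a file it renames it with a .COMPLETED suffix; re-dropping a file whose name already exists in the directory (even as .COMPLETED) makes the source fail, so always use fresh file names.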
4. View the results
List the collected files under the Flume output directory on HDFS:
bin/hadoop fs -ls -R /flume/collection
Since the sink writes plain text (fileType = DataStream, writeFormat = Text), the event bodies can be printed with bin/hadoop fs -cat on any of the access_log files.
5. Troubleshooting
If HDFS writes fail because the NameNode is stuck in safe mode, the NameNode reports:
Resources are low on NN. Please add or free up more resources then turn off safe mode manually.
NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode.
Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
Free up disk space on the NameNode first (otherwise it immediately re-enters safe mode, as the note warns), then leave safe mode from the Hadoop directory:
bin/hdfs dfsadmin -safemode leave
(The bin/hadoop dfsadmin form seen in older tutorials still works in Hadoop 2.x but is deprecated.) The current state can be checked with bin/hdfs dfsadmin -safemode get.