Collecting Files into HDFS with Flume
This guide assumes Flume and Hadoop are already installed.
1. A pitfall I ran into
When installing Hadoop, be careful with the core-site.xml configuration:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
The value above must use a hostname (master here) or an IP address. Do not use localhost (verified to fail) or 127.0.0.1; with those, Flume keeps throwing errors when it tries to write to HDFS.
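If you use a hostname, it must resolve to the NameNode's real address on every machine involved. A minimal sketch of the /etc/hosts entry, with a placeholder IP for the master node:

# /etc/hosts on each node (placeholder IP; use your NameNode's actual address)
192.168.1.10    master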
2. Flume configuration file
Create a test directory under the Flume installation directory, and write the configuration file inside it:
vim test-flume-hdfs.conf
with the following content:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /usr/local/flume/test/test-flume-hdfs.log
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://<IP-or-hostname>:9000/user/root/test/%y-%m-%d/%H-%M/
a1.sinks.k1.hdfs.filePrefix = events-
# Round down the timestamp used in the directory path to a multiple of 10 minutes
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
# Roll to a new file every 3 seconds, or at 20 bytes, or after 5 events, whichever comes first
a1.sinks.k1.hdfs.rollInterval = 3
a1.sinks.k1.hdfs.rollSize = 20
a1.sinks.k1.hdfs.rollCount = 5
# Flush to HDFS after every event
a1.sinks.k1.hdfs.batchSize = 1
# Use local time (not an event header) to fill in the %y-%m-%d/%H-%M escapes
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Output file type: the default is SequenceFile; DataStream produces plain text
a1.sinks.k1.hdfs.fileType = DataStream
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
3. Start Hadoop
sbin/start-all.sh
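Before starting Flume, it is worth checking that HDFS is actually up. A quick sanity check, assuming Hadoop's bin directory is on your PATH:

jps                                  # should list NameNode, DataNode, etc.
hdfs dfs -mkdir -p /user/root/test   # optionally pre-create the target directory
hdfs dfs -ls /user/root              # confirms HDFS is reachable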
4. Start Flume
bin/flume-ng agent -c conf -f test/test-flume-hdfs.conf -n a1 -Dflume.root.logger=INFO,console
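Run this from the Flume installation directory. The agent stays in the foreground and logs to the console; to keep it running after the terminal is closed, a common variant (my addition, not part of the original setup) is:

nohup bin/flume-ng agent -c conf -f test/test-flume-hdfs.conf -n a1 > flume.log 2>&1 &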
5. Write test data
Create test-flume-hdfs.log under the Flume test directory, write some content into it, and save:
test flume-hdfs
hello flume hdfs
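Because the source tails the file with tail -F, simply appending lines produces new events. For example, the two lines above can be appended with:

echo "test flume-hdfs" >> /usr/local/flume/test/test-flume-hdfs.log
echo "hello flume hdfs" >> /usr/local/flume/test/test-flume-hdfs.log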
Result
The test data from the two writes lands under the timestamped HDFS directory, and downloading the generated files shows the same lines.
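To verify from the command line (the timestamped subdirectories will differ from run to run):

hdfs dfs -ls -R /user/root/test                  # list the generated events-* files
hdfs dfs -cat '/user/root/test/*/*/events-*'     # print their contents
hdfs dfs -get '/user/root/test/*/*/events-*' .   # download them locally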