1. Send data to the agent and have it written to HDFS
In the $FLUME_HOME/conf directory, copy flume-conf.properties.template, rename the copy to flumetest2, and edit it as follows:
a1.sources= r1
a1.sinks= k1
a1.channels= c1
a1.sources.r1.type= exec
a1.sources.r1.channels= c1
a1.sources.r1.command= tail -F /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-namenode-hadoop0.log
a1.sinks.k1.type= hdfs
a1.sinks.k1.channel= c1
a1.sinks.k1.hdfs.path= hdfs://hadoop0:9000/outputs
a1.sinks.k1.hdfs.filePrefix= events-
a1.sinks.k1.hdfs.round= true
a1.sinks.k1.hdfs.roundValue= 10
a1.sinks.k1.hdfs.roundUnit= minute
a1.sinks.k1.hdfs.rollSize= 4000000
a1.sinks.k1.hdfs.rollCount= 0
a1.sinks.k1.hdfs.writeFormat= Text
a1.sinks.k1.hdfs.fileType= DataStream
a1.sinks.k1.hdfs.batchSize= 10
a1.channels.c1.type= memory
a1.channels.c1.capacity= 1000
a1.channels.c1.transactionCapacity= 100
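The exec source above simply runs `tail -F` and forwards each new line as an event. A minimal, self-contained sketch of that behavior using a temporary file (the path is a throwaway stand-in, not the NameNode log from the config):

```shell
# Demonstrate the tail -F semantics the exec source relies on:
# new lines appended to the file become new events.
log=$(mktemp)
echo "line 1" >> "$log"
tail -n 1 "$log"          # prints: line 1
echo "line 2" >> "$log"
tail -n 1 "$log"          # prints: line 2  (what a running tail -F would emit next)
rm -f "$log"
```

Note that `tail -F` (unlike `tail -f`) keeps following the file across log rotation, which is why it is the usual choice for long-running log collection.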
2. Run the following from the Flume install directory, flume-1.5.0-bin:
./bin/flume-ng agent --conf ./conf/ --conf-file ./conf/flumetest2 --name a1 -Dflume.root.logger=INFO,console
Flume now continuously collects new lines from hadoop-hadoop-namenode-hadoop0.log and writes them to HDFS.
Check the files under /outputs in HDFS:
- hadoop fs -ls /outputs
- hadoop fs -cat /outputs/events-.1436691811727
- cat /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-namenode-hadoop0.log
- hadoop fs -ls /outputs
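One simple sanity check is to compare line counts between the local log and what has landed in HDFS. A minimal sketch of the idea using two placeholder files (`local_log` and `hdfs_dump` are hypothetical stand-ins; in practice the first would be the NameNode log above and the second the output of `hadoop fs -cat /outputs/events-*`):

```shell
# Hypothetical stand-ins for the tailed log and the concatenated HDFS files.
local_log=$(mktemp); hdfs_dump=$(mktemp)
printf 'a\nb\nc\n' > "$local_log"
printf 'a\nb\nc\n' > "$hdfs_dump"
# If delivery has caught up, the line counts should match.
if [ "$(wc -l < "$local_log")" -eq "$(wc -l < "$hdfs_dump")" ]; then
  echo "line counts match"
fi
rm -f "$local_log" "$hdfs_dump"
```

Counts can lag briefly while events sit in the memory channel or an open .tmp file, so a small temporary difference is normal.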
3. As time passes, the contents of /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-namenode-hadoop0.log keep growing, and the file listing from hadoop fs -ls /outputs keeps updating accordingly: Flume log collection is working as expected.
Adapted from: http://f.dataguru.cn/thread-523804-1-1.html