(1) Produce data locally into Kafka (start the Flume agent that collects the log data):
nohup /opt/module/flume/bin/flume-ng agent --conf-file /opt/module/flume/conf/file-flume-kafka.conf --name a1 -Dflume.root.logger=INFO,LOGFILE >/dev/null 2>&1 &
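Before moving on, it can help to confirm the events actually reached Kafka. A minimal check, assuming the topic is named `topic_start` (the real topic name is whatever `file-flume-kafka.conf` configures in its Kafka sink, and the broker address is an assumption as well):

```shell
# Consume a few messages from the beginning of the topic to verify ingestion.
# Adjust --bootstrap-server and --topic to match your cluster and conf file.
/opt/module/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server hadoop102:9092 \
  --topic topic_start \
  --from-beginning \
  --max-messages 10
```

If messages print, the first Flume agent is delivering data; press Ctrl+C to stop consuming.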
(2) Upload the data from Kafka to HDFS (start the second Flume agent):
nohup /opt/module/flume/bin/flume-ng agent --conf-file /opt/module/flume/conf/kafka-flume-hdfs.conf --name a1 -Dflume.root.logger=INFO,LOGFILE >/opt/module/flume/log.txt 2>&1 &
(3) Flume only collects newly generated data, so generate data again:
java -classpath log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar com.smalltiger.appclient.AppMain >/opt/module/test.log
(4) Check the collected data on HDFS:
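A sketch of how to inspect the uploaded files, assuming the HDFS sink in `kafka-flume-hdfs.conf` writes under a date-partitioned path such as `/origin_data/log/` (substitute the actual `hdfs.path` from your conf file):

```shell
# List the files Flume wrote for today's partition.
# The /origin_data/log prefix is an assumption; use the hdfs.path from kafka-flume-hdfs.conf.
hdfs dfs -ls /origin_data/log/$(date +%Y-%m-%d)

# Peek at the head of one uploaded file to confirm the content looks right
# (use -text instead of -cat if the sink compresses its output, e.g. with LZO or gzip).
hdfs dfs -cat /origin_data/log/$(date +%Y-%m-%d)/* | head -n 5
```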