Since Flume writes to HDFS, it needs the corresponding Hadoop jar files, which can be found under the share directory of the Hadoop installation.
Use WinSCP to copy those jar files to the local Windows desktop.
Then copy the jar files into the lib directory of the Flume installation.
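As a rough sketch, the jars usually required are the Hadoop common/HDFS jars plus a few of their dependencies. The paths and version numbers below are assumptions for a Hadoop 2.7.x install under /usr/local/hadoop; adjust them to your environment:

cp /usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.2.jar /usr/local/flume/lib/
cp /usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar /usr/local/flume/lib/
cp /usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.7.2.jar /usr/local/flume/lib/
cp /usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar /usr/local/flume/lib/
cp /usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar /usr/local/flume/lib/
cp /usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.2.jar /usr/local/flume/lib/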
Next, write the flume-hdfs.conf configuration file.
Switch to the job directory under the Flume installation: cd /usr/local/flume/job
Create the flume-hdfs.conf file: vim flume-hdfs.conf
#name the components on this agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2
# Describe/configure the source
a2.sources.r2.type = exec
a2.sources.r2.command = tail -F /tmp/haitao/hive.log
a2.sources.r2.shell = /bin/bash -c
# Describe the sink
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://hadoop002:9000/flume/%Y%m%d/%H
# prefix for files created on HDFS
a2.sinks.k2.hdfs.filePrefix = logs-haitao-
# whether to round down the timestamp used in the directory path
a2.sinks.k2.hdfs.round = true
# how many time units per new directory
a2.sinks.k2.hdfs.roundValue = 1
# the time unit used for rounding
a2.sinks.k2.hdfs.roundUnit = hour
# use the local timestamp instead of one from the event header
a2.sinks.k2.hdfs.useLocalTimeStamp = true
# number of events to accumulate before flushing to HDFS
#a2.sinks.k2.hdfs.batchSize = 1000
# file format; DataStream writes plain text (compression is also supported)
a2.sinks.k2.hdfs.fileType = DataStream
# roll to a new file after this many seconds
a2.sinks.k2.hdfs.rollInterval = 60
# roll to a new file once it reaches this size (just under the 128 MB block size)
a2.sinks.k2.hdfs.rollSize = 134217700
# 0 = never roll based on the number of events
a2.sinks.k2.hdfs.rollCount = 0
# minimum number of HDFS block replicas (1 prevents premature rolling)
a2.sinks.k2.hdfs.minBlockReplicas = 1
# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2
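To make the path settings concrete: with useLocalTimeStamp, round, roundValue, and roundUnit set as above, an event taken in at, say, 14:35 local time on 2019-06-01 would land in a file such as (timestamp suffix illustrative):

hdfs://hadoop002:9000/flume/20190601/14/logs-haitao-1559370900000.tmp

The .tmp suffix is dropped once the file rolls, i.e. after 60 seconds or roughly 128 MB, whichever comes first.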
Run the agent with this configuration.
First, change to the Flume installation directory: cd /usr/local/flume
bin/flume-ng agent --conf conf/ --name a2 --conf-file job/flume-hdfs.conf
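While testing, it helps to also print the agent's log to the console; the standard Flume option for that is the flume.root.logger system property:

bin/flume-ng agent --conf conf/ --name a2 --conf-file job/flume-hdfs.conf -Dflume.root.logger=INFO,console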
In another terminal, switch to the directory where the Hive log is kept, /tmp/<username>/; in my case that is /tmp/haitao.
Run the following command: echo "请请zhi jing you you wo xin" >> hive.log
This manually appends a line to hive.log, which the exec source then picks up.
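To confirm the data arrived, list and read the files on HDFS. A sketch, assuming the agent is still running in the current hour so the date-based path matches:

hdfs dfs -ls /flume/$(date +%Y%m%d)/$(date +%H)
hdfs dfs -cat /flume/$(date +%Y%m%d)/$(date +%H)/logs-haitao-*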
FR: 徐海涛 (hunk Xu)
QQ tech group: 386476712
Reference: http://flume.apache.org/releases/content/1.9.0/FlumeUserGuide.html