Collecting local data into HDFS with Flume

Original post, 2016-08-30 19:34:20

Configuration:

agent1.sources = spooldirSource
agent1.channels = fileChannel
agent1.sinks = hdfsSink

agent1.sources.spooldirSource.type=spooldir
agent1.sources.spooldirSource.spoolDir=/opt/flume
agent1.sources.spooldirSource.channels=fileChannel

agent1.sinks.hdfsSink.type=hdfs
agent1.sinks.hdfsSink.hdfs.path=hdfs://192.168.200.45:8020/flume/cys/%y-%m-%d
agent1.sinks.hdfsSink.hdfs.filePrefix=cys
agent1.sinks.hdfsSink.hdfs.round = true
# Number of seconds to wait before rolling current file (0 = never roll based on time interval)
agent1.sinks.hdfsSink.hdfs.rollInterval = 3600
# File size to trigger roll, in bytes (0: never roll based on file size)
agent1.sinks.hdfsSink.hdfs.rollSize = 128000000
agent1.sinks.hdfsSink.hdfs.rollCount = 0
agent1.sinks.hdfsSink.hdfs.batchSize = 1000

#Rounded down to the highest multiple of this (in the unit configured using hdfs.roundUnit), less than current time.
agent1.sinks.hdfsSink.hdfs.roundValue = 1
agent1.sinks.hdfsSink.hdfs.roundUnit = minute
agent1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfsSink.channel=fileChannel
agent1.sinks.hdfsSink.hdfs.fileType = DataStream


agent1.channels.fileChannel.type = file
agent1.channels.fileChannel.checkpointDir=/usr/share/apache-flume-1.5.0-bin/checkpoint
agent1.channels.fileChannel.dataDirs=/usr/share/apache-flume-1.5.0-bin/dataDir
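
With the configuration in place, a quick smoke test is to drop a small file into the spool directory and confirm it shows up under the dated HDFS path. A minimal sketch (the test file name and contents are made up for illustration):

# write a test file into the spool directory; the spooldir source renames it
# to test.log.COMPLETED once it has been fully consumed
echo "hello flume" > /opt/flume/test.log

# after the sink flushes (batchSize events or a roll), the data appears under
# the dated directory built from hdfs.path
hdfs dfs -ls /flume/cys/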


Run the agent:

[root@sdzn-cdh01 conf.dist]# flume-ng agent -f test1   -n agent1 -Dflume.root.logger=INFO,console
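
For reference, the same start-up with the long-form options spelled out (a sketch; it assumes test1 is the properties file shown above and that the command is run from the directory containing it):

# --conf points at the directory holding flume-env.sh / log4j.properties,
# --conf-file at the agent properties file, --name at the agent defined inside it
flume-ng agent --conf . --conf-file test1 --name agent1 -Dflume.root.logger=INFO,console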


Exception 1:

HDFSEventSink.java:463)] HDFS IO error
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.; Host Details : local host is: "sdzn-cdh01.zhiyoubao.com/192.168.200.45"; destination host is: "sdzn-cdh01.zhiyoubao.com":9000;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
        at org.apache.hadoop.ipc.Client.call(Client.java:1415)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at $Proxy19.create(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:287)
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at $Proxy20.create(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1645)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1618)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1543)
        at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:396)
        at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:392)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:392)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:336)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
        at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:86)
        at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:113)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:273)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:262)
        at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:706)
        at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:183)
        at org.apache.flume.sink.hdfs.BucketWriter.access$1400(BucketWriter.java:59)
        at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:703)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)



The cause was missing jar files, as shown in the figure.

Copy the missing jars into:

[root@sdzn-cdh01 jars]# pwd
/opt/cloudera/parcels/CDH-5.3.6-1.cdh5.3.6.p0.11/jars
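
A quick way to check that the Hadoop client jars Flume needs are actually present there (a hedged sketch; the exact jars that were missing are not listed in the text, so the grep pattern is only an example):

# look for the Hadoop common/HDFS client jars and protobuf in the parcel's jar directory
ls /opt/cloudera/parcels/CDH-5.3.6-1.cdh5.3.6.p0.11/jars | grep -E 'hadoop-(common|hdfs)|protobuf-java'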


Exception 2:


[ERROR - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:256)] FATAL: Spool Directory source r1: { spoolDir: /home/hadoop/flumeSpool-2 }: Uncaught exception in SpoolDirectorySource thread. Restart or reconfigure Flume to continue processing.
java.nio.charset.MalformedInputException: Input length = 1
        at java.nio.charset.CoderResult.throwException(CoderResult.java:281)
        at org.apache.flume.serialization.ResettableFileInputStream.readChar(ResettableFileInputStream.java:195)

The data format is the problem: the spooled file's character encoding does not match what the source expects. Fixing the input data's format (encoding) resolves the error.

Link with the fix: http://www.cnblogs.com/zhoujingyu/p/5315403.html
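
The usual fix is to make sure files dropped into the spool directory use the charset the source expects (UTF-8 by default, configurable via the spooldir source's inputCharset property). A minimal sketch for re-encoding a file before spooling it (GBK is only an assumed source encoding; substitute the real one):

# convert a file to UTF-8 so the spooldir source can decode it
iconv -f GBK -t UTF-8 input.log > /opt/flume/input-utf8.log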


Copyright notice: This is an original article by the author; sharing and discussion are welcome.
