HDFS write failure: the first append to a file throws an exception

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): Failed to APPEND_FILE /surveillance/2018-01-02/host21-host.log for DFSClient_NONMAPREDUCE_-1806248259_128 on 192.168.1.121 because lease recovery is in progress. Try again later.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3145)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2905)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3212)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3181)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:767)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:432)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

    at org.apache.hadoop.ipc.Client.call(Client.java:1469)
    at org.apache.hadoop.ipc.Client.call(Client.java:1400)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy42.append(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:313)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy43.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1756)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1792)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1785)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:323)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:319)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:319)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1163)
    at com.trimps.bolt.storage.HdfsStorageBolt.writeFile(HdfsStorageBolt.java:130)
    at com.trimps.bolt.storage.HdfsStorageBolt.writeModel(HdfsStorageBolt.java:64)
    at com.trimps.bolt.storage.AbstractStorageBolt.execute(AbstractStorageBolt.java:96)
    at org.apache.storm.daemon.executor$fn__5030$tuple_action_fn__5032.invoke(executor.clj:729)
    at org.apache.storm.daemon.executor$mk_task_receiver$fn__4951.invoke(executor.clj:461)
    at org.apache.storm.disruptor$clojure_handler$reify__4465.onEvent(disruptor.clj:40)
    at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:482)
    at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:460)
    at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
    at org.apache.storm.daemon.executor$fn__5030$fn__5043$fn__5096.invoke(executor.clj:848)
    at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:748)

Reproducing the exception:
On the first run, the program calls create to write a brand-new file, and everything works fine.
The program is then stopped manually and restarted. This time it calls append to add content to the file created on the first run, and the call fails with the exception printed above. A minimal sketch of the scenario is shown below.
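
The following sketch condenses the two runs into one class, assuming a NameNode at hdfs://192.168.1.121:8020 (the host comes from the exception message; the port is an assumption) and the log path from the stack trace:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendRepro {
        private static final Path LOG = new Path("/surveillance/2018-01-02/host21-host.log");

        static FileSystem connect() throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://192.168.1.121:8020"); // assumed NameNode address
            return FileSystem.get(conf);
        }

        // Run 1: create and write, then exit WITHOUT calling close().
        // The dead client's lease on the file stays registered on the NameNode.
        static void firstRun() throws Exception {
            FSDataOutputStream out = connect().create(LOG);
            out.writeBytes("first run\n");
            out.hflush();       // data reaches the DataNodes, but the lease is kept
            System.exit(0);     // simulates stopping the program abruptly
        }

        // Run 2, in a fresh process: append to the same file. While the old
        // lease is still being recovered, this fails exactly as in the log above.
        static void secondRun() throws Exception {
            FSDataOutputStream out = connect().append(LOG);
            out.writeBytes("second run\n");
            out.close();
        }
    }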
The root cause is that HDFS internally manages write access to a file with a "lease", which works much like a file handle: a client must hold the file's lease before it may write. A lease carries two time limits: softLimit (default 1 minute), after which another client may force lease recovery, and hardLimit (default 1 hour), after which the NameNode recovers the lease on its own. When the first process exits without closing the file, its lease lingers on the NameNode; the restarted client's append triggers lease recovery, and until that recovery completes the NameNode rejects the append with the RecoveryInProgressException above.
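
One common mitigation, shown here as a sketch rather than a guaranteed fix: on the restart path, explicitly ask the NameNode to recover the stale lease and poll until the file is closed before calling append. recoverLease and isFileClosed are existing DistributedFileSystem APIs; the 60-second budget and 1-second poll interval are illustrative choices:

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class LeaseRecovery {
        // Ask the NameNode to start lease recovery and wait for it to finish.
        public static void recoverLease(FileSystem fs, Path path) throws Exception {
            if (!(fs instanceof DistributedFileSystem)) {
                return;                                 // lease recovery is HDFS-specific
            }
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            boolean recovered = dfs.recoverLease(path); // force recovery now instead of
                                                        // waiting out softLimit
            long deadline = System.currentTimeMillis() + 60_000; // illustrative budget
            while (!recovered && System.currentTimeMillis() < deadline) {
                Thread.sleep(1000);                     // recovery runs asynchronously
                recovered = dfs.isFileClosed(path);     // true once the lease is released
            }
            if (!recovered) {
                throw new IllegalStateException("lease on " + path + " not recovered");
            }
        }
    }

Calling this right before FileSystem.append turns the hard failure on restart into a short wait.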
My writer code does not call close after every write; to save the cost of re-establishing a connection for each write, the stream is closed only once a day, when rolling over to a new directory. A manual stop between rollovers therefore leaves the lease unreleased, which is exactly the failure scenario above. A defensive fix is sketched below.
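
Given that design, a small defensive change is to keep the long-lived stream but close it on a clean shutdown, so the lease is released before the process exits. This is a hedged sketch, not the actual HdfsStorageBolt code from the stack trace: the field is hypothetical, and Storm only invokes cleanup() on a graceful topology kill, so the recoverLease retry above is still needed as a backstop for crashes and kill -9.

    import java.io.IOException;

    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.storm.topology.base.BaseRichBolt;

    // Hypothetical excerpt; prepare()/execute() and the daily rollover are omitted.
    public abstract class ClosingStorageBolt extends BaseRichBolt {
        private FSDataOutputStream out;   // long-lived stream, rotated once per day

        @Override
        public void cleanup() {           // invoked on a graceful topology kill
            try {
                if (out != null) {
                    out.close();          // releases this client's lease on the NameNode
                }
            } catch (IOException e) {
                // log and continue; at worst the lease expires after hardLimit (1 h)
            }
        }
    }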
References:
http://jxy.me/2015/06/09/hdfs-data-visibility/
http://blog.csdn.net/androidlushangderen/article/details/52850349
http://www.cnblogs.com/ZisZ/p/3253570.html
