FileSystem close Exception

1.  Today, while writing an MR job, the reducer did not start for a long time after the map phase finished. No error was reported; the job was simply marked as failed...

Troubleshooting: I went to the JobTracker log (hadoop-hadoop-jobtracker-steven.log) to look for error details, and found:

2014-05-09 17:42:46,811 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /ies/result/_logs/history/job_201405091001_0005_1399626116239_hadoop_IES%5FResult%5F2014-05-09+17%3A01%3A55
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /ies/result/_logs/history/job_201405091001_0005_1399626116239_hadoop_IES%5FResult%5F2014-05-09+17%3A01%3A55 File does not exist. Holder DFSClient_NONMAPREDUCE_1956983432_28 does not have any open files
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1999)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1990)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1899)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
        at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

        at org.apache.hadoop.ipc.Client.call(Client.java:1113)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
2014-05-09 17:42:46,816 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201405091001_0005_m_000001_0'
2014-05-09 17:42:46,816 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201405091001_0005_m_000002_0'
2014-05-09 17:42:46,824 INFO org.apache.hadoop.mapred.JobHistory: Moving file:/home/hadoop/hadoop1.1.2/hadoop-1.2.1/logs/history/job_201405091001_0005_conf.xml to file:/home/hadoop/hadoop1.1.2/hadoop-1.2.1/logs/history/done/version-1/localhost_1399600913639_/2014/05/09/000000
2014-05-09 17:46:02,621 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201405091001_0006_m_000000_2: Task attempt_201405091001_0006_m_000000_2 failed to report status for 600 seconds. Killing!
2014-05-09 17:46:02,621 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201405091001_0006_m_000000_2'
2014-05-09 17:46:02,622 INFO org.apache.hadoop.mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_201405091001_0006_m_000000_2' to tip task_201405091001_0006_m_000000, for tracker 'tracker_steven:localhost/127.0.0.1:37363'
2014-05-09 17:46:05,041 INFO org.apache.hadoop.mapred.JobInProgress: Choosing a failed task task_201405091001_0006_m_000000
2014-05-09 17:46:05,042 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201405091001_0006_m_000000_3' to tip task_201405091001_0006_m_000000, for tracker 'tracker_steven:localhost/127.0.0.1:37363'
2014-05-09 17:46:05,042 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201405091001_0006_m_000000_2'
Cause of the error: a file failed to close, which points at the FileSystem. But when I went through my MR code, I couldn't find anywhere that a file was being closed. What was going on?!

I then found an article online that explains it: if the FileSystem has already been closed during the map phase, the program will fail when cleanup tries to close it again after the map finishes...

Generally, you should not call fs.close() when you do a FileSystem.get(...). FileSystem.get(...) won't actually open a "new" FileSystem object. When you do a close() on that FileSystem, you will close it for any upstream process as well.

For example, if you close the FileSystem during a mapper, your MapReduce driver will fail when it again tries to close the FileSystem on cleanup.
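To make the quoted warning concrete, here is a minimal sketch of the anti-pattern, with hypothetical class and key/value types not taken from my job. FileSystem.get(...) hands back a cached, JVM-wide instance, so closing it in one place closes it everywhere:

import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper showing the anti-pattern described above.
public class ClosingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.write(value, new LongWritable(1L));
    }

    @Override
    protected void cleanup(Context context) throws IOException {
        FileSystem fs = FileSystem.get(context.getConfiguration());
        // WRONG: this is the shared cached instance; closing it here also
        // closes it for framework code (e.g. the job history writer) that
        // still needs it, producing failures like the LeaseExpiredException above.
        fs.close();
    }
}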
But the problem is that I never close the FileSystem in my mapper's cleanup()! Going through the code more carefully, I found that my setup() called super.setup(context); could that be where the file stream got closed? After deleting that line, the job ran normally... which still puzzles me.
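For contrast, here is a minimal sketch of the safe pattern, assuming the new-API org.apache.hadoop.mapreduce.Mapper. (For the record, the stock Mapper.setup() is documented as a no-op, so super.setup(context) should normally be harmless; a custom base class overriding setup() would be one explanation for what I saw.) The rule is simply to never close the shared instance; if an independently closeable handle is truly needed, FileSystem.newInstance(...) bypasses the cache:

import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper showing the safe pattern.
public class SharedFsMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    private FileSystem fs;

    @Override
    protected void setup(Context context) throws IOException {
        // Shared cached instance: obtain it, use it, never close it.
        fs = FileSystem.get(context.getConfiguration());
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.write(value, new LongWritable(1L));
    }

    // No cleanup() override calling fs.close(): the JVM-wide cache owns
    // the instance, and it is closed at JVM shutdown.
}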


Reference: http://stackoverflow.com/questions/20492278/hdfs-filesystem-close-exception


