Hadoop Runtime Errors (1): Error: org.apache.hadoop.hdfs.BlockMissingException

15/03/18 09:59:21 INFO mapreduce.Job: Task Id : attempt_1426641074924_0002_m_000000_2, Status : FAILED
Error: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-35642051-192.168.199.91-1419581604721:blk_1073743091_2267 file=/filein/file_128M.txt
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:882)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:563)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:793)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:840)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at com.mr.AESEn.DataRecordReader.nextKeyValue(DataRecordReader.java:94)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

The above is the error thrown while the MapReduce job was running: Error: org.apache.hadoop.hdfs.BlockMissingException.
A quick search online suggests two likely causes: either one or more DataNodes are down, or the DataNodes cannot communicate with each other. With these two possibilities in mind, I started troubleshooting.
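Before logging in to each node, it can help to confirm from the master which block is actually missing and how many DataNodes the NameNode currently sees as live. A minimal check, assuming the same installation directory and input file as above (output omitted, since it will differ per cluster):

[hadoop@cMaster hadoop-2.5.2]$ bin/hdfs fsck /filein/file_128M.txt -files -blocks -locations
[hadoop@cMaster hadoop-2.5.2]$ bin/hdfs dfsadmin -report

fsck lists each block of the file and the DataNodes holding its replicas, while dfsadmin -report shows which DataNodes are live or dead. Either way, the next step was to log in to each slave and check which daemons were actually running: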

[hadoop@cMaster hadoop-2.5.2]$ ssh cSlave00
Last login: Tue Mar 17 08:38:10 2015 from missie-pc.lan
[hadoop@cSlave00 ~]$ jps
3952 Jps
2910 NodeManager

[hadoop@cMaster hadoop-2.5.2]$ ssh cSlave01
Last login: Tue Mar 17 08:38:13 2015 from missie-pc.lan
[hadoop@cSlave01 ~]$ jps
3051 NodeManager
2714 DataNode
4562 Jps

[hadoop@cMaster hadoop-2.5.2]$ ssh cSlave02
Last login: Tue Mar 17 08:38:15 2015 from missie-pc.lan
[hadoop@cSlave02 ~]$ jps
4154 Jps
2921 NodeManager

As you can see, the DataNode process has crashed on both cSlave00 and cSlave02: jps shows no DataNode on either of those nodes, while cSlave01 still has one running.
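Before restarting anything, it is also worth a quick look at why the DataNode processes died. Tailing the DataNode log on each affected node usually shows the reason for the exit; the path below follows the logging pattern visible in the start-up output later in this post, and the exact file name may differ on your installation:

[hadoop@cSlave00 ~]$ tail -n 50 /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-datanode-cSlave00.log

Whatever the log shows, bringing the DataNodes back up is the first thing to try.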

So:
1. Stop YARN and DFS
[hadoop@cMaster hadoop-2.5.2]$ sbin/stop-yarn.sh
[hadoop@cMaster hadoop-2.5.2]$ sbin/stop-dfs.sh

2. Restart DFS and YARN (a lighter alternative that avoids a full cluster restart is sketched after the start-up output below)
[hadoop@cMaster hadoop-2.5.2]$ sbin/start-dfs.sh
[hadoop@cMaster hadoop-2.5.2]$ sbin/start-yarn.sh

15/03/18 10:04:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [cMaster]
cMaster: starting namenode, logging to /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-namenode-cMaster.out
cSlave00: starting datanode, logging to /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-datanode-cSlave00.out
cSlave02: starting datanode, logging to /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-datanode-cSlave02.out
cSlave01: starting datanode, logging to /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-datanode-cSlave01.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-secondarynamenode-cMaster.out
15/03/18 10:04:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.5.2/logs/yarn-hadoop-resourcemanager-cMaster.out
cSlave01: starting nodemanager, logging to /home/hadoop/hadoop-2.5.2/logs/yarn-hadoop-nodemanager-cSlave01.out
cSlave02: starting nodemanager, logging to /home/hadoop/hadoop-2.5.2/logs/yarn-hadoop-nodemanager-cSlave02.out
cSlave00: starting nodemanager, logging to /home/hadoop/hadoop-2.5.2/logs/yarn-hadoop-nodemanager-cSlave00.out
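A full stop/start of the cluster works, but it is not strictly required here: since only the DataNode processes had died, they could instead be brought back individually on the affected nodes with hadoop-daemon.sh. A sketch, assuming the same installation path as above:

[hadoop@cSlave00 ~]$ /home/hadoop/hadoop-2.5.2/sbin/hadoop-daemon.sh start datanode
[hadoop@cSlave02 ~]$ /home/hadoop/hadoop-2.5.2/sbin/hadoop-daemon.sh start datanode

This avoids interrupting the ResourceManager and the healthy DataNode on cSlave01.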

With the daemons restarted, the Error: org.apache.hadoop.hdfs.BlockMissingException reported earlier no longer occurs.
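Before re-submitting the job, a quick sanity check is to run jps on the slaves again and confirm a DataNode process is present on each, and to run fsck on the input file to confirm the NameNode now reports it as HEALTHY rather than as having missing blocks (same file path as in the error above):

[hadoop@cMaster hadoop-2.5.2]$ bin/hdfs fsck /filein/file_128M.txt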
