Repost: a description of an EOFException on a Hadoop DataNode

I’ve noticed that occasionally a DataNode will be reported in the NameNode web UI as “dead” for a minute or two and then return to the live list. This morning I found this in the DataNode log:

2011-08-30 06:41:54,538 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_7836987981877259019_6951898 received exception java.io.EOFException: while trying to read 65557 bytes
2011-08-30 06:41:54,538 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.10.40:50010, storageID=DS-148472958-127.0.0.1-50010-1306244056289, infoPort=50075, ipcPort=50020):DataXceiver
java.io.EOFException: while trying to read 65557 bytes
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:270)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:314)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:378)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:534)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:417)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
2011-08-30 06:41:57,447 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Datanode 9 got response for connect ack  from downstream datanode with firstbadlink as 10.1.10.46:50010
2011-08-30 06:41:57,448 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Datanode 9 forwarding connect ack to upstream firstbadlink is 10.1.10.46:50010
2011-08-30 06:41:57,487 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock for block blk_-8495252012538377530_6951901 java.io.EOFException: while trying to read 65557 bytes
2011-08-30 06:41:57,560 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run():
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.DataOutputStream.writeLong(DataOutputStream.java:207)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:133)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1003)
    at java.lang.Thread.run(Thread.java:662)
2011-08-30 06:41:57,561 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskError: exception:
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.DataOutputStream.writeLong(DataOutputStream.java:207)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:133)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1003)
    at java.lang.Thread.run(Thread.java:662)
2011-08-30 06:41:58,673 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder blk_-8495252012538377530_6951901 9 Exception java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.DataOutputStream.writeLong(DataOutputStream.java:207)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.write(DataTransferProtocol.java:133)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1003)
    at java.lang.Thread.run(Thread.java:662)
2011-08-30 06:42:31,439 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Received block blk_-1075666488998162673_6951900 src: /10.1.10.49:47483 dest: /10.1.10.40:50010 of size 13398279
2011-08-30 06:42:31,649 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-8468812752051460822_6951935 src: /10.1.10.20:46923 dest: /10.1.10.40:50010
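
To make the first stack trace easier to read: BlockReceiver.readToBuf is reading a fixed-size data packet (here 65557 bytes) from the upstream writer, and the stream ends before that many bytes arrive. The following minimal sketch is plain Java, not Hadoop code, and the class and variable names are made up; it only shows how an EOFException of this shape arises when DataInputStream.readFully() hits end-of-stream early.

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

// Minimal sketch, not Hadoop code: a receiver expects a fixed-size packet,
// but the sender goes away after delivering only part of it.
public class EofSketch {
    public static void main(String[] args) {
        int expectedPacketSize = 65557;     // size the receiver was told to expect
        byte[] truncated = new byte[1024];  // the sender only managed to deliver 1 KB

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(truncated));
        byte[] buf = new byte[expectedPacketSize];
        try {
            // readFully() reads until buf is full or the stream ends;
            // if the stream ends first it throws EOFException.
            in.readFully(buf);
        } catch (EOFException e) {
            // Analogous to "while trying to read 65557 bytes": the datanode
            // logs this and tears down the write pipeline for that block.
            System.out.println("EOFException: stream ended before the full packet arrived");
        } catch (IOException e) {
            System.out.println("Other I/O error: " + e);
        }
    }
}

In other words, the message usually means the peer that was sending the block (a client or the previous datanode in the pipeline) closed the connection mid-packet.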


Does anyone have any idea what is happening or why?  Is this something I need to worry about?
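
As for the ClosedByInterruptException traces: the PacketResponder thread was interrupted while blocked writing a pipeline ack on an interruptible NIO channel, and interrupting a thread blocked in such an operation closes the channel and raises exactly this exception. The sketch below is plain Java NIO, with a Pipe standing in for the datanode's socket and made-up names; it reproduces that failure mode under those assumptions.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.Pipe;

// Minimal sketch: a writer thread stands in for the PacketResponder
// sending acks upstream over an interruptible channel.
public class InterruptSketch {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        Pipe.SinkChannel sink = pipe.sink();   // interruptible channel, like a SocketChannel

        Thread responder = new Thread(() -> {
            ByteBuffer ack = ByteBuffer.allocate(64 * 1024);
            try {
                // Nothing drains the pipe, so the write eventually blocks,
                // the way an ack write blocks when the upstream peer stalls.
                while (true) {
                    ack.clear();
                    sink.write(ack);
                }
            } catch (ClosedByInterruptException e) {
                // Interrupting a thread blocked in an interruptible channel
                // operation closes the channel and raises this exception.
                System.out.println("ClosedByInterruptException: responder interrupted mid-write");
            } catch (IOException e) {
                System.out.println("Other I/O error: " + e);
            }
        });

        responder.start();
        Thread.sleep(500);        // let the writer fill the pipe and block
        responder.interrupt();    // roughly what happens when the pipeline is torn down
        responder.join();
    }
}

Read that way, the ClosedByInterruptException warnings look like fallout from the earlier EOFException (the pipeline being torn down) rather than an independent failure, though that is an interpretation of the log, not something the log states directly.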
