Hadoop DataNode exits abnormally

One of the DataNodes in the cluster went down. The error was as follows:


2013-11-18 02:01:13,730 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.1.190:50010, storageID=DS-155659652-192.168.1.190-50010-1383619740465, infoPort=50075, ipcPort=50020):DataXceiver
java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 0 millis timeout left.
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:292)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:382)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:403)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:581)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:406)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
        at java.lang.Thread.run(Thread.java:619)
2013-11-18 02:01:14,731 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2013-11-18 02:01:28,233 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_106200183580401997_8995735
2013-11-18 02:01:28,233 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting DataBlockScanner thread.
2013-11-18 02:01:28,234 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
2013-11-18 02:01:28,234 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
2013-11-18 02:01:28,242 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2013-11-18 02:01:28,243 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at node-128-190/192.168.1.190
************************************************************/

The trace ends in DataXceiver.run (DataXceiver.java:112); the relevant code there is:

      long startTime = DataNode.now();
      // Dispatch on the opcode read from the client's socket stream.
      switch ( op ) {
      case DataTransferProtocol.OP_READ_BLOCK:
        readBlock( in );
        datanode.myMetrics.addReadBlockOp(DataNode.now() - startTime);
        if (local)
          datanode.myMetrics.incrReadsFromLocalClient();
        else
          datanode.myMetrics.incrReadsFromRemoteClient();
        break;
      case DataTransferProtocol.OP_WRITE_BLOCK:
        writeBlock( in );
        datanode.myMetrics.addWriteBlockOp(DataNode.now() - startTime);
        if (local)
          datanode.myMetrics.incrWritesFromLocalClient();
        else
          datanode.myMetrics.incrWritesFromRemoteClient();
        break;
At first glance this looks like a network I/O problem, but the NIC checked out fine, which made me think of disk I/O instead. Scrolling back up through the log, I found the following errors:

2013-11-18 02:01:11,027 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode.handleDiskError: Keep Running: false
2013-11-18 02:01:13,042 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down.
DataNode failed volumes:/data3/dfs/data/current;
2013-11-18 02:01:13,043 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock for block blk_-2665830487288167569_12466770 java.io.IOException: Read-only file system
2013-11-18 02:01:13,043 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder blk_-2665830487288167569_12466770 1 : Thread is interrupted.
2013-11-18 02:01:13,043 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for block blk_-2665830487288167569_12466770 terminating
2013-11-18 02:01:13,043 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-2665830487288167569_12466770 received exception java.io.IOException: Read-only file system
2013-11-18 02:01:13,043 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.1.190:50010, storageID=DS-155659652-192.168.1.190-50010-1383619740465, infoPort=50075, ipcPort=50020):DataXceiver
java.io.IOException: Read-only file system
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:480)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:581)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:406)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
        at java.lang.Thread.run(Thread.java:619)
"Read-only file system" usually means the kernel remounted /data3 read-only after hitting disk errors, so the failed volume, not the network, was the real culprit. Without changing anything, I restarted the DataNode and got the following error, which confirmed the theory:

2013-11-18 14:58:56,962 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-11-18 14:58:57,005 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
2013-11-18 14:58:57,020 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-11-18 14:58:57,022 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-11-18 14:58:57,022 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-11-18 14:58:57,124 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-11-18 14:58:57,308 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: can not create directory: /data3/dfs/data
2013-11-18 14:58:57,829 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: org.apache.hadoop.util.DiskChecker$DiskErrorException: Invalid value for volsFailed : 1 , Volumes tolerated : 0
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.<init>(FSDataset.java:974)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:403)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:309)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1590)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1608)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1734)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1751)

2013-11-18 14:58:57,831 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
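The DiskChecker exception is the DataNode's volume-failure guard: dfs.datanode.failed.volumes.tolerated defaults to 0, so even a single failed volume (volsFailed : 1) aborts startup. If losing one volume were acceptable, another option would be raising the tolerance; a minimal sketch (the value 1 here is an example, not the fix I used):

<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <!-- default is 0: any failed volume aborts DataNode startup -->
  <value>1</value>
</property>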

The fix: mask out the bad disk in hdfs-site.xml. Done.
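For reference, a minimal sketch of the change, assuming this node's dfs.data.dir originally listed /data1 through /data4 (only /data3 appears in the logs; adjust the other paths to your actual layout):

<property>
  <name>dfs.data.dir</name>
  <!-- /data3/dfs/data removed: its disk went read-only -->
  <value>/data1/dfs/data,/data2/dfs/data,/data4/dfs/data</value>
</property>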
