On the problem of some DataNodes failing to start

While checking the web UI on port 50070, I noticed that one node had gone down. Opening its log revealed the following error:

2014-10-11 14:42:51,415 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec

2014-10-11 14:42:51,421 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2014-10-11 14:42:51,413 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-10-11 14:42:51,415 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-10-11 14:42:51,415 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2014-10-11 14:42:51,415 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2014-10-11 14:42:51,432 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
2014-10-11 14:42:52,360 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 963ms
2014-10-11 14:42:52,374 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated rough (lockless) block report in 939 ms
2014-10-11 14:42:52,418 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 44 ms
2014-10-11 14:42:53,636 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2014-10-11 14:42:53,938 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_-109464409738880836_25874
2014-10-11 14:42:54,421 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action: DNA_REGISTER
2014-10-11 14:42:54,425 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished generating blocks being written report for 1 volumes in 0 seconds
2014-10-11 14:42:57,443 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 27 ms
2014-10-11 14:42:57,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node 192.168.56.3:50010 is attempting to report storage ID DS-634285571-192.168.56.3-50010-1408285772278. Node 192.168.56.4:50010 is expected to serve this storage.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:4776)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.processReport(FSNamesystem.java:3628)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReport(NameNode.java:1041)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at $Proxy5.blockReport(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1026)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1527)
    at java.lang.Thread.run(Thread.java:722)

2014-10-11 14:42:57,722 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50075
2014-10-11 14:42:57,730 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2014-10-11 14:42:57,730 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: exiting
2014-10-11 14:42:57,730 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: exiting
2014-10-11 14:42:57,731 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: exiting
2014-10-11 14:42:57,731 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 50020
2014-10-11 14:42:57,732 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2014-10-11 14:42:57,760 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2014-10-11 14:42:57,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
2014-10-11 14:42:57,765 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.56.3:50010, storageID=DS-634285571-192.168.56.3-50010-1408285772278, infoPort=50075, ipcPort=50020):DataXceiveServer:java.nio.channels.AsynchronousCloseException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:205)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:233)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:99)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
    at java.lang.Thread.run(Thread.java:722)

2014-10-11 14:42:57,765 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting DataXceiveServer




Cause: this node was created by directly copying another node, so both machines ended up with the same storageID, which makes the DataNode fail to start.
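
The storageID the NameNode complains about is persisted in the VERSION file under the DataNode's data directory, so the duplication is easy to confirm by comparing that file on the two machines. A minimal check, using this cluster's paths and IPs (adjust them to your own layout):

    # Print the persisted storage identity on each of the two DataNodes;
    # in Hadoop 1.x it lives under ${dfs.data.dir}/current/VERSION.
    ssh 192.168.56.3 'grep storageID /home/hadoop/dfs/data/current/VERSION'
    ssh 192.168.56.4 'grep storageID /home/hadoop/dfs/data/current/VERSION'
    # If both print the same DS-... value, the copied node is the culprit.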

Solution: on the affected machine, either delete the Hadoop storage directory /home/hadoop/dfs/data, or edit /home/hadoop/dfs/data/current/VERSION and change the IP portion of the storageID to that DataNode's own IP (storageID=DS-634285571-192.168.56.3-50010-1408285772278), then restart the cluster. A sketch of both options follows below.
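
Both fixes come down to a few commands on the node that holds the copied storageID. This is a rough sketch assuming the Hadoop 1.x control scripts are on the PATH and the data directory shown above; the paths and IPs are this cluster's, so substitute your own and double-check which node actually kept the copied ID:

    # Stop the failing DataNode first (or the whole cluster with stop-all.sh).
    hadoop-daemon.sh stop datanode

    # Option 1: wipe the copied data directory; the DataNode generates a fresh
    # storageID on the next start and the NameNode re-replicates any lost blocks.
    rm -rf /home/hadoop/dfs/data

    # Option 2: keep the data and make the storageID unique instead, by rewriting
    # the IP embedded in it to this node's own address. The old/new IPs below are
    # illustrative; edit the storageID= line in current/VERSION to fit your case.
    sed -i 's/^storageID=DS-634285571-192\.168\.56\.3/storageID=DS-634285571-192.168.56.4/' \
        /home/hadoop/dfs/data/current/VERSION

    # Restart the DataNode (or the cluster with start-all.sh) and confirm on the
    # 50070 web UI that the node shows up as live again.
    hadoop-daemon.sh start datanode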
