DataNode fails to start in a Hadoop cluster on Linux

1. After reformatting HDFS and restarting the Hadoop cluster, the DataNode would not start

On the NameNode web UI the DataNode list was empty: the page itself rendered normally, but no node data was shown. After searching for the cause for quite a while, I found that the DataNode process had not started.

I then checked the DataNode log and found the following error:

java.io.IOException: Incompatible clusterIDs in /local/bigdata/hadoop-3.3.6/data/datanode: namenode clusterID = CID-589c10c0-f245-44cd-8e82-728857dbab93; datanode clusterID = CID-b09fd669-d1d4-43e0-b230-5e36c89b192c
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:746)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:296)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:389)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:561)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:2059)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1995)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:394)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:312)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:891)
        at java.lang.Thread.run(Thread.java:750)
2023-11-11 17:12:26,242 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid 46d415da-00d9-459f-9991-dd1889651a5a) service to node1/192.168.42.139:9000. Exiting. 
java.io.IOException: All specified directories have failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:562)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:2059)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1995)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:394)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:312)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:891)
        at java.lang.Thread.run(Thread.java:750)
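The cause is exactly what the exception says: reformatting the NameNode generated a new clusterID, while the DataNode's storage directory still held the old one, so the two sides refuse to pair up. Before touching any data you can confirm the mismatch by comparing the clusterID lines of the two VERSION files (the paths below match the cat output shown later in this post; adjust them to your own dfs.namenode.name.dir and dfs.datanode.data.dir settings):

grep clusterID /usr/local/bigdata/hadoop-3.3.6/data/namenode/current/VERSION \
               /usr/local/bigdata/hadoop-3.3.6/data/datanode/current/VERSION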

In the end I stopped the cluster, deleted the data under both the namenode and datanode directories, reformatted the NameNode, and started the Hadoop cluster again. That fixed it.
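A minimal sketch of those steps for this disposable single-node test cluster, assuming the install and data directories shown in the logs above (never do this where the data matters, since it wipes every block):

/usr/local/bigdata/hadoop-3.3.6/sbin/stop-dfs.sh
# remove the old NameNode metadata and DataNode block storage
# (on a multi-node cluster, repeat the datanode cleanup on every DataNode host)
rm -rf /usr/local/bigdata/hadoop-3.3.6/data/namenode/* \
       /usr/local/bigdata/hadoop-3.3.6/data/datanode/*
# formatting generates a fresh clusterID that both sides will now share
/usr/local/bigdata/hadoop-3.3.6/bin/hdfs namenode -format
/usr/local/bigdata/hadoop-3.3.6/sbin/start-dfs.sh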

Note: the clocks on all servers need to be kept in sync. Normally you are not supposed to format the NameNode more than once; wiping the data directories like this is only something you can get away with in a test environment. In production, copy the clusterID from the NameNode's VERSION file into the DataNode's VERSION file instead (see the sketch at the end of this post). After a normal start the DataNode log looks like this:

2023-11-11 17:16:54,918 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-738434729-192.168.42.139-1699694199117: 32ms
2023-11-11 17:16:54,919 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-738434729-192.168.42.139-1699694199117 on volume /usr/local/bigdata/hadoop-3.3.6/data/datanode...
2023-11-11 17:16:54,919 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice: Replica Cache file: /usr/local/bigdata/hadoop-3.3.6/data/datanode/current/BP-738434729-192.168.42.139-1699694199117/current/replicas doesn't exist 
2023-11-11 17:16:54,955 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-738434729-192.168.42.139-1699694199117 on volume /usr/local/bigdata/hadoop-3.3.6/data/datanode: 36ms
2023-11-11 17:16:54,955 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map for block pool BP-738434729-192.168.42.139-1699694199117: 37ms
2023-11-11 17:16:54,956 INFO org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker: Scheduling a check for /usr/local/bigdata/hadoop-3.3.6/data/datanode
2023-11-11 17:16:54,987 INFO org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker: Scheduled health check for volume /usr/local/bigdata/hadoop-3.3.6/data/datanode
2023-11-11 17:16:54,993 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Now scanning bpid BP-738434729-192.168.42.139-1699694199117 on volume /usr/local/bigdata/hadoop-3.3.6/data/datanode
2023-11-11 17:16:54,994 WARN org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
2023-11-11 17:16:54,994 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting in 19374231ms with interval of 21600000ms and throttle limit of -1ms/s
2023-11-11 17:16:54,994 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/bigdata/hadoop-3.3.6/data/datanode, DS-53d37fc1-536f-4732-8e16-2fcc032838af): finished scanning block pool BP-738434729-192.168.42.139-1699694199117
2023-11-11 17:16:55,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-738434729-192.168.42.139-1699694199117 (Datanode Uuid 31203931-5c6b-4f85-b7ba-969b97a14854) service to node1/192.168.42.139:9000 beginning handshake with NN
2023-11-11 17:16:55,004 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/bigdata/hadoop-3.3.6/data/datanode, DS-53d37fc1-536f-4732-8e16-2fcc032838af): no suitable block pools found to scan.  Waiting 1814399987 ms.
2023-11-11 17:16:55,051 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-738434729-192.168.42.139-1699694199117 (Datanode Uuid 31203931-5c6b-4f85-b7ba-969b97a14854) service to node1/192.168.42.139:9000 successfully registered with NN
2023-11-11 17:16:55,052 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode node1/192.168.42.139:9000 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
2023-11-11 17:16:55,054 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting IBR Task Handler.
2023-11-11 17:16:55,092 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: After receiving heartbeat response, updating state of namenode node1:9000 to active
2023-11-11 17:16:55,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x15efe4f510e6c2c2 with lease ID 0x943e06b7110c03c3 to namenode: node1/192.168.42.139:9000,  containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 5 msecs to generate and 14 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2023-11-11 17:16:55,112 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-738434729-192.168.42.139-1699694199117
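With the DataNode up, the standard admin report should now list it:

hdfs dfsadmin -report

And the VERSION files on the NameNode and DataNode sides carry the same clusterID: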
[root@node1 current]# cat /usr/local/bigdata/hadoop-3.3.6/data/datanode/current/VERSION
#Sat Nov 11 17:16:54 CST 2023
storageID=DS-53d37fc1-536f-4732-8e16-2fcc032838af
clusterID=CID-4b675e4e-9cb8-460c-b4c0-09457a19aa68
cTime=0
datanodeUuid=31203931-5c6b-4f85-b7ba-969b97a14854
storageType=DATA_NODE
layoutVersion=-57
[root@node1 current]# cat /usr/local/bigdata/hadoop-3.3.6/data/namenode/current/VERSION
#Sat Nov 11 17:16:39 CST 2023
namespaceID=1932319601
clusterID=CID-4b675e4e-9cb8-460c-b4c0-09457a19aa68
cTime=1699694199117
storageType=NAME_NODE
blockpoolID=BP-738434729-192.168.42.139-1699694199117
layoutVersion=-66
[root@node1 current]# jps
35922 NameNode
37156 NodeManager
36168 DataNode
36924 ResourceManager
51885 Jps
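For the production-style fix mentioned earlier (keeping the existing blocks instead of wiping them), the idea is simply to make the DataNode's clusterID match the NameNode's again. A minimal sketch, assuming the VERSION paths shown above and that, as on this single node, both roles share a host; on a real cluster you would read the clusterID on the NameNode and edit the VERSION file on each DataNode host:

# read the NameNode's clusterID
NN_CID=$(grep '^clusterID=' /usr/local/bigdata/hadoop-3.3.6/data/namenode/current/VERSION | cut -d= -f2)
# write it into the DataNode's VERSION file
sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" /usr/local/bigdata/hadoop-3.3.6/data/datanode/current/VERSION
# restart just the DataNode daemon (Hadoop 3.x syntax)
hdfs --daemon start datanode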
