2018-06-01 17:01:27,102 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /home/hadoop/app/tmp/dfs/data: namenode clusterID = CID-64018bcc-836c-4a42-925e-2ebdabfd73ee; datanode clusterID = CID-a568c94d-0ad8-4f96-9eac-bd39caabe932
2018-06-01 17:01:27,103 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to hadoop000/172.19.231.229:8020. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1394)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1355)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:228)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:829)
at java.lang.Thread.run(Thread.java:745)
2018-06-01 17:01:27,105 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to hadoop000/172.19.231.229:8020
2018-06-01 17:01:27,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2018-06-01 17:01:29,111 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2018-06-01 17:01:29,113 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2018-06-01 17:01:29,116 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
This is the bug I just ran into. After consulting some references, I found the cause:
The warning says the two clusterIDs do not match. This happens when the namenode is reformatted after Hadoop has already been started: formatting generates a new clusterID for the namenode, while the datanode's storage directory still holds the old one, so the datanode refuses to register.
The log also points me at the directory /home/hadoop/app/tmp/dfs/data.
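The mismatch can be seen directly on disk: HDFS records the clusterID in a `current/VERSION` file under each storage directory. The following is a hypothetical illustration (not the author's commands) using throwaway files; on this node the real files would live under /home/hadoop/app/tmp/dfs/name and /home/hadoop/app/tmp/dfs/data.

```shell
# Recreate the two VERSION entries from the log in a demo directory.
mkdir -p demo/name/current demo/data/current
echo "clusterID=CID-64018bcc-836c-4a42-925e-2ebdabfd73ee" > demo/name/current/VERSION
echo "clusterID=CID-a568c94d-0ad8-4f96-9eac-bd39caabe932" > demo/data/current/VERSION

# Compare the clusterID line from each side.
nn_id=$(grep clusterID demo/name/current/VERSION)
dn_id=$(grep clusterID demo/data/current/VERSION)
if [ "$nn_id" != "$dn_id" ]; then
  echo "clusterIDs differ: datanode would fail to register"
fi
rm -rf demo
```

On a real node, grepping clusterID in both VERSION files the same way confirms whether this is the problem before deleting anything.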
So my first reaction was that the old files under tmp had not been cleared. The solution is to delete the tmp directory and then reformat HDFS.
To verify the fix, start HDFS again after formatting and run jps to check that the NameNode process is running.
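The steps above can be sketched as a shell session. This is a sketch, assuming hadoop.tmp.dir is /home/hadoop/app/tmp as shown in the log and that the Hadoop bin/sbin scripts are on PATH; note that deleting the tmp directory destroys all existing HDFS block data on this node.

```shell
# Stop HDFS before touching the storage directories.
stop-dfs.sh

# Delete the stale storage directory flagged in the log.
# WARNING: this wipes all HDFS data held on this node.
rm -rf /home/hadoop/app/tmp

# Reformat the namenode; this generates a fresh clusterID
# that the datanode will adopt on its next start.
hdfs namenode -format

# Restart HDFS and check the daemons.
start-dfs.sh
jps    # the output should list NameNode and DataNode
```

An alternative that preserves data would be to edit the datanode's VERSION file to match the namenode's clusterID, but on a throwaway setup like this one, wiping tmp and reformatting is the simplest route.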