Hadoop datanode fails to start
2017-03-06 22:40:03,030 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/ym/software/hadoop/hadoop-2.6.4/dfs/datanode/in_use.lock acquired by nodename 3108@ubuntu
2017-03-06 22:40:03,034 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /home/ym/software/hadoop/hadoop-2.6.4/dfs/datanode: namenode clusterID = CID-01dd2e0b-6758-42c8-9c33-123ee00a9e16; datanode clusterID = CID-a8b991fd-cafb-4ed5-a012-23d2f4e52f0b
2017-03-06 22:40:03,035 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to ubuntu/172.19.12.172:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1338)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1304)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:226)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:867)
at java.lang.Thread.run(Thread.java:745)
2017-03-06 22:40:03,037 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to ubuntu/172.19.12.172:9000
2017-03-06 22:40:03,044 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2017-03-06 22:40:05,044 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-03-06 22:40:05,047 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-03-06 22:40:05,050 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
At first the hadoop + hbase environment worked fine. Then I tried to install zookeeper myself, and after that all kinds of problems appeared. Notes below:
First of all, when starting hadoop, make sure the processes listed further below are all running.

After locating the problem, the fixes were as follows:

1. The datanode's clusterID does not match the namenode's clusterID. Fix: copy the clusterID from the VERSION file under name/current into the VERSION file under data/current, overwriting the old clusterID so the two match. Then restart and run jps to check the processes:
16864 NameNode
17331 ResourceManager
17780 Jps
17006 DataNode
17182 SecondaryNameNode
17471 NodeManager
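The VERSION edit in step 1 can be scripted instead of done by hand. A minimal sketch, assuming the directory layout from the log above (the dfs/name path for the namenode is an assumption; substitute your actual dfs.namenode.name.dir and dfs.datanode.data.dir values):

```shell
# Paths are assumptions based on the log above; adjust to your config.
HADOOP_DIR=/home/ym/software/hadoop/hadoop-2.6.4
NN_VERSION="$HADOOP_DIR/dfs/name/current/VERSION"      # namenode metadata
DN_VERSION="$HADOOP_DIR/dfs/datanode/current/VERSION"  # datanode metadata

# Read the namenode's clusterID (the part after "clusterID=")
CID=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)

# Overwrite the datanode's clusterID so the two sides match
sed -i "s/^clusterID=.*/clusterID=$CID/" "$DN_VERSION"
```

After this, restart the datanode and verify with jps that it stays up.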
2. If the above still does not work, delete all files under the dfs.namenode.name.dir and dfs.datanode.data.dir directories.
Root cause: the config files originally used an IP address, which was later changed to the configured hostname, and the namenode was then formatted again, producing the mismatched clusterIDs.
3. Re-format the namenode: bin/hadoop namenode -format
4. Start the cluster.
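Steps 2 to 4 can be sketched as one script. Warning: this is destructive and wipes all HDFS data; paths are assumptions based on the log above, and in Hadoop 2.x the hdfs command is the current form of the deprecated hadoop namenode -format:

```shell
# Destructive: removes all HDFS data. Paths are assumptions; adjust to
# your dfs.namenode.name.dir / dfs.datanode.data.dir settings.
HADOOP_HOME=/home/ym/software/hadoop/hadoop-2.6.4

# Step 2: clear the namenode and datanode storage directories
rm -rf "$HADOOP_HOME/dfs/name/"* "$HADOOP_HOME/dfs/datanode/"*

# Step 3: re-format the namenode (writes a fresh clusterID)
"$HADOOP_HOME/bin/hdfs" namenode -format

# Step 4: start HDFS and YARN
"$HADOOP_HOME/sbin/start-dfs.sh"
"$HADOOP_HOME/sbin/start-yarn.sh"
```

Because both storage directories are emptied before formatting, the datanode picks up the freshly generated clusterID on first start, so the mismatch cannot recur.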