NameNode and DataNode fail to start
Setup: three virtual machines, with
node01 as NameNode and ResourceManager,
node02 as SecondaryNameNode.
Starting HDFS on its own:
[root@node01 ~]# start-dfs.sh
22/08/21 12:55:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [node01]
node01: starting namenode, logging to /usr/apps/hadoop-2.7.4/logs/hadoop-root-namenode-node01.out
node03: starting datanode, logging to /usr/apps/hadoop-2.7.4/logs/hadoop-root-datanode-node03.out
node02: starting datanode, logging to /usr/apps/hadoop-2.7.4/logs/hadoop-root-datanode-node02.out
node01: starting datanode, logging to /usr/apps/hadoop-2.7.4/logs/hadoop-root-datanode-node01.out
Starting secondary namenodes [node02]
node02: starting secondarynamenode, logging to /usr/apps/hadoop-2.7.4/logs/hadoop-root-secondarynamenode-node02.out
22/08/21 12:56:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@node01 ~]# jps
3569 NameNode
3665 DataNode
4413 Jps
-----------------------------------------------
[root@node02 ~]# jps
2064 DataNode
2137 SecondaryNameNode
2204 Jps
------------------------------------------------
[root@node03 ~]# jps
1890 DataNode
1963 Jps
The output above is the expected, healthy state.
The problem: the DataNode or NameNode process is missing from `jps`. Checking the log with cat /usr/apps/hadoop-2.7.4/logs/hadoop-root-datanode-node01.log, the last error reads:
org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /usr/software/hadoop-2.6.0-cdh5.7.1/src/dfs/data: namenode clusterID = CID-60612a30-5886-4222-bc97-ece8e3d5b9d6; datanode clusterID = CID-14425489-e594-4458-8633-aa5ac7880299
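The mismatch can be confirmed straight from that log. A small sketch (the log path is the one reported by `start-dfs.sh`; `LOG` can be overridden for a different node):

```shell
# Print every distinct clusterID mentioned in the DataNode log.
# A healthy node shows a single ID; this failure shows two.
LOG=${LOG:-/usr/apps/hadoop-2.7.4/logs/hadoop-root-datanode-node01.log}
grep -o 'clusterID = CID-[0-9a-f-]*' "$LOG" | sort -u
```

If two lines come back, the DataNode's stored clusterID no longer matches the NameNode's.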
Before finding the real cause, I tried formatting the NameNode several times, deleting the temporary files under /tmp, and even reinstalling Hadoop; none of it helped. In fact, formatting repeatedly with hdfs namenode -format is exactly what produces the error above: each format generates a fresh clusterID on the NameNode, while the DataNodes keep the old one.
------------------------------------------------
Solution: make the clusterID of the NameNode on node01 and the clusterID of each DataNode identical.
Where to look: the storage location is set in /usr/apps/hadoop/etc/hadoop/core-site.xml under the Hadoop installation directory:
```
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://node01:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/data/hadoop</value>
</property>
```
Per `hadoop.tmp.dir` above, Hadoop's runtime files live under /usr/data/hadoop. If a DataNode did not start, go to that machine and edit the VERSION file under /usr/data/hadoop/dfs/data/current; if the NameNode did not start, edit the VERSION file under /usr/data/hadoop/dfs/name/current, so that the two clusterIDs match.
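The edit can be scripted; a minimal sketch assuming the layout above (`NN_VERSION` and `DN_VERSION` are the paths implied by `hadoop.tmp.dir`; override them if your directory differs). Step 1 runs on node01, step 2 on each machine whose DataNode failed; stop HDFS first with stop-dfs.sh and restart with start-dfs.sh afterwards:

```shell
# 1) On node01: read the NameNode's authoritative clusterID from VERSION.
NN_VERSION=${NN_VERSION:-/usr/data/hadoop/dfs/name/current/VERSION}
CID=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)
echo "NameNode clusterID: $CID"

# 2) On each affected node: rewrite the DataNode's clusterID to match.
DN_VERSION=${DN_VERSION:-/usr/data/hadoop/dfs/data/current/VERSION}
sed -i "s/^clusterID=.*/clusterID=$CID/" "$DN_VERSION"
```

After restarting HDFS, `jps` on every node should show the processes listed in the healthy output above.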
The same fix applies when the DataNode or NameNode fails to appear after starting YARN.