After running start-all.sh, the DataNode does not come up

1. Check the running processes. The jps output below shows that the DataNode did not start:

[root@S1PA124 current]# jps
23614 Jps
9773 SecondaryNameNode
9440 NameNode
4480 NetworkServerControl
10080 NodeManager
14183 Bootstrap
9948 ResourceManager

2. Check the DataNode log

2014-08-19 18:48:50,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (storage id unknown) service to S1PA124/10.58.22.221:9000 starting to offer service
2014-08-19 18:48:50,007 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-08-19 18:48:50,007 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-08-19 18:48:50,251 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /root/install/hadoop/hdfs/data/in_use.lock acquired by nodename 9589@S1PA124
2014-08-19 18:48:50,253 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1848960540-10.58.22.221-1408444185577 (storage id DS-1132615068-10.58.22.221-50010-1408443768577) service to S1PA124/10.58.22.221:9000
java.io.IOException: Incompatible clusterIDs in /root/install/hadoop/hdfs/data: namenode clusterID = CID-108975ad-89b8-4e89-902a-46f0fe4d0a7f; datanode clusterID = CID-901268b8-21d6-43dc-ab3c-0c5785560ae9
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
        at java.lang.Thread.run(Thread.java:744)
2014-08-19 18:48:50,255 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-1848960540-10.58.22.221-1408444185577 (storage id DS-1132615068-10.58.22.221-50010-1408443768577) service to S1PA124/10.58.22.221:9000
2014-08-19 18:48:50,357 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-1848960540-10.58.22.221-1408444185577 (storage id DS-1132615068-10.58.22.221-50010-1408443768577)
2014-08-19 18:48:52,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-08-19 18:48:52,361 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-08-19 18:48:52,363 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at S1PA124/10.58.22.221
************************************************************/

3. Find the storage directories configured in hdfs-site.xml; the VERSION files that record the IDs live under them

<configuration>
        <property>
                <name>dfs.name.dir</name>
                <value>/root/install/hadoop/hdfs/name</value>
        </property>
        <property>
                <name>dfs.data.dir</name>
                <value>/root/install/hadoop/hdfs/data</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
</configuration>
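
If it is not obvious which directories the running configuration actually resolves, hdfs getconf can print a property value directly. A minimal sketch, assuming the hdfs command is on the PATH (dfs.name.dir and dfs.data.dir above are the older aliases of these keys):

hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir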

4. Check the NameNode's clusterID

[root@S1PA124 name]# cd current/
[root@S1PA124 current]# ls
edits_0000000000000000001-0000000000000000001  edits_0000000000000000164-0000000000000000165  edits_0000000000000000184-0000000000000000185
edits_0000000000000000002-0000000000000000003  edits_0000000000000000166-0000000000000000167  edits_0000000000000000186-0000000000000000187
edits_0000000000000000004-0000000000000000004  edits_0000000000000000168-0000000000000000169  edits_inprogress_0000000000000000188
edits_0000000000000000005-0000000000000000006  edits_0000000000000000170-0000000000000000171  fsimage_0000000000000000185
edits_0000000000000000007-0000000000000000007  edits_0000000000000000172-0000000000000000173  fsimage_0000000000000000185.md5
edits_0000000000000000008-0000000000000000010  edits_0000000000000000174-0000000000000000175  fsimage_0000000000000000187
edits_0000000000000000011-0000000000000000092  edits_0000000000000000176-0000000000000000177  fsimage_0000000000000000187.md5
edits_0000000000000000093-0000000000000000135  edits_0000000000000000178-0000000000000000179  seen_txid
edits_0000000000000000136-0000000000000000161  edits_0000000000000000180-0000000000000000181  VERSION
edits_0000000000000000162-0000000000000000163  edits_0000000000000000182-0000000000000000183
[root@S1PA124 current]# cat VERSION 
#Tue Aug 19 18:29:45 CST 2014
namespaceID=489160979
clusterID=CID-108975ad-89b8-4e89-902a-46f0fe4d0a7f
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1848960540-10.58.22.221-1408444185577
layoutVersion=-47
[root@S1PA124 current]# pwd
/root/install/hadoop/hdfs/name/current

5. Check the DataNode's clusterID

[root@S1PA124 data]# cd current/
[root@S1PA124 current]# ls
BP-1599675853-10.58.50.110-1408439336890  VERSION
[root@S1PA124 current]# cat VERSION 
#Tue Aug 19 18:22:48 CST 2014
storageID=DS-1132615068-10.58.22.221-50010-1408443768577
clusterID=CID-901268b8-21d6-43dc-ab3c-0c5785560ae9
cTime=0
storageType=DATA_NODE
layoutVersion=-47
[root@S1PA124 current]# pwd
/root/install/hadoop/hdfs/data/current
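
A quick way to put the two IDs side by side (paths taken from the hdfs-site.xml above):

grep clusterID /root/install/hadoop/hdfs/name/current/VERSION /root/install/hadoop/hdfs/data/current/VERSION

grep prefixes each matching line with its file name, which makes the mismatch obvious at a glance.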

6. Change the DataNode's clusterID to match the NameNode's
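
One way to do this is to copy the NameNode's clusterID into the VERSION file under the DataNode's data directory. A minimal sketch using the paths from hdfs-site.xml above (back up VERSION first; run the second command on every DataNode host, and only when you want to keep the existing data directory rather than wipe it):

NN_CID=$(grep clusterID /root/install/hadoop/hdfs/name/current/VERSION | cut -d= -f2)
sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" /root/install/hadoop/hdfs/data/current/VERSION

Editing the file by hand works just as well; all that matters is that clusterID in the DataNode's VERSION ends up identical to the NameNode's.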

7. Restart the cluster

[root@S1PA124 current]# start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/08/20 09:45:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [S1PA124]
S1PA124: starting namenode, logging to /root/install/hadoop-2.2.0/logs/hadoop-root-namenode-S1PA124.out
fk01: starting datanode, logging to /root/install/hadoop-2.2.0/logs/hadoop-root-datanode-fk01.out
fulfillment: starting datanode, logging to /root/install/hadoop-2.2.0/logs/hadoop-root-datanode-fulfillment.out
S1PA124: starting datanode, logging to /root/install/hadoop-2.2.0/logs/hadoop-root-datanode-S1PA124.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /root/install/hadoop-2.2.0/logs/hadoop-root-secondarynamenode-S1PA124.out
14/08/20 09:45:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /root/install/hadoop-2.2.0/logs/yarn-root-resourcemanager-S1PA124.out
fk01: starting nodemanager, logging to /root/install/hadoop-2.2.0/logs/yarn-root-nodemanager-fk01.out
fulfillment: starting nodemanager, logging to /root/install/hadoop-2.2.0/logs/yarn-root-nodemanager-fulfillment.out
S1PA124: starting nodemanager, logging to /root/install/hadoop-2.2.0/logs/yarn-root-nodemanager-S1PA124.out
[root@S1PA124 current]# jps
24310 NameNode
4480 NetworkServerControl
24893 ResourceManager
25354 Jps
25026 NodeManager
14183 Bootstrap
24478 DataNode
24669 SecondaryNameNode

8. Root cause summary

As the log shows, the failure is caused by the DataNode's clusterID not matching the NameNode's clusterID.

Open the NameNode and DataNode directories configured in hdfs-site.xml and look at the VERSION file under each one's current folder: the clusterID values are indeed different, exactly as the log reports. Change the clusterID in the DataNode's VERSION file to match the NameNode's, restart the cluster (run start-all.sh), and jps then shows the DataNode running normally.

How the problem arises: HDFS was formatted once, Hadoop was started and used, and then the format command (hdfs namenode -format) was run again. Reformatting generates a new clusterID for the NameNode, while the DataNode's clusterID stays unchanged.
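
If the NameNode really does need to be reformatted again, the mismatch can be avoided by either clearing the DataNode data directories first, or by passing the cluster's current ID to the format command so the DataNodes' VERSION files still match. A sketch of the latter, using the ID from this cluster (the -clusterid option is taken from the Hadoop 2.x usage text; check hdfs namenode -help on your version before relying on it):

hdfs namenode -format -clusterid CID-108975ad-89b8-4e89-902a-46f0fe4d0a7f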
