Hadoop 2.3.0 error: Configured Capacity: 0 (0 B), Present Capacity: 0 (0 B), DFS Remaining: 0 (0 B), DFS Used: 0 (0 B)

I really can't believe myself...

Today the error from before showed up again:

hadoop@hapmaster:~/hadoop-2.3.0/sbin$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)

hadoop@hapmaster:~/hadoop-2.3.0/sbin$ 
Yesterday I hit this error because I had run hdfs namenode -format several times, which left the namenode and datanodes with different namespaceIDs; deleting the dfs.data.dir directory configured on each datanode fixed it.
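That namespaceID comparison can be scripted instead of eyeballed. A minimal sketch, assuming each storage directory keeps its VERSION file under current/ (the real dfs.name.dir and dfs.data.dir paths come from your hdfs-site.xml, so the paths below are placeholders):

```shell
# Pull the namespaceID out of a storage directory's VERSION file
# (VERSION is a Java-properties file, one key=value per line).
namespace_id() {
  grep '^namespaceID=' "$1/current/VERSION" | cut -d= -f2
}

# Compare the namenode's ID against a datanode's and report the result.
check_namespace_match() {
  nn_id=$(namespace_id "$1")
  dn_id=$(namespace_id "$2")
  if [ "$nn_id" = "$dn_id" ]; then
    echo "match: $nn_id"
  else
    echo "mismatch: namenode=$nn_id datanode=$dn_id"
  fi
}
```

If the IDs differ, wiping the datanode's data directory (its blocks are lost) and restarting lets it re-register under the namenode's current namespaceID.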

But this time the difference is that the datanodes are already running:

hadoop@hapmaster:~/hadoop-2.3.0/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hapmaster]
hapmaster: namenode running as process 15828. Stop it first.
hapslave4: datanode running as process 8675. Stop it first.
hapslave3: datanode running as process 8854. Stop it first.
hapslave1: datanode running as process 9104. Stop it first.
hapslave2: datanode running as process 8986. Stop it first.
Starting secondary namenodes [hapmaster]
hapmaster: secondarynamenode running as process 16151. Stop it first.
starting yarn daemons
resourcemanager running as process 16309. Stop it first.
hapslave4: nodemanager running as process 8895. Stop it first.
hapslave1: nodemanager running as process 9324. Stop it first.
hapslave3: nodemanager running as process 9079. Stop it first.
hapslave2: nodemanager running as process 9202. Stop it first.
Could it be a network problem?
A quick ping test showed the namenode could ping the datanodes, but not the other way around. It turned out the IPs were not on the same subnet; after fixing the IPs and restarting, everything worked.
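The subnet mismatch can be checked mechanically rather than by staring at addresses. A minimal sketch (the sample addresses in the test are placeholders, not this cluster's real ones):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# same_subnet IP1 IP2 PREFIX -- prints "same" if both addresses fall in
# the same network for the given prefix length, "different" otherwise.
same_subnet() {
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  if [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]; then
    echo same
  else
    echo different
  fi
}
```

Hosts on different networks need a router between them; without one, the datanodes' heartbeats never reach the namenode, and dfsadmin -report shows 0 datanodes even though every daemon is up.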
