After Hadoop was configured and the services started, jps showed a running datanode process, but the datanode log kept printing the errors below, and the namenode web UI on port 50070 reported no live nodes.

2015-04-22 14:17:29,908 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/192.168.1.100:53310 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
2015-04-22 14:17:29,908 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool BP-172857601-192.168.1.100-1429683180778 (storage id DS-1882029846-192.168.1.100-50010-1429520454466) service to master/192.168.1.100:53310
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:439)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:525)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:676)
        at java.lang.Thread.run(Thread.java:744)

(The same INFO/ERROR pair repeats on every heartbeat, e.g. again at 2015-04-22 14:17:34,910.)
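If the web UI is not handy, the same symptom can be checked from the command line. This is just a quick sanity check, assuming the HDFS client scripts are on the PATH:

## With this problem the report shows zero live datanodes even though jps shows the process
hdfs dfsadmin -report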

Likely cause: the cluster had four machines with 6 data disks each, and on one machine the disk directory names differed from the other three. When the dfs.data.dir directories were created, creation apparently failed on that machine, and I then went ahead and ran various format operations anyway.
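Before formatting anything, it may be worth confirming on every datanode that each configured dfs.data.dir path actually exists and is writable. A minimal sketch follows; the /data01 ... /data06 paths are placeholders, so substitute the comma-separated value of dfs.data.dir from your own hdfs-site.xml:

## Placeholder paths -- replace with the entries of dfs.data.dir from hdfs-site.xml
for d in /data01/dfs/dn /data02/dfs/dn /data03/dfs/dn /data04/dfs/dn /data05/dfs/dn /data06/dfs/dn; do
  if [ -d "$d" ] && [ -w "$d" ]; then
    echo "OK      $d"
  else
    echo "MISSING $d"
  fi
done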
Solution
1. Stop the whole cluster.
2. Delete the directories configured as hadoop.tmp.dir, dfs.name.dir, dfs.journalnode.edits.dir, etc.
3. Delete the dfs.data.dir directories.
4. Re-run the rebuild steps; a cleanup sketch for steps 2 and 3 is shown first, followed by the full command sequence.
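On each node, the directory cleanup in steps 2 and 3 might look roughly like the following. The paths are placeholders for whatever hadoop.tmp.dir, dfs.name.dir, dfs.journalnode.edits.dir and dfs.data.dir resolve to in core-site.xml / hdfs-site.xml, so double-check them against your own configuration before deleting anything:

## Placeholder paths -- substitute the actual values from core-site.xml and hdfs-site.xml
rm -rf /data/hadoop/tmp            ## hadoop.tmp.dir
rm -rf /data/hadoop/dfs/name       ## dfs.name.dir
rm -rf /data/hadoop/journal        ## dfs.journalnode.edits.dir
rm -rf /data01/dfs/dn /data02/dfs/dn /data03/dfs/dn /data04/dfs/dn /data05/dfs/dn /data06/dfs/dn   ## dfs.data.dir, one entry per disk

With the old directories gone, the rebuild sequence for step 4 is: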
## Start the zookeeper service on every node
zkServer.sh start
## On one of the namenode nodes, run the following command to create the namespace in ZooKeeper
hdfs zkfc -formatZK
## Start the journalnode daemon on every node with the following command
hadoop-daemon.sh start journalnode
## On the primary namenode, format the namenode and journalnode directories
hadoop namenode -format mycluster
## Start the namenode process on the primary namenode
hadoop-daemon.sh start namenode
## The following command formats the standby namenode's directory and copies the metadata over from the primary (run it on the standby node)
hdfs namenode -bootstrapStandby
## Start the standby namenode
hadoop-daemon.sh start namenode
## Start the zkfc service on both namenode nodes
hadoop-daemon.sh start zkfc
## Start the datanode on all datanode nodes
hadoop-daemon.sh start datanode
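Once every daemon is up, it may be worth confirming that the datanodes now register with the active namenode and that the BPOfferService errors are gone from their logs. A small hedged check, where nn1 and nn2 are placeholder NameNode IDs (use whatever is configured under dfs.ha.namenodes.mycluster):

## Should now report all datanodes as live; the web UI on port 50070 should match
hdfs dfsadmin -report
## Verify that one namenode is active and the other standby
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2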