NameNode process does not show up in jps after starting Hadoop: how to reformat HDFS

Sometimes Hadoop acts up: after starting the services, jps keeps failing to show the expected processes. In that case we can reformat HDFS on top of the existing installation (personally tested and verified to work):
How to reformat the HDFS filesystem:


(1) Check hdfs-site.xml:

<property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/hadoop-2.4.0/hdfs/name</value>
    <description>Where the NameNode stores the HDFS namespace metadata</description>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/hadoop-2.4.0/hdfs/data</value>
    <description>Physical location where the DataNode stores its data blocks</description>
</property>

Delete everything under the dfs.namenode.name.dir and dfs.datanode.data.dir directories, for example as shown below.
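
A minimal sketch, assuming the paths configured above; adjust them to your own configuration and run this on every host that acts as a NameNode or DataNode:

    # clear the NameNode metadata directory (keep the directory itself)
    rm -rf /usr/local/hadoop/hadoop-2.4.0/hdfs/name/*
    # clear the DataNode block storage directory
    rm -rf /usr/local/hadoop/hadoop-2.4.0/hdfs/data/*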


(2) Check core-site.xml:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/hadoop-2.4.0/hadoop_tmp</value>
    <description>Local Hadoop temporary directory on the NameNode</description>
</property>

Delete everything under the hadoop.tmp.dir directory, for example as shown below.
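
Again as a sketch, assuming the path configured above:

    # clear Hadoop's local temporary directory
    rm -rf /usr/local/hadoop/hadoop-2.4.0/hadoop_tmp/*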


(3) With Hadoop running:
Re-run the command: hadoop namenode -format
Output ending with the following messages indicates that formatting finished and succeeded:

Re-format filesystem in QJM to [192.168.2.113:8485, 192.168.2.114:8485, 192.168.2.115:8485] ? (Y or N) Y
15/07/31 10:02:05 INFO common.Storage: Storage directory /usr/local/hadoop/hadoop-2.4.0/hdfs/name has been successfully formatted.
15/07/31 10:02:06 INFO namenode.FSImage: Saving image file /usr/local/hadoop/hadoop-2.4.0/hdfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
15/07/31 10:02:06 INFO namenode.FSImage: Image file /usr/local/hadoop/hadoop-2.4.0/hdfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
15/07/31 10:02:06 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/07/31 10:02:06 INFO util.ExitUtil: Exiting with status 0
15/07/31 10:02:06 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop01/192.168.2.111
************************************************************/
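
After the format succeeds, restart HDFS and confirm that the NameNode process now appears in jps. A sketch, assuming the standard Hadoop 2.x sbin scripts and that HADOOP_HOME points at /usr/local/hadoop/hadoop-2.4.0:

    # restart the HDFS daemons
    $HADOOP_HOME/sbin/stop-dfs.sh
    $HADOOP_HOME/sbin/start-dfs.sh

    # the NameNode should now be listed
    jps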

Notes:
1. All of the original data is wiped out and a brand-new HDFS is created.
2. Before formatting, confirm that the contents of the directories above have been deleted and that HDFS is running.
3. If you run hadoop namenode -format while Hadoop is not running, you will get an error like the following:

15/07/31 09:53:58 INFO ipc.Client: Retrying connect to server: hadoop03/192.168.2.113:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/31 09:53:58 INFO ipc.Client: Retrying connect to server: hadoop05/192.168.2.115:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/07/31 09:53:58 FATAL namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 2 exceptions thrown:
192.168.2.115:8485: Call From hadoop01/192.168.2.111 to hadoop05:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.2.113:8485: Call From hadoop01/192.168.2.111 to hadoop03:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
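
The "Connection refused" errors above mean the JournalNodes on port 8485 are not running. As a sketch, assuming the Hadoop 2.x layout used throughout this post, the JournalNodes can be started before retrying the format:

    # run on each JournalNode host (192.168.2.113, 192.168.2.114 and 192.168.2.115 in the log above)
    $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode

    # verify that the JournalNode process is listed
    jps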