Resolving a NameNode Format Failure

Original post, 2015-11-18 18:32:55

Running the NameNode format command failed with the following log output:
15/11/18 17:59:30 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/11/18 17:59:30 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/11/18 17:59:30 INFO util.GSet: VM type       = 64-bit
15/11/18 17:59:30 INFO util.GSet: 0.029999999329447746% max memory 3.4 GB = 1.0 MB
15/11/18 17:59:30 INFO util.GSet: capacity      = 2^17 = 131072 entries
15/11/18 17:59:30 INFO namenode.NNConf: ACLs enabled? false
15/11/18 17:59:30 INFO namenode.NNConf: XAttrs enabled? true
15/11/18 17:59:30 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/11/18 17:59:30 WARN ssl.FileBasedKeyStoresFactory: The property 'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
15/11/18 17:59:32 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:32 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:32 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:33 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:33 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:33 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:34 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:34 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:34 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:35 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:35 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:35 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:36 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:36 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:36 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:37 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:37 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:37 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:38 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:38 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:38 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:39 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:39 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:39 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:40 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:40 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:40 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:41 INFO ipc.Client: Retrying connect to server: hadoop203/192.168.10.203:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:41 INFO ipc.Client: Retrying connect to server: hadoop206/192.168.10.206:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:41 WARN namenode.NameNode: Encountered exception during format: 
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
192.168.10.203:8485: Call From hadoop203/192.168.10.203 to hadoop203:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:875)
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:922)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
15/11/18 17:59:41 INFO ipc.Client: Retrying connect to server: hadoop205/192.168.10.205:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/11/18 17:59:41 FATAL namenode.NameNode: Exception in namenode join
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
192.168.10.203:8485: Call From hadoop203/192.168.10.203 to hadoop203:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:875)
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:922)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
15/11/18 17:59:41 INFO util.ExitUtil: Exiting with status 1
15/11/18 17:59:41 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop203/192.168.10.203
************************************************************/
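The root cause is visible in the stack trace: before formatting, the NameNode calls QuorumJournalManager.hasSomeData(), which must reach every JournalNode on its RPC port 8485, and every connection was refused because the JournalNode daemons were not running yet. A quick way to confirm this on each journal host (a sketch; hadoop203/205/206 are this cluster's hosts):

```shell
# On each JournalNode host (hadoop203/205/206), check whether the
# JournalNode JVM is running and listening on its RPC port 8485.
jps | grep -i journalnode               # no output => the daemon is not running
netstat -tlnp 2>/dev/null | grep 8485   # nothing listening => format gets "Connection refused"
```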

Solution:

The format fails because the JournalNodes are not up when the NameNode tries to contact them. Start each ZooKeeper instance first (`./zkServer.sh start`), then start the JournalNode process on each configured journal host (`./hadoop-daemon.sh start journalnode`). Only then run the format.

On one of the NameNodes, format HDFS: `./hdfs namenode -format -bjsxt` (in the bin directory).

Start the freshly formatted NameNode: `./hadoop-daemon.sh start namenode` (in the sbin directory).

On the other, still-unformatted NameNode, pull over the new metadata: `./hdfs namenode -bootstrapStandby` (bin).

Start the second NameNode: `./hadoop-daemon.sh start namenode` (sbin).

On one of the NameNodes, initialize the ZKFC state in ZooKeeper: `./hdfs zkfc -formatZK` (bin).

Run `jps` on all four VMs to verify the expected processes are up.

To stop all of the above daemons: `./stop-all.sh` (sbin). To start them again: `./start-all.sh` (sbin).
Then visit http://hadoop202:50070/ and http://hadoop203:50070/.
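The recovery steps above can be collected into one annotated sequence. This is a sketch for this specific cluster (the hadoop202/203/205/206 hosts and bin/sbin working directories are taken from the post; the cluster-specific `-bjsxt` argument is shown as the post wrote it); run each stage on the host named in the comments:

```shell
# 1. On every ZooKeeper node: start ZooKeeper.
./zkServer.sh start

# 2. On every JournalNode host (hadoop203/205/206): start the JournalNode (sbin/).
./hadoop-daemon.sh start journalnode

# 3. On the first NameNode only, now that the JNs answer on 8485:
./hdfs namenode -format -bjsxt        # bin/  (format HDFS)
./hadoop-daemon.sh start namenode     # sbin/ (start this NameNode)

# 4. On the second NameNode: copy the freshly formatted metadata, then start it.
./hdfs namenode -bootstrapStandby     # bin/
./hadoop-daemon.sh start namenode     # sbin/

# 5. On one NameNode: initialize the failover controller's znode in ZooKeeper.
./hdfs zkfc -formatZK                 # bin/

# 6. Verify with jps on all four VMs; restart the whole cluster if needed (sbin/).
./stop-all.sh && ./start-all.sh
```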

