HDFS format error (UnknownHostException)

Problem description



[grant@zz_mars bin 16:08 ]$ ./hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.


13/01/17 16:12:03 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = java.net.UnknownHostException: zz.chen: zz.chen
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.22.0
STARTUP_MSG:   classpath = /home/grant/hadoop-0.22.0/bin/../conf:/usr/java/jdk1.7.0_09/lib/tools.jar:/home/grant/hadoop-0.22.0/bin/..:/home/grant/hadoop-0.22.0/bin/../hadoop-common-0.22.0.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-common-test-0.22.0.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-hdfs-0.22.0.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-hdfs-0.22.0-sources.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-hdfs-ant-0.22.0.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-hdfs-test-0.22.0.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-hdfs-test-0.22.0-sources.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-mapred-0.22.0.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-mapred-0.22.0-sources.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-mapred-examples-0.22.0.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-mapred-test-0.22.0.jar:/home/grant/hadoop-0.22.0/bin/../hadoop-mapred-tools-0.22.0.jar:/home/grant/hadoop-0.22.0/bin/../lib/ant-1.6.5.jar:/home/grant/hadoop-0.22.0/bin/../lib/ant-1.7.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/ant-launcher-1.7.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/asm-3.3.jar:/home/grant/hadoop-0.22.0/bin/../lib/aspectjrt-1.6.5.jar:/home/grant/hadoop-0.22.0/bin/../lib/aspectjtools-1.6.5.jar:/home/grant/hadoop-0.22.0/bin/../lib/avro-1.5.3.jar:/home/grant/hadoop-0.22.0/bin/../lib/avro-compiler-1.5.3.jar:/home/grant/hadoop-0.22.0/bin/../lib/avro-ipc-1.5.3.jar:/home/grant/hadoop-0.22.0/bin/../lib/commons-cli-1.2.jar:/home/grant/hadoop-0.22.0/bin/../lib/commons-codec-1.4.jar:/home/grant/hadoop-0.22.0/bin/../lib/commons-collections-3.2.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/commons-daemon-1.0.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/commons-el-1.0.jar:/home/grant/hadoop-0.22.0/bin/../lib/commons-httpclient-3.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/commons-lang-2.5.jar:/home/grant/hadoop-0.22.0/bin/../lib/commons-logging-1.1.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/commons-logging-api-1.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/commons-net-1.4.1.jar:/home/gran
t/hadoop-0.22.0/bin/../lib/core-3.1.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/ecj-3.5.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/guava-r09.jar:/home/grant/hadoop-0.22.0/bin/../lib/hsqldb-1.8.0.10.jar:/home/grant/hadoop-0.22.0/bin/../lib/jackson-core-asl-1.7.3.jar:/home/grant/hadoop-0.22.0/bin/../lib/jackson-mapper-asl-1.7.3.jar:/home/grant/hadoop-0.22.0/bin/../lib/jasper-compiler-5.5.12.jar:/home/grant/hadoop-0.22.0/bin/../lib/jasper-runtime-5.5.12.jar:/home/grant/hadoop-0.22.0/bin/../lib/jdiff-1.0.9.jar:/home/grant/hadoop-0.22.0/bin/../lib/jets3t-0.7.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/jetty-6.1.26.jar:/home/grant/hadoop-0.22.0/bin/../lib/jetty-util-6.1.26.jar:/home/grant/hadoop-0.22.0/bin/../lib/jsch-0.1.42.jar:/home/grant/hadoop-0.22.0/bin/../lib/jsp-2.1-glassfish-2.1.v20091210.jar:/home/grant/hadoop-0.22.0/bin/../lib/jsp-2.1-jetty-6.1.26.jar:/home/grant/hadoop-0.22.0/bin/../lib/jsp-api-2.1-glassfish-2.1.v20091210.jar:/home/grant/hadoop-0.22.0/bin/../lib/junit-4.8.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/kfs-0.3.jar:/home/grant/hadoop-0.22.0/bin/../lib/log4j-1.2.16.jar:/home/grant/hadoop-0.22.0/bin/../lib/mockito-all-1.8.2.jar:/home/grant/hadoop-0.22.0/bin/../lib/mockito-all-1.8.5.jar:/home/grant/hadoop-0.22.0/bin/../lib/oro-2.0.8.jar:/home/grant/hadoop-0.22.0/bin/../lib/paranamer-2.3.jar:/home/grant/hadoop-0.22.0/bin/../lib/paranamer-ant-2.3.jar:/home/grant/hadoop-0.22.0/bin/../lib/paranamer-generator-2.3.jar:/home/grant/hadoop-0.22.0/bin/../lib/qdox-1.12.jar:/home/grant/hadoop-0.22.0/bin/../lib/servlet-api-2.5-20081211.jar:/home/grant/hadoop-0.22.0/bin/../lib/slf4j-api-1.6.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/grant/hadoop-0.22.0/bin/../lib/snappy-java-1.0.3.2.jar:/home/grant/hadoop-0.22.0/bin/../lib/velocity-1.6.4.jar:/home/grant/hadoop-0.22.0/bin/../lib/xmlenc-0.52.jar:/home/grant/hadoop-0.22.0/bin/../lib/jsp-2.1/*.jar:/home/grant/hadoop-0.22.0/hdfs/bin/../conf:/home/grant/hadoop-0.22.0/hdfs/bin/../hadoop-hdfs-*.ja
r:/home/grant/hadoop-0.22.0/hdfs/bin/../lib/*.jar:/home/grant/hadoop-0.22.0/hdfs/bin/../hadoop-hdfs-*.jar:/home/grant/hadoop-0.22.0/hdfs/bin/../lib/*.jar
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/common -r 1207774; compiled by 'jenkins' on Sun Dec  4 00:57:22 UTC 2011
************************************************************/
13/01/17 16:12:03 WARN common.Util: Path /home/grant/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
13/01/17 16:12:03 WARN common.Util: Path /home/grant/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
13/01/17 16:12:03 INFO namenode.FSNamesystem: defaultReplication = 1
13/01/17 16:12:03 INFO namenode.FSNamesystem: maxReplication = 512
13/01/17 16:12:03 INFO namenode.FSNamesystem: minReplication = 1
13/01/17 16:12:03 INFO namenode.FSNamesystem: maxReplicationStreams = 2
13/01/17 16:12:03 INFO namenode.FSNamesystem: shouldCheckForEnoughRacks = false
13/01/17 16:12:03 INFO util.GSet: VM type       = 32-bit
13/01/17 16:12:03 INFO util.GSet: 2% max memory = 17.77875 MB
13/01/17 16:12:03 INFO util.GSet: capacity      = 2^22 = 4194304 entries
13/01/17 16:12:03 INFO util.GSet: recommended=4194304, actual=4194304
13/01/17 16:12:03 INFO metrics.MetricsUtil: Unable to obtain hostName
java.net.UnknownHostException: zz.chen: zz.chen
at java.net.InetAddress.getLocalHost(InetAddress.java:1438)
at org.apache.hadoop.metrics.MetricsUtil.getHostName(MetricsUtil.java:95)
at org.apache.hadoop.metrics.MetricsUtil.createRecord(MetricsUtil.java:84)
at org.apache.hadoop.security.UserGroupInformation$UgiMetrics.<init>(UserGroupInformation.java:103)
at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:183)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setConfigurationParameters(FSNamesystem.java:461)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:432)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1406)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1523)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1543)
Caused by: java.net.UnknownHostException: zz.chen
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:866)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1258)
at java.net.InetAddress.getLocalHost(InetAddress.java:1434)
... 9 more
13/01/17 16:12:03 INFO namenode.FSNamesystem: fsOwner=grant
13/01/17 16:12:03 INFO namenode.FSNamesystem: supergroup=supergroup
13/01/17 16:12:03 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/01/17 16:12:03 INFO namenode.FSNamesystem: isBlockTokenEnabled=false blockKeyUpdateInterval=0 min(s), blockTokenLifetime=0 min(s)
13/01/17 16:12:03 INFO namenode.NameNode: Caching file names occuring more than 10 times 
13/01/17 16:12:04 INFO common.Storage: Saving image file /home/grant/hdfs/name/current/fsimage using no compression
13/01/17 16:12:04 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/01/17 16:12:04 INFO common.Storage: Storage directory /home/grant/hdfs/name has been successfully formatted.
13/01/17 16:12:04 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: zz.chen: zz.chen
************************************************************/



My fix: add the following line to /etc/hosts:

                           127.0.0.1     zz.chen
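To confirm the fix (or spot the problem before running the format again), the check Hadoop effectively performs can be sketched in shell: look up a hostname in an /etc/hosts-style file and report the mapped IP, or fail the way `InetAddress.getLocalHost` does. This is a minimal illustrative helper, not part of Hadoop; the demo runs against a throwaway file so it does not depend on the real /etc/hosts.

```shell
# check_host_mapping FILE HOST: print the IP that an /etc/hosts-style FILE
# maps HOST to; exit non-zero if no mapping exists -- the condition behind
# the UnknownHostException above. Minimal sketch, not a full resolver.
check_host_mapping() {
  awk -v h="$2" '
    !/^#/ { for (i = 2; i <= NF; i++) if ($i == h) { print $1; found = 1 } }
    END   { exit !found }' "$1"
}

# Demo against a temporary copy instead of the real /etc/hosts:
hosts=$(mktemp)
printf '127.0.0.1 localhost zz.chen\n' > "$hosts"
check_host_mapping "$hosts" zz.chen          # prints 127.0.0.1
check_host_mapping "$hosts" nosuch || echo "no mapping: lookup would fail"
rm -f "$hosts"
```

On the real machine the equivalent check is `check_host_mapping /etc/hosts "$(hostname)"`; if it prints nothing, the NameNode will hit the same exception.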



The following is reposted from another blog:


When formatting HDFS with the hadoop namenode -format command, an unknown-hostname error occurred. The exception is shown below:

[shirdrn@localhost bin]$ hadoop namenode -format
11/06/22 07:33:31 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = java.net.UnknownHostException: localhost.localdomain: localhost.localdomain
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
************************************************************/
Re-format filesystem in /tmp/hadoop/hadoop-shirdrn/dfs/name ? (Y or N) Y
11/06/22 07:33:36 INFO namenode.FSNamesystem: fsOwner=shirdrn,shirdrn
11/06/22 07:33:36 INFO namenode.FSNamesystem: supergroup=supergroup
11/06/22 07:33:36 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/06/22 07:33:36 INFO metrics.MetricsUtil: Unable to obtain hostName
java.net.UnknownHostException: localhost.localdomain: localhost.localdomain
        at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
        at org.apache.hadoop.metrics.MetricsUtil.getHostName(MetricsUtil.java:91)
        at org.apache.hadoop.metrics.MetricsUtil.createRecord(MetricsUtil.java:80)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.initialize(FSDirectory.java:73)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:68)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:370)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:853)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:947)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:964)
11/06/22 07:33:36 INFO common.Storage: Image file of size 97 saved in 0 seconds.
11/06/22 07:33:36 INFO common.Storage: Storage directory /tmp/hadoop/hadoop-shirdrn/dfs/name has been successfully formatted.
11/06/22 07:33:36 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: localhost.localdomain: localhost.localdomain
************************************************************/
Running the hostname command shows:

[shirdrn@localhost bin]# hostname
localhost.localdomain
In other words, when Hadoop formats HDFS, the hostname it obtains via the hostname command is localhost.localdomain, and when it then tries to map that name in /etc/hosts, no entry is found. Here is my /etc/hosts:

[root@localhost bin]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost       localhost
192.168.1.103           localhost       localhost
That is, localhost.localdomain cannot be mapped to any IP address, hence the error.

Now take a look at /etc/sysconfig/network:

NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=localhost.localdomain

As you can see, hostname returns the HOSTNAME value configured here.


Solution


Change the HOSTNAME value in /etc/sysconfig/network to localhost, or to a hostname of your choosing; make sure that name maps to the correct IP address in /etc/hosts; then restart the network service:
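Concretely, after the change the two files end up roughly like this (a sketch assuming the hostname localhost; substitute your own hostname and address):

```
# /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=localhost

# /etc/hosts -- the chosen hostname must map to a reachable address
127.0.0.1   localhost
```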

[root@localhost bin]# /etc/rc.d/init.d/network restart
Shutting down interface eth0:  [  OK  ]
Shutting down loopback interface:  [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0:
Determining IP information for eth0... done.
[  OK  ]

After that, formatting HDFS and starting the HDFS cluster both work normally.

Format:

[shirdrn@localhost bin]$ hadoop namenode -format
11/06/22 08:02:37 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
************************************************************/
11/06/22 08:02:37 INFO namenode.FSNamesystem: fsOwner=shirdrn,shirdrn
11/06/22 08:02:37 INFO namenode.FSNamesystem: supergroup=supergroup
11/06/22 08:02:37 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/06/22 08:02:37 INFO common.Storage: Image file of size 97 saved in 0 seconds.
11/06/22 08:02:37 INFO common.Storage: Storage directory /tmp/hadoop/hadoop-shirdrn/dfs/name has been successfully formatted.
11/06/22 08:02:37 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/
Start:
[shirdrn@localhost bin]$ start-all.sh
starting namenode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-namenode-localhost.out
localhost: starting datanode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-datanode-localhost.out
localhost: starting secondarynamenode, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-secondarynamenode-localhost.out
starting jobtracker, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-jobtracker-localhost.out
localhost: starting tasktracker, logging to /home/shirdrn/eclipse/eclipse-3.5.2/hadoop/hadoop-0.20.0/logs/hadoop-shirdrn-tasktracker-localhost.out
Verify:
[shirdrn@localhost bin]$ jps
8192 TaskTracker
7905 DataNode
7806 NameNode
8065 JobTracker
8002 SecondaryNameNode
8234 Jps