1. Namenode not formatted
Right after configuring a cluster, everyone wants to run it and see it work. It turned out nothing would run: the namenode had not been formatted yet.
hadoop namenode -format
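For completeness, the usual first-start sequence, as a minimal sketch assuming the 0.20-era control scripts under $HADOOP_HOME/bin:

# Run once on the namenode host before the first start.
# WARNING: formatting erases all HDFS metadata; never rerun it on a live cluster.
bin/hadoop namenode -format
# Bring up HDFS, then MapReduce.
bin/start-dfs.sh
bin/start-mapred.sh
# Sanity check: jps should list NameNode/DataNode (and JobTracker/TaskTracker).
jps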
2. "Retrying connect to server" errors
The web UI at http://192.168.11.233:50070/ showed only 1 live node. The Datanode log revealed that it was endlessly retrying its connection to the namenode.
The cause: the first entry in /etc/hosts was 127.0.0.1 hadoop01, so the node listened on 127.0.0.1:9000 as configured. Fix: edit /etc/hosts and remove the hostname binding from the 127.0.0.1 line.
2010-09-15 21:49:30,944 WARN org.apache.hadoop.hdfs.server.common.Util: Path /hadoop/hdfs/data should be specified as a URI in configuration files. Please update hdfs configuration.
2010-09-15 21:49:31,120 INFO org.apache.hadoop.security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
2010-09-15 21:49:32,311 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 0 time(s).
2010-09-15 21:49:33,314 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 1 time(s).
2010-09-15 21:49:34,316 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 2 time(s).
2010-09-15 21:49:35,317 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 3 time(s).
2010-09-15 21:49:36,319 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 4 time(s).
2010-09-15 21:49:37,322 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 5 time(s).
2010-09-15 21:49:38,324 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 6 time(s).
2010-09-15 21:49:39,327 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 7 time(s).
2010-09-15 21:49:40,330 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 8 time(s).
2010-09-15 21:49:41,333 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 9 time(s).
2010-09-15 21:49:41,336 INFO org.apache.hadoop.ipc.RPC: Server at hadoop04.qqtech/192.168.11.233:9000 not available yet, Zzzzz...
2010-09-15 21:49:43,341 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 0 time(s).
2010-09-15 21:49:44,344 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop04.qqtech/192.168.11.233:9000. Already tried 1 time(s).
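The /etc/hosts change described above, sketched for one node (the exact lines differ per host; the key point is that no cluster hostname may resolve to loopback):

# Before (broken): the hostname resolves to loopback, so the daemon
# binds to 127.0.0.1:9000 and other nodes cannot reach it.
127.0.0.1   hadoop01 localhost

# After (working): loopback maps only to localhost; cluster hostnames
# map to real interfaces (addresses other than .233 are assumed here).
127.0.0.1        localhost
192.168.11.233   hadoop04.qqtech hadoop04
192.168.11.230   hadoop01.qqtech hadoop01   # assumed address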
3. "Browse the filesystem" shows "The page cannot be displayed"
Some articles say this is a hosts file problem. I first modified the hosts configuration on all 4 nodes, deleting the 127.0.0.1 domainName binding and leaving only the mapping between 127.0.0.1 and localhost; after rebooting, it still had no effect.
Then I tried editing the hosts file on the browser side (under C:/WINDOWS/system32/drivers/etc), adding the node IP-to-hostname mappings, and the problem was solved (see the sketch after the links below).
My setup has 4 nodes: hadoop04 acts as both namenode and datanode, and hadoop01~03 are datanodes.
The Hadoop Namenode status can be viewed at http://hadoop04.qqtech:50070/dfshealth.jsp.
The filesystem can also be browsed on the datanodes through links of the following form (note that only the hostname differs; every datanode provides this service, and the content is identical):
http://hadoop01.qqtech:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
http://hadoop02.qqtech:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
http://hadoop03.qqtech:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
http://hadoop04.qqtech:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
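The namenode's "Browse the filesystem" link redirects to a datanode by hostname (port 50075), so the machine running the browser must be able to resolve those names. The entries added to the hosts file under C:/WINDOWS/system32/drivers/etc would look roughly like this (only 192.168.11.233 appears in the logs above; the other addresses are made up for illustration):

192.168.11.233  hadoop04.qqtech
192.168.11.230  hadoop01.qqtech   # assumed address
192.168.11.231  hadoop02.qqtech   # assumed address
192.168.11.232  hadoop03.qqtech   # assumed address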
4. Datanode not shut down
During testing, the configuration files get modified frequently, which can leave a datanode running after a shutdown.
When stopping its services, Hadoop reads the masters and slaves files to decide which nodes to stop, so editing these files before the shutdown causes some nodes to be skipped. For example:
The original slaves file:
hadoop01
hadoop02
hadoop03
hadoop04
Before the shutdown, the slaves file was changed to:
hadoop01
hadoop02
hadoop03
Then stop-mapred.sh and stop-dfs.sh were executed. At that point the hadoop04 node was not shut down. After restarting the cluster with the current configuration, the DFS home page still showed the hadoop04 node. The reason: once the cluster starts, hadoop04 keeps sending heartbeats to the namenode, which receives the node's information and adds it back to the live nodes.
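A leftover datanode like this can be stopped directly on the affected host; a minimal sketch, assuming the per-node hadoop-daemon.sh script shipped with 0.20-era Hadoop:

# On hadoop04, stop the daemons the cluster-wide stop scripts skipped.
bin/hadoop-daemon.sh stop datanode
bin/hadoop-daemon.sh stop tasktracker
# Verify no Hadoop processes remain.
jps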
Appendix
The configured Hadoop cluster: 1 Namenode, 4 Datanodes
To be continued...