The following exception keeps being thrown:
13/07/24 09:14:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/07/24 09:14:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Solution:
1. First delete everything under /tmp/hadoop-username/dfs/, then re-format the NameNode.
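Step 1 can be sketched as two shell commands. The `/tmp/hadoop-$(whoami)` path and the classic `hadoop namenode -format` invocation are assumptions for a Hadoop 1.x-style single-node setup; adjust both to your installation, and note that formatting the NameNode erases all HDFS metadata.

```shell
# Assumed location of the stale DFS data (classic default under /tmp);
# adjust to match your actual hadoop.tmp.dir setting.
STALE_DFS="/tmp/hadoop-$(whoami)/dfs"

# 1) Remove the leftover DFS directories
rm -rf "$STALE_DFS"

# 2) Re-format the NameNode (DESTRUCTIVE: wipes all HDFS metadata).
#    Guarded so the snippet is a no-op when hadoop is not on the PATH.
if command -v hadoop >/dev/null 2>&1; then
    hadoop namenode -format
fi
```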
2. The property hadoop.tmp.dir is the base setting the Hadoop file system depends on; many other paths are derived from it. Its default location is /tmp/hadoop-${user.name}, and the same directory is created on both the local file system and HDFS. Storing it under /tmp is unsafe, though: on Linux those files can be deleted on any reboot, which then prevents the NameNode from starting. So add the following to core-site.xml:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/lee/tmp</value>
</property>
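For context, that property must sit inside the `<configuration>` element of core-site.xml. A minimal sketch of the whole file, using the /home/lee/tmp path from step 2 (keep any other properties your file already has):

```xml
<?xml version="1.0"?>
<!-- core-site.xml: minimal sketch for a single-node setup -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/lee/tmp</value>
  </property>
</configuration>
```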
3. Remember to run stop-all.sh before shutting down the machine.
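A sketch of that shutdown step, assuming the classic Hadoop 1.x layout where stop-all.sh is on the PATH (newer releases split it into stop-dfs.sh and stop-yarn.sh). The snippet is guarded so it degrades gracefully when the script is not found:

```shell
# Stop all Hadoop daemons (NameNode, DataNode, JobTracker, TaskTracker)
# before powering off; an unclean shutdown can corrupt DFS state.
if command -v stop-all.sh >/dev/null 2>&1; then
    stop-all.sh
    stopped=yes
else
    echo "stop-all.sh not on PATH; add \$HADOOP_HOME/bin first"
    stopped=no
fi
```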