[b]Directory /tmp/hadoop-lee/dfs/name is in an inconsistent state: storage directory DOES NOT exist or is NOT accessible[/b]
Cause:
[url]http://lucene.472066.n3.nabble.com/Directory-tmp-hadoop-root-dfs-name-is-in-an-inconsistent-state-storage-directory-DOES-NOT-exist-or-ie-td812243.html[/url][quote]Normally this is due to the machine having been rebooted and /tmp being cleared out. You do not want to leave the Hadoop name node or data node storage in /tmp for this reason. Make sure you properly configure dfs.name.dir and dfs.data.dir to point to directories outside of /tmp and other directories that may be cleared on boot.[/quote]Also see the following documentation:
[url]http://hadoop.apache.org/docs/r1.1.1/hdfs-default.html[/url], from which we can see that
dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name
dfs.data.dir defaults to ${hadoop.tmp.dir}/dfs/data
So it suffices to change hadoop.tmp.dir in $HADOOP/conf/core-site.xml (hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}), as in the following example:
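A minimal sketch of the change (the path /home/lee/hadoop-tmp is only a placeholder; pick any directory that is not cleared on reboot):

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- placeholder path: must point at storage that survives reboots -->
    <value>/home/lee/hadoop-tmp</value>
  </property>
</configuration>

With this setting, dfs.name.dir and dfs.data.dir resolve to /home/lee/hadoop-tmp/dfs/name and /home/lee/hadoop-tmp/dfs/data respectively.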
Which directory is the most appropriate choice? To be decided...
Once that is done, remember to re-format the namenode so the exception stops appearing (note: the format also wipes the existing data):
$ hadoop namenode -format
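If the daemons were already running, stop them before formatting and start them again afterwards (assuming the stock Hadoop 1.x control scripts in $HADOOP/bin):
$ $HADOOP/bin/stop-all.sh
$ $HADOOP/bin/start-all.sh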
[b]java.io.IOException: File /tmp/hadoop-lee/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1[/b]
The fix used here is to force the namenode out of safe mode:
$ hadoop dfsadmin -safemode leave
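To check afterwards whether HDFS is still in safe mode (safemode get is a standard dfsadmin subcommand in Hadoop 1.x):
$ hadoop dfsadmin -safemode get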