Changing the default hadoop.tmp.dir path in Hadoop pseudo-distributed mode

hadoop.tmp.dir is the base setting that the Hadoop filesystem depends on; many other paths are derived from it. Its default location is under /tmp (typically /tmp/hadoop-${user.name}), but storing data under /tmp is not safe: on Linux a single reboot can wipe those files out.
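To see how much depends on this one setting, here are a few of the defaults that expand from hadoop.tmp.dir in the Hadoop 1.x line (taken from hdfs-default.xml and mapred-default.xml; the exact keys and values may differ in your version):

    dfs.name.dir        = ${hadoop.tmp.dir}/dfs/name             (NameNode metadata)
    dfs.data.dir        = ${hadoop.tmp.dir}/dfs/data             (DataNode block storage)
    fs.checkpoint.dir   = ${hadoop.tmp.dir}/dfs/namesecondary    (SecondaryNameNode checkpoints)
    mapred.local.dir    = ${hadoop.tmp.dir}/mapred/local         (MapReduce scratch space)

Losing /tmp therefore also means losing the NameNode metadata, which is why a cluster left on the default path may not come back up after a reboot without reformatting.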

If you have followed the steps in the Single Node Setup section of the Hadoop Getting Started guide, a pseudo-distributed cluster is already running. How do you change the default hadoop.tmp.dir path and make the change take effect? Follow these steps:

1. Edit conf/core-site.xml and add the following property (and create the directory it points to, as sketched after the snippet):

<property>  
        <name>hadoop.tmp.dir</name>  
        <value>/home/had/hadoop/data</value>  
        <description>A base for other temporary directories.</description>  
</property> 
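Before restarting, make sure the new base directory exists and is writable by the user that runs the Hadoop daemons. A minimal sketch, assuming the example path /home/had/hadoop/data and a daemon user named had taken from the value above:

    mkdir -p /home/had/hadoop/data
    # only needed if the directory was created by a different user (e.g. root):
    chown -R had:had /home/had/hadoop/data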



2. Stop Hadoop: bin/stop-all.sh

3. Reformat the NameNode: bin/hadoop namenode -format

       Note: this step is essential; without it the NameNode will not start.

4. Start Hadoop: bin/start-all.sh

5. Test it: bin/hadoop fs -put conf conf
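A quick way to confirm that the daemons came up and that the new directory is actually in use (illustrative commands, assuming the example path above):

    jps                          # should list NameNode, DataNode, SecondaryNameNode,
                                 # JobTracker and TaskTracker
    ls /home/had/hadoop/data     # should now contain a dfs/ subdirectory created by the format
    bin/hadoop fs -ls conf       # lists the files uploaded by the put test in step 5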



Summary: step 3 is especially important. At first I tried to format with the wrong command, bin/hadoop fs -format, and kept getting errors about failing to connect to the server:

    11/11/20 17:14:14 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).  
    11/11/20 17:14:15 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).  
    11/11/20 17:14:16 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).  
    11/11/20 17:14:17 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).  
    11/11/20 17:14:18 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).  
    11/11/20 17:14:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).  
    11/11/20 17:14:20 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).  
    11/11/20 17:14:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).  
    11/11/20 17:14:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).  
    11/11/20 17:14:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).  
    Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused  



Running jps showed that there was no NameNode among the Java processes.

It turned out I had simply used the wrong command. The filesystem must be formatted with bin/hadoop namenode -format, and the formatting has to be done before Hadoop is started.
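When the NameNode is missing from the jps output, its log usually explains why. The file lives under logs/ and follows the pattern hadoop-<user>-namenode-<hostname>.log; for example (the user and hostname here are just placeholders):

    tail -n 50 logs/hadoop-had-namenode-localhost.log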

In short, following the steps above is all it takes to change the default hadoop.tmp.dir.

This post records the mistakes I made along the way and how I resolved them, in the hope that they will be useful to others.