Hadoop: formatting the namenode fails

When I ran hdfs namenode -format, it complained that it could not connect to n1 and s1.

I checked the hosts configuration, and it was correct.

Next I suspected the firewall had not been turned off, but it turned out to be disabled as well.
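Those two checks can be scripted as a quick sanity pass; n1, s1, and s2 are the hostnames used throughout this post:

```shell
# Verify that every cluster hostname resolves before blaming the
# RPC layer; a missing /etc/hosts entry produces the same
# "Retrying connect to server" symptoms as a blocked port.
for h in n1 s1 s2; do
    getent hosts "$h" || echo "WARNING: $h does not resolve -- check /etc/hosts"
done
```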

Then I looked at the logs and found that the tmp directory configured in hdfs-site.xml, where the namenode and datanode store their files, had never been created. I had assumed hadoop would create it automatically during formatting, but after seeing the message I created the tmp/dfs directory myself, ran -format again, and everything worked.

The error messages:

16/03/25 10:57:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/25 10:57:42 INFO ipc.Client: Retrying connect to server: n1/192.168.253.130:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/03/25 10:57:43 INFO ipc.Client: Retrying connect to server: n1/192.168.253.130:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/03/25 10:57:44 INFO ipc.Client: Retrying connect to server: n1/192.168.253.130:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/03/25 10:57:45 INFO ipc.Client: Retrying connect to server: n1/192.168.253.130:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/03/25 10:57:46 INFO ipc.Client: Retrying connect to server: n1/192.168.253.130:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/03/25 10:57:47 INFO ipc.Client: Retrying connect to server: n1/192.168.253.130:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/03/25 10:57:48 INFO ipc.Client: Retrying connect to server: n1/192.168.253.130:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/03/25 10:57:49 INFO ipc.Client: Retrying connect to server: n1/192.168.253.130:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)


In short, the logs showed that a mkdir of tmp/dfs was all that was needed.
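A minimal sketch of the fix, assuming hadoop.tmp.dir points at /opt/src/hadoop/tmp (the same path the deployment script below cleans out) with the default dfs/name and dfs/data layout:

```shell
# Create the storage directories that the format step expects.
# Adjust HADOOP_TMP to match hadoop.tmp.dir (or the explicit
# dfs.namenode.name.dir / dfs.datanode.data.dir) in your config.
HADOOP_TMP=/opt/src/hadoop/tmp
mkdir -p "$HADOOP_TMP/dfs/name"   # namenode metadata
mkdir -p "$HADOOP_TMP/dfs/data"   # datanode blocks
```

Repeat this on every node (here n1, s1, s2) before re-running hdfs namenode -format.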

Below is my script for automated HA deployment:

#!/bin/sh


#ssh s1 'rm -rf /opt/src/*'
#ssh s2 'rm -rf /opt/src/*'
#synchronize all config files
#ssh n1 'scp -r /opt/src/hadoop/ s1:/opt/src/'
#ssh n1 'scp -r /opt/src/hadoop/  s2:/opt/src/'


#stop all daemons
ssh n1 '/opt/src/hadoop/sbin/stop-all.sh'


#remove all files
ssh n1 'rm -rf /opt/src/hadoop/tmp'
ssh n1 'rm -rf /opt/src/hadoop/logs'
ssh s1 'rm -rf /opt/src/hadoop/tmp'
ssh s1 'rm -rf /opt/src/hadoop/logs'
ssh s2 'rm -rf /opt/src/hadoop/tmp'
ssh s2 'rm -rf /opt/src/hadoop/logs'




#start journalnodes cluster
ssh n1 '/opt/src/hadoop/sbin/hadoop-daemon.sh start journalnode'
ssh s1 '/opt/src/hadoop/sbin/hadoop-daemon.sh start journalnode'
ssh s2 '/opt/src/hadoop/sbin/hadoop-daemon.sh start journalnode'


#format one namenode
ssh n1 '/opt/src/hadoop/bin/hdfs namenode -format -clusterId mycluster'
ssh n1 '/opt/src/hadoop/sbin/hadoop-daemon.sh start namenode'


#bootstrap the standby namenode (copies the metadata; no second format)
ssh s1 '/opt/src/hadoop/bin/hdfs namenode -bootstrapStandby'
sleep 10
ssh s1 '/opt/src/hadoop/sbin/hadoop-daemon.sh start namenode'
sleep 10


#make n1 active (failover from s1 to n1)
ssh n1 '/opt/src/hadoop/bin/hdfs haadmin -failover --forceactive s1 n1'


#start all datanodes
ssh n1 '/opt/src/hadoop/sbin/hadoop-daemons.sh start datanode'
