Hadoop error: report: Call From xxx to xxx failed on connect

The error showed up first in the Flume exception log. Running hdfs dfsadmin -report then failed with:


“report: Call From slave1.hadoop/192.168.1.106 to namenode:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused”
Here namenode is the master node's hostname, so I went to the master node and checked the port status by running:

sudo netstat -nap|grep 90

Port 9000 was in the TIME_WAIT state. After some searching I found a fix posted by another user: edit the master node's hosts file and restart HDFS, after which hdfs dfsadmin -report on the slave node no longer errors. Original post:
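For reference, TIME_WAIT entries are leftovers of already-closed connections; what matters for the connection-refused error is whether anything is actually LISTENing on the RPC port. A minimal sketch of the distinction, run here on hypothetical sample netstat lines (on the cluster you would pipe sudo netstat -ntap instead):

```shell
# Sample netstat output (hypothetical PIDs and addresses). A healthy
# NameNode shows a LISTEN entry on port 9000; TIME_WAIT lines are
# only remnants of closed connections.
cat <<'EOF' > netstat.sample
tcp  0  0 192.168.1.105:9000  0.0.0.0:*            LISTEN     2841/java
tcp  0  0 192.168.1.105:9000  192.168.1.106:41230  TIME_WAIT  -
EOF
# Keep only live listeners on the port:
grep ':9000' netstat.sample | grep 'LISTEN'
```

If no LISTEN line appears for port 9000, the NameNode RPC server is not bound on a reachable address, which matches the connection-refused error seen from the slave.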

http://blog.csdn.net/renfengjun/article/details/25320043

My master node's hosts file contained this line:

::1    namenode        localhost6.localdomain6 localhost6

The namenode entry there is the master node's hostname. Comment the line out (prefix it with #):

#::1    namenode        localhost6.localdomain6 localhost6

Restart HDFS (run $ ./stop-all.sh, then $ ./start-all.sh)
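The edit above can be sketched as a one-line sed; shown here against a scratch copy of the line rather than the real /etc/hosts (on the actual master you would run sudo sed -i on /etc/hosts):

```shell
# Reproduce the problematic hosts line in a scratch file
# (hypothetical file name; the real target is /etc/hosts).
printf '::1\tnamenode\tlocalhost6.localdomain6 localhost6\n' > hosts.sample

# Comment out the ::1 line so "namenode" stops resolving to the
# IPv6 loopback address:
sed -i 's/^::1/#::1/' hosts.sample
cat hosts.sample
```

After the edit, restarting HDFS lets the NameNode rebind port 9000 on the LAN address instead of loopback, so slaves can reach it.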

Then run the following on a slave node:

hdfs dfsadmin -report

Result:

Java HotSpot(TM) Client VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
15/04/11 13:02:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------

The earlier error is gone, and Eclipse no longer errors after a reconnect either; the report just shows nothing yet. Still investigating...
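One hedged note on the all-zero numbers: "Configured Capacity: 0" usually means no DataNodes have registered with the NameNode yet. A quick check, sketched here on a sample built from the report excerpt above (hypothetical file name; on the cluster you would pipe hdfs dfsadmin -report directly):

```shell
# Build a sample from the all-zero report; each live DataNode would
# appear in a real report as a "Name:" stanza.
cat <<'EOF' > report.sample
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
EOF
# No "Name:" stanzas here, so no DataNodes have checked in:
if ! grep -q '^Name:' report.sample; then
  echo 'no DataNodes registered'
fi
```

The next place to look would be the DataNode logs on the slave nodes, to see why they are not reporting in.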