Hadoop error — report: Call From xxx to xxx failed on connect

Flume was logging exceptions, and running `hdfs dfsadmin -report` failed with:


"report: Call From slave1.hadoop/192.168.1.106 to namenode:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused"
Here `namenode` is the hostname of the master node. I checked the port status on the master node by running:

sudo netstat -nap|grep 90

Port 9000 was in state TIME_WAIT. After searching online, I followed a fix suggested by another user: edit the hosts file on the master node and restart HDFS, after which `hdfs dfsadmin -report` on the worker nodes no longer failed. Original source:
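Before editing the hosts file, it can help to confirm which local address the NameNode RPC port is actually bound to. A sketch of that check, to be run on the master node against the live cluster (the port 9000 matches the `namenode:9000` address above; a narrower grep pattern than `90` avoids matching unrelated ports):

```shell
# List only listening sockets on port 9000 (grep 90 also matches
# unrelated Hadoop ports such as 50090).
sudo netstat -ntlp | grep ':9000'

# A reachable NameNode listens on 0.0.0.0:9000 or on its LAN IP
# (e.g. 192.168.1.x:9000). If the local address column shows
# ::1:9000 or 127.0.0.1:9000 instead, the port is bound to the
# loopback interface only, so remote nodes get "Connection refused".
```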

http://blog.csdn.net/renfengjun/article/details/25320043

My master node's hosts file contained the following line:

::1    namenode        localhost6.localdomain6 localhost6

The `namenode` in it is the master node's hostname. Comment the line out (prepend a `#`):

#::1    namenode        localhost6.localdomain6 localhost6
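The same edit can be scripted with `sed`. A minimal sketch, demonstrated here on a throwaway copy rather than the real `/etc/hosts` (edit the real file with `sudo` and care):

```shell
# Work on a temporary copy of the hosts line in question.
hosts_copy=$(mktemp)
printf '::1    namenode        localhost6.localdomain6 localhost6\n' > "$hosts_copy"

# Prefix the ::1 line that maps the master hostname with '#',
# leaving any other ::1 lines untouched.
sed -i 's/^::1\(.*namenode.*\)$/#::1\1/' "$hosts_copy"

cat "$hosts_copy"
rm -f "$hosts_copy"
```

Running it prints the commented-out line, matching the manual edit above.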

Then restart HDFS (run `$ ./stop-all.sh`, then `$ ./start-all.sh`).

On a worker node, run:

hdfs dfsadmin -report

Result:

Java HotSpot(TM) Client VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
15/04/11 13:02:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------

The earlier error is gone, and reconnecting from Eclipse no longer fails either — but the report shows no capacity. Still investigating.
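A `Configured Capacity` of 0 in the report output usually means no DataNodes have registered with the NameNode yet. A couple of hedged follow-up checks, to be run against the live cluster (the log path assumes the default Hadoop layout under `$HADOOP_HOME`, and the exact wording of the datanode summary line varies between Hadoop versions):

```shell
# How many DataNodes the NameNode currently sees; the report contains
# a datanode summary line (e.g. "Datanodes available: ..." in Hadoop 2.x).
hdfs dfsadmin -report | grep -i datanode

# On each worker node: is the DataNode JVM actually running?
jps

# Check the tail of the DataNode log for registration errors
# (default log file name: hadoop-<user>-datanode-<hostname>.log).
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```

If `jps` shows no DataNode process, or the log shows the DataNode retrying a connection to `namenode:9000`, the workers' own hosts files are worth the same scrutiny as the master's.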
Categories: hadoop, flume