Hadoop error: report: Call From xxx to xxx failed on connect

Flume exception log:

Running hdfs dfsadmin -report fails with:


“report: Call From slave1.hadoop/192.168.1.106 to namenode:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused”
Here namenode is the master node. So I went to the master node to check the port status, running:

sudo netstat -nap|grep 90

Port 9000 was in the TIME_WAIT state. After some searching I found a fix suggested in another post: edit the master node's hosts file and restart HDFS, after which running "hdfs dfsadmin -report" on the slave nodes no longer errored. Original post:

http://blog.csdn.net/renfengjun/article/details/25320043
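The "Connection refused" symptom can be checked from a slave node with a quick TCP probe before digging into configs. A minimal Python sketch (the host and port below are placeholders standing in for namenode:9000 from the error message):

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds,
    False on 'Connection refused', unresolvable host, or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Substitute your NameNode host and RPC port here.
print(can_connect("127.0.0.1", 9000))
```

If this returns False from a slave but True on the master itself, the NameNode is likely listening only on a loopback address.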

My master node's hosts file contained this line:

::1    namenode        localhost6.localdomain6 localhost6

Here namenode is the master node's hostname. With this mapping, the hostname resolves to the IPv6 loopback address, so the NameNode's port is not reachable from other machines. Comment out the line (prepend a #):

#::1    namenode        localhost6.localdomain6 localhost6
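To see why this line matters, you can inspect what a hostname resolves to on the master. A minimal sketch ("localhost" below is just a demo; substitute the master's own hostname, e.g. namenode, when checking a real cluster):

```python
import socket

def resolve(hostname, port=9000):
    """Return the sorted set of addresses a hostname resolves to.
    If the master's hostname resolves only to ::1, the NameNode RPC
    server binds to the IPv6 loopback and remote nodes are refused."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({sockaddr[0] for _fam, _t, _p, _name, sockaddr in infos})

print(resolve("localhost"))
```

After commenting out the hosts entry, the master's hostname should resolve to its LAN address instead of ::1.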

Restart HDFS (run $ ./stop-all.sh, then $ ./start-all.sh).

Then run the following on a slave node:

hdfs dfsadmin -report

Result:

Java HotSpot(TM) Client VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
15/04/11 13:02:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------

The earlier error is gone, and reconnecting in Eclipse no longer errors either; there's just no data yet. Still investigating...