I've just started learning Hadoop and keep running into new problems, so I'll record them here as they come up.
Sometimes you will see messages like the following, which mean the client cannot connect to HDFS:
ximo@ubuntu:~$ hadoop fs -ls
11/11/08 10:59:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s).
11/11/08 10:59:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s).
11/11/08 10:59:34 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s).
11/11/08 10:59:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s).
11/11/08 10:59:36 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s).
11/11/08 10:59:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s).
There are several possible causes:
1) Hadoop configuration
Check that the settings in $HADOOP_HOME/conf/hdfs-site.xml, mapred-site.xml, and core-site.xml are correct. For pseudo-distributed mode you can refer to my earlier blog posts, or to the many articles available online.
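As a minimal sketch, this is roughly what the relevant part of a pseudo-distributed core-site.xml looks like in this era of Hadoop (0.20/1.x, which uses the `fs.default.name` property); the host and port must match the address shown in the retry log above:

```xml
<!-- $HADOOP_HOME/conf/core-site.xml (pseudo-distributed sketch) -->
<configuration>
  <property>
    <!-- NameNode RPC address the client connects to;
         8020 matches the port in the error log above -->
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
```

If this value points at the wrong host or port, every `hadoop fs` command will produce exactly the retry loop shown above.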
2) The machines cannot reach each other
In a fully distributed setup, also check that the Hadoop client machine can ping the HDFS machine, and pay attention to the HDFS port number.
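Beyond ping, it helps to test the NameNode's TCP port directly, since ping can succeed while the port is blocked by a firewall. A minimal sketch using bash's built-in /dev/tcp (the host and port are placeholders; substitute your NameNode's address):

```shell
#!/usr/bin/env bash
# check_port HOST PORT - succeed if a TCP connection to HOST:PORT can be opened
check_port() {
  local host=$1 port=$2
  # bash treats /dev/tcp/HOST/PORT as a virtual device; the redirection
  # fails if the port is closed or unreachable. timeout bounds the wait.
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

if check_port localhost 8020; then
  echo "NameNode port is reachable"
else
  echo "cannot reach NameNode port - check firewall, hostname resolution, and that the NameNode is up"
fi
```

If ping works but this check fails, the problem is usually a firewall rule or the NameNode process itself not listening.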
3) The NameNode is not running
To check whether the NameNode failed to start:
$ stop-all.sh    # if this prints "no namenode to stop", the NameNode is the problem
$ hadoop namenode -format    # caution: formatting erases all data in HDFS
$ start-all.sh
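The check above can be scripted: `jps` (from the JDK) lists running Java processes, and a healthy cluster should show a NameNode among them. The grep helper below is the testable part; the `jps` call at the bottom is only a sketch, assuming a working JDK and Hadoop install:

```shell
#!/usr/bin/env bash
# check_namenode - succeed if the process listing on stdin contains a NameNode
check_namenode() {
  grep -q 'NameNode'
}

# Sketch of the actual check (requires the JDK's jps on PATH):
if jps 2>/dev/null | check_namenode; then
  echo "NameNode is running"
else
  echo "NameNode not found - try stop-all.sh, then hadoop namenode -format (erases HDFS data!), then start-all.sh"
fi
```

Before reformatting, it is worth looking at the NameNode log under $HADOOP_HOME/logs to see why it exited, since `-format` destroys whatever was in HDFS.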
4) Other causes.