WARN mapred.JobClient: Error reading task outputNo route to host

The /etc/hosts file on the master and slave nodes was configured as follows:

[hadoop@master ~]$ cat /etc/hosts

127.0.0.1 localhost localdomain.localhost

::1 localhost6.localdomain6 localhost6

192.168.100.11 master.oracle.com master

192.168.100.12 slave1.oracle.com slave1

[hadoop@master ~]$
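With this layout it is worth verifying, before starting any daemons, that every cluster name actually resolves on every node. A minimal sketch (the helper name is mine, not from the original post); `getent` consults /etc/hosts first under the default nsswitch order:

```shell
# Hypothetical helper: report whether a name resolves via the system resolver.
check_resolves() {
  if getent hosts "$1" > /dev/null; then
    echo "$1 resolves"
  else
    echo "$1 does NOT resolve"
  fi
}

# Check every name the cluster uses
for name in master slave1; do
  check_resolves "$name"
done
```

Run this on each node; any "does NOT resolve" line means the daemons on that node cannot find their peers by name.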

Running jps on both the master and slave nodes looks normal:

[hadoop@master ~]$ jps // run on the master node

4690 Jps

2413 JobTracker

2350 SecondaryNameNode

2229 NameNode

[hadoop@master ~]$

[hadoop@slave1 ~]$ jps // run on the slave node

2248 TaskTracker

27778 Jps

2175 DataNode

[hadoop@slave1 ~]$

[hadoop@master ~]$ hadoop dfs -ls // this command also works fine on both master and slave

Found 5 items

drwxr-xr-x - hadoop supergroup 0 2012-10-16 13:24 /user/hadoop/dfsdir

drwxr-xr-x - hadoop supergroup 0 2012-10-16 14:08 /user/hadoop/output

drwxr-xr-x - hadoop supergroup 0 2012-10-16 14:14 /user/hadoop/output1

drwxr-xr-x - hadoop supergroup 0 2012-10-16 14:40 /user/hadoop/output2

drwxr-xr-x - hadoop supergroup 0 2012-10-16 14:43 /user/hadoop/output3

[hadoop@master ~]$

However, running a MapReduce test job fails:

[hadoop@master ~]$ hadoop jar hadoop-0.20.2-examples.jar wordcount dfsdir/test.txt output3

12/10/16 14:42:40 INFO input.FileInputFormat: Total input paths to process : 1

12/10/16 14:42:40 INFO mapred.JobClient: Running job: job_201210161256_0009

12/10/16 14:42:41 INFO mapred.JobClient: map 0% reduce 0%

12/10/16 14:42:50 INFO mapred.JobClient: map 100% reduce 0%

12/10/16 14:43:04 INFO mapred.JobClient: Task Id : attempt_201210161256_0009_r_000000_0, Status : FAILED

Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.

12/10/16 14:43:04 WARN mapred.JobClient: Error reading task outputNo route to host

12/10/16 14:43:04 WARN mapred.JobClient: Error reading task outputNo route to host

12/10/16 14:43:19 INFO mapred.JobClient: Task Id : attempt_201210161256_0009_r_000000_1, Status : FAILED

Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.

12/10/16 14:43:19 WARN mapred.JobClient: Error reading task outputNo route to host

12/10/16 14:43:19 WARN mapred.JobClient: Error reading task outputNo route to host

12/10/16 14:43:36 INFO mapred.JobClient: Task Id : attempt_201210161256_0009_r_000000_2, Status : FAILED

Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.

12/10/16 14:43:36 WARN mapred.JobClient: Error reading task outputNo route to host

12/10/16 14:43:36 WARN mapred.JobClient: Error reading task outputNo route to host

12/10/16 14:43:54 INFO mapred.JobClient: Job complete: job_201210161256_0009

12/10/16 14:43:54 INFO mapred.JobClient: Counters: 12

12/10/16 14:43:54 INFO mapred.JobClient: Job Counters

12/10/16 14:43:54 INFO mapred.JobClient: Launched reduce tasks=4

12/10/16 14:43:54 INFO mapred.JobClient: Launched map tasks=1

12/10/16 14:43:54 INFO mapred.JobClient: Data-local map tasks=1

12/10/16 14:43:54 INFO mapred.JobClient: Failed reduce tasks=1

12/10/16 14:43:54 INFO mapred.JobClient: FileSystemCounters

12/10/16 14:43:54 INFO mapred.JobClient: HDFS_BYTES_READ=12

12/10/16 14:43:54 INFO mapred.JobClient: FILE_BYTES_WRITTEN=62

12/10/16 14:43:54 INFO mapred.JobClient: Map-Reduce Framework

12/10/16 14:43:54 INFO mapred.JobClient: Combine output records=2

12/10/16 14:43:54 INFO mapred.JobClient: Map input records=1

12/10/16 14:43:54 INFO mapred.JobClient: Spilled Records=2

12/10/16 14:43:54 INFO mapred.JobClient: Map output bytes=20

12/10/16 14:43:54 INFO mapred.JobClient: Combine input records=2

12/10/16 14:43:54 INFO mapred.JobClient: Map output records=2
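Each reducer fetches map output over HTTP from the TaskTracker on every other node, so the repeated "No route to host" warnings point at host-level connectivity or name resolution rather than at Hadoop itself. A quick way to narrow it down is to probe the TaskTracker HTTP port (50060 by default in Hadoop 0.20) from the master. This is only a sketch: it assumes bash (for the /dev/tcp pseudo-device) and the `timeout` utility, and the helper name is mine:

```shell
# Hypothetical probe: attempt a TCP connection to a port on each listed host.
# Uses bash's /dev/tcp redirection; 'timeout' caps each attempt at 2 seconds.
probe_hosts() {
  hosts="$1"
  port="${2:-50060}"   # default TaskTracker HTTP port in Hadoop 0.20
  for host in $hosts; do
    if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
      echo "$host:$port reachable"
    else
      echo "$host:$port UNREACHABLE"
    fi
  done
}

probe_hosts "slave1"   # probe the slave's TaskTracker port from the master
```

An UNREACHABLE result here, while SSH and HDFS commands still work, is consistent with the hostname mismatch found below: the reducer asks for map output by the name the TaskTracker reported, and that name does not route.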

Has anyone run into this problem? Any pointers would be much appreciated!

The problem is finally solved:

[hadoop@master hadoop]$ hadoop jar hadoop-0.20.2-examples.jar wordcount dfsdir/test.txt output

12/10/16 15:30:10 INFO input.FileInputFormat: Total input paths to process : 1

12/10/16 15:30:11 INFO mapred.JobClient: Running job: job_201210161528_0001

12/10/16 15:30:12 INFO mapred.JobClient: map 0% reduce 0%

12/10/16 15:30:19 INFO mapred.JobClient: map 100% reduce 0%

12/10/16 15:30:31 INFO mapred.JobClient: map 100% reduce 100% // the job completed successfully

12/10/16 15:30:33 INFO mapred.JobClient: Job complete: job_201210161528_0001

12/10/16 15:30:33 INFO mapred.JobClient: Counters: 17

12/10/16 15:30:33 INFO mapred.JobClient: Job Counters

12/10/16 15:30:33 INFO mapred.JobClient: Launched reduce tasks=1

12/10/16 15:30:33 INFO mapred.JobClient: Launched map tasks=1

12/10/16 15:30:33 INFO mapred.JobClient: Data-local map tasks=1

12/10/16 15:30:33 INFO mapred.JobClient: FileSystemCounters

12/10/16 15:30:33 INFO mapred.JobClient: FILE_BYTES_READ=30

12/10/16 15:30:33 INFO mapred.JobClient: HDFS_BYTES_READ=12

12/10/16 15:30:33 INFO mapred.JobClient: FILE_BYTES_WRITTEN=92

12/10/16 15:30:33 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=16

12/10/16 15:30:33 INFO mapred.JobClient: Map-Reduce Framework

12/10/16 15:30:33 INFO mapred.JobClient: Reduce input groups=2

12/10/16 15:30:33 INFO mapred.JobClient: Combine output records=2

12/10/16 15:30:33 INFO mapred.JobClient: Map input records=1

12/10/16 15:30:33 INFO mapred.JobClient: Reduce shuffle bytes=30

12/10/16 15:30:33 INFO mapred.JobClient: Reduce output records=2

12/10/16 15:30:33 INFO mapred.JobClient: Spilled Records=4

12/10/16 15:30:33 INFO mapred.JobClient: Map output bytes=20

12/10/16 15:30:33 INFO mapred.JobClient: Combine input records=2

12/10/16 15:30:33 INFO mapred.JobClient: Map output records=2

12/10/16 15:30:33 INFO mapred.JobClient: Reduce input records=2

[hadoop@master hadoop]$

Root cause of the error:

[root@master ~]# cat /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=master.oracle.com // this was the hostname setting when the error occurred

GATEWAY=192.168.100.1

[root@master ~]#

[hadoop@master hadoop]$ hostname

master.oracle.com

[hadoop@master hadoop]$

In /etc/sysconfig/network I had set the hostname to master.oracle.com, while in /etc/hosts the hostname was configured as:

192.168.100.11 master

I then changed the hostname as follows:

[hadoop@master hadoop]$ cat /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=master

GATEWAY=192.168.100.1

[hadoop@master hadoop]$

[hadoop@master hadoop]$ hostname

master

[hadoop@master hadoop]$

[hadoop@master hadoop]$ cat /etc/hosts

127.0.0.1 localhost localdomain.localhost

::1 localhost6.localdomain6 localhost6

192.168.100.11 master

192.168.100.12 slave1

[hadoop@master hadoop]$

The other nodes were modified in the same way. Note that the hostname must be identical in all three of these places:

/etc/sysconfig/network

/etc/hosts

$hostname
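The consistency rule above can be scripted as a final sanity check: take the value reported by `hostname` and make sure it appears as a name in /etc/hosts. A minimal sketch, with the helper name and sample data mine rather than from the post:

```shell
# Hypothetical helper: check that a hostname appears as a whole word
# in the given hosts-file content.
check_hostname_consistency() {
  name="$1"
  hosts_content="$2"
  if printf '%s\n' "$hosts_content" | grep -qwF "$name"; then
    echo "OK: $name found in hosts entries"
  else
    echo "MISMATCH: $name not found in hosts entries"
  fi
}

# Sample data mirroring the corrected configuration above
hosts="192.168.100.11 master
192.168.100.12 slave1"

check_hostname_consistency "master" "$hosts"
check_hostname_consistency "master.oracle.com" "$hosts"   # reproduces the original mismatch
```

In a real deployment you would pass `$(hostname)` and `$(cat /etc/hosts)` instead of the sample strings, and run the check on every node.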


From "ITPUB Blog". Link: http://blog.itpub.net/28254374/viewspace-1059607/. Please cite the source when reprinting; otherwise legal liability may be pursued.

