Hadoop hangs at "Running job" (map 0% reduce 0%) when computing pi or running other MapReduce jobs

Copyright notice: this is an original article by 柒晓白 (邹涛); reproduction without the author's permission is prohibited. https://blog.csdn.net/ITBigGod/article/details/79951388

1. The MapReduce job hangs at "Running job"

I: The symptom looks like this:

Starting Job
16/06/30 01:15:34 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.10.50:8032
16/06/30 01:15:35 INFO input.FileInputFormat: Total input paths to process : 2
16/06/30 01:15:35 INFO mapreduce.JobSubmitter: number of splits:2
16/06/30 01:15:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1467220503311_0001
16/06/30 01:15:35 INFO impl.YarnClientImpl: Submitted application application_1467220503311_0001
16/06/30 01:15:35 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1467220503311_0001/
16/06/30 01:15:35 INFO mapreduce.Job: Running job: job_1467220503311_0001


2. The same problem appears when running wordcount or computing pi

The console shows:
Map reduce job getting stuck at map 0% reduce 0%

~/tmp$ hadoop jar wordcount.jar WordCount /testhistory /outputtest/test
Warning: $HADOOP_HOME is deprecated.

13/08/29 16:12:34 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/08/29 16:12:35 INFO input.FileInputFormat: Total input paths to process : 3
13/08/29 16:12:35 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/08/29 16:12:35 WARN snappy.LoadSnappy: Snappy native library not loaded
13/08/29 16:12:35 INFO mapred.JobClient: Running job: job_201308291153_0015
13/08/29 16:12:36 INFO mapred.JobClient:  map 0% reduce 0%

II: Analysis and solutions

1. The disks on the cluster nodes may be (nearly) full. YARN's NodeManager disk health checker marks a disk as bad once its utilization crosses a threshold (90% by default), after which no containers are scheduled on that node, so the job just sits at map 0% reduce 0%.
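A quick way to rule the disk problem in or out (a sketch; the `hdfs` command assumes a running cluster and is shown commented out):

```shell
# Local disk usage on this node. YARN stops scheduling containers on a disk
# once utilization exceeds the NodeManager health-checker threshold
# (yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage,
# 90% by default), so check the Use% column on every node.
df -h /

# On the NameNode, check remaining HDFS capacity across the whole cluster
# (requires a running cluster, so it is not executed here):
#   hdfs dfsadmin -report | grep -E 'DFS Used%|DFS Remaining'
```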

2. The YARN configuration (yarn-site.xml) is incomplete. Before the fix:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

Add the missing properties (or correct the existing entries) as shown below. The key one is yarn.resourcemanager.hostname: without it, the NodeManagers on the worker nodes try to register with the default resource-tracker address (0.0.0.0:8031) instead of the master, so the ResourceManager accepts the job but has no active nodes to run it on:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
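Typos like the misspelled root element above and missing properties are easy to overlook by eye. As a sanity check, a short script can parse yarn-site.xml and report which required properties are absent (a sketch; the required-property set here is just the two keys this fix depends on):

```python
import xml.etree.ElementTree as ET

# The two yarn-site.xml keys this fix depends on (an assumption for this check;
# your cluster may require more).
REQUIRED = {
    "yarn.resourcemanager.hostname",
    "yarn.nodemanager.aux-services",
}

def missing_yarn_properties(xml_text):
    """Return the required property names absent from a yarn-site.xml document."""
    root = ET.fromstring(xml_text)
    present = {p.findtext("name") for p in root.iter("property")}
    return sorted(REQUIRED - present)

# The "before" config from this article lacks the ResourceManager hostname:
before = """<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>"""

print(missing_yarn_properties(before))  # ['yarn.resourcemanager.hostname']
```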

Stop the cluster with stop-all.sh, start it again with start-all.sh so the new configuration takes effect, and rerun the job.
