hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar hangs and makes no further progress
1. Error description
When running the example jar, the job hangs. It usually gets stuck at one of two points:
- (1) In the first case, the job is never scheduled on the cluster: the log stops right after the "running in uber mode : false" line, and no map/reduce progress is ever printed
[breakpad@master hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
16/09/22 12:12:15 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.162.89:8032
16/09/22 12:12:16 INFO input.FileInputFormat: Total input paths to process : 1
16/09/22 12:12:16 INFO mapreduce.JobSubmitter: number of splits:1
16/09/22 12:12:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474517485267_0001
16/09/22 12:12:17 INFO impl.YarnClientImpl: Submitted application application_1474517485267_0001
16/09/22 12:12:17 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1474517485267_0001/
16/09/22 12:12:17 INFO mapreduce.Job: Running job: job_1474517485267_0001
16/09/22 12:12:25 INFO mapreduce.Job: Job job_1474517485267_0001 running in uber mode : false
- (2) In the second case, the job is submitted to the cluster but then makes no further progress: the log stops at "map 0% reduce 0%"
[breakpad@master hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
16/09/22 12:12:15 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.162.89:8032
16/09/22 12:12:16 INFO input.FileInputFormat: Total input paths to process : 1
16/09/22 12:12:16 INFO mapreduce.JobSubmitter: number of splits:1
16/09/22 12:12:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474517485267_0001
16/09/22 12:12:17 INFO impl.YarnClientImpl: Submitted application application_1474517485267_0001
16/09/22 12:12:17 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1474517485267_0001/
16/09/22 12:12:17 INFO mapreduce.Job: Running job: job_1474517485267_0001
16/09/22 12:12:25 INFO mapreduce.Job: Job job_1474517485267_0001 running in uber mode : false
16/09/22 12:12:25 INFO mapreduce.Job: map 0% reduce 0%
2. Problem explanation
Both symptoms have the same root cause: when the job runs, the memory available to the NodeManager is insufficient for the requested containers, and no minimum/maximum allocation sizes have been configured, so the scheduler can never place the job's containers and the job waits forever.
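The scheduling arithmetic behind this can be sketched as follows. This is an illustrative model, not YARN's actual code: the function names are made up, but the rule they implement is the documented one — a container request is rounded up to a multiple of yarn.scheduler.minimum-allocation-mb, and a NodeManager can only host containers that fit within yarn.nodemanager.resource.memory-mb.

```python
import math

# Illustrative sketch (hypothetical helper names) of how the YARN scheduler
# normalizes a container memory request against the configured limits.

def normalize_request(requested_mb: int, minimum_allocation_mb: int) -> int:
    """Round a container request up to a multiple of the minimum allocation."""
    return max(minimum_allocation_mb,
               math.ceil(requested_mb / minimum_allocation_mb) * minimum_allocation_mb)

def fits_on_node(container_mb: int, nodemanager_memory_mb: int) -> bool:
    """A container can only be scheduled on a node with at least that much memory."""
    return container_mb <= nodemanager_memory_mb

# With the values from the fix in section 3: a 1024 MB map-task request is
# rounded up to the 2048 MB minimum, which fits on a 4096 MB NodeManager.
container = normalize_request(1024, 2048)
print(container, fits_on_node(container, 4096))  # -> 2048 True
```

If the NodeManager's memory is left too small (or unset), `fits_on_node` is never true for the AM or task containers, which is exactly the hang observed above.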
3. Solution
- Add the following configuration to yarn-site.xml on the cluster nodes:
<!-- Total memory (MB) the NodeManager may hand out to containers -->
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
</property>
<!-- Minimum container size (MB); requests are rounded up to a multiple of this -->
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
</property>
<!-- Allowed ratio of virtual memory to physical memory per container -->
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>
- Restart the cluster so YARN picks up the new settings, then run the jar again
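The yarn.nodemanager.vmem-pmem-ratio setting above deserves a note: it bounds how much virtual memory a container may use relative to its granted physical memory, and containers that exceed the bound are killed by the NodeManager. A minimal sketch of that check, with hypothetical helper names (not YARN's API):

```python
# Illustrative sketch of the NodeManager's virtual-memory check controlled
# by yarn.nodemanager.vmem-pmem-ratio (function names are made up).

def vmem_limit_mb(pmem_mb: int, vmem_pmem_ratio: float = 2.1) -> float:
    """Virtual-memory ceiling for a container granted pmem_mb of physical memory."""
    return pmem_mb * vmem_pmem_ratio

def container_killed(vmem_used_mb: float, pmem_mb: int, ratio: float = 2.1) -> bool:
    """The NodeManager kills a container whose vmem usage exceeds the ceiling."""
    return vmem_used_mb > vmem_limit_mb(pmem_mb, ratio)

# With the configured ratio of 2.1, a 2048 MB container may use up to
# 2048 * 2.1 = 4300.8 MB of virtual memory before being killed.
print(vmem_limit_mb(2048))           # -> 4300.8
print(container_killed(5000, 2048))  # -> True
```

Raising the ratio (or the container memory) is the usual remedy when jobs die with "running beyond virtual memory limits" errors, which often accompany the hangs described here.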