How to fix a Hadoop jar submission that hangs and never proceeds




Please credit the source when reposting: http://blog.csdn.net/gamer_gyt
Author's Weibo: http://weibo.com/234654758
Github: https://github.com/thinkgamer


A few words up front

This is a really annoying problem. To be honest, in all my earlier time playing with Hadoop, whether pseudo-distributed or fully distributed, I had never paid attention to memory allocation, that is, the memory assigned to a job when it runs. Today I finally ran into it and spent a long time sorting it out.

Error description

When you run a jar, the hang generally happens in one of two places.
The first is that the job never really gets onto the cluster:

[breakpad@master hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
16/09/22 12:12:15 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.162.89:8032
16/09/22 12:12:16 INFO input.FileInputFormat: Total input paths to process : 1
16/09/22 12:12:16 INFO mapreduce.JobSubmitter: number of splits:1
16/09/22 12:12:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474517485267_0001
16/09/22 12:12:17 INFO impl.YarnClientImpl: Submitted application application_1474517485267_0001
16/09/22 12:12:17 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1474517485267_0001/
16/09/22 12:12:17 INFO mapreduce.Job: Running job: job_1474517485267_0001
16/09/22 12:12:25 INFO mapreduce.Job: Job job_1474517485267_0001 running in uber mode : false
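
If you hit this first case, it is worth confirming where the application is stuck before touching any configuration. One way to check (a diagnostic sketch using the stock yarn CLI; the application ID matches the log above) is:

[breakpad@master hadoop]$ bin/yarn application -list -appStates ACCEPTED

An application that sits in the ACCEPTED state means the ResourceManager has taken the job, but the scheduler cannot find a node with enough free memory to launch the ApplicationMaster container.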

The second is that the job is submitted to the cluster but then makes no progress:

[breakpad@master hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
16/09/22 12:12:15 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.162.89:8032
16/09/22 12:12:16 INFO input.FileInputFormat: Total input paths to process : 1
16/09/22 12:12:16 INFO mapreduce.JobSubmitter: number of splits:1
16/09/22 12:12:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474517485267_0001
16/09/22 12:12:17 INFO impl.YarnClientImpl: Submitted application application_1474517485267_0001
16/09/22 12:12:17 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1474517485267_0001/
16/09/22 12:12:17 INFO mapreduce.Job: Running job: job_1474517485267_0001
16/09/22 12:12:25 INFO mapreduce.Job: Job job_1474517485267_0001 running in uber mode : false
16/09/22 12:12:25 INFO mapreduce.Job:  map 0% reduce 0%
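
In this second case the ApplicationMaster has started, but it cannot obtain containers for the map and reduce tasks. Before editing anything, you can check how much memory each NodeManager is advertising (a diagnostic sketch; pass a node ID from the -list output to -status, the one below is only an example):

[breakpad@master hadoop]$ bin/yarn node -list
[breakpad@master hadoop]$ bin/yarn node -status master:45454

The memory capacity reported there corresponds to yarn.nodemanager.resource.memory-mb, which is what we set explicitly below.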

Solution

These two failures have the same root cause: when the jar runs, the nodes do not have enough memory to allocate for it, and no minimum or maximum allocation has been specified.
The official documentation describes three relevant properties in yarn-site.xml:

yarn.nodemanager.resource.memory-mb (default: 8192)
Amount of physical memory, in MB, that can be allocated for containers.

yarn.scheduler.minimum-allocation-mb (default: 1024)
The minimum allocation for every container request at the RM, in MBs. Memory requests lower than this will throw an InvalidResourceRequestException.

yarn.nodemanager.vmem-pmem-ratio (default: 2.1)
Ratio between virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio.
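
To make the sizing concrete with the values used below: with yarn.nodemanager.resource.memory-mb = 4096 and yarn.scheduler.minimum-allocation-mb = 2048, each node can run at most 4096 / 2048 = 2 containers at a time, and a 2048 MB container may use up to 2048 × 2.1 ≈ 4301 MB of virtual memory before YARN kills it.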

So here we add the following configuration to yarn-site.xml on the cluster:

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
</property>

<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
</property>

<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>
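
If jobs still stall at map 0% after this, the per-task memory requests themselves may not fit these limits. They live in mapred-site.xml; the snippet below is a sketch with illustrative values (sized so each task fits into one 2048 MB minimum allocation), not part of the original fix:

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
</property>

<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
</property>

<!-- keep the JVM heap below the container size; roughly 80% is a common rule of thumb -->
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1638m</value>
</property>

<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx1638m</value>
</property>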

Copy the updated yarn-site.xml to every node in the cluster, restart, and run the jar again.
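
Since only the YARN configuration changed, restarting the YARN daemons should be enough. On a layout like the one above, that might look like this (the /output directory must not already exist, so remove it before rerunning):

[breakpad@master hadoop]$ sbin/stop-yarn.sh
[breakpad@master hadoop]$ sbin/start-yarn.sh
[breakpad@master hadoop]$ bin/hdfs dfs -rm -r /output
[breakpad@master hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output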
