1. Troubleshooting steps: opened /tmp/root/hive.log to check the logs, but found nothing useful there.
2. Started the MapReduce JobHistory server: $ sbin/mr-jobhistory-daemon.sh start historyserver
3. The diagnostics for jobhistory/job/job_1526113286554_0004 show:

   REDUCE capability required is more than the supported max container capability in the cluster. Killing the Job. reduceResourceRequest: <memory:16384, vCores:1> maxContainerCapability:<memory:8192, vCores:32>
   Job received Kill while in RUNNING state.
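The numbers in the diagnostic explain the kill: the reduce task requested 16384 MB, while the cluster's maximum container size was 8192 MB, so YARN rejected the job outright. A minimal fix sketch (the values below are illustrative, not taken from the original cluster) is either to bring the reduce request under the cap, or to raise the cap above the request:

```xml
<!-- mapred-site.xml: shrink the reduce container request below the cap -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>

<!-- or, yarn-site.xml: raise the per-container cap above the request -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>16384</value>
</property>
```

Either change alone removes the "capability required is more than the supported max" condition.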
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>10000</value>
</property>
This setting is the upper limit on memory that a single container may request. Could it be that one container asks for too much? As long as the request stays under 10 G, YARN will not intervene, but the allocation may still exceed the machine's physical memory, at which point the process gets killed by the OOM killer.
However, mapred-site.xml also contains the following settings:
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Djava.net.preferIPv4Stack=true -Xmx100m</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Djava.net.preferIPv4Stack=true -Xmx100m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Djava.net.preferIPv4Stack=true -Xmx100m</value>
</property>
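Note that -Xmx only caps the JVM heap inside a container, while mapreduce.*.memory.mb sets the size of the container that YARN allocates. The 16384 MB in the diagnostic therefore comes from the container request (mapreduce.reduce.memory.mb, or a Hive-side override), not from these 100 MB heap settings. A common convention, assumed here rather than stated anywhere above, is to set -Xmx to roughly 80% of the container size so that heap plus JVM overhead fits inside the container:

```xml
<!-- illustrative pairing: a 4096 MB container with a ~3276 MB heap (≈80%) -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Djava.net.preferIPv4Stack=true -Xmx3276m</value>
</property>
```

Keeping these two values consistent avoids both failure modes discussed above: a container request YARN refuses, and a heap that outgrows its container and gets killed.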