FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
or: running beyond virtual memory limits. Current usage: 398.2 MB of 1 GB physical memory used; 3.9 GB of 2.1 GB virtual memory used. Killing container.
Fix: temporarily set the following parameters in the Hive CLI session:
set mapred.max.split.size=256000000; (if the error persists, reduce the per-map-task split size further to increase the number of tasks)
set mapreduce.map.memory.mb=4096; (memory allocated to each map container)
set mapreduce.reduce.memory.mb=8192; (memory allocated to each reduce container)
set yarn.scheduler.minimum-allocation-mb=2048; (minimum allocation a single container may request; default 1 GB)
set mapred.child.java.opts=-Xmx3072m;
(heap size of the JVM that runs each map or reduce task, i.e. the most physical memory the task itself can use; default 200 MB. Because each container launches a JVM for its map or reduce task, the heap must be set smaller than mapreduce.map.memory.mb / mapreduce.reduce.memory.mb; a common rule of thumb is 3/4 of the container memory.)
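The 3/4 rule of thumb can be sketched as a small calculation (Python; the helper name is my own, not a Hadoop API):

```python
# Derive a -Xmx heap value (MB) from a container size using the 3/4
# rule of thumb, leaving headroom for non-heap JVM memory (thread
# stacks, metaspace, direct buffers). Helper name is illustrative only.
def heap_for_container(container_mb: int, fraction: float = 0.75) -> int:
    return int(container_mb * fraction)

print(heap_for_container(4096))  # map container 4096 MB    -> 3072
print(heap_for_container(8192))  # reduce container 8192 MB -> 6144
```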
yarn.nodemanager.resource.memory-mb caps how much of the machine's memory a node may use for containers; default 8192 MB. By default the ResourceManager also allows an AM to request containers of at most 8192 MB.
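That node-level cap also bounds how many containers can run concurrently per node; a quick sketch using the default cap and the map-container size requested above:

```python
# With the default node budget and the map-container size set above,
# at most two map containers fit on one NodeManager at a time.
node_memory_mb = 8192       # yarn.nodemanager.resource.memory-mb (default)
map_container_mb = 4096     # mapreduce.map.memory.mb set above

print(node_memory_mb // map_container_mb)  # -> 2
```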
The default yarn.nodemanager.vmem-pmem-ratio is 2.1, meaning that if a map or reduce container's virtual memory usage exceeds 2.1 times its mapreduce.map.memory.mb / mapreduce.reduce.memory.mb, the NodeManager kills the container.
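The arithmetic behind the error message at the top can be checked directly (a sketch in Python, not YARN source code):

```python
# Reproduce the NodeManager's virtual-memory check with the numbers
# from the error message above (illustration only).
vmem_pmem_ratio = 2.1       # yarn.nodemanager.vmem-pmem-ratio (default)
pmem_limit_gb = 1.0         # container's physical limit: "1 GB physical memory"
vmem_used_gb = 3.9          # reported: "3.9 GB of 2.1 GB virtual memory used"

vmem_limit_gb = pmem_limit_gb * vmem_pmem_ratio  # 1.0 * 2.1 = 2.1 GB
if vmem_used_gb > vmem_limit_gb:
    print("Killing container.")  # 3.9 GB > 2.1 GB, so the NM kills it
```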
set mapred.max.split.size=256000000;
set mapreduce.map.memory.mb=4096;
set mapreduce.reduce.memory.mb=8192;
set yarn.scheduler.minimum-allocation-mb=2048;
set mapred.child.java.opts=-Xmx3072m;
If the above does not resolve the problem, lower the split size further:
set mapred.max.split.size=128000000;
set mapreduce.map.memory.mb=4096;
set mapreduce.reduce.memory.mb=8192;
set yarn.scheduler.minimum-allocation-mb=2048;
set mapred.child.java.opts=-Xmx4096m;
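The same overrides can also be passed when launching Hive non-interactively via the CLI's --hiveconf flag (a sketch; the query is a placeholder, substitute your failing statement and values):

```shell
hive --hiveconf mapreduce.map.memory.mb=4096 \
     --hiveconf mapreduce.reduce.memory.mb=8192 \
     --hiveconf mapred.child.java.opts=-Xmx3072m \
     -e "SELECT ..."   # placeholder: your failing query here
```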