Error description:
A SQL join across three tables fails at runtime.
Running the query with Hive reports the following error:
Diagnostic Messages for this Task:
Container [pid=27756,containerID=container_1460459369308_5864_01_000570] is running beyond physical memory limits. Current usage: 4.2 GB of 4 GB physical memory used; 5.0 GB of 16.8 GB virtual memory used. Killing container.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Running the same query with Spark reports the following error:
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 369 in stage 1353.0 failed 4 times, most recent failure: Lost task 369.3 in stage 1353.0 (TID 212351, cnsz033139.app.paic.com.cn): ExecutorLostFailure (executor 689 exited caused by one of the running tasks) Reason: Container marked as failed: container_1460459369308_2154_01_000906 on host: cnsz033139.app.paic.com.cn. Exit status: 143. Diagnostics: Container

When the three-table join is executed, the task fails because it hits a memory limit. The error messages show that, under both Hive and Spark, the container exceeded its physical memory limit and was killed with exit code 143. The likely cause is insufficient memory in the reduce stage, where the join's shuffled data is aggregated. The fix is to review and adjust the memory configuration for the YARN containers, mappers, and reducers so that actual usage stays within both the physical and virtual memory limits.
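For the Hive (MapReduce) case, one common remedy is to raise the reducer container size and its JVM heap per session. The values below are illustrative, not taken from the original post; they must stay within the cluster's yarn.scheduler.maximum-allocation-mb, and the heap (-Xmx) is usually kept around 80% of the container size so that JVM overhead does not push the container past its limit:

```sql
-- Raise reducer container memory and its JVM heap (illustrative values).
set mapreduce.reduce.memory.mb=8192;
set mapreduce.reduce.java.opts=-Xmx6553m;

-- If mappers also exceed their limit, raise them as well.
set mapreduce.map.memory.mb=4096;
set mapreduce.map.java.opts=-Xmx3276m;
```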
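For the Spark case, the 143 kill counts the executor's JVM heap plus its off-heap overhead against the container limit, so both can be raised at submit time. This is a sketch with assumed values (and assumes a Spark 1.x-era cluster, matching the log, where the overhead setting is spark.yarn.executor.memoryOverhead):

```shell
# Give each executor a larger heap and more off-heap overhead (illustrative values).
spark-submit \
  --executor-memory 6g \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  your_job.py
```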
