Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

The following error appeared while executing a Hive SQL statement:

[2023-12-18 12:26:40] [08S01][3] Error while processing statement: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

The full error output is shown below.

Case 1:

No Stats for demodata@order_detail, Columns: create_time, user_id, province_id, total_amount, product_id, product_num, id
No Stats for demodata@product_info, Columns: category_id, price, id, product_name
No Stats for demodata@province_info, Columns: id, province_name
Query ID = zhangflink_20231218122203_77d8d937-f362-4c80-ac88-14d68ff051e3
Total jobs = 1
2023-12-18 12:22:41,429 Log4j2-TF-2-AsyncLogger[AsyncContext@d041cf]-1 ERROR An exception occurred processing Appender FA org.apache.logging.log4j.core.appender.AppenderLoggingException: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:165)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:134)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:125)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:89)
	at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:542)
	at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:500)
	at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:483)
	at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:471)
	at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:98)
	at org.apache.logging.log4j.core.async.AsyncLogger.actualAsyncLog(AsyncLogger.java:488)
	at org.apache.logging.log4j.core.async.RingBufferLogEvent.execute(RingBufferLogEvent.java:156)
	at org.apache.logging.log4j.core.async.RingBufferLogEventHandler.onEvent(RingBufferLogEventHandler.java:51)
	at org.apache.logging.log4j.core.async.RingBufferLogEventHandler.onEvent(RingBufferLogEventHandler.java:29)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:129)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.Arrays.copyOf(Arrays.java:3332)
	at java.lang.String.concat(String.java:2032)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:365)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:411)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.logging.log4j.core.async.RingBufferLogEvent.getThrownProxy(RingBufferLogEvent.java:335)
	at org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:63)
	at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:44)
	at org.apache.logging.log4j.core.layout.PatternLayout$PatternFormatterPatternSerializer.toSerializable(PatternLayout.java:385)
	at org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:241)
	at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:226)
	at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:60)
	at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:197)
	at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:190)
	at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:181)
	at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:161)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:134)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:125)
	at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:89)
	at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:542)
	at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:500)
	at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:483)
	at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:471)
	at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:98)
	at org.apache.logging.log4j.core.async.AsyncLogger.actualAsyncLog(AsyncLogger.java:488)
	at org.apache.logging.log4j.core.async.RingBufferLogEvent.execute(RingBufferLogEvent.java:156)
	at org.apache.logging.log4j.core.async.RingBufferLogEventHandler.onEvent(RingBufferLogEventHandler.java:51)

Execution failed with exit status: 1
Obtaining error information

Task failed!
Task ID:
  Stage-7

Logs:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

Case 2:
Here the map tasks have already completed when the error is raised, which indicates the failure is caused by insufficient memory on the reduce side.

Query ID = zhangflink_20231218130620_4c7f98e6-cc71-4a3d-831e-81e833dc1777
Total jobs = 2
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1701688230022_0080, Tracking URL = http://flinkv2:8088/proxy/application_1701688230022_0080/
Kill Command = /opt/software/hadoop/bin/mapred job  -kill job_1701688230022_0080
Hadoop job information for Stage-6: number of mappers: 5; number of reducers: 0
2023-12-18 13:06:37,339 Stage-6 map = 0%,  reduce = 0%
2023-12-18 13:07:07,842 Stage-6 map = 10%,  reduce = 0%, Cumulative CPU 83.39 sec
2023-12-18 13:07:11,034 Stage-6 map = 30%,  reduce = 0%, Cumulative CPU 96.59 sec
2023-12-18 13:07:14,340 Stage-6 map = 40%,  reduce = 0%, Cumulative CPU 104.47 sec
2023-12-18 13:07:21,930 Stage-6 map = 60%,  reduce = 0%, Cumulative CPU 129.75 sec
2023-12-18 13:07:23,017 Stage-6 map = 70%,  reduce = 0%, Cumulative CPU 132.26 sec
2023-12-18 13:07:28,398 Stage-6 map = 80%,  reduce = 0%, Cumulative CPU 147.12 sec
2023-12-18 13:07:58,541 Stage-6 map = 70%,  reduce = 0%, Cumulative CPU 140.98 sec
2023-12-18 13:08:02,903 Stage-6 map = 80%,  reduce = 0%, Cumulative CPU 142.96 sec
2023-12-18 13:08:09,323 Stage-6 map = 90%,  reduce = 0%, Cumulative CPU 152.76 sec
2023-12-18 13:08:25,271 Stage-6 map = 100%,  reduce = 0%, Cumulative CPU 165.87 sec
MapReduce Total cumulative CPU time: 2 minutes 45 seconds 870 msec
Ended Job = job_1701688230022_0080
Execution failed with exit status: 3
Obtaining error information

Task failed!
Task ID:
  Stage-7

Logs:

FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
MapReduce Jobs Launched: 
Stage-Stage-6: Map: 5   Cumulative CPU: 165.87 sec   HDFS Read: 1176096460 HDFS Write: 1851189912 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 45 seconds 870 msec

Cause:

mapreduce.map.memory.mb controls the amount of container memory a single Map Task requests; its default value is 1024 MB.
The value must stay within the range defined by yarn.scheduler.minimum-allocation-mb and yarn.scheduler.maximum-allocation-mb. In this case, the memory needed by the computation exceeded the configured value: the first case is caused by insufficient memory on the map side, the second by insufficient memory on the reduce side.
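
For reference, the YARN limits mentioned above live in yarn-site.xml. The snippet below is only an illustrative sketch with assumed values (1024 MB minimum, 8192 MB maximum, which are YARN's defaults); check the actual cluster configuration before relying on them:

<!-- yarn-site.xml: per-container allocation bounds that mapreduce.*.memory.mb must respect (values are assumptions) -->
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
</property>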

Solutions:

(1) Change how the SQL is executed so that less memory is needed on the node:

Disable Hive's map join, which is enabled by default:

set hive.auto.convert.join=false;

When Hive converts a join into a map join, the small table is loaded into memory on the local node; if that memory is insufficient, this type of error is thrown. A per-session sketch follows below.
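
As a minimal sketch of the per-session fix, assuming the query joins the three tables listed in the "No Stats" lines of the log above (the join keys and selected columns are assumptions, not the original query):

-- Turn off automatic map-join conversion for this session only, so the join
-- runs as a regular shuffle join instead of loading the small tables into
-- the local task's memory.
set hive.auto.convert.join=false;

select od.id,
       od.user_id,
       od.total_amount,
       pi.product_name,
       pv.province_name
from demodata.order_detail od
join demodata.product_info  pi on od.product_id  = pi.id
join demodata.province_info pv on od.province_id = pv.id;

The trade-off is that a shuffle join is slower than a map join, so this option is best when the memory settings cannot be raised.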

(2) Increase the container memory requested by the Map and Reduce tasks:

For a temporary test, or when only a few statements are affected, the settings can be applied directly in the session before the SQL is executed.

Increase the map-side memory:

set mapreduce.map.memory.mb=2048;

Increase the reduce-side memory:

set mapreduce.reduce.memory.mb=2304;
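
Note that these properties size the YARN container, not the task's JVM heap. As an addition beyond the original post, many clusters also raise the heap via the companion properties mapreduce.map.java.opts and mapreduce.reduce.java.opts (commonly around 80% of the container size); a hedged sketch:

-- Container sizes (from the post above):
set mapreduce.map.memory.mb=2048;
set mapreduce.reduce.memory.mb=2304;
-- Companion JVM heap sizes, roughly 80% of the container (assumed values, not from the original post):
set mapreduce.map.java.opts=-Xmx1638m;
set mapreduce.reduce.java.opts=-Xmx1843m;
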
The settings can also be made permanent in the configuration files.

Configure the following properties in mapred-site.xml to set the container memory for the Map and Reduce tasks:

Map-side memory:

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
</property>

Reduce-side memory:

<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
</property>
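
Mirroring the per-session sketch above, the JVM heap companions can also be set permanently in mapred-site.xml; the property names are standard MapReduce settings, but the values below are only illustrative assumptions:

<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1638m</value>  <!-- assumed: ~80% of mapreduce.map.memory.mb (2048) -->
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx3276m</value>  <!-- assumed: ~80% of mapreduce.reduce.memory.mb (4096) -->
</property>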

The changes take effect after Hadoop is restarted.
