FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

2019-12-24T13:27:26,867 WARN [main] ql.Driver: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2019-12-24T13:27:26,867 INFO [main] ql.Driver: WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
2019-12-24T13:27:26,868 INFO [main] ql.Driver: Query ID = whzy_20191224132723_11a6e7e7-4760-4841-b44f-ba1cbcce6027
2019-12-24T13:27:26,868 INFO [main] ql.Driver: Total jobs = 2
2019-12-24T13:27:26,894 INFO [main] ql.Driver: Launching Job 1 out of 2
2019-12-24T13:27:26,895 INFO [main] ql.Driver: Starting task [Stage-1:MAPRED] in serial mode
2019-12-24T13:27:26,953 INFO [main] exec.Utilities: Cache Content Summary for hdfs://mycluster/user/hive/warehouse/dept_partition/month=201710 length: 69 file count: 1 directory count: 1
2019-12-24T13:27:26,953 INFO [main] exec.Utilities: Cache Content Summary for hdfs://mycluster/user/hive/warehouse/dept_partition/month=201709 length: 69 file count: 1 directory count: 1
2019-12-24T13:27:26,953 INFO [main] exec.Utilities: BytesPerReducer=256000000 maxReducers=1009 totalInputFileSize=138
2019-12-24T13:27:26,953 INFO [main] exec.Task: Number of reduce tasks not specified. Estimated from input data size: 1
2019-12-24T13:27:26,954 INFO [main] exec.Task: In order to change the average load for a reducer (in bytes):
2019-12-24T13:27:26,954 INFO [main] exec.Task: set hive.exec.reducers.bytes.per.reducer=<number>
2019-12-24T13:27:26,954 INFO [main] exec.Task: In order to limit the maximum number of reducers:
2019-12-24T13:27:26,954 INFO [main] exec.Task: set hive.exec.reducers.max=<number>
2019-12-24T13:27:26,954 INFO [main] exec.Task: In order to set a constant number of reducers:
2019-12-24T13:27:26,954 INFO [main] exec.Task: set mapreduce.job.reduces=<number>
2019-12-24T13:27:26,954 INFO [main] ql.Context: New scratch dir is hdfs://mycluster/tmp/hive/whzy/ef1eed97-e21c-486c-8714-dd7ec0f362df/hive_2019-12-24_13-27-23_542_7827106325817065633-1
2019-12-24T13:27:26,983 INFO [main] mr.ExecDriver: Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
2019-12-24T13:27:26,985 INFO [main] exec.Utilities: Processing alias $hdt$_0-subquery1:$hdt$_0-subquery1:$hdt$_0-subquery1:$hdt$_0-subquery1:dept_partition
2019-12-24T13:27:26,985 INFO [main] exec.Utilities: Adding input file hdfs://mycluster/user/hive/warehouse/dept_partition/month=201709
2019-12-24T13:27:26,985 INFO [main] exec.Utilities: Content Summary hdfs://mycluster/user/hive/warehouse/dept_partition/month=201709length: 69 num files: 1 num directories: 1
2019-12-24T13:27:26,985 INFO [main] exec.Utilities: Processing alias $hdt$_0-subquery1:$hdt$_0-subquery1:$hdt$_0-subquery2:$hdt$_0-subquery2:dept_partition
2019-12-24T13:27:26,985 INFO [main] exec.Utilities: Adding input file hdfs://mycluster/user/hive/warehouse/dept_partition/month=201710
2019-12-24T13:27:26,985 INFO [main] exec.Utilities: Content Summary hdfs://mycluster/user/hive/warehouse/dept_partition/month=201710length: 69 num files: 1 num directories: 1
2019-12-24T13:27:26,986 INFO [main] ql.Context: New scratch dir is hdfs://mycluster/tmp/hive/whzy/ef1eed97-e21c-486c-8714-dd7ec0f362df/hive_2019-12-24_13-27-23_542_7827106325817065633-1
2019-12-24T13:27:27,079 INFO [main] exec.SerializationUtilities: Serializing MapWork using kryo
2019-12-24T13:27:27,401 INFO [ef1eed97-e21c-486c-8714-dd7ec0f362df main] Configuration.deprecation: mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
2019-12-24T13:27:27,408 INFO [main] exec.SerializationUtilities: Serializing ReduceWork using kryo
2019-12-24T13:27:27,888 INFO [main] exec.Utilities: PLAN PATH = hdfs://mycluster/tmp/hive/whzy/ef1eed97-e21c-486c-8714-dd7ec0f362df/hive_2019-12-24_13-27-23_542_7827106325817065633-1/-mr-10006/a701b508-bbf8-409c-a041-4dd5f8656a3c/map.xml
2019-12-24T13:27:27,888 INFO [main] exec.Utilities: PLAN PATH = hdfs://mycluster/tmp/hive/whzy/ef1eed97-e21c-486c-8714-dd7ec0f362df/hive_2019-12-24_13-27-23_542_7827106325817065633-1/-mr-10006/a701b508-bbf8-409c-a041-4dd5f8656a3c/reduce.xml
2019-12-24T13:27:27,932 INFO [main] client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2019-12-24T13:27:28,243 WARN [main] mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2019-12-24T13:27:28,815 INFO [main] exec.Utilities: PLAN PATH = hdfs://mycluster/tmp/hive/whzy/ef1eed97-e21c-486c-8714-dd7ec0f362df/hive_2019-12-24_13-27-23_542_7827106325817065633-1/-mr-10006/a701b508-bbf8-409c-a041-4dd5f8656a3c/map.xml
2019-12-24T13:27:28,815 INFO [main] io.CombineHiveInputFormat: Total number of paths: 2, launching 1 threads to check non-combinable ones.
2019-12-24T13:27:28,829 INFO [main] io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://mycluster/user/hive/warehouse/dept_partition/month=201709; using filter path hdfs://mycluster/user/hive/warehouse/dept_partition/month=201709
2019-12-24T13:27:28,830 INFO [main] io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://mycluster/user/hive/warehouse/dept_partition/month=201710; using filter path hdfs://mycluster/user/hive/warehouse/dept_partition/month=201710
2019-12-24T13:27:28,843 INFO [main] input.FileInputFormat: Total input paths to process : 2
2019-12-24T13:27:28,856 INFO [main] input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 3, size left: 0
2019-12-24T13:27:28,856 INFO [main] input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 3, size left: 0
2019-12-24T13:27:28,857 INFO [main] io.CombineHiveInputFormat: number of splits 2
2019-12-24T13:27:28,857 INFO [main] io.CombineHiveInputFormat: Number of all splits 2
2019-12-24T13:27:29,055 INFO [main] mapreduce.JobSubmitter: number of splits:2
2019-12-24T13:27:29,160 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1577001488562_0009
2019-12-24T13:27:29,551 INFO [main] impl.YarnClientImpl: Submitted application application_1577001488562_0009
2019-12-24T13:27:29,596 INFO [main] mapreduce.Job: The url to track the job: http://slave1:8088/proxy/application_1577001488562_0009/
2019-12-24T13:27:29,597 INFO [main] exec.Task: Starting Job = job_1577001488562_0009, Tracking URL = http://slave1:8088/proxy/application_1577001488562_0009/
2019-12-24T13:27:29,597 INFO [main] exec.Task: Kill Command = /home/whzy/opt/hadoop-2.7.3/bin/hadoop job -kill job_1577001488562_0009
2019-12-24T13:27:38,137 INFO [main] exec.Task: Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2019-12-24T13:27:38,241 WARN [main] mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2019-12-24T13:27:38,242 INFO [main] exec.Task: 2019-12-24 13:27:38,236 Stage-1 map = 0%, reduce = 0%
2019-12-24T13:27:45,923 INFO [main] exec.Task: 2019-12-24 13:27:45,923 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 1.04 sec
2019-12-24T13:28:14,335 INFO [main] exec.Task: 2019-12-24 13:28:14,334 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 1.04 sec
2019-12-24T13:28:15,395 INFO [main] exec.Task: MapReduce Total cumulative CPU time: 1 seconds 40 msec
2019-12-24T13:28:15,420 ERROR [main] exec.Task: Ended Job = job_1577001488562_0009 with errors
2019-12-24T13:28:15,423 INFO [Thread-27] Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2019-12-24T13:28:15,423 ERROR [Thread-27] exec.Task: Error during job, obtaining debugging information...
2019-12-24T13:28:15,433 ERROR [Thread-28] exec.Task: Examining task ID: task_1577001488562_0009_m_000001 (and more) from job job_1577001488562_0009
2019-12-24T13:28:15,434 WARN [Thread-28] shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2019-12-24T13:28:15,434 WARN [Thread-28] shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2019-12-24T13:28:15,473 WARN [Thread-28] shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2019-12-24T13:28:15,484 WARN [Thread-28] shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2019-12-24T13:28:15,496 WARN [Thread-28] shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2019-12-24T13:28:15,507 WARN [Thread-28] shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2019-12-24T13:28:15,516 WARN [Thread-28] shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2019-12-24T13:28:15,535 ERROR [Thread-27] exec.Task:
Task with the most failures(4):

Task ID:
task_1577001488562_0009_m_000000

URL:
http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1577001488562_0009&tipid=task_1577001488562_0009_m_000000

Diagnostic Messages for this Task:
Container [pid=30726,containerID=container_e03_1577001488562_0009_01_000008] is running beyond virtual memory limits. Current usage: 144.0 MB of 1 GB physical memory used; 3.0 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_e03_1577001488562_0009_01_000008 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 30726 30717 30726 30726 (bash) 0 0 227934208 868 /bin/bash -c /home/whzy/opt/jdk1.8.0_231//bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx1024m -Djava.io.tmpdir=/home/whzy/opt/hadoop-2.7.3/tmp/nm-local-dir/usercache/whzy/appcache/application_1577001488562_0009/container_e03_1577001488562_0009_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/whzy/opt/hadoop-2.7.3/logs/userlogs/application_1577001488562_0009/container_e03_1577001488562_0009_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.7.146 44985 attempt_1577001488562_0009_m_000000_3 3298534883336 1>/home/whzy/opt/hadoop-2.7.3/logs/userlogs/application_1577001488562_0009/container_e03_1577001488562_0009_01_000008/stdout 2>/home/whzy/opt/hadoop-2.7.3/logs/userlogs/application_1577001488562_0009/container_e03_1577001488562_0009_01_000008/stderr
|- 30753 30726 30726 30726 (java) 310 10 2971377664 36004 /home/whzy/opt/jdk1.8.0_231//bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx1024m -Djava.io.tmpdir=/home/whzy/opt/hadoop-2.7.3/tmp/nm-local-dir/usercache/whzy/appcache/application_1577001488562_0009/container_e03_1577001488562_0009_01_000008/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/whzy/opt/hadoop-2.7.3/logs/userlogs/application_1577001488562_0009/container_e03_1577001488562_0009_01_000008 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.7.146 44985 attempt_1577001488562_0009_m_000000_3 3298534883336

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

Cause: insufficient memory. As the diagnostic above shows, the map container was allocated 1 GB of physical memory; with YARN's default yarn.nodemanager.vmem-pmem-ratio of 2.1, that caps its virtual memory at 1 GB × 2.1 = 2.1 GB, while the JVM process tree actually used 3.0 GB, so the NodeManager killed the container (exit code 143).

Solution

Before running the Hive statement, add:

set mapreduce.map.memory.mb=1025;  -- any value above 1024 is rounded up by YARN to the next multiple of its 1024 MB minimum allocation, so the container actually gets 2048 MB
set mapreduce.reduce.memory.mb=1025;

With 2048 MB of physical memory, the virtual-memory cap rises to 2048 MB × 2.1 ≈ 4.3 GB, comfortably above the 3.0 GB the task needed.
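For reference, a minimal sketch of a full session with the settings applied first. The failing statement itself is not shown in the log, so the UNION query below is illustrative only, reconstructed from the dept_partition month=201709 and month=201710 aliases that appear above; the deptno column is assumed, not taken from the log:

set mapreduce.map.memory.mb=1025;
set mapreduce.reduce.memory.mb=1025;

-- illustrative query over the two partitions seen in the log
select deptno, count(*)
from (
  select * from dept_partition where month='201709'
  union all
  select * from dept_partition where month='201710'
) t
group by deptno;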

Alternatively, set the child JVM heap via the property below in mapred-site.xml (its default is -Xmx200m, i.e. 200 MB):

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>

Note that mapred.child.java.opts is the legacy property name; it still applies in Hadoop 2.x, but mapreduce.map.java.opts and mapreduce.reduce.java.opts take precedence for map and reduce tasks respectively.
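If you would rather not raise per-job memory, another commonly used workaround (an assumption about what you can change on your cluster, not something the log prescribes) is to relax the NodeManager's virtual-memory enforcement in yarn-site.xml on every node and then restart the NodeManagers. Both properties below are standard YARN settings:

<property>
  <!-- default is 2.1; with a 1 GB container this would allow ~4 GB of virtual memory -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
<property>
  <!-- or disable the virtual-memory check entirely -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>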