Hive task execution fails after migrating the Hive Metastore Server

The job's output was as follows:

Query ID = root_20190717154242_27560818-a11a-45fa-9ec2-1c524a237169
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 5
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/parquet-format-2.1.0-cdh5.14.2.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/parquet-hadoop-bundle-1.5.0-cdh5.14.2.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/parquet-pig-bundle-1.5.0-cdh5.14.2.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hive-exec-1.1.0-cdh5.14.2.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/jars/hive-jdbc-1.1.0-cdh5.14.2-standalone.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [shaded.parquet.org.slf4j.helpers.NOPLoggerFactory]
2019-07-17 15:42:45,703 Stage-1 map = 0%,  reduce = 0%
2019-07-17 15:42:51,150 Stage-1 map = 100%,  reduce = 0%
Ended Job = job_local50667707_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1:  HDFS Read: 762462989 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
Jul 17, 2019 3:42:45 PM WARNING: parquet.hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
Jul 17, 2019 3:42:45 PM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 199905 records.
Jul 17, 2019 3:42:45 PM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Jul 17, 2019 3:42:46 PM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 627 ms. row count = 199905
Jul 17, 2019 3:42:47 PM WARNING: parquet.hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
Jul 17, 2019 3:42:47 PM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 199314 records.
Jul 17, 2019 3:42:47 PM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Jul 17, 2019 3:42:47 PM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 582 ms. row count = 199314
Jul 17, 2019 3:42:47 PM WARNING: parquet.hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
Jul 17, 2019 3:42:48 PM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 199310 records.
Jul 17, 2019 3:42:48 PM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Jul 17, 2019 3:42:48 PM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 574 ms. row count = 199310
Jul 17, 2019 3:42:49 PM WARNING: parquet.hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
Jul 17, 2019 3:42:49 PM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 199167 records.
Jul 17, 2019 3:42:49 PM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Jul 17, 2019 3:42:49 PM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 578 ms. row count = 199167

At first I suspected a problem with the script itself, but running the same script in a Hive client produced no errors.
A closer look at the output turned up this line:

Job running in-process (local Hadoop)

The job ran locally on that node; no MapReduce job was submitted to YARN, and the YARN web UI indeed had no record of it.
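A quick way to confirm this from the node itself is to ask Hive which MapReduce framework it resolved from the client configuration. A minimal check, assuming a CDH 5.x node like the one in the log (the property names are standard Hadoop/Hive ones):

  # ask Hive for the effective execution framework; a value of "local" here
  # matches the "Job running in-process (local Hadoop)" line above
  hive -e 'set mapreduce.framework.name;'
  # also worth checking: if true, Hive may switch small jobs to local mode on its own
  hive -e 'set hive.exec.mode.local.auto;'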
Solution: add the Hive Gateway role to the node. Without it, jobs cannot be submitted to the cluster for execution and run locally instead, where they are limited by the node's memory.
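The Gateway role itself runs no daemon; adding it tells Cloudera Manager to deploy the client configuration (hive-site.xml, mapred-site.xml, and the rest) onto that node, which is what points Hive at YARN. A sanity check after deploying client configuration, assuming CDH's default client-config paths:

  # on the problem node; adjust paths if your alternatives point elsewhere
  ls /etc/hive/conf /etc/hadoop/conf
  grep -A1 mapreduce.framework.name /etc/hadoop/conf/mapred-site.xml
  # expect <value>yarn</value>; a missing file or a value of "local" means the
  # client configs were never deployed, so Hive falls back to in-process execution

Note that in Cloudera Manager the files only land on the node after running "Deploy Client Configuration" for the cluster.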