Hive fails with java.lang.OutOfMemoryError: GC overhead limit exceeded

I recently wrote data into Hive with SparkSession, but when I then counted the table's rows from Hive, the query failed:

  select count(*) from test_20190417;

The error was as follows:

Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at com.google.protobuf.LiteralByteString.toString(LiteralByteString.java:148)
    at com.google.protobuf.ByteString.toStringUtf8(ByteString.java:572)
    at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeIDProto.getHostName(HdfsProtos.java:1840)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:382)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:646)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:815)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convertLocatedBlock(PBHelper.java:1276)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1296)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1453)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1555)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1566)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:581)
    at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy17.getListing(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2095)
    at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.hasNextNoFilter(DistributedFileSystem.java:986)
    at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.hasNext(DistributedFileSystem.java:961)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:304)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
    at org.apache.hadoop.hive.shims.Hadoop23Shims$1.listStatus(Hadoop23Shims.java:146)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:216)
    at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:76)
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:309)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:470)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:571)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:329)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:320)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. GC overhead limit exceeded

Searching around, some posts blamed unsynchronized clocks across the cluster, but I checked and the cluster's clocks were all in sync. I also tried changing hive.optimize.sort.dynamic.partition, hive.optimize.skewjoin, hive.auto.convert.join, hive.ignore.mapjoin.hint, mapreduce.map.memory.mb, mapreduce.map.java.opts, mapreduce.reduce.memory.mb, and mapreduce.reduce.java.opts, and the query still failed: adjusting mapreduce.reduce.memory.mb and mapreduce.map.memory.mb produced a "return code -1 from XXXXX" error, while adjusting the other parameters kept producing "return code -101 from XXXXX" (a sketch of what I tried is shown below).
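For reference, this is the kind of session-level tuning I mean, entered at the Hive CLI; the specific values here are illustrative only (note that the java.opts heap should stay below the corresponding container memory.mb), and none of them fixed this particular error:

  set hive.optimize.sort.dynamic.partition=true;
  set hive.optimize.skewjoin=true;
  set hive.auto.convert.join=false;
  set hive.ignore.mapjoin.hint=true;
  set mapreduce.map.memory.mb=4096;
  set mapreduce.map.java.opts=-Xmx3276m;
  set mapreduce.reduce.memory.mb=4096;
  set mapreduce.reduce.java.opts=-Xmx3276m;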
Later I also changed the value of export HADOOP_HEAPSIZE=, and the error persisted (see the note below).
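For context, HADOOP_HEAPSIZE is normally set in hadoop-env.sh and sizes the heap (in MB) for Hadoop daemons and commands; the value below is just an example, not the one from my cluster:

  # $HADOOP_HOME/etc/hadoop/hadoop-env.sh -- heap in MB (example value)
  export HADOOP_HEAPSIZE=2048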
Finally I set export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS", and the count query ran normally. In hindsight this makes sense: the stack trace shows the OOM inside CombineHiveInputFormat.getSplits / FileInputFormat.listStatus, i.e. during split computation in the Hive client JVM, before the MapReduce job is even submitted, so the mapreduce.* container memory settings never had a chance to take effect; HADOOP_CLIENT_OPTS is what controls the client JVM's heap.
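As a minimal sketch of the fix (the file location is my assumption; any place that exports the variable before the Hive CLI starts will do, e.g. hadoop-env.sh or hive-env.sh):

  # e.g. in $HIVE_HOME/conf/hive-env.sh or $HADOOP_HOME/etc/hadoop/hadoop-env.sh
  # raise the *client* JVM heap; 512m worked for me, tune it to your data
  export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"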
