Hadoop job failure: Hive Runtime Error while processing row

Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"loan_money_time":"2021-01-10 14:54:02","loan_date":"2021-01-10","loan_month":"2021-01","product_type":"–","channel_name":"—","loan_way":"–","credit_rlevel":null,"first_loan":"首借","user_group":null,"client_id":111111111,"province":null,"diqu":null,"id":111222211,"loan_phase":12,"due_state":null,"expected_repay_date":null,"actual_repay_date":null,"onloan_state":null,"rate":"0.3528","rate_year":"35.28%","total_amount":"23000.0","onloan_amount":null,"ondelay_day":null,"ondelay_phase":null,"maxdelay_day":null,"maxexpre_day":null,"onloan_interest":null,"repaid_interest":null,"repaid_penalty":null,"close_date":null,"acct_limit":null,"ondelay_day2":null,"ondelay_phase2":null}
	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$...
	...
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {...}
	at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:497)
	... 9 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ArrayIndexOutOfBoundsException
	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:405)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
	at org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:1007)
	at org.apache.hadoop.hive.ql.exec.GroupByOperator.processAggr(GroupByOperator.java:818)
	at org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:692)
	at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:755)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:638)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:670)
	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:748)
	at org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:306)
	... 17 more
Caused by: java.lang.ArrayIndexOutOfBoundsException
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1460)
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1356)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:197)
	at org.apache.hadoop.io.BytesWritable.write(BytesWritable.java:188)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:98)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:82)
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1156)
	at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:616)
	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.collect(ReduceSinkOperator.java:542)
	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:400)
	... 27 more
The row data above has been anonymized.
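Before retrying, it helps to pull the complete logs of the failed attempt, since the console trace is often truncated. A minimal sketch using the standard YARN CLI (this assumes log aggregation is enabled; the application id below is a placeholder, not from this job):

# Fetch the aggregated container logs for the failed job
yarn logs -applicationId application_1610000000000_0001 > job.log
# Walk the cause chain down to the innermost exception
grep -n "Caused by" job.log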
This is frustrating: the job runs for over an hour before failing, which wastes a lot of time. The innermost cause is an ArrayIndexOutOfBoundsException thrown while ReduceSinkOperator output is being written into the map output buffer, so let's set some configuration parameters and try again.
mapset="set mapred.max.split.size=256000000;
set mapred.min.split.size.per.node=128000000;
set mapred.min.split.size.per.rack=128000000;
set hive.exec.reducers.bytes.per.reducer=1073741824; – 每个reduce处理的数据量,默认1GB
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=true;
set hive.map.aggr=true;
set hive.merge.size.per.task=256000000;
set hive.merge.smallfiles.avgsize=128000000;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.dynamic.partition=true;
set hive.exec.max.dynamic.partitions.pernode=600000;
set hive.new.job.grouping.set.cardinality=6000;
set hive.groupby.skewindata=False;
set mapreduce.map.memory.mb=10240;
set mapreduce.reduce.memory.mb=10240;
set hive.auto.convert.join=true;
set mapred.reduce.tasks=150;
set mapreduce.job.jvm.numtasks=15;
set hive.vectorized.execution.enabled=false;
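
For completeness, a minimal sketch of how a settings string like mapset is typically used: it is prepended to the query text so every statement runs in a session with the tuning already applied. The script, table name, and query below are hypothetical placeholders, not taken from the original job:

#!/bin/bash
# Hypothetical wrapper around the Hive CLI; mapset holds the tuning
# parameters exactly as defined above (abbreviated here for brevity).
mapset="set mapred.max.split.size=256000000;
set hive.groupby.skewindata=false;"

# Placeholder query standing in for the real report SQL
sql="select loan_month, count(1) from dw.loan_detail group by loan_month;"

# Prepend the settings so they take effect before the query runs
hive -e "${mapset} ${sql}"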
