2021-08-30 Could not get block locations. Source file XXXXXX

Query Hive on Spark job[0] stages: [0]
Spark job[0] status = RUNNING
--------------------------------------------------------------------------------------
          STAGES   ATTEMPT        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED
--------------------------------------------------------------------------------------
Stage-0                  0       RUNNING  12438         37      100    12301      70
--------------------------------------------------------------------------------------
STAGES: 00/01    [>>--------------------------] 0%    ELAPSED TIME: 989.76 s
--------------------------------------------------------------------------------------
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed due to: Job aborted due to stage failure:
Aborting TaskSet 0.0 because task 52 (partition 52)
cannot run anywhere due to node and executor blacklist.
Most recent failure:
Lost task 52.1 in stage 0.0 (TID 168, a.b, executor 11): java.lang.RuntimeException: Error processing row: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"row_key":"0197009c7e571a001d28000","zdate":"2021-07-26 00:29:40","vid":81557,"lat":0,"lon":0,"spd":0,"dir":0,"alt":0,"zstate":786432,"alarm":0,"mile":42949671,"fuel":0,"wireless_signal_state":23,"gnss_num":0,"total_fuel":0,"statellite_infor":null,"h_alt":0,"h_speed":0,"h_dir":0,"h_time":0,"h_lat":0,"h_lon":0}
        at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:146)
        at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
        at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
        at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
        at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
        at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
        at org.apache.spark.SparkContext$$anonfun$38.apply(SparkContext.scala:2232)
        at org.apache.spark.SparkContext$$anonfun$38.apply(SparkContext.scala:2232)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:121)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"row_key":"0197009c7e571a001d28000","zdate":"2021-07-26 00:29:40","vid":81557,"lat":0,"lon":0,"spd":0,"dir":0,"alt":0,"zstate":786432,"alarm":0,"mile":42949671,"fuel":0,"wireless_signal_state":23,"gnss_num":0,"total_fuel":0,"statellite_infor":null,"h_alt":0,"h_speed":0,"h_dir":0,"h_time":0,"h_lat":0,"h_lon":0}
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:494)
        at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:133)
        ... 18 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Could not get block locations. Source file "test.demo1.hive-staging_hive_2021-08-30_19-16-24_549_3376039352418297486-1/_task_tmp.-ext-10002/zday=2021-07-26/_tmp.000052_1" - Aborting...block==null
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:803)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:882)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:882)
        at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
        at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:146)
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:484)
        ... 19 more
Caused by: java.io.IOException: Could not get block locations. Source file "test.demo1.hive-staging_hive_2021-08-30_19-16-24_549_3376039352418297486-1/_task_tmp.-ext-10002/zday=2021-07-26/_tmp.000052_1" - Aborting...block==null
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1477)
        at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)


The "cannot run anywhere due to node and executor blacklist" abort above is a symptom rather than the root cause: repeated failures of the same task got every node and executor blacklisted for it. Blacklisting behavior can be configured via spark.blacklist.*.
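If you want to loosen the blacklisting rather than fix the underlying failure, the following session-level sketch shows the relevant knobs (assumption: Hive on Spark forwards spark.* keys to the Spark session; the property names are the Spark 2.x ones, so verify them against your Spark version's configuration docs):

```sql
-- Sketch, assuming spark.* settings set in the Hive session reach Spark.
set spark.blacklist.enabled=false;                      -- turn blacklisting off entirely, or
set spark.blacklist.task.maxTaskAttemptsPerNode=3;      -- attempts allowed on one node before that node is blacklisted for the task
set spark.blacklist.stage.maxFailedTasksPerExecutor=3;  -- task failures tolerated per executor within one stage
```

Note that relaxing the blacklist only buys more retries; if the tasks keep dying for the same reason (here, the HDFS write failure), the job will still eventually fail.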

Reference:
https://blog.csdn.net/lookqlp/article/details/88336851

mapred.task.timeout = 200000 — the number of milliseconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string.

set mapred.task.timeout=6000000;
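The key above is the legacy name; Hadoop 2.x+ renamed it, and while most clusters still accept both, the canonical form is worth knowing if the old name appears to be ignored:

```sql
-- mapred.task.timeout is the deprecated name; the canonical key in
-- Hadoop 2.x+ is mapreduce.task.timeout. Setting it to 0 disables the timeout.
set mapreduce.task.timeout=6000000;  -- 6,000,000 ms = 100 minutes
```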

For now, raise the parameter mentioned in the reference and resubmit the job. (A plausible mechanism: a slow task hits the timeout and is killed mid-write, tearing down its HDFS output pipeline, which later surfaces as the "Could not get block locations ... block==null" abort.)

The job then ran to completion, success.
