Scenario:
1. Hive on Spark
2. Dynamic resource allocation is enabled (set spark.dynamicAllocation.enabled = true)
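With dynamic allocation enabled, Spark also honors the min/max executor bounds and per-executor sizing; an over-generous upper bound is typically what makes the ApplicationMaster request more than the queue can hold. A typical setup looks like the sketch below (the concrete values are illustrative, not taken from the original job):

```sql
-- Illustrative dynamic-allocation settings (values are examples only)
set spark.dynamicAllocation.enabled = true;
set spark.dynamicAllocation.minExecutors = 1;
set spark.dynamicAllocation.maxExecutors = 50;  -- easily too high for a busy queue
set spark.executor.memory = 8g;                 -- per-executor memory request
set spark.executor.cores = 4;                   -- per-executor core request
```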
Result / error log:
21/01/06 05:09:35 WARN cluster.YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
21/01/06 05:09:42 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 16
21/01/06 05:09:42 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
21/01/06 05:09:42 INFO spark.SparkContext: Invoking stop() from shutdown hook
Cause:
The cluster is short on resources, and dynamic allocation requests too many executors, too much memory, and too many cores. The initial job therefore never accepts any resources, and the ApplicationMaster is eventually terminated (SIGTERM).
Solutions:
1. Expand cluster resources
2. Stagger job schedules to avoid peak contention
3. Disable dynamic resource allocation (set spark.dynamicAllocation.enabled = false) and reduce the resources requested
4. Switch to Hive on MapReduce or Tez
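Options 3 and 4 above can be expressed as session-level settings. A minimal sketch, assuming the defaults of a recent Hive/Spark deployment (the resource numbers are examples; size them to your queue's free capacity):

```sql
-- Option 3: switch to static allocation with modest, explicit requests
set spark.dynamicAllocation.enabled = false;
set spark.executor.instances = 4;   -- fixed executor count (example value)
set spark.executor.memory = 2g;     -- example value
set spark.executor.cores = 2;       -- example value
set spark.driver.memory = 1g;       -- example value

-- Option 4: change the Hive execution engine instead of tuning Spark
set hive.execution.engine = mr;     -- or: set hive.execution.engine = tez;
```

Settings issued with `set` apply only to the current session, so they can be tested on one problem job before being promoted to hive-site.xml or spark-defaults.conf.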