1.3.1 Source Code Analysis of the InsertIntoHiveTable Class
1.3.1.1 Background

A typical job reads data, processes it, and finally writes the result into a Hive table; this section studies how that write actually works. Two questions frame the analysis (a minimal job that exercises this path is sketched after the list):
1. After a task has finished processing its data, how does its output end up under the table's location directory?
2. How is such a table-writing task derived from the Spark SQL logical/physical plan and launched?
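
For concreteness, the following minimal job exercises exactly this write path. It is a sketch under stated assumptions: Hive support must be enabled, and the names demo_db.demo_source and demo_db.demo_target are placeholders, not tables from the original setup:

import org.apache.spark.sql.SparkSession

object HiveInsertExample {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport() is required; without it the insert is planned
    // against the built-in catalog and InsertIntoHiveTable never appears.
    val spark = SparkSession.builder()
      .appName("hive-insert-demo")
      .enableHiveSupport()
      .getOrCreate()

    // Placeholder table names; either form below is planned as an
    // InsertIntoHiveTable command on the driver.
    spark.sql("INSERT INTO TABLE demo_db.demo_target SELECT * FROM demo_db.demo_source")
    spark.table("demo_db.demo_source").write.insertInto("demo_db.demo_target")

    spark.stop()
  }
}

Setting a breakpoint in InsertIntoHiveTable.run while running a job like this yields the kind of call stack shown in the next section.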
1.3.1.2 How the Spark SQL Logical/Physical Plan Turns into Tasks (Answering Question 2)
The driver-side debugger call stack is as follows (with a breakpoint placed in the run method of the InsertIntoHiveTable physical execution operator):
run:84, InsertIntoHiveTable (org.apache.spark.sql.hive.execution)
sideEffectResult$lzycompute:108, DataWritingCommandExec (org.apache.spark.sql.execution.command)
sideEffectResult:106, DataWritingCommandExec (org.apache.spark.sql.execution.command)
executeCollect:120, DataWritingCommandExec (org.apache.spark.sql.execution.command) -- physical plan execution is triggered here
$anonfun$logicalPlan$1:229, Dataset (org.apache.spark.sql) -----org.apache.spark.sql.Dataset#logicalPlan
apply:-1, 76482793 (org.apache.spark.sql.Dataset$$Lambda$1656)
$anonfun$withAction$1:3618, Dataset (org.apache.spark.sql)
apply:-1, 277117675 (org.apache.spark.sql.Dataset$$Lambda$1657)
$anonfun$withNewExecutionId$5:100, SQLExecution$ (org.apache.spark.sql.execution)
apply:-1, 1668179857 (org.apache.spark.sql.execution.SQLExecution$$$Lambda$1665)
withSQLConfPropagated:160, SQLExecution$ (org.apache.spark.sql.execution)
$anonfun$withNewExecutionId$1:87, SQLExecution$ (org.apache.spark.sql.execution)
apply:-1, 216687255 (org.apache.spark.sql.execution.SQLExecution$$$Lambda$1658)
withActive:764, SparkSession (org.apache.spark.sql)
withNewExecutionId:64, SQLExecution$ (org.apache.spark.sql.execution)
withAction:3616, Dataset (org.apache.spark.sql)
<init>:229, Dataset (org.apache.spark.sql)
$anonfun$ofRows$2:100, Dataset$ (org.apache.spark.sql) --- org.apache.spark.sql.Dataset#ofRows
apply:-1, 2116006444 (org.apache.spark.sql.Dataset$$$Lambda$925)
withActive:764, SparkSession (org.apache.spark.sql)
ofRows:97, Dataset$ (org.apache.spark.sql)
$anonfu… (remaining frames truncated)
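
Reading the stack bottom-up: because the analyzed plan is a command, merely constructing the Dataset (the <init>:229 and logicalPlan frames) eagerly runs it through withAction and SQLExecution.withNewExecutionId, landing in DataWritingCommandExec.executeCollect, which forces the lazy sideEffectResult, which finally calls InsertIntoHiveTable.run. The hand-off is easiest to see in DataWritingCommandExec itself; the following is an abridged paraphrase of the Spark 3.x source (metrics, doExecute and other members omitted):

import org.apache.spark.sql.catalyst.{CatalystTypeConverters, InternalRow}
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.execution.{SparkPlan, UnaryExecNode}
import org.apache.spark.sql.execution.command.DataWritingCommand

case class DataWritingCommandExec(cmd: DataWritingCommand, child: SparkPlan)
  extends UnaryExecNode {

  // Lazy, so the side effect (the actual table write) runs at most once,
  // no matter how often the result is collected.
  protected[sql] lazy val sideEffectResult: Seq[InternalRow] = {
    val converter = CatalystTypeConverters.createToCatalystConverter(schema)
    // For our INSERT, cmd is InsertIntoHiveTable; run() performs the
    // entire write and returns the (empty) result rows.
    val rows = cmd.run(sqlContext.sparkSession, child)
    rows.map(converter(_).asInstanceOf[InternalRow])
  }

  override def output: Seq[Attribute] = cmd.output

  // The frame "executeCollect:120" in the stack above is this method.
  override def executeCollect(): Array[InternalRow] = sideEffectResult.toArray
}

So the command is not compiled into ordinary RDD stages by itself: executeCollect runs on the driver, and it is run that subsequently submits the actual write job.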
1.3.1.3 How a Task Places Data under the Table Location (Answering Question 1)

In outline, InsertIntoHiveTable performs the write in three steps: it creates a temporary directory, the write tasks commit their output files into that .staging directory, and the staged data is finally committed into the Hive table together with a metadata update in the metastore.
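
A paraphrased sketch of the corresponding run method (based on the Spark 3.x source of InsertIntoHiveTable; the tableDesc/tableLocation setup, partition handling, statistics update and Hive version shims are left out) makes the three steps concrete. getExternalTmpPath, processInsert and deleteExternalTmpPath are helpers of the real class:

override def run(sparkSession: SparkSession, child: SparkPlan): Seq[Row] = {
  val externalCatalog = sparkSession.sharedState.externalCatalog
  val hadoopConf = sparkSession.sessionState.newHadoopConf()

  // 1. Derive a temporary staging path (a ".hive-staging…" directory,
  //    controlled by hive.exec.stagingdir) from the table's location.
  val tmpLocation = getExternalTmpPath(sparkSession, hadoopConf, tableLocation)

  try {
    // 2. Run the write job: every task writes its files under the staging
    //    directory. processInsert then asks the external catalog to load
    //    the staged files (loadTable / loadPartition), which moves them
    //    into the final table location and updates the metastore.
    processInsert(sparkSession, externalCatalog, hadoopConf, tableDesc, tmpLocation, child)
  } finally {
    // Staging data is removed whether the insert succeeded or failed.
    deleteExternalTmpPath(hadoopConf)
  }

  // 3. Drop cached data and refresh metadata so later reads see the new files.
  sparkSession.catalog.uncacheTable(table.identifier.quotedString)
  sparkSession.sessionState.catalog.refreshTable(table.identifier)
  Seq.empty[Row]
}

Writing to a staging directory first and moving files only at commit time is what keeps concurrent readers from ever seeing a half-written table.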