Project scenario:
Syncing Hive table data into ClickHouse with SeaTunnel (Waterdrop) 1.5.1 running on Spark 2.
Problem description:
java.lang.RuntimeException: Caught Hive MetaException attempting to get partition metadata by filter from Hive. You can set the Spark configuration setting spark.sql.hive.manageFilesourcePartitions to false to work around this problem, however this will result in degraded performance.
Cause analysis:
By default, Spark manages file source partition metadata itself (spark.sql.hive.manageFilesourcePartitions=true) and prunes partitions by pushing filters down to the Hive metastore. When the metastore cannot evaluate the pushed-down filter, the getPartitionsByFilter call throws a MetaException, which Spark wraps in the RuntimeException above (see the SPARK-18680 link in the references).
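To confirm which value is currently in effect, the setting can be queried from a Spark SQL session (a minimal check, assuming the spark-sql CLI is on the PATH):
spark-sql -e "SET spark.sql.hive.manageFilesourcePartitions;"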
Solution:
Add the following to the Spark execution parameters:
spark.sql.hive.manageFilesourcePartitions=false
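For jobs launched directly with spark-submit, the same workaround can be passed on the command line. A sketch, where the class name and JAR are placeholders rather than anything from this project:
spark-submit \
  --conf spark.sql.hive.manageFilesourcePartitions=false \
  --class com.example.Hive2CK \
  hive2ck.jar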
The modified script:
#!/bin/bash
# Environment variables
unset SPARK_HOME
export SPARK_HOME=$SPARK2_HOME
SEATUNNEL_HOME=/u/module/seatunnel-1.5.1
# Takes two arguments: the first is the table (or mode) to extract, the second is the extraction date
# If the first argument is "first" and no second argument is given, exit the script
if [[ $1 = first ]]; then
if [ -n "$2" ] ;then
do_date=$2
else
echo "请传入日期参数"
exit
fi
# If the first argument is "all" and no second argument is given, default to the previous day
elif [[ $1 = all ]]; then
# Non-empty check: if no date is passed, default to the previous day's data; if a date is passed, use it (mainly for manual runs)
if [ -n "$2" ]; then
do_date=$2
else
do_date=$(date -d '-1 day' +%F)
fi
else
if [ -n "$2" ] ;then
do_date=$2
else
echo "请传入日期参数"
exit
fi
fi
echo "日期:$do_date"
import_conf(){
# Write out the data-transfer job config, substituting in the function arguments
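# The heredoc delimiter is unquoted, so $1-$6 and ${do_date} below are expanded when the config is written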
cat > $SEATUNNEL_HOME/jobs/hive2ck_test.conf <<!EOF
spark {
spark.sql.catalogImplementation = "hive"
spark.app.name = "hive2clickhouse"
spark.executor.instances = 4
spark.executor.cores = 4
spark.executor.memory = "4g"
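# Workaround for the Hive MetaException thrown when fetching partition metadata by filter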
spark.sql.hive.manageFilesourcePartitions = false
}
input {
hive {
# pre_sql = "select id,name,'${do_date}' as birthday from default.student"
pre_sql = "$1"
table_name = "$2"
}
}
filter {}
output {
clickhouse {
host = "$3"
database = "$4"
table = "$5"
# fields = ["id","name","birthday"]
fields = $6
username = "default"
password = ""
}
}
!EOF
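# Submit the generated config with Waterdrop: client deploy mode, local[4] master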
$SEATUNNEL_HOME/bin/start-waterdrop.sh --config $SEATUNNEL_HOME/jobs/hive2ck_test.conf -e client -m 'local[4]'
}
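# Import task for dwd_test: read the dt='${do_date}' partition from Hive cut.dwd_test and write to ClickHouse cut.dwd_test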
import_test(){
import_conf "select id,name, birthday from cut.dwd_test where dt='${do_date}'" "dwd_test" "hadoop101:8123" "cut" "dwd_test" "[\"id\",\"name\",\"birthday\"]"
}
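# Dispatch on the first argument; only the "all" branch is wired up in this example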
case $1 in
"all")
import_test
;;
esac
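With the script saved as hive2ck_test.sh (the file name is an assumption), a manual backfill for an example date looks like:
bash hive2ck_test.sh all 2021-06-01
A routine daily run, which defaults do_date to the previous day, is simply:
bash hive2ck_test.sh all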
The test passed.
References:
https://issues.apache.org/jira/browse/SPARK-18680
https://www.coder.work/article/6202225