Fixing the Spark Hive-partition read error when importing Hive data into ClickHouse

Project scenario:

Origin of the error: Hive table data is being imported into ClickHouse with SeaTunnel (Waterdrop), which runs the transfer as a Spark job.

Problem description:

java.lang.RuntimeException: Caught Hive MetaException attempting to get partition metadata by filter from Hive. You can set the Spark configuration setting spark.sql.hive.manageFilesourcePartitions to false to work around this problem, however this will result in degraded performance.

Cause analysis:

Spark's Hive partition handling is misconfigured: with spark.sql.hive.manageFilesourcePartitions left at its default (true), Spark asks the Hive metastore to filter partition metadata, and that call fails with the MetaException above.

Solution:

Add the following to the Spark job's execution parameters (as the error message itself notes, this works around the problem at some cost in performance, since Spark stops using the metastore to manage file source partitions):
spark.sql.hive.manageFilesourcePartitions=false
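
For a job launched directly with spark-submit, the setting can be passed on the command line with --conf; a minimal sketch (your_job.jar and the yarn master here are placeholders, not part of the original setup):

spark-submit \
  --master yarn \
  --conf spark.sql.hive.manageFilesourcePartitions=false \
  your_job.jar

The same key can also be set once for every job in conf/spark-defaults.conf.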

Modified script:

#!/bin/bash

# Environment variables
unset SPARK_HOME
export SPARK_HOME=$SPARK2_HOME
SEATUNNEL_HOME=/u/module/seatunnel-1.5.1
# The script takes two arguments: the table set to extract and the extraction date.
# If the first argument is "first" and no second argument is given, exit immediately.
if [[ $1 = first ]]; then
  if [ -n "$2" ]; then
    do_date=$2
  else
    echo "Please pass a date argument"
    exit 1
  fi
# If the first argument is "all" and no second argument is given, use yesterday.
elif [[ $1 = all ]]; then
  # Non-empty check: without a date argument, default to the previous day's data;
  # with one, use the given date (mainly for manual runs).
  if [ -n "$2" ]; then
    do_date=$2
  else
    do_date=`date -d '-1 day' +%F`
  fi
else
  if [ -n "$2" ]; then
    do_date=$2
  else
    echo "Please pass a date argument"
    exit 1
  fi
fi

echo "date: $do_date"

import_conf(){
  # Write the data-transfer config, substituting the function's positional
  # parameters ($1..$6) and ${do_date} into the heredoc below.
cat>$SEATUNNEL_HOME/jobs/hive2ck_test.conf<<!EOF
spark {
  spark.sql.catalogImplementation = "hive"
  spark.app.name = "hive2clickhouse"
  spark.executor.instances = 4
  spark.executor.cores = 4
  spark.executor.memory = "4g"
  spark.sql.hive.manageFilesourcePartitions=false
}

input {
    hive {
                # pre_sql = "select id,name,'${do_date}' as birthday from  default.student"
                pre_sql = "$1"
                table_name = "$2"
    }
}

filter {}

output {
    clickhouse {
                host = "$3"
                database = "$4"
                table = "$5"
                # fields = ["id","name","birthday"]
                fields = $6
                username = "default"
                password = ""
    }
}

!EOF
$SEATUNNEL_HOME/bin/start-waterdrop.sh  --config $SEATUNNEL_HOME/jobs/hive2ck_test.conf -e client -m 'local[4]'
}


import_test(){
  # Pull one day's partition of cut.dwd_test out of Hive and load it into ClickHouse.
  import_conf "select id,name, birthday from  cut.dwd_test where dt='${do_date}'" "dwd_test" "hadoop101:8123" "cut" "dwd_test" "[\"id\",\"name\",\"birthday\"]"
}

# Dispatch on the first argument; only "all" is wired to the test import here.
case $1 in
  "all")
     import_test
;;
esac
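
A typical invocation then looks like this (a sketch; the script file name hive2ck.sh and the example date are assumptions):

# Import yesterday's partition
./hive2ck.sh all
# Import the partition for a specific day
./hive2ck.sh all 2021-06-18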


The test succeeded.
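
One quick way to confirm that the rows landed in ClickHouse (a sketch; assumes clickhouse-client is installed and the server listens on its default native port):

clickhouse-client --host hadoop101 --query "select count() from cut.dwd_test"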

References:
https://issues.apache.org/jira/browse/SPARK-18680
https://www.coder.work/article/6202225
