org.apache.spark.sql.AnalysisException: Table or view not found: `traintext`.`train`; line 1 pos 14;

'Project [*]

+- 'UnresolvedRelation `traintext`.`train`

 

    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)

    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:71)

    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:67)

    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:128)

    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:127)

    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:127)

    at scala.collection.immutable.List.foreach(List.scala:381)

    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)

    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:67)

    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:57)

    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)

    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)

    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)

    at com.iflytek.test.ReadData$.main(ReadData.scala:24)

    at com.iflytek.test.ReadData.main(ReadData.scala)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

    at java.lang.reflect.Method.invoke(Method.java:498)

    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)

    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)

    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)

    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)

    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

    at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:372)

    at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:282)

    at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:64)

    at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:82)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

    at java.lang.reflect.Method.invoke(Method.java:498)

    at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:234)

    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)

    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459)

    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)

    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)

    at java.security.AccessController.doPrivileged(Native Method)

    at javax.security.auth.Subject.doAs(Subject.java:422)

    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)

    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

log4j:WARN No appenders could be found for logger (org.apache.spark.SparkContext).

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Solution:

1. First, check the code itself and rule out a programming error:

 

import org.apache.spark.sql.SparkSession

object ReadData {

  def main(args: Array[String]): Unit = {

    // Database and table names are passed in as command-line arguments
    val database = args(0)
    val table = args(1)

    val spark = SparkSession
                .builder()
                .appName("spark sql example")
                .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
                .enableHiveSupport()
                .getOrCreate()

    val sql = "select * from " + database + "." + table
    val data = spark.sql(sql)

    data.show()

  }

}
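If the code looks right, it can also help to confirm that the session actually sees the database and table before querying. A minimal sketch using the Spark 2.x catalog API (the session setup mirrors the code above; `traintext`/`train` stand in for your own names):

```scala
import org.apache.spark.sql.SparkSession

object CatalogCheck {
  def main(args: Array[String]): Unit = {
    // Assumes a Hive-enabled session, same as in ReadData above
    val spark = SparkSession
      .builder()
      .appName("catalog check")
      .enableHiveSupport()
      .getOrCreate()

    // List what this session can see; if the database is missing here,
    // the metastore connection is the likely culprit
    spark.catalog.listDatabases().show()

    // Returns false when the relation would be unresolved
    val exists = spark.catalog.tableExists("traintext", "train")
    println(s"traintext.train exists: $exists")
  }
}
```

If `listDatabases()` shows only `default`, the session is using a local metastore rather than the cluster's Hive metastore, which is exactly the hive-site.xml problem described in step 2.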

2. Check whether hive-site.xml is on your project's classpath (this is the key step; it was the cause of my own error).

Where do you find the file?

On a cluster node, run: find -name hive-site.xml

Once you have located it, copy it into the project's resources directory. When the project is packaged, the file sits at the root of the jar, and hive-site.xml is loaded automatically from there.

Why is it needed? To query data in Hive, Spark needs this configuration file, which contains the Hive connection details.
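For reference, the part of hive-site.xml that Spark actually needs is the metastore location. A minimal fragment might look like this (the thrift host and port are placeholders for your cluster's values):

```xml
<configuration>
  <property>
    <!-- Placeholder URI: replace with your cluster's metastore host and port -->
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
```

With this file on the classpath, enableHiveSupport() connects the SparkSession to the shared Hive metastore instead of spinning up a local embedded one, and `traintext`.`train` can then be resolved.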
