Ubuntu 16.04 LTS: "Table or view not found" when accessing Hive programmatically from IDEA

When programming against Hive from IDEA on Ubuntu 16.04 LTS, the query fails with a "Table or view not found" error. The root cause is that the program run from IDEA cannot read the Hive configuration. The fix is to copy $HIVE_HOME/conf into the IDEA project's resources folder (creating a resources directory under main first if it does not exist). After this step, code run from IDEA can access the Hive database.
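In practice the fix boils down to something like cp $HIVE_HOME/conf/hive-site.xml src/main/resources/ and rebuilding the project. A quick way to confirm the file actually ends up on the classpath is a check along the following lines; this is a minimal sketch of my own, assuming a standard Maven layout where src/main/resources is on the classpath, and the CheckHiveConf object name is purely illustrative, not from the original post:

import java.net.URL

object CheckHiveConf {
  def main(args: Array[String]): Unit = {
    // After copying $HIVE_HOME/conf into src/main/resources,
    // hive-site.xml should resolve to a non-null classpath URL.
    val url: URL = getClass.getResource("/hive-site.xml")
    if (url == null)
      println("hive-site.xml NOT on classpath -- Spark will fall back to a local warehouse")
    else
      println(s"hive-site.xml found at: $url")
  }
}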

Table of Contents

        System:

        Environment:

        Problem:

        Cause:

        Solution:

        References:


        This problem (Exception in thread "main" org.apache.spark.sql.AnalysisException: Table or view not found: emp2; line 1 pos 14 ... Caused by: org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'emp2' not found in database 'default';) had been bothering me for quite a while. Last year a senior student set up the environment for me; this time I put everything together myself and ran into plenty of problems. Many blog posts on this error just copy from one another and solve nothing. The problem, the cause, and the solution are written out in detail below.

        System:

        Ubuntu 16.04 LTS.

        Environment:

        ① IDE: IDEA 191.7479.19;

        ② Hadoop: Hadoop 2.6.0;

        ③ Spark: Spark-2.4.3-bin-2.6.0-cdh5.9.3;

        ④ Hive: apache-hive-3.1.1-bin;

        ⑤ Maven: apache-maven-3.6.1;

        ⑥ MySQL: MySQL 5.7.26;

        ⑦ Connector: mysql-connector-java-5.1.28.

        Problem:

        From the terminal, the data in the database can be queried through hive, as shown in Figure 1; but accessing the same data programmatically from IDEA fails with the error, as shown in Figure 2. A reconstruction of the failing program is sketched below the figures.

Figure 1: hive successfully accesses the database from the terminal

Figure 2: programmatic access from IDEA fails
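The failing program was roughly of the following shape. This is a reconstruction, not the original source: the application name TestHive and the table name emp2 come from the log and the exception above, everything else is an assumption:

import org.apache.spark.sql.SparkSession

object TestHive {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport() makes Spark use the Hive metastore -- but only
    // if hive-site.xml is on the classpath; otherwise Spark silently
    // spins up a local metastore in which the table does not exist.
    val spark = SparkSession.builder()
      .appName("TestHive")
      .master("local[*]")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("show tables").show()          // run that produced the log below
    spark.sql("select * from emp2").show()   // throws: Table or view not found: emp2
    spark.stop()
  }
}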

        The detailed output is as follows:

        ① show tables:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/07/26 11:03:13 INFO SparkContext: Running Spark version 2.4.3
19/07/26 11:03:13 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/07/26 11:03:14 INFO SparkContext: Submitted application: TestHive
19/07/26 11:03:14 INFO SecurityManager: Changing view acls to: hadoop001
19/07/26 11:03:14 INFO SecurityManager: Changing modify acls to: hadoop001
19/07/26 11:03:14 INFO SecurityManager: Changing view acls groups to: 
19/07/26 11:03:14 INFO SecurityManager: Changing modify acls groups to: 
19/07/26 11:03:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop001); groups with view permissions: Set(); users  with modify permissions: Set(hadoop001); groups with modify permissions: Set()
19/07/26 11:03:14 INFO Utils: Successfully started service 'sparkDriver' on port 33193.
19/07/26 11:03:14 INFO SparkEnv: Registering MapOutputTracker
19/07/26 11:03:14 INFO SparkEnv: Registering BlockManagerMaster
19/07/26 11:03:14 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/07/26 11:03:14 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/07/26 11:03:14 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-ec9ca4f5-2f81-4bc6-be79-0d46c2dc94da
19/07/26 11:03:14 INFO MemoryStore: MemoryStore started with capacity 1940.7 MB
19/07/26 11:03:14 INFO SparkEnv: Registering OutputCommitCoordinator
19/07/26 11:03:14 INFO Utils: Successfully started service 'SparkUI' on port 4040.
19/07/26 11:03:14 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://hadoop001:4040
19/07/26 11:03:14 INFO Executor: Starting executor ID driver on host localhost
19/07/26 11:03:14 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 45071.
19/07/26 11:03:14 INFO NettyBlockTransferService: Server created on hadoop001:45071
19/07/26 11:03:14 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/07/26 11:03:14 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, hadoop001, 45071, None)
19/07/26 11:03:14 INFO BlockManagerMasterEndpoint: Registering block manager hadoop001:45071 with 1940.7 MB RAM, BlockManagerId(driver, hadoop001, 45071, None)
19/07/26 11:03:14 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, hadoop001, 45071, None)
19/07/26 11:03:14 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, hadoop001, 45071, None)
19/07/26 11:03:14 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/hadoop001/Documents/code/Scala/preprocessingData/spark-warehouse').
19/07/26 11:03:14 INFO SharedState: Warehouse path is 'file:/home/hadoop001/Documents/code/Scala/preprocessingData/spark-warehouse'.
19/07/26 11:03:15 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
19/07/26 11:03:16 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
19/07/26 11:03:16 IN
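Note the two SharedState lines near the end of the log: hive.metastore.warehouse.dir is 'null', so Spark falls back to a local spark-warehouse directory inside the project folder. This is the telltale sign that hive-site.xml was not found on the classpath, which is exactly the cause stated in the summary. A small diagnostic sketch of my own (the DiagnoseCatalog name is illustrative, not from the original post) that makes the fallback visible at runtime:

import org.apache.spark.sql.SparkSession

object DiagnoseCatalog {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DiagnoseCatalog")
      .master("local[*]")
      .enableHiveSupport()
      .getOrCreate()

    // Without hive-site.xml on the classpath, the warehouse resolves to a
    // local ./spark-warehouse and listTables() will not show the tables
    // created from the hive CLI; after copying the conf into resources it
    // should point at the real Hive warehouse and emp2 should appear.
    println("warehouse dir = " + spark.conf.get("spark.sql.warehouse.dir"))
    spark.catalog.listTables().show()
    spark.stop()
  }
}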