Connecting Spark to a Hive database

When Hive executes a query, it fails with: java.lang.IllegalArgumentException: Wrong FS: hdfs://node1:9000/user/hive/warehouse/test1.db/t1, expected: hdfs://cluster1

Cause: Hadoop was converted from an ordinary cluster into a high-availability (HA) cluster, but the HDFS path where Hive stores its warehouse was never updated to match.
Fix: edit the value of hive.metastore.warehouse.dir in hive-site.xml.

Change the previous value hdfs://k200:9000/user/hive/warehouse to hdfs://k131/user/hive/warehouse

(Here hdfs://cluster1 is the value of fs.defaultFS in Hadoop's core-site.xml.)
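The change above is a single property in hive-site.xml; a minimal fragment might look like the following (the value hdfs://k131/user/hive/warehouse follows the example above — substitute the HA nameservice from your own fs.defaultFS):

```xml
<property>
    <name>hive.metastore.warehouse.dir</name>
    <!-- Use the HA nameservice (no port), matching fs.defaultFS in core-site.xml -->
    <value>hdfs://k131/user/hive/warehouse</value>
</property>
```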

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://k131:3306/metastore?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
        <description>username to use against metastore database</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>root</value>
        <description>password to use against metastore database</description>
    </property>

    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>

    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>

    <property>
        <name>hive.exec.mode.local.auto</name>
        <value>true</value>
    </property>

    <property>
        <name>hive.zookeeper.quorum</name>
        <value>k131</value>
        <description>The list of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
    </property>

    <property>
        <name>hive.zookeeper.client.port</name>
        <value>2181</value>
        <description>The port of ZooKeeper servers to talk to. This is only needed for read/write locks.</description>
    </property>
</configuration>
hive-site.xml
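With hive-site.xml in place, Spark can attach to the same metastore. A minimal PySpark sketch (assuming PySpark is installed and hive-site.xml has been copied onto Spark's config path, e.g. $SPARK_HOME/conf/; the table name emp comes from the transcript below):

```python
from pyspark.sql import SparkSession

# enableHiveSupport() makes Spark read metastore settings from hive-site.xml
spark = (SparkSession.builder
         .appName("hive-demo")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("show databases").show()
spark.sql("select * from emp").show()
```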

Even after the change, Spark cannot read the contents of pre-existing Hive tables; only newly created tables work:

hive (default)> select * from emp;
FAILED: SemanticException Unable to determine if hdfs://k200:9000/user/hive/warehouse/emp is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://k200:9000/user/hive/warehouse/emp, expected: hdfs://k131:9000
hive (default)>
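This happens because the metastore records an absolute HDFS location per table, so changing hive.metastore.warehouse.dir only affects tables created afterwards; existing tables keep the old hdfs://k200:9000 prefix. One way to migrate them in place is Hive's metatool; a sketch, assuming the old and new prefixes match the example above:

```shell
# Preview which stored locations would be rewritten (no changes made)
hive --service metatool -dryRun -updateLocation hdfs://k131 hdfs://k200:9000

# Rewrite stored table/partition locations from the old NameNode
# address to the new HA nameservice
hive --service metatool -updateLocation hdfs://k131 hdfs://k200:9000
```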

Reposted from: https://www.cnblogs.com/Vowzhou/p/10882160.html
