Using HiveServer2 together with Beeline

I. Introduction to HiveServer2:

	HiveServer2, HS2 for short, allows multiple clients to connect to Hive concurrently.

Server + Client:
If you design something that a client connects to over the network, what does the client need to know? The server's host (IP) and its port number.
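For reference, HS2's listening address is controlled by two standard Hive properties in hive-site.xml. 10000 is the default port; setting the bind host to hadoop004 is an assumption for this cluster, adjust for your environment:

<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>hadoop004</value>
</property>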

Using Beeline (the general URL form is jdbc:hive2://<host>:<port>/<database>):
beeline -u jdbc:hive2://hadoop004:10000/ruozeg6 -n hadoop

Server side: from Hive's bin directory (/home/hadoop/app/hive/bin), simply run ./hiveserver2 (or just hiveserver2, since that directory is already on the PATH):

[hadoop@hadoop004 conf]$ pwd
/home/hadoop/app/hive/conf
[hadoop@hadoop004 conf]$ hiveserver2
which: no hbase in (/home/hadoop/app/hue-3.9.0-cdh5.7.0/bin:/home/hadoop/app/apache-maven-3.5.4/bin:/home/hadoop/app/scala-2.11.12/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/sbin:/home/hadoop/app/zookeeper-3.4.6/bin:/usr/java/jdk1.8.0_45/bin:/home/hadoop/app/hive/bin:/home/hadoop/app/zookeeper-3.4.6/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/sbin:/home/hadoop/app/hive/bin:/usr/java/jdk1.7.0_45/bin:/usr/java/jdk1.7.0_45/jre/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin)
OK

OK
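Started this way, HS2 ties up the foreground terminal. A common alternative, sketched here assuming HIVE_HOME points at /home/hadoop/app/hive, is to run it in the background and then confirm the default Thrift port 10000 is listening:

nohup $HIVE_HOME/bin/hiveserver2 > ~/hiveserver2.log 2>&1 &
netstat -nltp | grep 10000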

Client side: in Hive's bin directory, run the following command: ./beeline -u jdbc:hive2://hadoop004:10000/ruozeg6 -n hadoop

Parameter breakdown: 10000 is the HS2 port; ruozeg6 is the Hive database to connect to;
-n is the user name used to connect to the server machine.
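Equivalently, you can launch beeline with no arguments and connect from inside the shell using the standard !connect command (it prompts for a password; with no authentication configured, just press Enter):

beeline
beeline> !connect jdbc:hive2://hadoop004:10000/ruozeg6 hadoop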

1. Run Beeline from Hive's bin directory:
[hadoop@hadoop004 bin]$ pwd
/home/hadoop/app/hive/bin
[hadoop@hadoop004 bin]$ ./beeline -u jdbc:hive2://hadoop004:10000/ruozeg6 -n hadoop

2. Output like the following indicates the connection succeeded:
scan complete in 10ms
Connecting to jdbc:hive2://hadoop004:10000/ruozeg6
Connected to: Apache Hive (version 1.1.0-cdh5.7.0)
Driver: Hive JDBC (version 1.1.0-cdh5.7.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.7.0 by Apache Hive
0: jdbc:hive2://hadoop004:10000/ruozeg6>
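Inside this prompt the standard Beeline built-in commands also work, for example !help to list available commands and !quit to close the session and exit:

0: jdbc:hive2://hadoop004:10000/ruozeg6> !help
0: jdbc:hive2://hadoop004:10000/ruozeg6> !quit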

3. Execute a SQL statement:
show tables;
INFO  : Compiling command(queryId=hadoop_20190702052323_89d3f962-7401-4939-9672-3b77a3559b18): show tables
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=hadoop_20190702052323_89d3f962-7401-4939-9672-3b77a3559b18); Time taken: 3.074 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hadoop_20190702052323_89d3f962-7401-4939-9672-3b77a3559b18): show tables
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hadoop_20190702052323_89d3f962-7401-4939-9672-3b77a3559b18); Time taken: 1.116 seconds
INFO  : OK
+-----------------------+--+
|       tab_name        |
+-----------------------+--+
| dept                  |
| emp                   |
| order_mult_partition  |
| order_partition       |
+-----------------------+--+
4 rows selected (5.18 seconds)
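Beeline can also run statements non-interactively, which is convenient for scripting. A sketch: -e and -f are standard Beeline options, dept is one of the tables listed above, and /home/hadoop/query.hql is just a hypothetical script path:

beeline -u jdbc:hive2://hadoop004:10000/ruozeg6 -n hadoop -e "select * from dept;"
beeline -u jdbc:hive2://hadoop004:10000/ruozeg6 -n hadoop -f /home/hadoop/query.hql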

4. Go back to the server terminal and check:
OK

Every time a SQL statement completes successfully on the client, the server console prints another OK.

Note: Spark can connect to Hive in exactly this way; how Spark hooks up to Hive (covered in the advanced class) is a common pitfall, see the sketch below.
HS2 also supports concurrent access.
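As a rough sketch: the Spark Thrift Server speaks the same HiveServer2 protocol, so the same Beeline client works against it. The port 10001 and the SPARK_HOME path are assumptions for this environment:

$SPARK_HOME/sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10001
beeline -u jdbc:hive2://hadoop004:10001/ruozeg6 -n hadoop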
