HiveServer2 in Detail: Introduction and Usage

While learning Hive we usually work through the Hive CLI; that is, the client we use is the hive command itself:

[hadoop@hadoop001 app]$ jps
19856 RunJar
15408 DataNode
15281 NameNode
15572 SecondaryNameNode
19993 Jps
15722 ResourceManager
15823 NodeManager
[hadoop@hadoop001 app]$ ps -ef |grep 19856
hadoop   19856 19819  4 10:08 pts/0    00:00:10 /usr/java/jdk1.8.0_45/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /home/hadoop/app/hive-1.1.0-cdh5.7.0/lib/hive-cli-1.1.0-cdh5.7.0.jar org.apache.hadoop.hive.cli.CliDriver
hadoop   20008 19968  0 10:12 pts/1    00:00:00 grep --color=auto 19856
[hadoop@hadoop001 app]$ 
Process 19856 (RunJar) is the Hive CLI process itself: the ps output shows its main class is org.apache.hadoop.hive.cli.CliDriver. With the classic CLI there is no separate server process; the client embeds the whole Hive driver.

Note:

When you start the Hive CLI, you will see the line

WARNING: Hive CLI is deprecated and migration to Beeline is recommended

[hadoop@hadoop001 bin]$ ./hive
ls: cannot access /home/hadoop/app/spark-2.4.0-bin-2.6.0-cdh5.7.0/lib/spark-assembly-*.jar: No such file or directory
which: no hbase in (/home/hadoop/app/sqoop-1.4.6-cdh5.7.0/bin:/home/hadoop/app/spark-2.4.0-bin-2.6.0-cdh5.7.0/bin:/home/hadoop/app/scala-2.11.8/bin:/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/usr/java/jdk1.8.0_45/bin:/opt/software/protobuf/bin:/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/usr/java/jdk1.8.0_45/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin)

Logging initialized using configuration in jar:file:/home/hadoop/app/hive-1.1.0-cdh5.7.0/lib/hive-common-1.1.0-cdh5.7.0.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive (default)>

Official HiveServer2 clients documentation: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients

Introduction

HiveServer2 (HS2) is a service that enables clients to execute queries against Hive. It is the successor to HiveServer1, which has been deprecated. HS2 supports multi-client concurrency and authentication, and it is designed to provide better support for open API clients such as JDBC and ODBC.

HS2 runs as a single process providing a composite service: it includes the Thrift-based Hive service (TCP or HTTP) and a Jetty web server for the web UI.
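
Once HS2 is running (see the Usage section below), this "single process" claim is easy to check with the same ps trick used on the CLI above; the server shows up as another RunJar process whose final argument is the server main class:

[hadoop@hadoop001 ~]$ ps -ef | grep HiveServer2
# expect a RunJar java command line, analogous to the CliDriver one above,
# whose last argument is org.apache.hive.service.server.HiveServer2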

HS2 architecture

The Thrift-based Hive service is the core of HS2 and is responsible for serving Hive queries (for example, from Beeline). Thrift is an RPC framework for building cross-platform services; its stack consists of four layers: server, transport, protocol, and processor. More detail on these layers can be found at https://thrift.apache.org/docs/concepts.

What HiveServer2 provides:

1. A service through which clients can access Hive remotely.

2. Thrift-based access, so Hive can be reached across platforms and from different programming languages.

3. Support for multiple concurrent clients, with authentication.

Usage
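
Before Beeline can connect, HiveServer2 itself has to be running. A minimal way to start it in the background (the log path here is only illustrative):

[hadoop@hadoop001 bin]$ nohup ./hiveserver2 > /tmp/hiveserver2.log 2>&1 &
[hadoop@hadoop001 bin]$ netstat -nltp | grep 10000    # confirm the default Thrift port is listening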

Beeline example from the official HiveServer2 Clients page:
% bin/beeline 
Hive version 0.11.0-SNAPSHOT by Apache
beeline> !connect jdbc:hive2://localhost:10000 scott tiger
!connect jdbc:hive2://localhost:10000 scott tiger 
Connecting to jdbc:hive2://localhost:10000
Connected to: Hive (version 0.10.0)
Driver: Hive (version 0.10.0-SNAPSHOT)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000> show tables;
show tables;
[hadoop@hadoop001 bin]$ ./beeline
ls: cannot access /home/hadoop/app/spark-2.4.0-bin-2.6.0-cdh5.7.0/lib/spark-assembly-*.jar: No such file or directory
which: no hbase in (/home/hadoop/app/sqoop-1.4.6-cdh5.7.0/bin:/home/hadoop/app/spark-2.4.0-bin-2.6.0-cdh5.7.0/bin:/home/hadoop/app/scala-2.11.8/bin:/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/usr/java/jdk1.8.0_45/bin:/opt/software/protobuf/bin:/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/usr/java/jdk1.8.0_45/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin)
Beeline version 1.1.0-cdh5.7.0 by Apache Hive
beeline> !connect jdbc:hive2://hadoop001:10000 hadoop
scan complete in 1ms
Connecting to jdbc:hive2://localhost:10000
Enter password for jdbc:hive2://localhost:10000: ******
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: java.net.ConnectException: Connection refused (state=08S01,code=0)
0: jdbc:hive2://hadoop001:10000/default > 

The Connection refused error above is what you typically get when nothing is listening on the target host and port yet (HiveServer2 not started, or the wrong address). In any case, this interactive !connect style is not the recommended way to connect; pass the connection details on the command line instead:

./beeline -u jdbc:hive2://hadoop001:10000/default -n hadoop

Note: Spark also ships its own beeline, so be sure to run the one under Hive's bin directory. Watch for the line
Connecting to jdbc:hive2://hadoop001:10000/default
in the output; if you had launched Spark's beeline, you would be connected through Spark instead.
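
A quick way to see which copy a bare beeline command would launch:

[hadoop@hadoop001 ~]$ which beeline
# On this machine Spark's bin directory precedes Hive's on the PATH (see the PATH printout above),
# so a bare `beeline` would resolve to Spark's copy; hence ./beeline is run from Hive's own bin directory.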

[hadoop@hadoop001 bin]$ ./beeline -u jdbc:hive2://hadoop001:10000/default -n hadoop
ls: cannot access /home/hadoop/app/spark-2.4.0-bin-2.6.0-cdh5.7.0/lib/spark-assembly-*.jar: No such file or directory
which: no hbase in (/home/hadoop/app/sqoop-1.4.6-cdh5.7.0/bin:/home/hadoop/app/spark-2.4.0-bin-2.6.0-cdh5.7.0/bin:/home/hadoop/app/scala-2.11.8/bin:/home/hadoop/app/hive-1.1.0-cdh5.7.0/bin:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin:/usr/java/jdk1.8.0_45/bin:/opt/software/protobuf/bin:/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/usr/java/jdk1.8.0_45/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin)
scan complete in 2ms
Connecting to jdbc:hive2://hadoop001:10000/default
Connected to: Apache Hive (version 1.1.0-cdh5.7.0)
Driver: Hive JDBC (version 1.1.0-cdh5.7.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.7.0 by Apache Hive
0: jdbc:hive2://hadoop001:10000/default>
0: jdbc:hive2://hadoop001:10000/default> show tables;

INFO  : OK
+----------------------------+--+
|          tab_name          |
+----------------------------+--+
| person                     |
| ruoze_emp                  |
| ruoze_helloworld           |
| ruoze_wc                   |
| ugc_contract_ext_50104_ex  |
+----------------------------+--+
5 rows selected (0.901 seconds)

[hadoop@hadoop001 bin]$ ./beeline -u jdbc:hive2://hadoop001:10000/default -n hadoop

-u: the JDBC URL to connect to; special characters in the parameter value should be URL-encoded if needed.
-n: the username to connect as.
-p: the password.
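
All three flags can be combined. With HiveServer2's default authentication setting (NONE), the password value is not actually verified, so this line just shows the syntax:

[hadoop@hadoop001 bin]$ ./beeline -u jdbc:hive2://hadoop001:10000/default -n hadoop -p hadoop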

Changing the port:

[hadoop@hadoop001 bin]$ ./hiveserver2 --hiveconf hive.server2.thrift.port=14000

--hiveconf overrides the hive.server2.thrift.port value that would otherwise be read from the configuration. With HS2 started this way, connecting on the default port 10000 will no longer work.
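
The client then has to target the new port as well, e.g.:

[hadoop@hadoop001 bin]$ ./beeline -u jdbc:hive2://hadoop001:14000/default -n hadoop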

HiveServer2 configuration

Official setup guide: https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2
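
To make a setting such as the Thrift port permanent instead of passing --hiveconf on every start, the same property can be put in hive-site.xml (a sketch, assuming the standard $HIVE_HOME/conf layout):

<!-- $HIVE_HOME/conf/hive-site.xml -->
<property>
  <name>hive.server2.thrift.port</name>
  <value>14000</value>
</property>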
