Kerberos Authentication in the Hadoop Ecosystem, Part 4: Hive

I. Preparation

Stop the Hadoop cluster;
Install and set up the Kerberos authentication service;
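If the cluster was started with the standard sbin scripts, a minimal way to stop it (adjust to however you actually manage your cluster):

# Stop YARN first, then HDFS
stop-yarn.sh
stop-dfs.sh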

II. Hive Configuration

The node in this article is named node.

1. Create the principal and keytab

kadmin.local:  addprinc hive/node
kadmin.local:  ktadd -norandkey -k /usr/data/kerberos/keytab/hive.keytab hive/node

Ideally, create a new hive/node principal as above and export it into its own keytab file. This article, however, reuses the root/node principal created earlier in this series and its previously exported keytab (e.g. /usr/data/kerberos/keytab/root.keytab).
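Whichever principal you use, a quick sanity check of the keytab is worthwhile; a sketch assuming the hive.keytab path above (substitute root.keytab and root/node if you reuse the old credentials):

# List the principals stored in the keytab
klist -kt /usr/data/kerberos/keytab/hive.keytab
# Obtain a ticket from the keytab and confirm it is valid
kinit -kt /usr/data/kerberos/keytab/hive.keytab hive/node
klist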

2. Modify Hive's configuration file

Edit hive-site.xml and add the following properties:

<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>root/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/usr/data/kerberos/keytab/root.keytab</value>
</property>
<property>
  <name>hive.server2.authentication.spnego.keytab</name>
  <value>/usr/data/kerberos/keytab/root.keytab</value>
</property>
<property>
  <name>hive.server2.authentication.spnego.principal</name>
  <value>root/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.keytab.file</name>
  <value>/usr/data/kerberos/keytab/root.keytab</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>root/_HOST@EXAMPLE.COM</value>
</property>
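The _HOST token in the principals is replaced at runtime with the local machine's hostname, so the same hive-site.xml can be copied to every node. Kerberos is strict about hostnames, so it is worth confirming that the name matches the host part of the principal (node in this article); a minimal check:

# _HOST is substituted with the local (canonical) hostname;
# this should print the name used in the principal
hostname -f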

3. Modify Hadoop's core-site.xml

If proxy-user entries were not configured earlier, add the following:

<property>
  <name>hadoop.proxyuser.hive.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>

If you add these entries, restart Hadoop for them to take effect (a lighter refresh alternative is sketched after the next block). Because this article runs the metastore and HiveServer2 with the root principal, I reuse the proxy-user entries for root that were configured earlier:

<property>
  <name>hadoop.proxyuser.root.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
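As an alternative to a full restart (assuming your Hadoop version supports the refresh commands), the proxy-user settings can usually be reloaded in place:

# Reload superuser/proxy-user settings on the NameNode and ResourceManager
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration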

III. Verification

1. Start the services

# start the metastore (stderr to the .err file; stdout goes to nohup.out)
nohup hive --service metastore 2>/usr/data/hive/log/metastore/metastore.err &
# start hiveserver2
nohup hiveserver2 2>/usr/data/hive/log/hiveserver2/hiveserver.err &

# or redirect stdout instead:

# start the metastore
nohup hive --service metastore >/usr/data/hive/log/metastore/metastore.log &
# start hiveserver2
nohup hiveserver2 >/usr/data/hive/log/hiveserver2/hiveserver.log &
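Note that the redirections above fail if the log directories do not exist, so create them first if needed:

mkdir -p /usr/data/hive/log/metastore /usr/data/hive/log/hiveserver2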

If both processes start successfully, everything is fine; jps should show them as RunJar processes:

[root@node hadoop]# jps
94561 HMaster #HMaster, HBase
2830 NameNode #NameNode, HDFS
82193 RunJar #metastore, Hive
92176 RunJar #hiveserver2, Hive
94711 HRegionServer #HRegionServer, HBase
3352 ResourceManager #YARN
34104 QuorumPeerMain #ZooKeeper
4314 RunJar
3228 Secur #the secure DataNode, HDFS
2973 SecondaryNameNode #SecondaryNameNode, HDFS
3583 NodeManager #YARN
6879 JobHistoryServer #JobHistoryServer, YARN
106974 Jps
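Optionally, confirm that both services are listening; a quick check assuming the default ports (9083 for the metastore, 10000 for HiveServer2):

# 9083 = metastore, 10000 = HiveServer2 (defaults)
netstat -tlnp | grep -E '9083|10000'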

2. Verify the connection

beeline -u "jdbc:hive2://node:10000/default;principal=root/node@EXAMPLE.COM"

Log in with the principal configured in hive-site.xml, and append the database name (e.g. /default). The quotes around the URL must not be removed: without them the shell treats the semicolon as a command separator, so beeline never receives the principal parameter and the connection fails:

[root@node conf]# beeline -u jdbc:hive2://node:10000/default;principal=root/node@EXAMPLE.COM
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/apache-hive-2.3.7-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://node:10000/default
20/12/25 14:21:26 [main]: WARN jdbc.HiveConnection: Failed to connect to node:10000
Unknown HS2 problem when communicating with Thrift server.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://node:10000/default: Peer indicated failure: Unsupported mechanism type PLAIN (state=08S01,code=0)
Beeline version 2.3.7 by Apache Hive
beeline> 
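Besides quoting the URL, the client also needs a valid Kerberos ticket before connecting; a minimal sketch reusing this article's root keytab (adjust the path and principal to your setup):

# Obtain a ticket from the keytab, then confirm it
kinit -kt /usr/data/kerberos/keytab/root.keytab root/node
klist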

With the URL quoted (and a valid ticket), login succeeds and normal operations work:

[root@node conf]# beeline -u "jdbc:hive2://node:10000/default;principal=root/node@EXAMPLE.COM"
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/apache-hive-2.3.7-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://node:10000/default;principal=root/node@EXAMPLE.COM
Connected to: Apache Hive (version 2.3.7)
Driver: Hive JDBC (version 2.3.7)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.3.7 by Apache Hive
0: jdbc:hive2://node:10000/default> show tables;
+----------------------------------------------------+
|                      tab_name                      |
+----------------------------------------------------+
| kylin_intermediate_capacity_stats_resource_cube_9565437e_c2de_e276_ee4c_4cafd939c159 |
+----------------------------------------------------+
1 row selected (1.813 seconds)
0: jdbc:hive2://node:10000/default> 
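For scripted checks, beeline can also run a single statement non-interactively with -e, for example:

beeline -u "jdbc:hive2://node:10000/default;principal=root/node@EXAMPLE.COM" -e "show tables;"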