Non-Kerberos Environment
Environment Deployment
- Copy the following jar packages from the HBase Master's lib directory into hiveserver/lib:
root@hzadg-mammut-platform7:/usr/ndp/current/hive_server2/lib/hive-jars# ls -alh
total 14M
drwxr-xr-x 2 root root 4.0K Apr 20 10:38 .
drwxr-xr-x 5 hive hadoop 12K Apr 20 10:40 ..
-rw-r--r-- 1 root root 1.3M Apr 20 10:09 hbase-client-1.2.6.jar
-rw-r--r-- 1 root root 568K Apr 20 10:09 hbase-common-1.2.6.jar
-rw-r--r-- 1 root root 99K Apr 20 10:09 hbase-hadoop2-compat-1.2.6.jar
-rw-r--r-- 1 root root 37K Apr 20 10:09 hbase-hadoop-compat-1.2.6.jar
-rw-r--r-- 1 root root 4.2M Apr 20 10:09 hbase-protocol-1.2.6.jar
-rw-r--r-- 1 root root 4.0M Apr 20 10:09 hbase-server-1.2.6.jar
-rw-r--r-- 1 root root 1.5M Apr 20 10:10 htrace-core-3.1.0-incubating.jar
-rw-r--r-- 1 root root 81K Apr 20 10:10 metrics-core-2.2.0.jar
-rw-r--r-- 1 root root 1.7M Apr 20 10:10 netty-all-4.0.23.Final.jar
- Add the following configuration to hive-site.xml on the hiveserver:
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hzadg-mammut-platform5.server.163.org,hzadg-mammut-platform7.server.163.org,hzadg-mammut-platform8.server.163.org</value>
</property>
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
- Restart hiveserver;
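Once hiveserver is back up, a quick way to confirm the new settings are visible to Hive is to print them from a Hive session; a minimal check, using the property names configured above:
-- Print the effective values of the HBase-related settings from a Hive session.
SET hbase.zookeeper.quorum;
SET zookeeper.znode.parent;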
Creating an HBase-backed Table in Hive
CREATE TABLE hbase_table_1(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "test_hive", "hbase.mapred.output.outputtable" = "test_hive");
USE test;
CREATE TABLE t1 (x INT, y STRING);
INSERT INTO t1 VALUES (1, 'one'), (2, 'two'), (3, 'three');
INSERT OVERWRITE TABLE hbase_table_1 SELECT * FROM test.t1;
hbase(main):007:0> scan 'test_hive'
ROW     COLUMN+CELL
 1      column=cf1:val, timestamp=1524193916424, value=one
 2      column=cf1:val, timestamp=1524193916424, value=two
 3      column=cf1:val, timestamp=1524193916424, value=three
3 row(s) in 0.0950 seconds
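The mapped data can also be read back from the Hive side; a minimal sketch, reusing the table created above:
-- Query the HBase-backed table from Hive; the HBaseStorageHandler translates this into an HBase scan.
SELECT key, value FROM hbase_table_1;
-- Filter on the column mapped to the row key.
SELECT value FROM hbase_table_1 WHERE key = 2;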
Note:
* The hbase.table.name parameter is optional; it is the table name as seen by HBase. If it is not set, the HBase table name defaults to the Hive table name.
* Tables created in Hive that are integrated with HBase do not support importing data with LOAD DATA. Instead, create an intermediate table in Hive, import the data into it, and then move the data into the HBase-backed table with INSERT.
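As a related sketch (not part of the original steps), an HBase table that already exists can be mapped into Hive without copying any data by using an external table; the HBase table name existing_table below is hypothetical:
-- Map an already existing HBase table (hypothetical name 'existing_table') into Hive.
-- EXTERNAL means dropping the Hive table does not delete the underlying HBase table.
CREATE EXTERNAL TABLE hbase_table_2(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "existing_table");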
Kerberos Environment
Deployment
- Copy the jars from the HBase Master into hiveserver/lib (the same set as in the non-Kerberos case);
- Modify hive-site.xml on the hiveserver as follows:
# Change this configuration item
hbase.coprocessor.region.classes=org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
# Remove these configuration items
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.coprocessor.regionserver.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
- Restart hiveserver;
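After the restart, the integration can be verified the same way as in the non-Kerberos case, assuming the session already holds a valid Kerberos ticket; for example:
-- Assumes a Kerberos-authenticated Hive session and the hbase_table_1 mapping from the example above.
SELECT * FROM hbase_table_1;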
References:
* https://blog.csdn.net/hqwang4/article/details/77892683?utm_source=5ibc.net&utm_medium=referral
* https://www.cnblogs.com/skyl/p/4849163.html