Integrating HBase with Hive

Background

HBase is a column-family-oriented NoSQL database that can store both structured and unstructured data. It does not natively support SQL, and thanks to its strong write performance it is mainly used for real-time data storage.

Hive maps HDFS files to tables and manages them with SQL-like statements (HQL). It does not support real-time updates and is mainly used for offline data warehousing.

Hive also ships with an HBase integration, which makes it possible to query HBase tables with HQL.

Configuration steps

Hive version: Apache 1.2.2
HBase version: Apache 1.2.6

1. Copy the required HBase jars into Hive's lib directory

 cp /root/software/hbase-1.2.6/lib/hbase-it-1.2.6.jar  /root/software/hive-1.2.2/lib
 cp /root/software/hbase-1.2.6/lib/hbase-server-1.2.6.jar  /root/software/hive-1.2.2/lib
 cp /root/software/hbase-1.2.6/lib/hbase-hadoop2-compat-1.2.6.jar  /root/software/hive-1.2.2/lib
 cp /root/software/hbase-1.2.6/lib/hbase-hadoop-compat-1.2.6.jar  /root/software/hive-1.2.2/lib
 cp /root/software/hbase-1.2.6/lib/hbase-client-1.2.6.jar  /root/software/hive-1.2.2/lib
 cp /root/software/hbase-1.2.6/lib/hbase-common-1.2.6.jar  /root/software/hive-1.2.2/lib
 cp /root/software/hbase-1.2.6/lib/hbase-protocol-1.2.6.jar  /root/software/hive-1.2.2/lib
 cp /root/software/hbase-1.2.6/lib/htrace-core-3.1.0-incubating.jar  /root/software/hive-1.2.2/lib
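
The eight copies above can be collapsed into a short loop. A minimal sketch, assuming the same install paths as above (the `copy_hbase_jars` helper name is hypothetical):

```shell
# Hypothetical helper: copy the HBase jars Hive needs into Hive's lib directory.
# Arguments: $1 = HBase install root, $2 = Hive install root.
copy_hbase_jars() {
  local hbase_lib="$1/lib" hive_lib="$2/lib" jar
  for jar in hbase-it hbase-server hbase-hadoop2-compat hbase-hadoop-compat \
             hbase-client hbase-common hbase-protocol; do
    cp "$hbase_lib/$jar-1.2.6.jar" "$hive_lib/"
  done
  # htrace-core uses its own version scheme, so copy it separately.
  cp "$hbase_lib/htrace-core-3.1.0-incubating.jar" "$hive_lib/"
}
# Usage: copy_hbase_jars /root/software/hbase-1.2.6 /root/software/hive-1.2.2
```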

2. Copy hbase-site.xml into Hive's conf directory

 cp /root/software/hbase-1.2.6/conf/hbase-site.xml  /root/software/hive-1.2.2/conf/hbase-site.xml

3. Edit hive-env.sh

 vim /root/software/hive-1.2.2/conf/hive-env.sh
 export HBASE_HOME=/root/software/hbase-1.2.6

Verification

1. Create an HBase table and insert data

# Create the table
hbase(main):001:0> create 't1', {NAME=>'f1'};
# Insert data
hbase(main):003:0> put 't1', '0001',  'f1:name', 'zhangsan' 
hbase(main):004:0> put 't1', '0001',  'f1:age', '30'
hbase(main):006:0> put 't1', '0002',  'f1:name', 'lisi' 
hbase(main):007:0> put 't1', '0002',  'f1:age', '29'
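
The same table setup can also be scripted instead of typed interactively. A sketch, assuming `hbase` is on the PATH (the file path /tmp/t1_setup.txt is arbitrary):

```shell
# Write the HBase shell commands to a file, then feed it to the shell.
cat > /tmp/t1_setup.txt <<'EOF'
create 't1', {NAME => 'f1'}
put 't1', '0001', 'f1:name', 'zhangsan'
put 't1', '0001', 'f1:age', '30'
put 't1', '0002', 'f1:name', 'lisi'
put 't1', '0002', 'f1:age', '29'
exit
EOF
# Then run it non-interactively:
# hbase shell /tmp/t1_setup.txt
```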

2. Create the Hive table

hive> CREATE EXTERNAL TABLE hbase_t1(id int, name string, age int)
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,f1:name,f1:age")
    > TBLPROPERTIES ("hbase.table.name" = "t1");
# Mapping between the Hive external table's columns and the HBase columns
# (id <--> rowkey, name <--> f1:name, age <--> f1:age)
"hbase.columns.mapping" = ":key,f1:name,f1:age"
# The HBase table backing this external table is t1
"hbase.table.name" = "t1"

3. Query the HBase table data with Hive SQL

hive> select * from hbase_t1;
OK
1       zhangsan        30
2       lisi    29
Time taken: 0.313 seconds, Fetched: 2 row(s)

hive> select count(*) from hbase_t1;
Query ID = root_20200303102934_f5bea28c-3c06-4d43-ac15-be436853189e
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1583248050275_0001, Tracking URL = http://master:8088/proxy/application_1583248050275_0001/
Kill Command = /root/software/hadoop-2.6.5/bin/hadoop job  -kill job_1583248050275_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2020-03-03 10:29:49,791 Stage-1 map = 0%,  reduce = 0%
2020-03-03 10:30:08,544 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 14.12 sec
2020-03-03 10:30:17,965 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 16.93 sec
MapReduce Total cumulative CPU time: 16 seconds 930 msec
Ended Job = job_1583248050275_0001
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 16.93 sec   HDFS Read: 7030 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 16 seconds 930 msec
OK
2
Time taken: 44.955 seconds, Fetched: 1 row(s)