Hive and HBase Integration

Integrating Hive with HBase comes with quite a few requirements:
1. Hive must be 0.6.0 (the latest version at the time of writing).
2. Hive itself supports Hadoop only up to hadoop-0.20.2.
3. HBase must be 0.20.3; any other version requires recompiling hive_hbase-handler.
The catch is that the new HBase release (0.90) changed so much that the handler cannot be recompiled against it at all, which is quite a nuisance. HBase is moving fast at the moment: the current release is 0.90 (it jumped straight from 0.20.6 to 0.89). For the reasoning behind that version jump, see the official explanation: http://wiki.apache.org/hadoop/Hbase/HBaseVersions
Overview:
Hive and HBase are integrated through the public APIs each system exposes; the two sides talk to each other mainly through the hive_hbase-handler.jar utility class (a Hive Storage Handler). The general idea is shown in the figure below:
[Figure: Hive and HBase communicating through the hive_hbase-handler storage handler]

1) Start HBase.
This requires hbase-0.20.3 and zookeeper-3.2.2.
If you are using a version other than hbase-0.20.3, you need to recompile hive_hbase-handler.jar.

2) Connecting to a single-node HBase
./bin/hive -hiveconf hbase.master=master:60000 

3) Connecting to an HBase cluster
1. Start ZooKeeper.
2. Start HBase.
3. Start Hive with ZooKeeper support:
./bin/hive -hiveconf hbase.zookeeper.quorum=master,slave-A,slave-B

// list all of the ZooKeeper nodes here
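Once the CLI is up, you can quickly confirm that the setting took effect; in the Hive CLI, `set <property>;` simply echoes the current value:

hive> set hbase.zookeeper.quorum;
hbase.zookeeper.quorum=master,slave-A,slave-B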
Part 2: Inserting Data
Start Hive:
./bin/hive --auxpath /data/soft/hive/lib/hive_hbase-handler.jar,/data/soft/hive/lib/hbase-0.20.3.jar,/data/soft/hive/lib/zookeeper-3.2.2.jar  -hiveconf hbase.zookeeper.quorum=slave-001,slave-002,slave-003

In Hive:
1. Create a Hive table that HBase can recognize
CREATE TABLE hbase_table_1(key int, value string) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");

hbase.table.name sets the name of the table on the HBase side.
hbase.columns.mapping defines how Hive columns map to HBase: `:key` binds the first Hive column to the HBase row key, and `cf1:val` binds the `value` column to qualifier `val` in column family `cf1`.
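The mapping syntax can also bind a whole column family to a Hive MAP column. A minimal sketch based on the Hive HBaseIntegration wiki (the table name hbase_map_example is made up for illustration):

CREATE TABLE hbase_map_example(key int, vals map<string,string>)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:");

With `cf1:` (family name, no qualifier), every qualifier under that family shows up as an entry in the `vals` map.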

2. Import data with SQL
i. Prepare the data first

a) Create a Hive staging table

CREATE TABLE pokes (foo INT, bar STRING);

b) Bulk-load data

hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;

This file ships with Hive, under examples/files/kv1.txt in the installation directory.
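If the sample file is missing, a small substitute is easy to fabricate; kv1.txt uses Hive's default field delimiter \001 (Ctrl-A), and the /tmp path below is just an example:

printf '86\x01val_86\n98\x01val_98\n100\x01val_100\n' > /tmp/kv_sample.txt
hive> LOAD DATA LOCAL INPATH '/tmp/kv_sample.txt' OVERWRITE INTO TABLE pokes;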

ii. Import into hbase_table_1 with SQL

INSERT OVERWRITE TABLE hbase_table_1 SELECT * FROM pokes WHERE foo=86;

Note that with a plain default startup this step fails:
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.ExecDriver
To avoid it, start Hive with the handler jars on the aux path (alternatively, configure hive.aux.jars.path in hive-site.xml, as described in Part 4):
-auxpath /data/soft/hive/lib/hive_hbase-handler.jar,/data/soft/hive/lib/hbase-0.20.3.jar,/data/soft/hive/lib/zookeeper-3.2.2.jar

3. Query the data

hive> select * from  hbase_table_1;

The row just inserted is displayed:
86      val_86 

In HBase:
1. Log in to the HBase shell
[root@master hbase]# ./bin/hbase shell

2. Inspect the table schema

hbase(main):001:0> describe 'xyz'
DESCRIPTION                                                             ENABLED                               
 {NAME => 'xyz', FAMILIES => [{NAME => 'cf1', COMPRESSION => 'NONE', VE true                                  
 RSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY =>                                       
  'false', BLOCKCACHE => 'true'}]}                                                                            
1 row(s) in 0.7460 seconds

3. Inspect the loaded data
hbase(main):002:0> scan 'xyz'
ROW                          COLUMN+CELL                                                                                       
 86                          column=cf1:val, timestamp=1297690405634, value=val_86 

1 row(s) in 0.0540 seconds 
Row 86, added from Hive, is now visible in HBase.

4. Add data
hbase(main):008:0> put 'xyz','100','cf1:val','www.360buy.com'
0 row(s) in 0.0630 seconds

In Hive:
Check the data from the Hive side:

hive> select * from hbase_table_1;                                            
OK
100     www.360buy.com
86      val_86
Time taken: 8.661 seconds

The row just inserted from HBase is now visible in Hive.

Accessing an existing HBase table from Hive
Use CREATE EXTERNAL TABLE:
CREATE EXTERNAL TABLE hbase_table_2(key int, value string) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "cf1:val")
TBLPROPERTIES("hbase.table.name" = "some_existing_table");
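A usage note (some_existing_table stands in for whatever table already exists in your HBase): because the table is declared EXTERNAL, dropping it in Hive removes only the Hive metadata and leaves the HBase table intact.

hive> SELECT key, value FROM hbase_table_2 LIMIT 10;
hive> DROP TABLE hbase_table_2;   -- drops only the Hive definition, not the HBase table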

Part 3: Multiple Columns and Column Families
1. Create the table
 
CREATE TABLE hbase_table_2(key int, value1 string, value2 int, value3 int) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
"hbase.columns.mapping" = ":key,a:b,a:c,d:e"
);

2. Insert data

INSERT OVERWRITE TABLE hbase_table_2 SELECT foo, bar, foo+1, foo+2 
FROM pokes WHERE foo=98 OR foo=100;

This table has three Hive value columns (value1, value2, value3) spread across two HBase column families (a and d):
key    -> :key (the HBase row key)
value1 -> a:b
value2 -> a:c
value3 -> d:e
Two Hive columns (value1 and value2) map into one family (a, as qualifiers b and c), while the remaining column (value3) maps to qualifier e in family d.

3. Log in to HBase and inspect the schema
hbase(main):003:0> describe "hbase_table_2"
DESCRIPTION                                                             ENABLED                               
 {NAME => 'hbase_table_2', FAMILIES => [{NAME => 'a', COMPRESSION => 'N true                                  
 ONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_M                                       
 EMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'd', COMPRESSION =>                                        
 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN                                       
 _MEMORY => 'false', BLOCKCACHE => 'true'}]}                                                                  
1 row(s) in 1.0630 seconds

4. Inspect the data in HBase
hbase(main):004:0> scan 'hbase_table_2'
ROW                          COLUMN+CELL                                                                      
 100                         column=a:b, timestamp=1297695262015, value=val_100                               
 100                         column=a:c, timestamp=1297695262015, value=101                                   
 100                         column=d:e, timestamp=1297695262015, value=102                                   
 98                          column=a:b, timestamp=1297695242675, value=val_98                                
 98                          column=a:c, timestamp=1297695242675, value=99                                    
 98                          column=d:e, timestamp=1297695242675, value=100                                   
2 row(s) in 0.0380 seconds

5. View it from Hive
hive> select * from hbase_table_2;
OK
100     val_100 101     102
98      val_98  99      100
Time taken: 3.238 seconds



Part 4: Integration with Newer Versions (Hive 0.8.0 + HBase 0.90.5 + Hadoop 1.0.0)

HBase works well as the data store, but since it offers no SQL-like query interface, manipulating and computing over the data is inconvenient. Integrating Hive layers HQL queries on top of the HBase store, with Hive acting as the data warehouse.

Background reading:
1. Querying massive data on a Hadoop+Hive stack: http://blog.csdn.net/kunshan_shenbin/article/details/7105319
2. Integrating HBase 0.90.5 with Hadoop 1.0.0: http://blog.csdn.net/kunshan_shenbin/article/details/7209990
The goal of this part is to let HBase and Hive access each other, so that Hadoop/HBase/Hive work together as one system.
The test steps mainly follow http://running.iteye.com/blog/898399, which in turn follows the official wiki: http://wiki.apache.org/hadoop/Hive/HBaseIntegration
1. Copy hbase-0.90.5.jar and zookeeper-3.3.2.jar into hive/lib.
Note: if hive/lib already contains other versions of these jars (for example zookeeper-3.3.1.jar), delete them and use the versions shipped with HBase.
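A minimal sketch of this step, assuming HBase is installed in /usr/local/hbase and Hive in /usr/local/hive (adjust the paths to your layout):

cp /usr/local/hbase/hbase-0.90.5.jar /usr/local/hive/lib/
cp /usr/local/hbase/lib/zookeeper-3.3.2.jar /usr/local/hive/lib/
rm -f /usr/local/hive/lib/zookeeper-3.3.1.jar   # remove any conflicting older copy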
2. Edit hive/conf/hive-site.xml and append the following at the bottom:

<!--
<property>
  <name>hive.exec.scratchdir</name>
  <value>/usr/local/hive/tmp</value>
</property>
-->

<property>
  <name>hive.querylog.location</name>
  <value>/usr/local/hive/logs</value>
</property>

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/local/hive/lib/hive-hbase-handler-0.8.0.jar,file:///usr/local/hive/lib/hbase-0.90.5.jar,file:///usr/local/hive/lib/zookeeper-3.3.2.jar</value>
</property>

Note: if hive-site.xml does not exist, create it yourself, or rename the hive-default.xml.template file and use that.
For details, see http://blog.csdn.net/kunshan_shenbin/article/details/7210020
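For example (paths assumed as above):

cd /usr/local/hive/conf
cp hive-default.xml.template hive-site.xml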

3. Copy hbase-0.90.5.jar into hadoop/lib on every Hadoop node, including the master.
4. Copy hbase/conf/hbase-site.xml into hadoop/conf on every Hadoop node, including the master; a sketch of both steps follows below.
Note: for the contents of hbase-site.xml, refer to http://blog.csdn.net/kunshan_shenbin/article/details/7209990
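One way to script steps 3 and 4 with scp (the hostnames master/slave-001/slave-002 are placeholders for your actual node list, and the paths are assumptions):

for node in master slave-001 slave-002; do
  scp /usr/local/hbase/hbase-0.90.5.jar $node:/usr/local/hadoop/lib/
  scp /usr/local/hbase/conf/hbase-site.xml $node:/usr/local/hadoop/conf/
done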
Note: if you skip steps 3 and 4, running Hive will very likely fail with an error like this:

org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately.
This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and
then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information. at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.


Reference: http://blog.sina.com.cn/s/blog_410d18710100vlbq.html

Now you can try starting Hive.
Single-node startup:
> bin/hive -hiveconf hbase.master=master:60000

Cluster startup:
> bin/hive -hiveconf hbase.zookeeper.quorum=slave

If hive.aux.jars.path is not configured in hive-site.xml, start Hive as follows instead (note that the jar list after --auxpath must be comma-separated with no spaces):
> bin/hive --auxpath /usr/local/hive/lib/hive-hbase-handler-0.8.0.jar,/usr/local/hive/lib/hbase-0.90.5.jar,/usr/local/hive/lib/zookeeper-3.3.2.jar -hiveconf hbase.zookeeper.quorum=slave


You can now rerun the tests from Parts 2 and 3 against this newer setup: create hbase_table_1, load the pokes table, insert into the HBase-backed tables, and query from both the Hive and HBase sides. The steps and results are identical to those shown above.


References:
http://running.iteye.com/blog/898399 
http://heipark.iteye.com/blog/1150648 
http://www.javabloger.com/article/apache-hadoop-hive-hbase-integration.html 
