Bulk-loading data into HBase, and integrating HBase with Hive

How bulk loading works: HBase stores its data on HDFS in a specific on-disk format, so instead of writing through the normal client write path we can generate the persistent HFile-format files directly on HDFS and then move them into place. The pipeline has three steps: a MapReduce job parses the source files into (rowkey, Put) pairs, HFileOutputFormat2 writes them out as sorted HFiles, and a bulk-load tool moves the finished HFiles into the target table's regions.

The HFile is HBase's actual on-disk storage format; it is a binary file, and StoreFile is a thin wrapper around it. At the lowest level HBase stores data as KeyValue pairs. HBase has no data types: an HFile contains only bytes, and those bytes are sorted lexicographically.
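A minimal sketch of what lexicographic byte ordering means in practice, using HBase's org.apache.hadoop.hbase.util.Bytes helper (the rowkey values here are made up for illustration):

import org.apache.hadoop.hbase.util.Bytes;

public class ByteOrderDemo {
    public static void main(String[] args) {
        // Everything in HBase is bytes; there are no column types.
        byte[] k1 = Bytes.toBytes("10");
        byte[] k2 = Bytes.toBytes("9");
        // Lexicographic comparison: "10" sorts BEFORE "9" because '1' < '9'.
        System.out.println(Bytes.compareTo(k1, k2) < 0); // true
        // Zero-padding numeric rowkeys restores the expected order.
        System.out.println(Bytes.compareTo(Bytes.toBytes("09"), Bytes.toBytes("10")) < 0); // true
    }
}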

Custom Mapper class:
import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class BulkLoadMapper extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String[] split = value.toString().split("\t");
        byte[] rowkey = Bytes.toBytes(split[0]); // rowkey; Bytes.toBytes avoids the platform-charset pitfall of String.getBytes
        Put put = new Put(rowkey);
        // column family / column qualifier / value
        put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"), Bytes.toBytes(split[1]));
        put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"), Bytes.toBytes(split[2]));
        context.write(new ImmutableBytesWritable(rowkey), put);
    }
}
Driver class:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class BulkLoadMain extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = super.getConf();
        Connection connection = ConnectionFactory.createConnection(conf);
        Table table = connection.getTable(TableName.valueOf("myuser2")); // target HBase table

        Job job = Job.getInstance(conf, "bulkLoad");

        // Read the input files and parse them into key/value pairs
        job.setInputFormatClass(TextInputFormat.class);
        TextInputFormat.addInputPath(job, new Path("hdfs://node01:8020/hbase/input"));

        // Our mapper emits (rowkey, Put) pairs; no reduce phase is written by hand
        job.setMapperClass(BulkLoadMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);

        // configureIncrementalLoad(Job, Table, RegionLocator) configures the job so the
        // generated HFiles match the target table's column families and region boundaries
        // (it also installs a sorting reducer and a total-order partitioner)
        HFileOutputFormat2.configureIncrementalLoad(job, table,
                connection.getRegionLocator(TableName.valueOf("myuser2")));

        // Write the output as HFiles; the output path must not already exist
        job.setOutputFormatClass(HFileOutputFormat2.class);
        HFileOutputFormat2.setOutputPath(job, new Path("hdfs://node01:8020/hbase/hfile_out"));

        boolean b = job.waitForCompletion(true);
        connection.close();
        return b ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.zookeeper.quorum", "node01:2181,node02:2181,node03:2181");
        int run = ToolRunner.run(configuration, new BulkLoadMain(), args);
        System.exit(run);
    }
}
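When the job finishes, the output directory holds one subdirectory per column family, each containing the generated HFiles (plus a _SUCCESS marker). A small sketch to inspect it from Java, assuming the output path used above:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListHFiles {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(URI.create("hdfs://node01:8020"), new Configuration());
        // Expect one directory per column family, e.g. /hbase/hfile_out/f1/<hfile>
        for (FileStatus cf : fs.listStatus(new Path("/hbase/hfile_out"))) {
            System.out.println(cf.getPath());
            if (cf.isDirectory()) {
                for (FileStatus hfile : fs.listStatus(cf.getPath())) {
                    System.out.println("  " + hfile.getPath().getName() + " (" + hfile.getLen() + " bytes)");
                }
            }
        }
    }
}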


Loading the HFiles into HBase:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.tool.LoadIncrementalHFiles; // in HBase 2.x; the org.apache.hadoop.hbase.mapreduce version is deprecated

public class LoadData {
    public static void main(String[] args) throws Exception {
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
        configuration.set("hbase.zookeeper.quorum", "node01,node02,node03");

        Connection connection = ConnectionFactory.createConnection(configuration);
        Admin admin = connection.getAdmin();
        Table table = connection.getTable(TableName.valueOf("myuser2"));
        // Move the generated HFiles into the regions of myuser2;
        // the path must match the job's output path above
        LoadIncrementalHFiles load = new LoadIncrementalHFiles(configuration);
        load.doBulkLoad(new Path("hdfs://node01:8020/hbase/hfile_out"), admin, table,
                connection.getRegionLocator(TableName.valueOf("myuser2")));
    }
}
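A quick sanity check after the bulk load: scan the first few rows of myuser2 from the Java client (a sketch; the printed values depend on your input data):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class VerifyLoad {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "node01,node02,node03");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("myuser2"));
             ResultScanner scanner = table.getScanner(new Scan().setLimit(5))) {
            for (Result r : scanner) {
                System.out.println(Bytes.toString(r.getRow()) + " -> "
                        + Bytes.toString(r.getValue(Bytes.toBytes("f1"), Bytes.toBytes("name"))));
            }
        }
    }
}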

Alternatively, the same HFiles can be loaded from the command line with the completebulkload tool shipped in the hbase-mapreduce jar (program name per the HBase 2.x distribution; adjust the paths for your cluster):

yarn jar hbase-mapreduce-2.0.0.jar completebulkload <HFile path> <target HBase table name>

Note: Hive's statement load data inpath '/hbase/hfile_out' into table myuser2 only moves files under a Hive table's warehouse directory; it does not bulk-load HFiles into an HBase table, so use one of the approaches above.

Integrating HBase with Hive:

1. Link five HBase dependency jars into Hive's lib directory (the commands below create symlinks; copying the jars also works)

ln -s /export/servers/hbase-2.0.0/lib/hbase-client-2.0.0.jar /export/servers/apache-hive-2.1.0-bin/lib/hbase-client-2.0.0.jar
ln -s /export/servers/hbase-2.0.0/lib/hbase-hadoop2-compat-2.0.0.jar /export/servers/apache-hive-2.1.0-bin/lib/hbase-hadoop2-compat-2.0.0.jar
ln -s /export/servers/hbase-2.0.0/lib/hbase-hadoop-compat-2.0.0.jar /export/servers/apache-hive-2.1.0-bin/lib/hbase-hadoop-compat-2.0.0.jar
ln -s /export/servers/hbase-2.0.0/lib/hbase-it-2.0.0.jar /export/servers/apache-hive-2.1.0-bin/lib/hbase-it-2.0.0.jar
ln -s /export/servers/hbase-2.0.0/lib/hbase-server-2.0.0.jar /export/servers/apache-hive-2.1.0-bin/lib/hbase-server-2.0.0.jar 

2. Edit Hive's configuration file hive-site.xml, adding:

<property>
<name>hive.zookeeper.quorum</name>
<value>node01,node02,node03</value>
</property>

<property>
<name>hbase.zookeeper.quorum</name>
<value>node01,node02,node03</value>
</property>

3. Add the following to hive-env.sh:

export HADOOP_HOME=/export/servers/hadoop-2.7.5
export HBASE_HOME=/export/servers/hbase-2.0.0
export HIVE_CONF_DIR=/export/servers/apache-hive-2.1.0-bin/conf

4. We can create a Hive-managed table mapped to an HBase table; all data in the Hive-managed table is then stored in HBase.

4.1 Create the Hive database and a staging table

create database course;
use course;
create external table if not exists course.score(id int, cname string, score int) row format delimited fields terminated by '\t' stored as textfile;

Load some sample data into this staging table before step 4.3, e.g. (local path is hypothetical):

load data local inpath '/export/data/hive_score.txt' into table course.score;

4.2 Create the Hive internal (managed) table mapped to HBase (course is the database name)

create table course.hbase_score(id int, cname string, score int)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties("hbase.columns.mapping" = ":key,cf:name,cf:score")
tblproperties("hbase.table.name" = "hbase_score");

(hbase.columns.mapping lists one entry per Hive column, in order: :key maps id to the HBase rowkey, cf:name holds cname, and cf:score holds score.)

4.3 Insert data with insert overwrite ... select

insert overwrite table course.hbase_score select id,cname,score from course.score;

4.4 In the hbase shell, list the tables to confirm hbase_score exists and inspect its data

hbase(main):023:0> list
TABLE                                                                                       
hbase_score            
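The same rows can be read back through the HBase Java client; this sketch scans hbase_score and pulls the cf:name and cf:score cells that Hive wrote (by default Hive serializes the values as UTF-8 strings):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadHiveBackedTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "node01,node02,node03");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("hbase_score"));
             ResultScanner scanner = table.getScanner(new Scan())) {
            for (Result r : scanner) {
                // Hive wrote the rowkey from column id, and cf:name / cf:score from the other columns
                System.out.println(Bytes.toString(r.getRow())
                        + " name=" + Bytes.toString(r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("name")))
                        + " score=" + Bytes.toString(r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("score"))));
            }
        }
    }
}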

Mapping an existing HBase table into Hive:

Create a Hive external table that maps an existing HBase table and its fields. From the Hive client, run the following statement to create the external table; Hive can then query the HBase table's data directly (here the HBase table is hbase_hive_score):

CREATE external TABLE course.hbase2hive(id int, name string, score int)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:name,cf:score")
TBLPROPERTIES("hbase.table.name" = "hbase_hive_score");
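Once the external table exists it can be queried like any other Hive table, for example through HiveServer2's JDBC interface (a sketch; the HiveServer2 address node03:10000 and the hive user are assumptions for illustration, and the hive-jdbc driver must be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryHBaseBackedTable {
    public static void main(String[] args) throws Exception {
        // Assumes HiveServer2 is running on node03:10000 (adjust for your cluster)
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://node03:10000/course", "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select id, name, score from hbase2hive limit 10")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + "\t" + rs.getString("name") + "\t" + rs.getInt("score"));
            }
        }
    }
}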