HBase

Overview

HBase is the Hadoop database: a distributed, scalable big-data store.
Use HBase when you need random, real-time read/write access to big data. The project's goal is to host very large tables (billions of rows × millions of columns) on clusters of commodity hardware. Apache HBase is an open-source, distributed, versioned, non-relational database modeled after Google's Bigtable ("Bigtable: A Distributed Storage System for Structured Data", Chang et al.). Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.

Architecture

(Architecture diagram: image not available.)

Installation

I installed version 2.2.4. If you will later need to import data with Sqoop, consider installing HBase 1.6.0 instead, since Sqoop 1.4.7 does not support HBase 2.2.4.

Environment

  • Hadoop
  • ZooKeeper

Download & Install

  • Extract the archive and configure environment variables
tar -zxvf hbase-2.2.4-bin.tar.gz -C /home/software/

vi ~/.bash_profile
export HBASE_HOME=/home/software/hbase-2.2.4
export HBASE_MANAGES_ZK=false
export PATH=$PATH:$HBASE_HOME/bin

# reload the profile so the variables take effect in the current shell
source ~/.bash_profile
  • Edit hbase-site.xml
vi /home/software/hbase-2.2.4/conf/hbase-site.xml
<configuration>
<property>
     <name>hbase.rootdir</name>
     <value>hdfs://Hadoop:9000/hbase</value>
</property>

<property>
        <name>hbase.zookeeper.quorum</name>
        <value>Hadoop</value>
</property>

<property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
</property>

<property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
</property>
</configuration>
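hbase-site.xml is not the only file HBase reads at startup: conf/hbase-env.sh is also sourced, and JAVA_HOME is usually set there. A minimal sketch (the JDK path is an assumption for this machine; adjust it to your own installation):

```shell
# /home/software/hbase-2.2.4/conf/hbase-env.sh
export JAVA_HOME=/home/software/jdk1.8.0_181   # assumption: point at your local JDK
```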

Start & Stop

[root@hadoop ~]# start-hbase.sh
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/software/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/software/hbase-2.2.4/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
running master, logging to /home/software/hbase-2.2.4/logs/hbase-root-master-hadoop.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/software/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/software/hbase-2.2.4/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Hadoop: running regionserver, logging to /home/software/hbase-2.2.4/bin/../logs/hbase-root-regionserver-hadoop.out

[root@hadoop ~]# jps
1360 QuorumPeerMain
4960 HRegionServer
4833 HMaster
3526 SecondaryNameNode
5240 Jps
3257 NameNode
3375 DataNode

[root@hadoop ~]# stop-hbase.sh
stopping hbase..............
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/software/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/software/hbase-2.2.4/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Web UI

http://ip:16010/master-status


Common Shell Commands

The core commands:

  • create: create a table, e.g. create 'test', 'f1', 'f2', 'f3'
  • disable: disable the given table, e.g. disable 'test'
  • enable: enable the given table, e.g. enable 'test'
  • alter: change the table schema; add, modify, or delete column families and table-level parameters, e.g. alter 'test', {NAME => 'f3', METHOD => 'delete'}
  • describe: show a table's description, e.g. describe 'test'
  • drop: delete the given table; the table must be disabled first, e.g. drop 'test'
  • put: write a value into a cell; a cell is uniquely identified by table, rowkey, and column, e.g. put 'test', 'r1', 'f1:c1', 'myvalue1'
  • get: fetch a row, or a specific cell of a row, e.g. get 'test', 'r1'
  • scan: query table data, given the table name and scanner options, e.g. scan 'test'
Common tasks and their command expressions:

  • Check HBase status: status
  • Create a table: create 'table', 'cf1', 'cf2', ..., 'cfN'
  • List all tables: list
  • Describe a table: describe 'table'
  • Check whether a table exists: exists 'table'
  • Check whether a table is enabled/disabled: is_enabled 'table' / is_disabled 'table'
  • Insert a record: put 'table', 'rowkey', 'cf:column', 'value'
  • Read all data under a rowkey: get 'table', 'rowkey'
  • Read all records: scan 'table'
  • Count the records in a table: count 'table'
  • Read a whole column family: get 'table', 'rowkey', 'cf'
  • Read one column of a column family: get 'table', 'rowkey', 'cf:column'
  • Delete a cell: delete 'table', 'rowkey', 'cf:column'
  • Delete an entire row: deleteall 'table', 'rowkey'
  • Truncate a table: truncate 'table'
  • Read all data in one column of a table: scan 'table', {COLUMNS => 'cf:column'}

To update a record, simply write it again with put and let it overwrite; HBase has no in-place modification, every write appends a new version.
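Putting several of these commands together, a short hbase shell session might look like the following (the table and column-family names are illustrative, and this of course requires a running cluster):

```shell
hbase shell
create 'test', 'f1'                      # table with one column family
put 'test', 'r1', 'f1:c1', 'myvalue1'    # write one cell
get 'test', 'r1'                         # read the row back
scan 'test'                              # full table scan
disable 'test'                           # a table must be disabled before drop
drop 'test'
```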

Java API

  • Maven dependency (note: this uses hbase-client 1.2.4, which is older than the 2.2.4 server installed above; in general the client version should match the server)
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.2.4</version>
</dependency>
  • Sample code
package hbase;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.log4j.Logger;
import util.PropertiesUtils;

import java.io.IOException;

/**
 * A minimal HBase client wrapper: connection management plus helpers for
 * creating, listing, and scanning tables.
 *
 * @author Lyb
 * @date 2020/11/18
 **/
public class HbaseClient {

    private static Logger LOG = Logger.getLogger(HbaseClient.class);

    private static Connection conn;
    private static Admin admin;

    static {
        Configuration configuration = new Configuration();
        configuration.set("hbase.zookeeper.quorum", "Hadoop");
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
        try {
            conn = ConnectionFactory.createConnection(configuration);
            LOG.info("Hbase connection: " + conn);
            admin = conn.getAdmin();
            LOG.info("Hbase admin: " + admin);
        } catch (IOException e) {
            LOG.error("Failed to get HBase connection!");
            LOG.error(e.getMessage(), e);
        }
    }

    /**
     * Get the HBase connection.
     *
     * @return conn
     */
    public static Connection getConn() {
        return conn;
    }

    /**
     * Get the HBase admin.
     *
     * @return admin
     */
    public static Admin getHbaseAdmin() {
        return admin;
    }

    /**
     * Close the admin and the connection.
     */
    public static synchronized void close() {
        try {
            if (admin != null) admin.close();
        } catch (IOException e) {
            LOG.error("Failed to close the Hbase admin");
        }
        try {
            if (conn != null) conn.close();
        } catch (IOException e) {
            LOG.error("Failed to close the Hbase connection");
        }
    }

    /**
     * Check whether a table exists.
     *
     * @param tableName
     * @return tableExists
     */
    public static boolean tableExists(String tableName) {
        try {
            return admin.tableExists(TableName.valueOf(tableName));
        } catch (IOException e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Create a table.
     *
     * @param tableName
     * @param columnFamilies
     */
    public static void createTable(String tableName, String... columnFamilies) {
        if (StringUtils.isBlank(tableName) || columnFamilies.length == 0) {
            LOG.error("Table name or column families is empty");
            return;
        }
        HTableDescriptor hd = new HTableDescriptor(TableName.valueOf(tableName));
        for (String cf : columnFamilies) {
            // check the column family itself, not the table name
            if (!StringUtils.isBlank(cf)) {
                hd.addFamily(new HColumnDescriptor(cf));
            }
        }
        try {
            admin.createTable(hd);
        } catch (IOException e) {
            LOG.error("Failed to create HBase table " + tableName);
            LOG.error(e.getMessage(), e);
        }
        // Do not close the shared connection here; call HbaseClient.close()
        // once, when the application is done with HBase.
    }

    /**
     * List the names of all tables in HBase.
     *
     * @return
     */
    public static TableName[] listTableName() {
        try {
            return admin.listTableNames();
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }


    public static void scanTable(String tableName) {
        Scan scan = new Scan();
        // Optionally filter rows, e.g. by matching the rowkey against a regex:
        // RowFilter rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new RegexStringComparator("5555"));
        // scan.setFilter(rowFilter);
        try (Table table = conn.getTable(TableName.valueOf(tableName));
             ResultScanner scanner = table.getScanner(scan)) {
            for (Result result : scanner) {
                System.out.println("===============================");
                String rowkey = Bytes.toString(result.getRow());
                System.out.println("rowkey ---> " + rowkey);
                for (Cell cell : result.listCells()) {
                    String key =
                            Bytes.toString(
                                    cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength());
                    String value =
                            Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength());
                    System.out.println(key + " --> " + value);
                }
                System.out.println("===============================");
            }
        } catch (IOException e) {
            LOG.error("Failed to scan " + tableName, e);
        }
    }

    public static void main(String[] args) throws IOException {
        scanTable("test_user");
        HbaseClient.close();
    }
}
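As a sketch of how the same shared connection can be used for writes and point reads (the table name, column family, and qualifier below are assumptions for illustration; this needs the hbase-client dependency and a reachable cluster, so it is not runnable standalone):

```java
package hbase;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.IOException;

public class HbasePutGetExample {
    public static void main(String[] args) throws IOException {
        // Reuse the shared connection from HbaseClient above.
        try (Table table = HbaseClient.getConn().getTable(TableName.valueOf("test_user"))) {
            // Write one cell: row 'u001', column family 'info', qualifier 'name'.
            Put put = new Put(Bytes.toBytes("u001"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
            table.put(put);

            // Point read of the same cell.
            Get get = new Get(Bytes.toBytes("u001"));
            Result result = table.get(get);
            String name = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name")));
            System.out.println("info:name --> " + name);
        } finally {
            HbaseClient.close();
        }
    }
}
```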