Building HBase 2.2.7 from Source, Deploying It Highly Available, and Integrating Phoenix 5.1.0 Secondary Indexes and Hive 3.1.2


HBase Source Build, Installation, and Configuration


HBase is a distributed, scalable NoSQL database built for storing massive amounts of data.

It is a highly reliable, high-performance, column-oriented, elastic distributed storage system layered on top of HDFS.

Building HBase from Source

1. Environment setup

JDK and Maven setup is omitted here; see my earlier article on building Hadoop 3.1.4.

2. Download

Download page:

https://hbase.apache.org/downloads.html


3. Upload and extract

Upload the source tarball to /opt/src and extract it:

[root@localhost src]# tar -zxvf hbase-2.2.7-src.tar.gz

4. Choose the Hadoop version

The source provides two Hadoop build profiles; pick whichever matches your cluster:

# Build against Hadoop 2.8.5 (default)
mvn clean package -DskipTests assembly:single
# Build against Hadoop 3.1.2
mvn clean package -DskipTests assembly:single -Dhadoop.profile=3.0

My cluster runs Hadoop 3.1.4, so I pin that version in the POM. (Alternatively, passing -Dhadoop.profile=3.0 -Dhadoop-three.version=3.1.4 on the mvn command line should achieve the same result without editing the file.)

Edit pom.xml:

[root@localhost src]# cd hbase-2.2.7/
[root@localhost hbase-2.2.7]# vim pom.xml

Change the line

<hadoop-two.version>2.8.5</hadoop-two.version>

to

<hadoop-two.version>3.1.4</hadoop-two.version>
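If you prefer scripting the edit, the same substitution can be done with sed. The sketch below works on a scratch copy so you can verify the command before touching the real pom.xml (which lives at /opt/src/hbase-2.2.7/pom.xml in this walkthrough).

```shell
# Demonstrate the pom.xml version bump with sed on a scratch copy.
workdir=$(mktemp -d)
cat > "$workdir/pom.xml" <<'EOF'
<properties>
  <hadoop-two.version>2.8.5</hadoop-two.version>
</properties>
EOF
# -i.bak edits in place and keeps a backup of the original file
sed -i.bak 's#<hadoop-two.version>2.8.5</hadoop-two.version>#<hadoop-two.version>3.1.4</hadoop-two.version>#' "$workdir/pom.xml"
# Confirm the change took effect
grep 'hadoop-two.version' "$workdir/pom.xml"
```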

5. Build

mvn clean package -DskipTests assembly:single

Wait for the build to succeed:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Apache HBase 2.2.7:
[INFO] 
[INFO] Apache HBase ....................................... SUCCESS [  5.689 s]
[INFO] Apache HBase - Checkstyle .......................... SUCCESS [  0.952 s]
[INFO] Apache HBase - Annotations ......................... SUCCESS [  1.940 s]
[INFO] Apache HBase - Build Configuration ................. SUCCESS [  0.097 s]
[INFO] Apache HBase - Shaded Protocol ..................... SUCCESS [ 33.017 s]
[INFO] Apache HBase - Common .............................. SUCCESS [  9.195 s]
[INFO] Apache HBase - Metrics API ......................... SUCCESS [  1.472 s]
[INFO] Apache HBase - Hadoop Compatibility ................ SUCCESS [  1.971 s]
[INFO] Apache HBase - Metrics Implementation .............. SUCCESS [  1.526 s]
[INFO] Apache HBase - Hadoop Two Compatibility ............ SUCCESS [  2.824 s]
[INFO] Apache HBase - Protocol ............................ SUCCESS [  9.417 s]
[INFO] Apache HBase - Client .............................. SUCCESS [  9.585 s]
[INFO] Apache HBase - Zookeeper ........................... SUCCESS [  2.225 s]
[INFO] Apache HBase - Replication ......................... SUCCESS [  2.008 s]
[INFO] Apache HBase - Resource Bundle ..................... SUCCESS [  0.125 s]
[INFO] Apache HBase - HTTP ................................ SUCCESS [  5.303 s]
[INFO] Apache HBase - Procedure ........................... SUCCESS [  2.814 s]
[INFO] Apache HBase - Server .............................. SUCCESS [ 31.412 s]
[INFO] Apache HBase - MapReduce ........................... SUCCESS [  5.591 s]
[INFO] Apache HBase - Testing Util ........................ SUCCESS [  3.415 s]
[INFO] Apache HBase - Thrift .............................. SUCCESS [  9.243 s]
[INFO] Apache HBase - RSGroup ............................. SUCCESS [  4.060 s]
[INFO] Apache HBase - Shell ............................... SUCCESS [  3.912 s]
[INFO] Apache HBase - Coprocessor Endpoint ................ SUCCESS [  4.116 s]
[INFO] Apache HBase - Integration Tests ................... SUCCESS [  4.609 s]
[INFO] Apache HBase - Rest ................................ SUCCESS [  5.552 s]
[INFO] Apache HBase - Examples ............................ SUCCESS [  4.092 s]
[INFO] Apache HBase - Shaded .............................. SUCCESS [  0.225 s]
[INFO] Apache HBase - Shaded - Client (with Hadoop bundled) SUCCESS [ 26.096 s]
[INFO] Apache HBase - Shaded - Client ..................... SUCCESS [ 13.066 s]
[INFO] Apache HBase - Shaded - MapReduce .................. SUCCESS [ 17.384 s]
[INFO] Apache HBase - External Block Cache ................ SUCCESS [  1.988 s]
[INFO] Apache HBase - HBTop ............................... SUCCESS [  2.130 s]
[INFO] Apache HBase - Assembly ............................ SUCCESS [ 57.041 s]
[INFO] Apache HBase - Shaded - Testing Util ............... SUCCESS [ 55.031 s]
[INFO] Apache HBase - Shaded - Testing Util Tester ........ SUCCESS [  2.177 s]
[INFO] Apache HBase Shaded Packaging Invariants ........... SUCCESS [  1.430 s]
[INFO] Apache HBase Shaded Packaging Invariants (with Hadoop bundled) SUCCESS [  0.807 s]
[INFO] Apache HBase - Archetypes .......................... SUCCESS [  0.033 s]
[INFO] Apache HBase - Exemplar for hbase-client archetype . SUCCESS [  2.557 s]
[INFO] Apache HBase - Exemplar for hbase-shaded-client archetype SUCCESS [  2.430 s]
[INFO] Apache HBase - Archetype builder ................... SUCCESS [  0.689 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  05:52 min
[INFO] Finished at: 2021-05-27T13:18:48+08:00
[INFO] ------------------------------------------------------------------------

6. Inspect the build artifacts

[root@localhost hbase-2.2.7]# cd hbase-assembly/target/
[root@localhost target]# ls
archive-tmp  dependency  dependency-maven-plugin-markers  hbase-2.2.7-bin.tar.gz  hbase-2.2.7-client-bin.tar.gz  maven-shared-archive-resources  NOTICE.aggregate  supplemental-models.xml

HBase Installation and Deployment

1. Upload and extract

Upload the tarball to hdp16 and extract it to the **/opt/bigdata** directory:

[along@hdp16 resource]$ tar -zxvf hbase-2.2.7-bin.tar.gz -C /opt/bigdata/
[along@hdp16 resource]$ cd /opt/bigdata/
[along@hdp16 bigdata]$ mv hbase-2.2.7/ hbase

2. Configure environment variables

[along@hdp16 bigdata]$ sudo vim /etc/profile.d/my_env.sh

Append the following to the file:

#HBASE_HOME
export HBASE_HOME=/opt/bigdata/hbase
export PATH=$PATH:$HBASE_HOME/bin

3. Edit the configuration files

Edit hbase-env.sh:

[along@hdp16 conf]$ ls
hadoop-metrics2-hbase.properties  hbase-env.cmd  hbase-env.sh  hbase-policy.xml  hbase-site.xml  log4j-hbtop.properties  log4j.properties  regionservers
[along@hdp16 conf]$ vim hbase-env.sh 

Set the following (ZooKeeper runs externally, so HBase must not manage it):

export HBASE_MANAGES_ZK=false

Edit hbase-site.xml and set the following properties:

    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://ns/hbase</value>
    </property>

    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>

    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>hdp16,hdp17,hdp18</value>
    </property>

    <property>
        <name>hbase.unsafe.stream.capability.enforce</name>
        <value>false</value>
    </property>

    <property>
        <name>hbase.wal.provider</name>
        <value>filesystem</value>
    </property>

Edit the regionservers file:

[along@hdp16 conf]$ vim regionservers

Replace its contents with:

hdp16
hdp17
hdp18

4. Sync the installation to hdp17 and hdp18

[along@hdp16 bigdata]$ scp -r hbase/ along@hdp17:/opt/bigdata/
[along@hdp16 bigdata]$ scp -r hbase/ along@hdp18:/opt/bigdata/
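With more nodes, the per-node scp lines get tedious; a small loop keeps the node list in one place. This is a sketch for this cluster's layout; pass echo as the first argument to preview the commands without copying anything.

```shell
# Copy the HBase install directory to each worker node in one loop.
# Pass "echo" for a dry run (print the commands); pass nothing to really copy.
sync_hbase() {
  runner=$1
  for node in hdp17 hdp18; do
    $runner scp -r /opt/bigdata/hbase/ "along@${node}:/opt/bigdata/"
  done
}
sync_hbase echo
```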

5. Start the services

Start ZooKeeper:

[along@hdp14 ~]$ zk.sh start

Start the Hadoop cluster:

[along@hdp14 ~]$ hdp.sh start

Start the HBase daemons one node at a time:

[along@hdp16 hbase]$ pwd
/opt/bigdata/hbase
[along@hdp16 hbase]$ bin/hbase-daemon.sh start master
[along@hdp17 ~]$ /opt/bigdata/hbase/bin/hbase-daemon.sh start regionserver
[along@hdp18 ~]$ /opt/bigdata/hbase/bin/hbase-daemon.sh start regionserver
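Starting each RegionServer by hand means one login per node; the loop below sketches the same thing driven from hdp16 over ssh. As before, passing echo previews the commands instead of running them.

```shell
# Start a RegionServer on each worker node over ssh.
# Pass "echo" to preview the commands; pass nothing to execute them.
start_regionservers() {
  runner=$1
  for node in hdp17 hdp18; do
    $runner ssh "along@${node}" /opt/bigdata/hbase/bin/hbase-daemon.sh start regionserver
  done
}
start_regionservers echo
```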

Or start/stop the whole cluster with the bundled scripts:

bin/start-hbase.sh
bin/stop-hbase.sh

Open the HBase web UI:

http://hdp16:16010/
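Besides the browser, the UI port can be probed from a shell. check_ui below is a hypothetical helper; curl's -f flag makes any HTTP error count as "down".

```shell
# Report whether an HBase Master UI answers on port 16010.
check_ui() {
  if curl -sf -o /dev/null "http://$1:16010/master-status"; then
    echo up
  else
    echo down
  fi
}
check_ui hdp16   # prints "up" once the master is serving
```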


6. High availability

The HMaster monitors the lifecycle of every HRegionServer and balances regions across them. If the HMaster dies, the whole cluster is in trouble, so we configure a standby HMaster.

Stop the HBase cluster:

[along@hdp16 hbase]$ pwd
/opt/bigdata/hbase
[along@hdp16 hbase]$ bin/stop-hbase.sh 
stopping hbase.............
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/bigdata/hadoop-3.1.4/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/bigdata/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Create a new backup-masters file in the conf directory:

[along@hdp16 hbase]$ vim conf/backup-masters

containing:

hdp17

Sync backup-masters to hdp17 and hdp18:

[along@hdp16 hbase]$ scp conf/backup-masters along@hdp17:/opt/bigdata/hbase/conf/
[along@hdp16 hbase]$ scp conf/backup-masters along@hdp18:/opt/bigdata/hbase/conf/

Start the HBase cluster:

[along@hdp16 hbase]$ bin/start-hbase.sh 

Open the HBase web UI:

http://hdp16:16010/


hdp17 now appears as a backup HMaster.
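A quick way to verify the standby really takes over is to stop the active master and re-check the UIs. The drill below only prints the commands (drop the echo argument to run them for real); the helper name is illustrative.

```shell
# Failover drill: stop the active master on hdp16, then poll the backup's UI.
# Pass "echo" to preview the commands; pass nothing to execute them.
failover_drill() {
  runner=$1
  $runner ssh along@hdp16 /opt/bigdata/hbase/bin/hbase-daemon.sh stop master
  $runner curl -s "http://hdp17:16010/master-status"
}
failover_drill echo
```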

7. Integrating Phoenix

Download the Phoenix package from:

http://phoenix.apache.org/download.html

Use Phoenix 5.1.0 or 5.1.1, which match HBase 2.2. The Phoenix site publishes binaries prebuilt against each HBase line, which saves a source build; it would be nice if every component did this.


Upload and extract

Upload the package to /opt/resource and extract it to /opt/bigdata:

[along@hdp16 resource]$ tar -zxvf phoenix-hbase-2.2-5.1.0-bin.tar.gz -C /opt/bigdata/
[along@hdp16 resource]$ cd /opt/bigdata/
[along@hdp16 bigdata]$ mv phoenix-hbase-2.2-5.1.0-bin/ phoenix

Copy the server jar

Copy phoenix-server-hbase-2.2-5.1.0.jar into the HBase lib directory on every node:

[along@hdp16 bigdata]$ cd phoenix/
[along@hdp16 phoenix]$ ls
bin  docs  examples  LICENSE  NOTICE  phoenix-client-hbase-2.2-5.1.0.jar  phoenix-pherf-5.1.0.jar  phoenix-server-hbase-2.2-5.1.0.jar
#hdp16
[along@hdp16 phoenix]$ cp phoenix-server-hbase-2.2-5.1.0.jar /opt/bigdata/hbase/lib/
#hdp17
[along@hdp16 phoenix]$ scp phoenix-server-hbase-2.2-5.1.0.jar along@hdp17:/opt/bigdata/hbase/lib/
#hdp18
[along@hdp16 phoenix]$ scp phoenix-server-hbase-2.2-5.1.0.jar along@hdp18:/opt/bigdata/hbase/lib/

Configure environment variables

[along@hdp16 phoenix]$ sudo vim /etc/profile.d/my_env.sh

Append the following:

#PHOENIX
export PHOENIX_HOME=/opt/bigdata/phoenix
export PHOENIX_CLASSPATH=$PHOENIX_HOME
export PATH=$PATH:$PHOENIX_HOME/bin

Reload the environment:

[along@hdp16 phoenix]$ source /etc/profile.d/my_env.sh 
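A quick sanity check that the new variables took effect; the exported values below simply mirror the my_env.sh settings above.

```shell
# Re-create the my_env.sh settings and confirm PHOENIX_HOME/bin landed on PATH.
export PHOENIX_HOME=/opt/bigdata/phoenix
export PATH=$PATH:$PHOENIX_HOME/bin
case ":$PATH:" in
  *":$PHOENIX_HOME/bin:"*) echo "phoenix on PATH" ;;
  *)                       echo "phoenix missing from PATH" ;;
esac
```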

Restart HBase:

[along@hdp16 phoenix]$ cd ../hbase/
[along@hdp16 hbase]$ bin/stop-hbase.sh
[along@hdp16 hbase]$ bin/start-hbase.sh

Start sqlline to test:

[along@hdp16 hbase]$ /opt/bigdata/phoenix/bin/sqlline.py hdp16,hdp17,hdp18:2181
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect -p driver org.apache.phoenix.jdbc.PhoenixDriver -p user "none" -p password "none" "jdbc:phoenix:hdp16,hdp17,hdp18:2181"
Connecting to jdbc:phoenix:hdp16,hdp17,hdp18:2181
21/05/28 14:30:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 5.1)
Driver: PhoenixEmbeddedDriver (version 5.1)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
sqlline version 1.9.0
0: jdbc:phoenix:hdp16,hdp17,hdp18:2181> !table

The NativeCodeLoader warning in the log is harmless and can be ignored.

If you want to resolve it anyway, see https://blog.csdn.net/yesuhuangsi/article/details/51832170

Test commands:

0: jdbc:phoenix:hdp16,hdp17,hdp18:2181> !table
+-----------+-------------+------------+--------------+---------+-----------+---------------------------+----------------+-------------+----------------+--------------+--------------+-----------------+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME |  TABLE_TYPE  | REMARKS | TYPE_NAME | SELF_REFERENCING_COL_NAME | REF_GENERATION | INDEX_STATE | IMMUTABLE_ROWS | SALT_BUCKETS | MULTI_TENANT | VIEW_STATEMENT  |
+-----------+-------------+------------+--------------+---------+-----------+---------------------------+----------------+-------------+----------------+--------------+--------------+-----------------+
|           | SYSTEM      | CATALOG    | SYSTEM TABLE |         |           |                           |                |             | false          | null         | false        |                 |
|           | SYSTEM      | CHILD_LINK | SYSTEM TABLE |         |           |                           |                |             | false          | null         | false        |                 |
|           | SYSTEM      | FUNCTION   | SYSTEM TABLE |         |           |                           |                |             | false          | null         | false        |                 |
|           | SYSTEM      | LOG        | SYSTEM TABLE |         |           |                           |                |             | true           | 32           | false        |                 |
|           | SYSTEM      | MUTEX      | SYSTEM TABLE |         |           |                           |                |             | true           | null         | false        |                 |
|           | SYSTEM      | SEQUENCE   | SYSTEM TABLE |         |           |                           |                |             | false          | null         | false        |                 |
|           | SYSTEM      | STATS      | SYSTEM TABLE |         |           |                           |                |             | false          | null         | false        |                 |
|           | SYSTEM      | TASK       | SYSTEM TABLE |         |           |                           |                |             | false          | null         | false        |                 |
+-----------+-------------+------------+--------------+---------+-----------+---------------------------+----------------+-------------+----------------+--------------+--------------+-----------------+
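sqlline.py also accepts a SQL script as a second argument, which is handy for repeatable smoke tests. The table below is illustrative; the script is only written out and shown here, with the cluster-side invocation left as a comment.

```shell
# Write a small Phoenix smoke-test script; on the cluster, run it with:
#   /opt/bigdata/phoenix/bin/sqlline.py hdp16,hdp17,hdp18:2181 /tmp/phoenix_smoke.sql
cat > /tmp/phoenix_smoke.sql <<'EOF'
CREATE TABLE IF NOT EXISTS demo (id INTEGER PRIMARY KEY, name VARCHAR);
UPSERT INTO demo VALUES (1, 'hbase');
SELECT * FROM demo;
EOF
cat /tmp/phoenix_smoke.sql
```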

Configuring Phoenix Secondary Indexes

Edit the HBase configuration file:

[along@hdp16 hbase]$ pwd
/opt/bigdata/hbase
[along@hdp16 hbase]$ vim conf/hbase-site.xml 

Add the following:

    <!-- Phoenix RegionServer secondary-index settings -->
    <property>
        <name>hbase.regionserver.wal.codec</name>
        <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
    </property>

    <property>
        <name>hbase.region.server.rpc.scheduler.factory.class</name>
        <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
    </property>

    <property>
        <name>hbase.rpc.controllerfactory.class</name>
        <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
    </property>

Sync the configuration to hdp17 and hdp18:

[along@hdp16 hbase]$ scp conf/hbase-site.xml along@hdp17:/opt/bigdata/hbase/conf/
[along@hdp16 hbase]$ scp conf/hbase-site.xml along@hdp18:/opt/bigdata/hbase/conf/
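Once the RegionServers are restarted with these settings, a global secondary index can be created from sqlline. The table and index names below are illustrative; the script is written out here and would be fed to sqlline.py on the cluster.

```shell
# Phoenix secondary-index example; on the cluster, run it with:
#   /opt/bigdata/phoenix/bin/sqlline.py hdp16,hdp17,hdp18:2181 /tmp/phoenix_index.sql
cat > /tmp/phoenix_index.sql <<'EOF'
CREATE TABLE IF NOT EXISTS emp (id INTEGER PRIMARY KEY, name VARCHAR, deptno INTEGER);
-- global index on name, covering deptno so matching queries never touch the data table
CREATE INDEX IF NOT EXISTS idx_emp_name ON emp (name) INCLUDE (deptno);
-- the plan for this query should show the index being scanned
EXPLAIN SELECT name, deptno FROM emp WHERE name = 'SMITH';
EOF
cat /tmp/phoenix_index.sql
```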

8. Integrating Hive

Hive configuration

Edit the Hive configuration file on hdp14 and hdp15:

[along@hdp14 ~]$ cd /opt/bigdata/hive
[along@hdp14 hive]$ vim conf/hive-site.xml

Add the following to point Hive at HBase's ZooKeeper quorum:

<!-- HBase settings -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hdp16,hdp17,hdp18</value>
</property>

Note: this is required; without it Hive keeps trying to connect to a ZooKeeper on localhost.

The detailed reason is covered in another article of mine:
https://blog.csdn.net/weixin_52918377/article/details/117416505

Start the two Hive services:

[along@hdp14 hive]$ hive2server.sh 

Testing

Create a Hive table that maps onto an HBase table:

[along@hdp14 hive]$ hive
which: no hbase in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/bigdata/jdk1.8.0_212/bin:/opt/bigdata/hadoop-3.1.4/bin:/opt/bigdata/hadoop-3.1.4/sbin:/opt/bigdata/hive/bin:/opt/bigdata/spark/bin:/home/along/.local/bin:/home/along/bin)
Hive Session ID = 85d9ade2-3494-4ae1-8884-3b93ae02139d

Logging initialized using configuration in file:/opt/bigdata/hive/conf/hive-log4j2.properties Async: true
Hive Session ID = 245b0622-569b-4608-b364-bfe128750974

Run the following SQL:

CREATE TABLE hive_hbase_emp_table(
empno int,
ename string,
job string,
mgr int,
hiredate string,
sal double,
comm double,
deptno int)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:ename,info:job,info:mgr,info:hiredate,info:sal,info:comm,info:deptno")
TBLPROPERTIES ("hbase.table.name" = "hbase_emp_table");

Check that the table exists in Hive:

hive (default)> show tables;
OK
tab_name
hive_hbase_emp_table
student
Time taken: 0.587 seconds, Fetched: 2 row(s)
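To see the mapping work end to end, you can write a row through Hive and read it back by rowkey from the HBase shell. The commands below need the live cluster, so they are only written to a script and shown here; the sample row values are made up.

```shell
# Round-trip check for the Hive <-> HBase mapping; run the script on hdp14.
cat > /tmp/hive_hbase_check.sh <<'EOF'
#!/bin/sh
# Insert through Hive (the rowkey 7369 comes from the empno column, mapped to :key)
hive -e "INSERT INTO hive_hbase_emp_table VALUES (7369,'SMITH','CLERK',7902,'1980-12-17',800.0,300.0,20);"
# Read the same row back directly from HBase
echo "get 'hbase_emp_table', '7369'" | /opt/bigdata/hbase/bin/hbase shell -n
EOF
cat /tmp/hive_hbase_check.sh
```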

View the tables from HBase:

[along@hdp16 ~]$ /opt/bigdata/hbase/bin/hbase shell

Run the list command:

hbase(main):010:0* list
TABLE        
SYSTEM.CATALOG      
SYSTEM.CHILD_LINK           
SYSTEM.FUNCTION        
SYSTEM.LOG           
SYSTEM.MUTEX         
SYSTEM.SEQUENCE           
SYSTEM.STATS            
SYSTEM.TASK               
US_POPULATION            
hbase_emp_table                 
test                
11 row(s)
Took 1.1153 seconds               
=> ["SYSTEM.CATALOG", "SYSTEM.CHILD_LINK", "SYSTEM.FUNCTION", "SYSTEM.LOG", "SYSTEM.MUTEX", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.TASK", "US_POPULATION", "hbase_emp_table", "test"]
hbase(main):011:0>



Integration complete.