1. Software Preparation:
(HBase) https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/1.3.1/hbase-1.3.1-bin.tar.gz
(ZooKeeper) https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz
After downloading, extract both archives:
[root@Clouder3 local]# tar zxvf hbase-1.3.1-bin.tar.gz
[root@Clouder3 local]# tar zxvf zookeeper-3.4.9.tar.gz
Here I use an external ZooKeeper cluster to manage HBase.
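Note the working directory: the scp steps later on assume both trees sit under /usr/local. If you extracted the tarballs somewhere else, move them over first (a small sketch, destination assumed):
mv hbase-1.3.1 zookeeper-3.4.9 /usr/local/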
2. Modify the Configuration Files:
Enter the directory holding the HBase configuration files to edit them: [root@Clouder3 local]# cd hbase-1.3.1/conf/
(1) Edit hbase-env.sh: [root@Clouder3 conf]# vim hbase-env.sh and add the following settings:
export JAVA_HOME=/usr/java/jdk1.8.0_144/ (set JAVA_HOME)
export HBASE_MANAGES_ZK=false (defaults to true, which runs HBase's built-in ZooKeeper; since I use an external ZooKeeper cluster, this is set to false)
# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
As the comment above says, the following two lines are only needed on JDK7 and can be removed on JDK8+, so I comment them out:
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
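To double-check that both settings are in place (a quick sanity check, not part of the original steps):
[root@Clouder3 conf]# grep -E '^export (JAVA_HOME|HBASE_MANAGES_ZK)' hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_144/
export HBASE_MANAGES_ZK=false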
(2) Edit hbase-site.xml: [root@Clouder3 conf]# vim hbase-site.xml and add the following inside the <configuration> element:
<property>
<name>hbase.zookeeper.quorum</name>
<value>192.168.1.19,192.168.1.20,192.168.1.21</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://Clouder3:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
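Note that hbase.rootdir must use exactly the same NameNode address and port as fs.defaultFS in Hadoop's core-site.xml, otherwise HMaster will fail at startup. A quick way to compare (Hadoop path taken from step (4) below):
[root@Clouder3 conf]# grep -A1 fs.defaultFS /usr/local/hadoop-2.7.3/etc/hadoop/core-site.xml
<name>fs.defaultFS</name>
<value>hdfs://Clouder3:9000</value>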
(3) Edit regionservers: [root@Clouder3 conf]# vim regionservers. Delete the file's existing content and add the names of the cluster nodes:
Clouder1
Clouder2
Clouder3
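Every hostname listed here must be resolvable from every node (usually via /etc/hosts). A quick check, assuming the three hosts above:
[root@Clouder3 conf]# for h in Clouder1 Clouder2 Clouder3; do ping -c 1 $h; done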
(4) Edit hadoop-env.sh: [root@Clouder3 conf]# vim /usr/local/hadoop-2.7.3/etc/hadoop/hadoop-env.sh. Find HADOOP_CLASSPATH and append the HBase lib path to it, so that HBase-related jars run without errors:
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/local/hbase-1.3.1/lib/*
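To confirm that the HBase jars are now visible to Hadoop (assumes the hadoop command is on the PATH):
[root@Clouder3 conf]# hadoop classpath | tr ':' '\n' | grep hbase
/usr/local/hbase-1.3.1/lib/*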
This completes the HBase configuration changes. Copy the configured directory to the other nodes:
[root@Clouder3 local]# scp -r hbase-1.3.1 Clouder1:/usr/local/
[root@Clouder3 local]# scp -r hbase-1.3.1 Clouder2:/usr/local/
(5) Configure the external ZooKeeper cluster (it is best to build the ZooKeeper cluster from an odd number of hosts, since leader election requires a majority of servers):
Edit zoo.cfg. Rename zoo_sample.cfg to zoo.cfg (or copy it under that name):
[root@Clouder3 conf]# cp zoo_sample.cfg zoo.cfg
Add the following configuration to zoo.cfg:
server.0=192.168.1.19:2888:3888
server.1=192.168.1.20:2888:3888
server.2=192.168.1.21:2888:3888
dataDir=/usr/local/zookeeper-3.4.9/data (replace the default dataDir with this path)
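For reference, a complete minimal zoo.cfg after these edits would look like this (tickTime, initLimit, syncLimit, and clientPort are the zoo_sample.cfg defaults, left unchanged):
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/usr/local/zookeeper-3.4.9/data
server.0=192.168.1.19:2888:3888
server.1=192.168.1.20:2888:3888
server.2=192.168.1.21:2888:3888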
(6) Create the ZooKeeper dataDir directory:
This path corresponds to the dataDir configured in (5): [root@Clouder3 zookeeper-3.4.9]# mkdir data
Write the ZooKeeper ID into the newly created data directory: [root@Clouder3 data]# vim myid
Content: 2 (this ID must match the server number configured for this host in (5))
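Equivalently, without opening an editor:
[root@Clouder3 zookeeper-3.4.9]# echo 2 > data/myid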
Copy the configured ZooKeeper directory to the other nodes:
[root@Clouder3 local]# scp -r zookeeper-3.4.9 Clouder1:/usr/local/
[root@Clouder3 local]# scp -r zookeeper-3.4.9 Clouder2:/usr/local/
Note: after copying ZooKeeper to the other nodes, you must edit the myid file on each node so that its ID matches the server number assigned to that node in zoo.cfg!
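A sketch of doing this remotely from Clouder3, assuming Clouder1 is server.0 (192.168.1.19) and Clouder2 is server.1 (192.168.1.20); swap the numbers if your hosts map differently:
[root@Clouder3 local]# ssh Clouder1 'echo 0 > /usr/local/zookeeper-3.4.9/data/myid'
[root@Clouder3 local]# ssh Clouder2 'echo 1 > /usr/local/zookeeper-3.4.9/data/myid'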
(7) Start the ZooKeeper cluster
Go to the ZooKeeper installation directory: [root@clouder3 zookeeper-3.4.9]# bin/zkServer.sh start (the cluster is started by running this command on every node individually; you can also write your own script to start them all at once, as sketched below)
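A minimal version of such a one-shot start script, assuming passwordless SSH between the nodes:
#!/bin/bash
# start-zk-all.sh (hypothetical helper): start ZooKeeper on every node
for host in Clouder1 Clouder2 Clouder3; do
  ssh $host '/usr/local/zookeeper-3.4.9/bin/zkServer.sh start'
done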
Check the ZooKeeper status on each node:
[root@clouder3 zookeeper-3.4.9]# bin/zkServer.sh status (shows the running state)
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower (this node is a follower)
[root@Clouder1 ~]# /usr/local/zookeeper-3.4.9/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: leader (this node is the leader)
[root@clouder2 zookeeper-3.4.9]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower (this node is a follower)
(8) Edit the system environment variables
[root@clouder3 ~]# vim /etc/profile
Configure the HBase environment variables:
export HBASE_HOME=/usr/local/hbase-1.3.1
export PATH=.:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$PATH
Apply the changes:
source /etc/profile
Make the same /etc/profile edit on each of the other nodes and apply it there as well (source /etc/profile).
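One way to push the change out, assuming the same /etc/profile content fits all nodes:
[root@clouder3 ~]# scp /etc/profile Clouder1:/etc/profile
[root@clouder3 ~]# scp /etc/profile Clouder2:/etc/profile
Then run source /etc/profile in a shell on each node (new logins pick it up automatically).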
(9) Start the HBase cluster and verify:
[root@clouder3 hbase-1.3.1]# bin/start-hbase.sh
Verify with jps:
[root@clouder3 hbase-1.3.1]# jps
8256 SecondaryNameNode
8405 ResourceManager
30965 Jps
12806 QuorumPeerMain
13831 HRegionServer
8504 NodeManager
13706 HMaster
8091 DataNode
7965 NameNode
[root@Clouder2 ~]# jps
851 QuorumPeerMain
1555 HRegionServer
5508 DataNode
5612 NodeManager
22462 Jps
[root@Clouder1 ~]# jps
2881 DataNode
30834 QuorumPeerMain
2986 NodeManager
31594 HRegionServer
28814 Jps
Note: before starting the HBase cluster, make sure the Hadoop cluster and the ZooKeeper cluster are already running.
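Putting the whole startup order together, a sketch using the paths and hostnames assumed above:
# 1. Start HDFS and YARN on the Hadoop master
/usr/local/hadoop-2.7.3/sbin/start-dfs.sh
/usr/local/hadoop-2.7.3/sbin/start-yarn.sh
# 2. Start ZooKeeper on every node
for host in Clouder1 Clouder2 Clouder3; do ssh $host '/usr/local/zookeeper-3.4.9/bin/zkServer.sh start'; done
# 3. Only then start HBase
/usr/local/hbase-1.3.1/bin/start-hbase.sh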
3. Testing
Open the web UI directly at: http://192.168.27.139:16030/ (16030 is the RegionServer info port in HBase 1.x; the HMaster UI is served on 16010 by default)
Verify the HBase cluster
Change into HBase's bin directory:
cd /usr/local/hbase-1.3.1/bin
Launch the HBase shell:
./hbase shell
The complete output looks like the following (a sample session captured on an hbase-1.2.5 install; with this setup the paths and version line will show hbase-1.3.1 instead):
[root@hserver1 bin]# cd /opt/hbase/hbase-1.2.5/bin
[root@hserver1 bin]# ./hbase shell
2017-05-15 17:52:55,411 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase/hbase-1.2.5/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.8.0/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar 1 00:34:48 CST 2017

hbase(main):001:0>
At the hbase shell prompt you can run a series of hbase commands to test the cluster.
Enter: status
The complete output is:
hbase(main):001:0> status
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load

hbase(main):002:0>
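Beyond status, a short smoke test that actually exercises the RegionServers (the table name 'test' and column family 'cf' are just examples):
hbase(main):002:0> create 'test', 'cf'
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):004:0> scan 'test'
hbase(main):005:0> disable 'test'
hbase(main):006:0> drop 'test'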
To leave the HBase shell, enter: exit