一、Installation media

http://archive.apache.org/dist/hbase

hbase-0.96.2-hadoop2-bin.tar.gz

二、Install the JDK:

[root@hadoop-server01 bin]# mkdir -p /usr/local/apps

[root@hadoop-server01 bin]# ll /usr/local/apps/

total 4

drwxr-xr-x. 8 uucp 143 4096 Apr 10  2015 jdk1.7.0_80

[root@hadoop-server01 bin]# pwd

/usr/local/apps/jdk1.7.0_80/bin

 

[root@hadoop-server01 bin]# vi /etc/profile

export JAVA_HOME=/usr/local/apps/jdk1.7.0_80

export PATH=$PATH:$JAVA_HOME/bin

[root@hadoop-server01 bin]# source /etc/profile
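
To verify that the JDK is now on the PATH, a quick check (it should report version 1.7.0_80):

[root@hadoop-server01 bin]# java -version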

三、Install ZooKeeper (ZK):

See the ZooKeeper installation section of this blog for details.
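
Before moving on, it is worth confirming that ZooKeeper is actually running on every node. A minimal check, run from your ZooKeeper bin directory (one node should report Mode: leader, the others Mode: follower):

[root@hadoop-server01 bin]# ./zkServer.sh status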

四、Install HBase

Note: HBase RegionServers need to run on the same machines as the HDFS DataNodes; the HMaster can be deployed on a separate machine.

1、Upload the installation package and extract it to the installation directory

[root@hadoop-server01 hbase-0.96.2-hadoop2]# tar -xvf hbase-0.96.2-hadoop2-bin.tar.gz -C /usr/local/apps/
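
Optionally, you can add an HBASE_HOME entry to /etc/profile, mirroring the JDK setup above, so the HBase scripts can be run from any directory (a convenience sketch; the steps below run the scripts from the bin directory instead):

[root@hadoop-server01 apps]# vi /etc/profile

export HBASE_HOME=/usr/local/apps/hbase-0.96.2-hadoop2

export PATH=$PATH:$HBASE_HOME/bin

[root@hadoop-server01 apps]# source /etc/profile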

2、Edit the configuration files

--hbase-env.sh

# This script sets variables multiple times over the course of starting an hbase process,

# so try to keep things idempotent unless you want to take an even deeper look

# into the startup scripts (bin/hbase, etc.)

# The java implementation to use.  Java 1.6 required.

export JAVA_HOME=/usr/local/apps/jdk1.7.0_80

# Extra Java CLASSPATH elements.  Optional.

# export HBASE_CLASSPATH=

...................

# Tell HBase whether it should manage its own instance of Zookeeper or not.

--Disable HBase's bundled ZooKeeper and use the ZooKeeper cluster we set up ourselves

export HBASE_MANAGES_ZK=false

--hbase-site.xml

<configuration>

  <!-- Path under which HBase stores its data on HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop-server01:9000/hbase</value>
  </property>

  <!-- Run HBase in fully distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <!-- ZooKeeper quorum addresses, comma-separated -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop-server01:2181,hadoop-server02:2181,hadoop-server03:2181</value>
  </property>

  <!-- Maximum allowed clock skew between the HMaster and the HRegionServers,
       in milliseconds; prevents large time differences between nodes from
       breaking the cluster -->
  <property>
    <name>hbase.master.maxclockskew</name>
    <value>180000</value>
  </property>

</configuration>
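
A malformed hbase-site.xml is a common cause of startup failures. If xmllint (from libxml2) is installed, you can check that the file is well-formed before going further; no output means the XML is valid:

[root@hadoop-server01 conf]# xmllint --noout hbase-site.xml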

--regionservers

[root@hadoop-server01 conf]# vi regionservers

hadoop-server01

hadoop-server02

hadoop-server03

3、Copy Hadoop's hdfs-site.xml and core-site.xml into the hbase/conf directory

This is especially necessary when the NameNode runs in an HA setup, because HDFS is then accessed through a nameservice URI such as hdfs://ns1/hbase.

[root@hadoop-server01 hadoop]# cp hdfs-site.xml /usr/local/apps/hbase-0.96.2-hadoop2/conf/

[root@hadoop-server01 hadoop]# cp core-site.xml /usr/local/apps/hbase-0.96.2-hadoop2/conf/
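
At this point you can confirm that HDFS is up and reachable before pointing HBase at it, assuming Hadoop's bin directory is on the PATH:

[root@hadoop-server01 hadoop]# hdfs dfs -ls /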

4、Distribute the HBase installation to the other nodes

[root@hadoop-server01 apps]# scp -r hbase-0.96.2-hadoop2/ hadoop-server02:/usr/local/apps/

[root@hadoop-server01 apps]# scp -r hbase-0.96.2-hadoop2/ hadoop-server03:/usr/local/apps/
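
With more nodes, a simple loop avoids repeating the scp command (a sketch; adjust the host list to match your cluster):

[root@hadoop-server01 apps]# for host in hadoop-server02 hadoop-server03; do scp -r hbase-0.96.2-hadoop2/ $host:/usr/local/apps/; done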

5、Start HBase

Note: the HMaster is not specified in any configuration file; whichever node you run the start script on becomes the HMaster.
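
HDFS and ZooKeeper must already be running on all nodes before HBase is started. A quick check using the process names from the earlier steps:

[root@hadoop-server01 bin]# jps | egrep 'NameNode|DataNode|QuorumPeerMain'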

[root@hadoop-server01 bin]# ./start-hbase.sh

starting master, logging to /usr/local/apps/hbase-0.96.2-hadoop2/bin/../logs/hbase-root-master-hadoop-server01.out

hadoop-server03: starting regionserver, logging to /usr/local/apps/hbase-0.96.2-hadoop2/bin/../logs/hbase-root-regionserver-hadoop-server03.out

hadoop-server02: starting regionserver, logging to /usr/local/apps/hbase-0.96.2-hadoop2/bin/../logs/hbase-root-regionserver-hadoop-server02.out

hadoop-server01: starting regionserver, logging to /usr/local/apps/hbase-0.96.2-hadoop2/bin/../logs/hbase-root-regionserver-hadoop-server01.out

[root@hadoop-server01 bin]# jps

4332 HRegionServer

4591 Jps

2597 DataNode

2747 SecondaryNameNode

2478 NameNode

3640 NodeManager

3356 ResourceManager

4203 HMaster

3790 QuorumPeerMain

[root@hadoop-server03 ~]# jps

2405 DataNode

3265 Jps

2832 QuorumPeerMain

3104 HRegionServer

2694 NodeManager
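
As a quick smoke test, open the HBase shell and run status (a minimal check; the exact figures depend on your cluster):

[root@hadoop-server01 bin]# ./hbase shell

hbase(main):001:0> status

hbase(main):002:0> exit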

6、HBase backup Master

Any node with HBase installed can act as a backup Master; it just needs to be started as a background daemon, and there can be multiple backup master nodes (an alternative using the conf/backup-masters file is sketched after the jps output below).

--Start a backup master

[root@hadoop-server02 bin]# ./hbase-daemon.sh start master

[root@hadoop-server02 bin]# jps

3114 HRegionServer

2702 NodeManager

3312 HMaster

3430 Jps

2829 QuorumPeerMain

2411 DataNode
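
As an alternative to starting backup masters by hand, they can be listed in conf/backup-masters, one hostname per line, and start-hbase.sh will then bring them up automatically (a sketch, assuming hadoop-server02 is the designated backup):

[root@hadoop-server01 conf]# echo hadoop-server02 > backup-masters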

--Check the status in the web UI

Visit port 60010 on the master node.
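
The UI can also be probed from the command line; a 200 here means the master status page is being served (a minimal check against the active master):

[root@hadoop-server01 bin]# curl -s -o /dev/null -w "%{http_code}\n" http://hadoop-server01:60010/master-status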

(screenshot: HBase Master web UI status page)

--Simulated high-availability (failover) test

Kill the currently active Master process:

[root@hadoop-server01 bin]# jps

2597 DataNode

5099 HRegionServer

5436 Jps

2747 SecondaryNameNode

4964 HMaster

2478 NameNode

3640 NodeManager

3356 ResourceManager

3790 QuorumPeerMain

[root@hadoop-server01 bin]# kill -9 4964

(screenshot: the web UI now shows hadoop-server02 as the active Master)

As the screenshot shows, the active master is now hadoop-server02. When the master on hadoop-server01 is started again, it comes back as a backup:

[root@hadoop-server01 bin]# ./hbase-daemon.sh start master
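
You can also confirm which master is currently active from ZooKeeper, since the active master registers itself under the /hbase/master znode (run from ZooKeeper's bin directory; the znode content is partly binary, but the active master's hostname is visible in it):

[root@hadoop-server01 bin]# ./zkCli.sh -server hadoop-server01:2181 get /hbase/master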

Appendix: commands

Start masters on all nodes at once:

[root@hadoop-server01 bin]# ./hbase-daemons.sh start master

Start a master on the current node only:

[root@hadoop-server01 bin]# ./hbase-daemon.sh start master
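
To shut down the whole cluster, HBase ships a matching stop script:

[root@hadoop-server01 bin]# ./stop-hbase.sh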