(1) Install the JDK, Hadoop, and ZooKeeper. This walkthrough uses Hadoop 2.7.4 and ZooKeeper 3.4.10, with a JDK of version 1.7 or later (as required by HBase 1.2.x).
(2) Download HBase; the version chosen here is 1.2.1.
(3) Upload and extract the HBase installation package. Upload the package to the /export/software/ directory on the Linux system,
then extract it into the /export/servers/ directory with the following command:
tar -zxvf hbase-1.2.1-bin.tar.gz -C /export/servers/
(4) Copy the hdfs-site.xml and core-site.xml configuration files from the /hadoop-2.7.4/etc/hadoop directory into the /hbase-1.2.1/conf directory. The copy command is as follows:
cp /export/servers/hadoop-2.7.4/etc/hadoop/{hdfs-site.xml,core-site.xml} /export/servers/hbase-1.2.1/conf
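The brace expansion in the command above lets a single cp invocation copy both files. A minimal sketch of the same pattern, run against throwaway temporary directories instead of the real /export/servers paths so it can be tried anywhere:

```shell
#!/bin/sh
# Demonstrate the {hdfs-site.xml,core-site.xml} brace expansion used above,
# against scratch directories standing in for the real installation paths.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/hdfs-site.xml" "$src/core-site.xml"

# Brace expansion is a bash feature, so run the copy under bash.
bash -c "cp $src/{hdfs-site.xml,core-site.xml} $dst/"

# Verify that both configuration files arrived.
for f in hdfs-site.xml core-site.xml; do
    [ -f "$dst/$f" ] && echo "copied: $f"
done
rm -rf "$src" "$dst"
```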
(5) Enter the /hbase-1.2.1/conf directory and modify the relevant configuration files.
Open the hbase-env.sh configuration file, point JAVA_HOME at the JDK, and configure ZooKeeper management. The modified lines in hbase-env.sh are as follows:
# The java implementation to use. Java 1.7+ required.
# Point HBase at the JDK installation
export JAVA_HOME=/export/servers/jdk
# Tell HBase whether it should manage its own instance of ZooKeeper or not.
# Use the external ZooKeeper ensemble instead of HBase's built-in one
export HBASE_MANAGES_ZK=false
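The same two settings can be appended non-interactively, which is handy when scripting the setup across nodes. A sketch that writes to a scratch file standing in for /export/servers/hbase-1.2.1/conf/hbase-env.sh:

```shell
#!/bin/sh
# Append the JDK path and the external-ZooKeeper flag to hbase-env.sh.
# A scratch file is used here; on the cluster this would be
# /export/servers/hbase-1.2.1/conf/hbase-env.sh.
env_file=$(mktemp)
cat >> "$env_file" <<'EOF'
export JAVA_HOME=/export/servers/jdk
export HBASE_MANAGES_ZK=false
EOF

# Confirm both export lines landed in the file.
grep -c '^export' "$env_file"   # expect 2
rm -f "$env_file"
```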
Open the hbase-site.xml configuration file and specify HBase's storage path on HDFS, its distributed storage mode, and the ZooKeeper quorum addresses. The modified hbase-site.xml reads as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-->
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop01:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
</configuration>
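A quick sanity check that the three properties this walkthrough relies on are all present in hbase-site.xml. The sketch below runs against a scratch copy of the file; on the cluster the path would be /export/servers/hbase-1.2.1/conf/hbase-site.xml:

```shell
#!/bin/sh
# Check that hbase-site.xml defines the three required properties.
# A scratch copy stands in for the real configuration file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop01:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
  </property>
</configuration>
EOF

for prop in hbase.rootdir hbase.cluster.distributed hbase.zookeeper.quorum; do
    grep -q "<name>$prop</name>" "$conf" && echo "ok: $prop" || echo "MISSING: $prop"
done
rm -f "$conf"
```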
Modify the regionservers configuration file to designate the HBase worker nodes (hadoop02 and hadoop03), one hostname per line:
hadoop02
hadoop03
Create a backup-masters configuration file to designate a standby master node and guard against a single point of failure. Its content is:
hadoop02
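Both of these one-hostname-per-line files can be generated with printf. A sketch using a scratch directory standing in for /export/servers/hbase-1.2.1/conf:

```shell
#!/bin/sh
# Create the regionservers and backup-masters files (one hostname per line).
# A scratch directory stands in for /export/servers/hbase-1.2.1/conf.
conf_dir=$(mktemp -d)
printf '%s\n' hadoop02 hadoop03 > "$conf_dir/regionservers"
printf '%s\n' hadoop02          > "$conf_dir/backup-masters"

wc -l < "$conf_dir/regionservers"   # expect 2
cat "$conf_dir/backup-masters"      # expect hadoop02
rm -rf "$conf_dir"
```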
Modify the profile configuration file: open the system environment file with vi /etc/profile and add HBase's environment variables. This must be done on hadoop01, hadoop02, and hadoop03 (alternatively, the file can be distributed with scp). The additions are as follows:
export HBASE_HOME=/export/servers/hbase-1.2.1
export PATH=$PATH:$HBASE_HOME/bin
Distribute the HBase installation directory to the hadoop02 and hadoop03 servers with the following commands:
scp -r /export/servers/hbase-1.2.1/ hadoop02:/export/servers/
scp -r /export/servers/hbase-1.2.1/ hadoop03:/export/servers/
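The two scp commands generalize to a loop over the worker hosts. The sketch below is a dry run: echo prints each command instead of executing it, so it is safe to try anywhere; drop the echo to perform the actual copies:

```shell
#!/bin/sh
# Distribute the HBase installation directory to each worker host.
# "echo" makes this a dry run; remove it to perform the real scp copies.
HBASE_DIR=/export/servers/hbase-1.2.1
for host in hadoop02 hadoop03; do
    echo scp -r "$HBASE_DIR/" "$host:/export/servers/"
done
```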
Then run the following on hadoop01, hadoop02, and hadoop03 to bring the updated environment variables into effect:
source /etc/profile
(6) Start ZooKeeper and HDFS with the following commands:
# Start ZooKeeper (run on each ZooKeeper node)
zkServer.sh start
# Start HDFS
start-dfs.sh
(7) Keep the cluster clocks synchronized by running the following command on every node:
ntpdate -u cn.pool.ntp.org
If the command fails with an error such as "command not found", ntp is not installed.
Install ntp first (again, on all three nodes), then rerun the synchronization command above. The installation command is:
yum -y install ntp
Start the HBase cluster with the following command (run on hadoop01 only):
start-hbase.sh
(8) Use the jps command to check whether the HBase cluster services were deployed successfully. The results are shown in the figure below.
As the output shows, an HMaster process appears on hadoop01, both HMaster and HRegionServer processes appear on hadoop02, and an HRegionServer process appears on hadoop03, confirming that the HBase cluster was installed and deployed successfully.
To stop the HBase cluster, run the stop-hbase.sh command.
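The per-host daemon expectations above can be checked mechanically. In the sketch below the jps output is mocked inline; on a live cluster it would be captured with something like ssh hadoop02 jps, which is an assumption, not a command from this walkthrough:

```shell
#!/bin/sh
# Verify that a host's jps output contains every daemon we expect there.
# $1 = space-separated list of expected daemons; jps output arrives on stdin.
check_daemons() {
    out=$(cat)
    for d in $1; do
        case "$out" in
            *"$d"*) ;;                        # daemon found, keep checking
            *) echo "missing: $d"; return 1 ;;
        esac
    done
    echo "all expected daemons present"
}

# Mocked jps output for hadoop02 (a real run would capture it over ssh).
hadoop02_jps="2101 HMaster
2233 HRegionServer
1899 QuorumPeerMain
2410 Jps"

echo "$hadoop02_jps" | check_daemons "HMaster HRegionServer"
```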
You can also check the HBase cluster status by visiting http://hadoop01:16010 (hadoop01 may be replaced with its IP address).
The result is shown in the figure below.
Visiting http://hadoop02:16010 shows the status of the standby master node, as in the figure below.