1. Download the HBase installation archive (hbase-1.0.0-cdh5.5.0) and upload it in binary mode to the host where it will be installed.
2. Extract the hbase-1.0.0-cdh5.5.0 archive (tar zxvf hbase-1.0.0-cdh5.5.0).
3. Add the Java and HBase settings to the user's environment file (JDK 6 or later is required):
export JAVA_HOME=/usr/java/jdk1.7
export HBASE_HOME=/home/ecs/hbase-1.0.0-cdh5.5.0
PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH:$HOME/bin
export PATH
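After reloading the profile, the settings can be verified with a quick sketch like the following (the paths are the ones used in this guide; adjust them to your installation):

```shell
# Set the variables as in the profile above, then confirm that
# HBase's bin directory actually ended up on PATH.
export JAVA_HOME=/usr/java/jdk1.7
export HBASE_HOME=/home/ecs/hbase-1.0.0-cdh5.5.0
PATH="$JAVA_HOME/bin:$HBASE_HOME/bin:$PATH"
export PATH

case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "HBASE_HOME/bin is on PATH" ;;
  *)                     echo "HBASE_HOME/bin is missing from PATH" ;;
esac
```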
4. In cluster mode, a Hadoop cluster must already be installed, and the Hadoop version must match the one the HBase release was built against; otherwise, replace the jar files under hbase/lib with the jars from the Hadoop installation. A replacement script follows (adapt the versions and paths to your environment; do not copy it verbatim):
find -name "hadoop*jar" | sed 's/2.5.1/2.5.2/g' | sed 's/\.\///g' > f.log
rm ./hadoop*jar
cat ./f.log | while read Line
do
    find /home/grid/hadoop-2.5.2 -name "$Line" | xargs -i cp {} ./
done
rm ./f.log
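The core of the script above is the sed substitution that maps each HBase-bundled jar name to the name of its Hadoop counterpart. A minimal illustration (2.5.1 and 2.5.2 are the example versions from the script, not fixed values):

```shell
# Show how the sed substitution rewrites one bundled jar name
# from the old Hadoop version to the new one.
old="hadoop-common-2.5.1.jar"
new=$(echo "$old" | sed 's/2.5.1/2.5.2/g')
echo "$new"   # prints hadoop-common-2.5.2.jar
```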
5. Edit hbase-env.sh under $HBASE_HOME/conf and add the environment settings below (the first line points at the JDK; the second points at the Hadoop configuration directory; the third controls whether HBase manages its own bundled ZooKeeper: in cluster mode set it to false and use the ZooKeeper you already installed):
export JAVA_HOME=/usr/java/jdk1.7
export HBASE_CLASSPATH=/home/ecs/hadoop-2.6.0-cdh5.5.0/etc/hadoop
export HBASE_MANAGES_ZK=false
6. Edit hbase-site.xml under $HBASE_HOME/conf:
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://app1.ecs.top:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.tmp.dir</name>
        <value>/home/ecs/hbase-1.0.0-cdh5.5.0/tmp</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>app1.ecs.top,app2.ecs.top,app3.ecs.top</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>app1.ecs.top:60000</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/ekafka/hbase-1.0.0-cdh5.5.0/zookeeper</value>
    </property>
</configuration>
7. Edit the regionservers file under $HBASE_HOME/conf and add the cluster's slave hosts:
app2.ecs.top
app3.ecs.top
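The regionservers file is just one hostname per line, so it can also be generated from a host list. A sketch, using the hostnames from this guide (a temp file stands in for the real conf/regionservers so nothing is overwritten):

```shell
# Write a sample regionservers file, one slave hostname per line.
# A temp file is used here so a real conf/regionservers is not clobbered.
sample=$(mktemp)
printf '%s\n' app2.ecs.top app3.ecs.top > "$sample"
cat "$sample"
rm -f "$sample"
```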
8. One machine's HBase is now fully configured. Copy the entire HBase directory to every other host that will be part of the cluster, using:
scp -r /home/ekafka/hbase-1.0.0-cdh5.5.0 ekafka@app2.ecs.top:/home/ekafka
scp -r /home/ekafka/hbase-1.0.0-cdh5.5.0 ekafka@app3.ecs.top:/home/ekafka
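Rather than typing one scp per host, the copy can be driven by a loop over the slave list. This dry-run sketch only echoes the commands it would run (hosts and paths are the ones from this guide; remove the echo to actually execute):

```shell
# Dry run: print the scp command that would be run for each slave host.
for host in app2.ecs.top app3.ecs.top; do
  echo scp -r /home/ekafka/hbase-1.0.0-cdh5.5.0 "ekafka@${host}:/home/ekafka"
done
```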
9. Other points to note: once every machine is set up, edit /etc/hosts on each of them and add the IP address and hostname of every cluster node. (Note: adjust the environment variables and configuration files on each host to its actual situation.)
10.1.236.85 app1.ecs.top
10.1.236.86 app2.ecs.top
XX.XX.XXX.XX app3.ecs.top
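After editing /etc/hosts, it is worth checking on each node that every cluster hostname actually resolves. A sketch, assuming the three hostnames from this guide:

```shell
# Check that each cluster hostname resolves (run on every node
# after updating /etc/hosts).
for host in app1.ecs.top app2.ecs.top app3.ecs.top; do
  if getent hosts "$host" >/dev/null 2>&1; then
    echo "$host resolves"
  else
    echo "$host does NOT resolve"
  fi
done
```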
10. Start HBase from the $HBASE_HOME/bin directory:
sh start-hbase.sh
11. Verify that the daemons have started on each host with the jps command (the HBase processes to look for are HMaster on the master and HRegionServer on the slaves):
[ekafka@app1 ~]$ jps
10072 ResourceManager
20909 QuorumPeerMain
13215 HMaster
9801 NameNode
13953 Jps
22618 Kafka

[ekafka@app2 ~]$ jps
8379 QuorumPeerMain
22785 HRegionServer
18841 SecondaryNameNode
9438 Kafka
18735 DataNode
24044 Jps
18917 NodeManager

[ekafka@app3 ~]$ jps
9031 Kafka
16775 NodeManager
7929 QuorumPeerMain
18312 HRegionServer
21965 Jps
16664 DataNode
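Rather than reading the jps output by eye, the expected daemons can be grepped for. A sketch against a captured sample of the app2 listing above:

```shell
# Check a captured jps listing for the daemons a slave node should run.
# The sample text below is taken from the app2 output above.
jps_out='8379 QuorumPeerMain
22785 HRegionServer
18735 DataNode'
for daemon in HRegionServer DataNode QuorumPeerMain; do
  echo "$jps_out" | grep -q "$daemon" && echo "$daemon: running"
done
```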
12. Check that the HBase installation succeeded by running the hbase shell command to enter the HBase shell:
[ekafka@app1 ~]$ hbase shell
2015-12-15 14:43:49,436 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-12-15 14:43:49,470 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-12-15 14:43:49,497 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-12-15 14:43:49,522 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-12-15 14:43:49,543 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/ekafka/hbase-1.0.0-cdh5.5.0/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/ekafka/hadoop-2.6.0-cdh5.5.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.0.0-cdh5.5.0, rUnknown, Mon Nov 9 12:37:38 PST 2015

hbase(main):001:0>
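Beyond just reaching the prompt, a short smoke test exercises create/put/scan end to end. This sketch only prints the shell commands; pipe its output into `hbase shell` on the master to run them (the table name `smoke_test` is a hypothetical example, not from this guide):

```shell
# Print a minimal HBase shell smoke-test script.
# To run it against the cluster:  sh smoke.sh | hbase shell
cat <<'EOF'
create 'smoke_test', 'cf'
put 'smoke_test', 'r1', 'cf:c1', 'v1'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
exit
EOF
```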
Notes:
1) Start Hadoop first, then start HBase.
2) Take Hadoop out of safe mode if needed: hadoop dfsadmin -safemode leave
3) Confirm that in HBase's hbase-site.xml:

<name>hbase.rootdir</name>
<value>hdfs://app1.ecs.top:9000/hbase</value>

and in Hadoop's core-site.xml:

<name>fs.default.name</name>
<value>hdfs://app1.ecs.top:9000</value>

the hostname and port (the hdfs://app1.ecs.top:9000 part) are identical.
Otherwise HBase fails with: java.lang.RuntimeException: HMaster Aborted
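This consistency rule can be checked mechanically: hbase.rootdir must begin with the exact URI configured as fs.default.name. A sketch using the values from this guide:

```shell
# hbase.rootdir must start with the fs.default.name URI
# (same scheme, host, and port), or HMaster aborts on startup.
fs_default="hdfs://app1.ecs.top:9000"
hbase_rootdir="hdfs://app1.ecs.top:9000/hbase"
case "$hbase_rootdir" in
  "$fs_default"/*) echo "URIs match" ;;
  *)               echo "MISMATCH: HMaster will abort" ;;
esac
```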