HBase Fully Distributed Installation Walkthrough

HBase version: 0.90.5
Hadoop version: 0.20.2
OS version: CentOS 6.4
Install mode: fully distributed (1 master node, 2 slave nodes)
192.168.220.128 guanyy        <master>
192.168.220.129 guanyy1       <slave>
192.168.220.130 guanyy2       <slave>

The nodes are the same machines used for the Hadoop cluster.
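Each node needs to resolve the others by hostname. If DNS is not available, the mappings typically go into /etc/hosts on all three machines (the same setup already used for Hadoop):

192.168.220.128 guanyy
192.168.220.129 guanyy1
192.168.220.130 guanyy2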


1) Unpack the HBase distribution
[hadoop@guanyy ~]$ tar -zxvf hbase-0.90.5.tar.gz
After extraction, the HBase home directory looks like this:
[hadoop@guanyy hbase-0.90.5]$ ll
total 2656
drwxrwxr-x. 3 hadoop hadoop    4096 Jan  4  2012 bin
-rw-r--r--. 1 hadoop hadoop  217043 Jan  4  2012 CHANGES.txt
drwxr-xr-x. 2 hadoop hadoop    4096 Jan  4  2012 conf
-rwxrwxr-x. 1 hadoop hadoop 2425487 Jan  4  2012 hbase-0.90.5.jar
drwxrwxr-x. 5 hadoop hadoop    4096 Jan  4  2012 hbase-webapps
drwxrwxr-x. 3 hadoop hadoop    4096 Jan  9 14:45 lib
-rw-r--r--. 1 hadoop hadoop   11358 Jan  4  2012 LICENSE.txt
-rw-r--r--. 1 hadoop hadoop     803 Jan  4  2012 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop   31073 Jan  4  2012 pom.xml
-rw-r--r--. 1 hadoop hadoop    1358 Jan  4  2012 README.txt
drwxr-xr-x. 8 hadoop hadoop    4096 Jan  4  2012 src

2) Configure hbase-env.sh
[hadoop@guanyy conf]$ vim hbase-env.sh
# The java implementation to use.  Java 1.6 required.
export JAVA_HOME=/usr/java/jdk1.7.0_45


# Extra Java CLASSPATH elements.  Optional.
export HBASE_CLASSPATH=/home/hadoop/hadoop/conf

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true
(HBase 0.90.5 ships with its own ZooKeeper. Setting this to true tells HBase to manage that bundled ZooKeeper: start-hbase.sh starts ZooKeeper first, then the master and the regionservers. If you prefer to install and manage ZooKeeper yourself, set this to false.)
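If you would rather run ZooKeeper yourself, the only change here is the opposite setting (a sketch, assuming an external ensemble already installed on these three hosts and controlled with ZooKeeper's own zkServer.sh):

export HBASE_MANAGES_ZK=false   # HBase connects to ZooKeeper but does not start or stop it

Either way, hbase.zookeeper.quorum in hbase-site.xml (next step) must list the ensemble members.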

3) Configure hbase-site.xml
[hadoop@guanyy conf]$ vim hbase-site.xml
<configuration>
  <property>
       <name>hbase.rootdir</name>
       <value>hdfs://guanyy:9000/hbase</value> <!-- Note: guanyy is the Hadoop master's hostname. Use the hostname here, not the IP address; with an IP the HBase master fails at startup with a "Wrong FS" error (see the note after this block) -->
  </property>
  <property>
       <name>hbase.cluster.distributed</name> <!-- Note: true tells HBase to run in fully distributed mode -->
       <value>true</value>
  </property>
  <property>
       <name>hbase.zookeeper.quorum</name> <!-- Note: the ZooKeeper quorum hosts (IP addresses, hostnames, or DNS names); an odd number of nodes is recommended -->
       <value>guanyy,guanyy1,guanyy2</value>
  </property>
  <property>
       <name>hbase.zookeeper.property.dataDir</name> <!-- Note: when using the bundled ZooKeeper, this is where ZooKeeper keeps its data. It lives on the local filesystem, not in HDFS -->
       <value>/home/hadoop/zookeeper</value>
  </property>
</configuration>
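For background on the "Wrong FS" error: hbase.rootdir must match Hadoop's fs.default.name exactly, scheme and authority included, because the HDFS client compares the two URIs literally rather than resolving hostnames. Assuming the cluster's core-site.xml uses the hostname (as it evidently does here), the matching entry looks like this:

<!-- core-site.xml on the Hadoop cluster, shown for comparison -->
<property>
     <name>fs.default.name</name>
     <value>hdfs://guanyy:9000</value>
</property>

With an IP address in hbase.rootdir the authorities would differ, and the master would abort with an error along the lines of "Wrong FS: hdfs://192.168.220.128:9000/hbase, expected: hdfs://guanyy:9000".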

4) Edit hbase-0.90.5/conf/regionservers (its role mirrors Hadoop's slaves file). Entries may be IP addresses, hostnames, or DNS names:
[hadoop@guanyy conf]$ vim regionservers
guanyy
guanyy1
guanyy2


5) Replace the Hadoop jar
HBase 0.90.5 bundles hadoop-core-0.20-append-r1056497.jar under lib/; replace it with the hadoop-0.20.2-core.jar from the running cluster so that the Hadoop version under HBase matches the one on the cluster (a version mismatch commonly causes startup errors or hangs).
[hadoop@guanyy lib]$ mv ./hadoop-core-0.20-append-r1056497.jar ./hadoop-core-0.20-append-r1056497.jar.bak
[hadoop@guanyy lib]$ pwd
/home/hadoop/hbase-0.90.5/lib
[hadoop@guanyy lib]$ cp /home/hadoop/hadoop/hadoop-0.20.2-core.jar ./
[hadoop@guanyy lib]$ ls
activation-1.1.jar                        jaxb-impl-2.1.12.jar
asm-3.1.jar                               jersey-core-1.4.jar
avro-1.3.3.jar                            jersey-json-1.4.jar
commons-cli-1.2.jar                       jersey-server-1.4.jar
commons-codec-1.4.jar                     jettison-1.1.jar
commons-el-1.0.jar                        jetty-6.1.26.jar
commons-httpclient-3.1.jar                jetty-util-6.1.26.jar
commons-lang-2.5.jar                      jruby-complete-1.6.0.jar
commons-logging-1.1.1.jar                 jsp-2.1-6.1.14.jar
commons-net-1.4.1.jar                     jsp-api-2.1-6.1.14.jar
core-3.1.1.jar                            jsr311-api-1.1.1.jar
guava-r06.jar                             log4j-1.2.16.jar
hadoop-0.20.2-core.jar                    protobuf-java-2.3.0.jar
hadoop-core-0.20-append-r1056497.jar.bak  ruby
jackson-core-asl-1.5.5.jar                servlet-api-2.5-6.1.14.jar
jackson-jaxrs-1.5.5.jar                   slf4j-api-1.5.8.jar
jackson-mapper-asl-1.4.2.jar              slf4j-log4j12-1.5.8.jar
jackson-xc-1.5.5.jar                      stax-api-1.0.1.jar
jasper-compiler-5.5.23.jar                thrift-0.2.0.jar
jasper-runtime-5.5.23.jar                 xmlenc-0.52.jar
jaxb-api-2.1.jar                          zookeeper-3.3.2.jar
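As an optional sanity check, confirm that the only hadoop core jar still ending in .jar under lib/ is the one copied from the cluster:

[hadoop@guanyy lib]$ ls *.jar | grep hadoop

which should print only hadoop-0.20.2-core.jar (the bundled jar no longer matches because it was renamed to .jar.bak).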


6) Copy the HBase directory to the other two nodes
[hadoop@guanyy ~]$ scp -r /home/hadoop/hbase-0.90.5/  hadoop@guanyy1:/home/hadoop/
[hadoop@guanyy ~]$ scp -r /home/hadoop/hbase-0.90.5/  hadoop@guanyy2:/home/hadoop/
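Optionally, confirm the copy landed by listing the directory on each slave over ssh:

[hadoop@guanyy ~]$ ssh guanyy1 ls /home/hadoop/hbase-0.90.5
[hadoop@guanyy ~]$ ssh guanyy2 ls /home/hadoop/hbase-0.90.5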

7) Add the HBase environment variables (on all nodes)
[hadoop@guanyy ~]$ su root
Password:
[root@guanyy ~]# vi /etc/profile
export HBASE_HOME=/home/hadoop/hbase-0.90.5
export PATH=$PATH:$HBASE_HOME/bin
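After editing /etc/profile, switch back to the hadoop user and source the file (or log in again) so the new variables take effect; a quick check:

[hadoop@guanyy ~]$ source /etc/profile
[hadoop@guanyy ~]$ echo $HBASE_HOME
[hadoop@guanyy ~]$ which hbase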

8) Start Hadoop and create the HBase root directory
[hadoop@guanyy hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@guanyy hadoop]$ stop-all.sh
stopping jobtracker
guanyy2: stopping tasktracker
guanyy1: stopping tasktracker
stopping namenode
guanyy2: stopping datanode
guanyy1: stopping datanode
guanyy: stopping secondarynamenode
[hadoop@guanyy hadoop]$ start-all.sh
starting namenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-guanyy.out
guanyy2: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-guanyy2.out
guanyy1: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-guanyy1.out
guanyy: starting secondarynamenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-guanyy.out
starting jobtracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-guanyy.out
guanyy2: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-guanyy2.out
guanyy1: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-guanyy1.out

[hadoop@guanyy hadoop]$ jps
16468 JobTracker
16303 NameNode
16411 SecondaryNameNode
16629 Jps

[hadoop@guanyy hadoop]$ hadoop fs -mkdir hbase
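One caveat: hadoop fs -mkdir hbase uses a relative path, so it creates /user/hadoop/hbase in HDFS, while hbase.rootdir configured in step 3 points at the absolute path /hbase. To pre-create the directory HBase will actually use, pass the absolute path (in this version the master normally creates a missing root directory on first start anyway, so this is mostly a safety net):

[hadoop@guanyy hadoop]$ hadoop fs -mkdir /hbase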

9) Start HBase
[hadoop@guanyy bin]$ /home/hadoop/hbase-0.90.5/bin/start-hbase.sh
guanyy2: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-guanyy2.out
guanyy1: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-guanyy1.out
guanyy: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-guanyy.out
starting master, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-master-guanyy.out
guanyy: starting regionserver, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-regionserver-guanyy.out
guanyy2: starting regionserver, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-regionserver-guanyy2.out
guanyy1: starting regionserver, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-regionserver-guanyy1.out

[hadoop@guanyy bin]$ jps
18938 JobTracker
19144 HQuorumPeer
18717 NameNode
18870 SecondaryNameNode
19261 HRegionServer
19411 Jps
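To confirm the cluster is healthy, check the daemons on the slaves and the web UIs (port numbers below are the defaults for this HBase version):

[hadoop@guanyy bin]$ ssh guanyy1 jps    # expect HQuorumPeer, HRegionServer, DataNode, TaskTracker
[hadoop@guanyy bin]$ ssh guanyy2 jps

The master status page is served at http://guanyy:60010/ and each regionserver UI at port 60030. On the master itself, jps should also list an HMaster process; if it is missing, check logs/hbase-hadoop-master-guanyy.log under the HBase home directory.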


10) Test: create a table in HBase
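A minimal sketch of such a test with the HBase shell (the table and column-family names here are just examples):

[hadoop@guanyy bin]$ hbase shell
hbase(main):001:0> create 'test', 'cf'                    # table 'test' with one column family 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'   # write one cell
hbase(main):003:0> scan 'test'                            # read it back
hbase(main):004:0> get 'test', 'row1'
hbase(main):005:0> disable 'test'                         # a table must be disabled before it can be dropped
hbase(main):006:0> drop 'test'
hbase(main):007:0> exit

list should also show the table while it exists, and the same tables appear on the master web UI.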

