Installing HBase 1.2.6 on Hadoop 2.7.3

1. Environment Information

The operating system information is as follows:
[hadoop@hadoop1 soft]$ uname -a
Linux hadoop1 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[hadoop@hadoop1 soft]$ cat /etc/redhat-release 
CentOS release 6.4 (Final)

The IP addresses and hostnames are as follows:
[hadoop@hadoop1 soft]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.15 basic
192.168.56.1 PC-201306171517
192.168.56.21 hadoop1
192.168.56.22 hadoop2
192.168.56.23 hadoop3

2. Download HBase

HBase depends on Hadoop and ZooKeeper, so Hadoop must be installed before HBase (the detailed installation steps are covered in a separate post).
ZooKeeper can be the instance bundled with HBase, or you can install it yourself; here ZooKeeper was installed in advance (again, the detailed steps are covered separately).
The HBase release must also be compatible with the Hadoop version. The HBase/Hadoop compatibility matrix is as follows:
X: not supported
NT: not tested
S: supported
                      HBase-1.1.x   HBase-1.2.x   HBase-1.3.x   HBase-2.0.x
Hadoop-2.0.x-alpha    X             X             X             X
Hadoop-2.1.0-beta     X             X             X             X
Hadoop-2.2.0          NT            X             X             X
Hadoop-2.3.x          NT            X             X             X
Hadoop-2.4.x          S             S             S             X
Hadoop-2.5.x          S             S             S             X
Hadoop-2.6.0          X             X             X             X
Hadoop-2.6.1+         NT            S             S             S
Hadoop-2.7.0          X             X             X             X
Hadoop-2.7.1+         NT            S             S             S
Hadoop-2.8.0          X             X             X             X
Hadoop-3.0.0-alphax   NT            NT            NT            NT

The download page is: http://hbase.apache.org/
I chose HBase 1.2.6 here.
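As a quick sketch (the Apache archive mirror URL below is an assumption, not taken from the original post), the tarball can also be fetched directly from the command line:
# Assumption: download HBase 1.2.6 from the Apache archive mirror.
wget https://archive.apache.org/dist/hbase/1.2.6/hbase-1.2.6-bin.tar.gz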

3. Extract HBase

Upload hbase-1.2.6-bin.tar.gz to the hadoop1 host, extract it, and move the extracted directory to your chosen installation directory:
[hadoop@hadoop1 soft]$ tar zxvf hbase-1.2.6-bin.tar.gz 
[hadoop@hadoop1 soft]$ mv hbase-1.2.6 /hadoop/hbase
For convenience, add HBase's bin directory to the PATH environment variable:
PATH=/hadoop/hbase/bin:/hadoop/zookeeper/bin:$HIVE_HOME/bin:/hadoop/pig/bin:$JAVA_HOME/bin:$PATH:$HOME/bin:/hadoop/hadoop/bin:/hadoop/hadoop/sbin
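The post does not say which file this PATH assignment lives in; a minimal sketch, assuming it is kept in the hadoop user's ~/.bash_profile so it survives a re-login:
# Assumption: the PATH line above is added to ~/.bash_profile; reload it so the current shell picks it up.
vi ~/.bash_profile          # add or extend the PATH line shown above
source ~/.bash_profile
which hbase                 # should print /hadoop/hbase/bin/hbase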

4. Configure HBase

(1) Modify hbase-env.sh

Set the JAVA_HOME environment variable and the related options:
[hadoop@hadoop1 conf]$ pwd
/hadoop/hbase/conf
[hadoop@hadoop1 conf]$ vi hbase-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_131
export HBASE_MANAGES_ZK=false
export HBASE_CLASSPATH=/hadoop/hadoop/etc/hadoop          # Hadoop configuration directory
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
HBASE_MANAGES_ZK is set to false because an external ZooKeeper ensemble is used, so HBase will not start its own ZooKeeper.
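Note that JDK 8 removed the permanent generation, so the PermSize/MaxPermSize flags above are ignored and only produce the warnings you will see in the startup logs later. A small optional sketch, if you prefer cleaner logs, is to leave those flags out:
# Optional on JDK 8: the PermGen flags are ignored and only trigger warnings, so they can be removed.
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"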

(2) Modify hbase-site.xml


[hadoop@hadoop1 conf]$ vi hbase-site.xml
<configuration>

<property>
        <name>hbase.rootdir</name>
        <value>hdfs://hadoop1:9000/hbase</value>                    <!-- must match the HDFS address in Hadoop's core-site.xml -->
</property>
<property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
</property>
<property>
        <name>hbase.zookeeper.quorum</name>
        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
</property>
<property>
        <name>hbase.tmp.dir</name>
        <value>/hadoop/hbase/tmp</value>                            <!-- must be created manually -->
</property>

</configuration>
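Since hbase.tmp.dir must be created manually, here is a minimal sketch, assuming the directory is needed on each of the three nodes:
# Assumption: create the directory configured as hbase.tmp.dir on every node (hadoop1, hadoop2, hadoop3).
mkdir -p /hadoop/hbase/tmp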

(3) Modify regionservers

[hadoop@hadoop1 conf]$ vi regionservers 
hadoop1
hadoop2
hadoop3

5. Copy HBase to the Other Nodes

Copy the freshly configured HBase installation directory to the other two nodes:
[hadoop@hadoop3 ~]$ scp -r hadoop1:/hadoop/hbase /hadoop/
[hadoop@hadoop2 hadoop]$ scp -r hadoop1:/hadoop/hbase /hadoop/

6. Start HBase

Hadoop and ZooKeeper must be started before HBase.

Start Hadoop:
[hadoop@hadoop1 hadoop]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop1]
hadoop1: starting namenode, logging to /hadoop/hadoop/logs/hadoop-hadoop-namenode-hadoop1.out
hadoop3: starting datanode, logging to /hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop3.out
hadoop2: starting datanode, logging to /hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop2.out
Starting secondary namenodes [hadoop1]
hadoop1: starting secondarynamenode, logging to /hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-hadoop1.out
starting yarn daemons
starting resourcemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-resourcemanager-hadoop1.out
hadoop3: starting nodemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-nodemanager-hadoop3.out
hadoop2: starting nodemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-nodemanager-hadoop2.out
Check the Hadoop processes on the namenode:
[hadoop@hadoop1 hadoop]$ jps
2497 NameNode
2852 ResourceManager
2698 SecondaryNameNode
3148 Jps
Process status on the other nodes:
[hadoop@hadoop2 hadoop]$ jps
2292 NodeManager
2391 Jps
2190 DataNode
Start ZooKeeper:
Run zkServer.sh start on each of the three nodes:
[hadoop@hadoop2 hadoop]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Check the ZooKeeper process status:
[hadoop@hadoop1 bin]$ jps
3184 QuorumPeerMain
2497 NameNode
2852 ResourceManager
2698 SecondaryNameNode
3215 Jps
[hadoop@hadoop2 hadoop]$ jps
2497 Jps
2292 NodeManager
2472 QuorumPeerMain
2190 DataNode
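Besides jps, each node's role in the ensemble can be verified with zkServer.sh status (a quick sketch; run it on every node, one node should report itself as the leader and the others as followers):
# Assumption: zkServer.sh is on PATH, as configured earlier.
zkServer.sh status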
Start HBase:
Simply run start-hbase.sh on the namenode host:
[hadoop@hadoop1 bin]$ ./start-hbase.sh
starting master, logging to /hadoop/hbase/bin/../logs/hbase-hadoop-master-hadoop1.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop3: starting regionserver, logging to /hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-hadoop3.out
hadoop2: starting regionserver, logging to /hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-hadoop2.out
hadoop1: starting regionserver, logging to /hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-hadoop1.out
hadoop3: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
hadoop3: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
hadoop2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
hadoop1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
hadoop1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Check the process status:
[hadoop@hadoop1 bin]$ jps
3184 QuorumPeerMain
2497 NameNode
3779 Jps
3524 HRegionServer
2852 ResourceManager
2698 SecondaryNameNode
3390 HMaster
Process status on the other nodes:
[hadoop@hadoop2 hadoop]$ jps
2292 NodeManager
2472 QuorumPeerMain
2556 HRegionServer
2190 DataNode
2718 Jps

7. Test the HBase Connection

Use the HBase shell to connect to HBase:
[hadoop@hadoop1 bin]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017

hbase(main):002:0> list
TABLE
0 row(s) in 0.5330 seconds

=> []
hbase(main):003:0> create 'test','data'
0 row(s) in 2.5520 seconds

=> Hbase::Table - test
hbase(main):004:0> list
TABLE
test
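As a further sanity check (a small sketch; the row key, column, and value below are illustrative and not from the original post), you can write a cell to the new table and read it back from the same shell:
hbase(main):005:0> put 'test','row1','data:msg','hello'
hbase(main):006:0> get 'test','row1'
hbase(main):007:0> scan 'test'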
Test the connection from another node:
[hadoop@hadoop2 etc]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017

hbase(main):001:0> status
1 active master, 0 backup masters, 3 servers, 0 dead, 1.0000 average load
The HBase status can also be viewed in the web UI:
http://192.168.56.21:16010/master-status
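A quick sketch, assuming curl is available on the host, to confirm from the command line that the master web UI responds:
# Assumption: curl is installed; an HTTP 200 response means the HMaster web UI is up.
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.56.21:16010/master-status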







