
This document describes in detail how to install HBase 0.95.2 on Red Hat Linux AS 5: downloading the installation media, extracting and installing it, setting environment variables, configuring the HBase files, and syncing the installation directory to the other nodes over SSH. It also covers the problems hit when starting HBase, namely a file-layout upgrade error, a version mismatch, and an SLF4J binding conflict, together with their solutions.

Installing HBase on Linux

Environment:

OS: Red Hat Linux AS 5

hbase-0.95.2

1. Installation steps

1.1 Download the installation media

Download the installation media from: http://archive.apache.org/dist/hbase/

Choose the version that suits your environment; the version I downloaded is hbase-0.95.2-hadoop1-bin.tar.gz.
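For example, the archive can be fetched directly with wget (the exact path below assumes the standard Apache archive layout; verify it in a browser first):

wget http://archive.apache.org/dist/hbase/hbase-0.95.2/hbase-0.95.2-hadoop1-bin.tar.gz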

The following steps only need to be performed on the master node (the NameNode).

1.2 Extract and install

Log in as the hadoop user (hadoop1 here) and confirm HADOOP_HOME:

[hadoop1@node1 ~]$ echo $HADOOP_HOME

/usr1/hadoop

Copy the installation media to the following directory:

[root@node1 hbase]# cp hbase-0.95.2-hadoop1-bin.tar.gz /usr1

Extract the archive:

[root@node1 usr1]# tar -zxvf hbase-0.95.2-hadoop1-bin.tar.gz

Rename the directory:

[root@node1 usr1]# mv hbase-0.95.2 hbase

Grant ownership of the hbase directory to the hadoop user:

[root@node1 usr1]# chown -R hadoop1:hadoop1 ./hbase

1.3 Add environment variables

export HBASE_HOME=/usr1/hbase

Add $HBASE_HOME/bin to the PATH variable.
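For example, the two settings can be appended to the hadoop user's ~/.bash_profile (a minimal sketch, assuming a bash login shell; adjust the profile file to your setup):

export HBASE_HOME=/usr1/hbase
export PATH=$PATH:$HBASE_HOME/bin

Run source ~/.bash_profile, or log in again, for the change to take effect.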

1.4 Configure the HBase configuration files

1.4.1 Configure hbase-env.sh

The default location of this file is /usr1/hbase/conf/.

Add the JAVA_HOME environment variable and the following settings:

export JAVA_HOME=/usr/java/jdk1.8.0_05

export HBASE_MANAGES_ZK=false

export HBASE_CLASSPATH=/usr1/hadoop/conf
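HBASE_MANAGES_ZK=false tells HBase not to manage ZooKeeper itself, so an external ZooKeeper ensemble must already be running on the quorum hosts configured in the next step. A quick sanity check (a sketch, assuming ZooKeeper listens on the default client port 2181 and zkServer.sh is on the PATH of the ZooKeeper nodes):

echo ruok | nc 192.168.56.101 2181   # a healthy server answers "imok"
zkServer.sh status                   # run on each ZooKeeper node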

1.4.2 Configure hbase-site.xml

The file is located in /usr1/hbase/conf.

Add the following properties inside the <configuration> element:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://192.168.56.101:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>192.168.56.101,192.168.56.102,192.168.56.103,192.168.56.104</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/hadoop1/zookeeperdir/zookeeper-data</value>
</property>

The hbase.rootdir value must stay consistent with fs.default.name in Hadoop's core-site.xml, with your own subdirectory appended; here I named the subdirectory hbase.
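For reference, the matching entry in Hadoop's core-site.xml would look like this (a sketch inferred from the hbase.rootdir value above, not copied from the actual cluster files):

<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.56.101:9000</value>
</property>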

1.4.3 Configure regionservers

The file is located in /usr1/hbase/conf.

Add the following content:

192.168.56.101

192.168.56.102

192.168.56.103

192.168.56.104

After the configuration is complete, copy the entire hbase directory to the other nodes:
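The scp commands below assume the directory has first been packed into hbase.tar on the master node; that step is not shown in the original log, so here is a minimal sketch:

cd /usr1
tar -cvf hbase.tar ./hbase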

scp hbase.tar root@192.168.56.102:/usr1/

scp hbase.tar root@192.168.56.103:/usr1/

scp hbase.tar root@192.168.56.104:/usr1/

[root@node2 usr1]# tar -xvf hbase.tar

[root@node2 usr1]# chown -R hadoop1:hadoop1 ./hbase

1.5 Start HBase

Log in as the hadoop user.

Start the whole cluster from the master node:

[hadoop1@node1 bin]$ ./start-hbase.sh
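Optionally, verify that the daemons came up; a quick check (not part of the original session) is to run jps on each node:

jps   # expect HMaster on the master node, HRegionServer on each region server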

After startup, run the following command to enter the HBase shell:

[hadoop1@node1 bin]$ ./hbase shell

1.6 Verification

hbase(main):001:0> create 'test', 'cf'

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr1/hbase/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr1/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

0 row(s) in 33.5640 seconds

=> Hbase::Table - test

hbase(main):002:0> list

TABLE

hbase:namespace

test

2 row(s) in 6.3780 seconds

=> #:0x1aed682>

hbase(main):003:0>

hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'

0 row(s) in 1.9870 seconds

hbase(main):004:0> put 'test', 'row2', 'cf:b', 'value2'

0 row(s) in 0.0220 seconds

hbase(main):005:0> put 'test', 'row3', 'cf:c', 'value3'

0 row(s) in 0.0680 seconds

hbase(main):006:0> scan 'test'

ROW                        COLUMN+CELL
 row1                      column=cf:a, timestamp=1414466774970, value=value1
 row2                      column=cf:b, timestamp=1414466783818, value=value2
 row3                      column=cf:c, timestamp=1414466790998, value=value3

3 row(s) in 0.0530 seconds

1.7 Problems encountered

1.7.1 Error 1

WARNING! HBase file layout needs to be upgraded.You have version 7 and I want version 8.Is your hbase.rootdir valid?If so, you may need to run 'hbase hbck -fixVersionFile'.
14/10/28 11:16:05 FATAL master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.hbase.util.FileSystemVersionException: HBase file layout needs to be upgraded.You have version 7 and I want version 8.Is your hbase.rootdir valid?If so, you may need to run 'hbase hbck -fixVersionFile'.

at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:583)

at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:456)

at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:147)

at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:131)

at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:761)

at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:578)

at java.lang.Thread.run(Thread.java:745)

14/10/28 11:16:05 INFO master.HMaster: Aborting

14/10/28 11:16:05 INFO ipc.RpcServer: Stopping server on 60000

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=0,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=1,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=2,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=3,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=4,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=5,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=6,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=7,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=8,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=9,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=10,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=11,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=12,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=13,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=14,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=15,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=16,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=17,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=18,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=19,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=20,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=21,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=22,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=23,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=24,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=25,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=26,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=27,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=28,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.handler=29,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: Replication.RpcServer.handler=0,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: Replication.RpcServer.handler=1,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: Replication.RpcServer.handler=2,port=60000: exiting

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.listener,port=60000: stopping

14/10/28 11:16:05 INFO master.HMaster: Stopping infoServer

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.responder: stopped

14/10/28 11:16:05 INFO ipc.RpcServer: RpcServer.responder: stopping

14/10/28 11:16:05 INFO mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010

14/10/28 11:16:05 INFO zookeeper.ZooKeeper: Session: 0x149547f5e0d0001 closed

14/10/28 11:16:05 INFO master.HMaster: HMaster main thread exiting

14/10/28 11:16:05 INFO zookeeper.ClientCnxn: EventThread shut down

14/10/28 11:16:05 ERROR master.HMasterCommandLine: Master exiting

java.lang.RuntimeException: HMaster Aborted

at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:191)

at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)

at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:78)

at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2812)

Cause:

The Hadoop version and the HBase version do not match.

Solution: replace the hadoop-core-y.y.y.jar under hbase/lib with the hadoop-core-x.x.x.jar from the Hadoop installation directory.
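A sketch of the jar swap (the version numbers are wildcards here; use the jar names actually present on your system, and restart HBase afterwards):

cd /usr1/hbase/lib
mv hadoop-core-*.jar /tmp/            # move the bundled jar out of the way
cp /usr1/hadoop/hadoop-core-*.jar .   # copy in the jar matching the running Hadoop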

1.7.2 Error 2

You have version null and I want version 8.Is your hbase.rootdir valid?If so, you may need to run 'hbase hbck -fixVersionFile'.

Solution: remove the /hbase directory on HDFS so that HBase rebuilds it on the next start:

bin/hadoop fs -rm -r /hbase

1.7.3 Error 3

hbase(main):003:0* scan 'test'

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr1/hbase/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr1/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

ROW                        COLUMN+CELL
 row1                      column=cf:a, timestamp=1414466774970, value=value1
 row2                      column=cf:b, timestamp=1414466783818, value=value2
 row3                      column=cf:c, timestamp=1414466790998, value=value3
 row4                      column=cf:d, timestamp=1414471915567, value=value4
 row5                      column=cf:e, timestamp=1414471877185, value=value5
 row6                      column=cf:f, timestamp=1414471898749, value=value6

Check which jars on the classpath involve slf4j:

[hadoop1@node1 logs]$ hbase classpath | tr ":" "\n" | grep -i slf4j

/usr1/hbase/lib/slf4j-api-1.6.4.jar

/usr1/hbase/lib/slf4j-log4j12-1.6.1.jar

/usr1/hadoop/libexec/../lib/slf4j-api-1.4.3.jar

/usr1/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar

Solution: move the slf4j jars out of the hbase lib directory; the warning then goes away.

(Do not remove the jar files under the hadoop lib directory; otherwise, starting Hadoop remotely via the start-all.sh script fails with an error that the log4j package cannot be found.)

[hadoop1@node1 logs]$ cd /usr1/hbase/lib

[hadoop1@node1 lib]$ ls -1 slf4j*

slf4j-api-1.6.4.jar

slf4j-log4j12-1.6.1.jar

[hadoop1@node1 lib]$ mv slf4j-api-1.6.4.jar ./otherpath/

[hadoop1@node1 lib]$ mv slf4j-log4j12-1.6.1.jar ./otherpath/

Log in again and run the query:

hbase(main):003:0* scan 'test'

ROW                        COLUMN+CELL
 row1                      column=cf:a, timestamp=1414466774970, value=value1
 row2                      column=cf:b, timestamp=1414466783818, value=value2
 row3                      column=cf:c, timestamp=1414466790998, value=value3
 row4                      column=cf:d, timestamp=1414471915567, value=value4
 row5                      column=cf:e, timestamp=1414471877185, value=value5
 row6                      column=cf:f, timestamp=1414471898749, value=value6

6 row(s) in 0.1130 seconds

-- The End --
