Installing HBase 1.0.0 & ZooKeeper 3.4.6

Environment

Linux: CentOS 6.5 x86_64
JDK: jdk1.7.0_76
Hadoop: hadoop-2.6
HBase: hbase-1.0.0

Prerequisites

JDK version

[hadoop@HM ~]$ java -version
java version "1.7.0_76"
Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)

Hadoop 2.6 fully distributed cluster

Already deployed in an earlier step.

HBase version

hbase-1.0.0

ZooKeeper version

zookeeper-3.4.6

Configure the hosts file

# vim /etc/hosts
192.168.103.162 HM
192.168.103.163 HS1
192.168.103.164 HS2

Install ZooKeeper

Unpack the ZooKeeper tarball

Log in as the hadoop user and upload zookeeper-3.4.6.tar.gz to the Hadoop master node.

$ tar -zxvf zookeeper-3.4.6.tar.gz

Configure ZooKeeper environment variables

$ vim .bash_profile
# set zookeeper environment
export ZOOKEEPER_HOME=/home/hadoop/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf

$ source .bash_profile

Edit the configuration file

$ cd /home/hadoop/zookeeper-3.4.6/conf
$ cp zoo_sample.cfg zoo.cfg
$ vim zoo.cfg
Modify and add the following:
dataDir=/home/hadoop/zookeeper-3.4.6/var/data
# the port at which the clients will connect
clientPort=2181
server.1=HM:2888:3888
server.2=HS1:2888:3888
server.3=HS2:2888:3888

Note: in server.X=A:B:C, X is the server's number (it must match that server's myid); A is the server's hostname or IP address; B is the port this server uses to exchange messages with the cluster leader; C is the port used for leader election.
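A majority of the servers listed in zoo.cfg must be alive for the ensemble to serve requests, so a 3-node ensemble tolerates one failure. A small sketch (the config is inlined here purely for illustration) counts the server.N lines and computes the quorum size:

```shell
# Count the server.N entries in a zoo.cfg and compute the quorum majority.
# The config text is inlined for illustration; on a real node you would
# read /home/hadoop/zookeeper-3.4.6/conf/zoo.cfg instead.
cfg="dataDir=/home/hadoop/zookeeper-3.4.6/var/data
clientPort=2181
server.1=HM:2888:3888
server.2=HS1:2888:3888
server.3=HS2:2888:3888"
n=$(echo "$cfg" | grep -c '^server\.')
echo "ensemble size: $n, quorum: $(( n / 2 + 1 ))"
```

With the three servers above this prints an ensemble size of 3 and a quorum of 2, i.e. the cluster keeps working with any single node down.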

Create the myid file under dataDir

$ mkdir -p /home/hadoop/zookeeper-3.4.6/var/{data,log}
$ cd /home/hadoop/zookeeper-3.4.6/var/data
$ vim myid
1

Note: the number in the myid file under each node's dataDir must be unique: myid is 1 on HM, 2 on HS1, and 3 on HS2.
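Since the myid value is the only per-node difference, it can be derived from the hostname. A sketch, assuming the HM/HS1/HS2 naming used in this guide; it defaults to a /tmp demo path so it can run anywhere, but on a real node DATA_DIR would be /home/hadoop/zookeeper-3.4.6/var/data:

```shell
# Derive this node's myid from its hostname (HM->1, HS1->2, HS2->3).
# DATA_DIR and NODE default to demo values for illustration.
DATA_DIR="${DATA_DIR:-/tmp/zk-myid-demo/data}"
NODE="${NODE:-HM}"
mkdir -p "$DATA_DIR"
case "$NODE" in
  HM)  echo 1 > "$DATA_DIR/myid" ;;
  HS1) echo 2 > "$DATA_DIR/myid" ;;
  HS2) echo 3 > "$DATA_DIR/myid" ;;
  *)   echo "unknown node: $NODE" >&2; exit 1 ;;
esac
echo "$NODE -> myid=$(cat "$DATA_DIR/myid")"
```

Run with NODE=HS1 (and the real DATA_DIR) on HS1, and so on, instead of editing each myid by hand.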

Sync the configured ZooKeeper directory to the slave nodes

$ scp -r zookeeper-3.4.6 hadoop@HS1:/home/hadoop/
$ scp -r zookeeper-3.4.6 hadoop@HS2:/home/hadoop/

Note: if any configuration file in the ZooKeeper directory on the master node changes later, sync it to the slave nodes again.

Edit the myid files on the slave nodes

Log in as the hadoop user on slave nodes HS1 and HS2:
[hadoop@HS1 ~]$ vim zookeeper-3.4.6/var/data/myid  // set HS1's myid to 2
[hadoop@HS2 ~]$ vim zookeeper-3.4.6/var/data/myid  // set HS2's myid to 3

Start/stop ZooKeeper

$ cd /home/hadoop/zookeeper-3.4.6/bin
$ ./zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED     // started successfully
$ ./zkServer.sh stop
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED    // stopped successfully

Verify startup

Check the process

$ ps aux | grep zookeeper
hadoop   17462  0.2  1.7 3382236 66864 pts/0   Sl   13:30   0:02 /usr/java/jdk1.7.0_76/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /home/hadoop/zookeeper-3.4.6/bin/../build/classes:/home/hadoop/zookeeper-3.4.6/bin/../build/lib/*.jar:/home/hadoop/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hadoop/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/home/hadoop/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/home/hadoop/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/home/hadoop/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/home/hadoop/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/home/hadoop/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/home/hadoop/zookeeper-3.4.6/bin/../conf:.:/usr/java/jdk1.7.0_76/jre/lib:/usr/java/jdk1.7.0_76/lib/dt.jar:/usr/java/jdk1.7.0_76/lib/tools.jar -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /home/hadoop/zookeeper-3.4.6/bin/../conf/zoo.cfg
hadoop   17566  0.0  0.0 103252   840 pts/0    S+   13:43   0:00 grep zookeeper

Run jps

HM node

[hadoop@HM ~]$ jps
7246 Jps
18691 RunJar
18596 RunJar
1653 ResourceManager
1481 SecondaryNameNode
1284 NameNode
1997 QuorumPeerMain

HS1 node

[hadoop@HS1 ~]$ jps
18338 QuorumPeerMain
18087 DataNode
20346 Jps
18210 NodeManager

HS2 node

[hadoop@HS2 ~]$ jps
20101 Jps
18137 QuorumPeerMain
17943 NodeManager
17850 DataNode
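Rather than eyeballing jps output on each node, the check can be scripted by grepping for the daemons each node should run. A sketch, with the jps text inlined for illustration; on a live node use jps_out=$(jps) and adjust the expected daemon list per node role:

```shell
# Verify that the expected daemons appear in jps output.
# The sample output below mirrors the HM node listing above.
jps_out="7246 Jps
1997 QuorumPeerMain
1284 NameNode
1481 SecondaryNameNode
1653 ResourceManager"
for d in QuorumPeerMain NameNode; do
  if echo "$jps_out" | grep -q " $d\$"; then
    echo "$d: running"
  else
    echo "$d: MISSING"
  fi
done
```

On HS1/HS2 the expected list would be QuorumPeerMain, DataNode, and NodeManager instead.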

Check the logs

Log in to HM, HS1, and HS2 and check the log for errors:
$ cd /home/hadoop/zookeeper-3.4.6/bin
$ tail -f -n 500 zookeeper.out

Update the Hadoop configuration

$ vim /home/hadoop/hadoop/etc/hadoop/core-site.xml
Add the following:
  <property>
   <name>ha.zookeeper.quorum</name>
   <value>HM:2181,HS1:2181,HS2:2181</value>
  </property>

Note: make the same change on HM, HS1, and HS2.

Install HBase

Unpack the HBase tarball

Log in as the hadoop user, upload hbase-1.0.0-bin.tar.gz to the server, and unpack it:

$ tar -zxvf hbase-1.0.0-bin.tar.gz

Edit hbase-env.sh

$ cd /home/hadoop/hbase-1.0.0/conf
$ vim hbase-env.sh
# add the following
export JAVA_HOME=/usr/java/jdk1.7.0_76
export HBASE_CLASSPATH=/home/hadoop/hbase-1.0.0/conf
export HBASE_HOME=/home/hadoop/hbase-1.0.0
export HBASE_LOG_DIR=${HBASE_HOME}/logs
export HBASE_MANAGES_ZK=false

Note: when using a standalone ZooKeeper ensemble, HBASE_MANAGES_ZK must be set to false; otherwise HBase manages its own ZooKeeper instance (the default value is true).

Edit hbase-site.xml

$ vim hbase-site.xml
Add the following (the explanations are given as XML <!-- --> comments, since # comments are not valid in XML and would break the file):
<configuration>
  <!-- Directory where HBase stores its data. It must match the Hadoop
       cluster's core-site.xml exactly; if HDFS uses a different port,
       change it here too. HBase does not accept a raw IP here: use the
       hostname (using HM's IP, 192.168.103.162, throws a Java error). -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://HM:9000/hbase</value>
  </property>
  <!-- Enable fully distributed mode. -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- Master node of the HBase cluster. -->
  <property>
    <name>hbase.master</name>
    <value>HM:60000</value>
  </property>
  <!-- ZooKeeper ensemble node names (leader election is decided by the
       ensemble's voting algorithm). -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>HM,HS1,HS2</value>
  </property>
  <!-- HBase 1.0.0 does not require this setting; in HBase 1.0.1/1.0.2 the
       master web UI is disabled by default, so set the port explicitly. -->
  <property>
    <name>hbase.master.info.port</name>
    <value>16030</value>
  </property>
  <!-- ZooKeeper client port. -->
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <!-- ZooKeeper data directory. -->
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper-3.4.6/var/data</value>
  </property>
</configuration>

Edit the regionservers file

$ vim regionservers
Replace localhost with HM, HS1, and HS2 (one hostname per line).

Sync the configured hbase-1.0.0 directory to the slave nodes

$ scp -r /home/hadoop/hbase-1.0.0 hadoop@HS1:/home/hadoop/
$ scp -r /home/hadoop/hbase-1.0.0 hadoop@HS2:/home/hadoop/

Note: if any configuration file in the HBase directory on the master node changes later, sync it to the slave nodes again.

Start/stop the HBase cluster

Hadoop, ZooKeeper, and HBase should be started and stopped in this order:

start the Hadoop cluster → start the ZooKeeper cluster → start the HBase cluster → stop HBase → stop the ZooKeeper cluster → stop the Hadoop cluster
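The ordering above can be sketched as a pair of shell functions. The paths are the ones used in this guide; start-dfs.sh/start-yarn.sh for the Hadoop step and passwordless ssh from HM are assumptions. DRY_RUN=1 (the default here) only prints each command, so the order can be reviewed before running for real:

```shell
# Start/stop the whole stack in dependency order.
# DRY_RUN=1 echoes commands instead of executing them.
DRY_RUN="${DRY_RUN:-1}"
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

cluster_start() {
  run /home/hadoop/hadoop/sbin/start-dfs.sh
  run /home/hadoop/hadoop/sbin/start-yarn.sh
  for h in HM HS1 HS2; do
    run ssh "$h" /home/hadoop/zookeeper-3.4.6/bin/zkServer.sh start
  done
  run /home/hadoop/hbase-1.0.0/bin/start-hbase.sh
}

cluster_stop() {
  run /home/hadoop/hbase-1.0.0/bin/stop-hbase.sh
  for h in HM HS1 HS2; do
    run ssh "$h" /home/hadoop/zookeeper-3.4.6/bin/zkServer.sh stop
  done
  run /home/hadoop/hadoop/sbin/stop-yarn.sh
  run /home/hadoop/hadoop/sbin/stop-dfs.sh
}

cluster_start
```

Note that cluster_stop runs the same layers in reverse: HBase first, then ZooKeeper, then Hadoop.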

Check that Hadoop is running

Before starting HBase, confirm that Hadoop is up:

$ hdfs dfsadmin -report   or   $ /home/hadoop/hadoop/bin/hdfs dfsadmin -report

(In Hadoop 2.x, "hadoop dfsadmin" still works but prints a deprecation warning; "hdfs dfsadmin" is the current form.)

Start the HBase cluster

[hadoop@HM ~]$ /home/hadoop/hbase-1.0.0/bin/start-hbase.sh 
starting master, logging to /home/hadoop/hbase-1.0.0/logs/hbase-hadoop-master-HM.out
HS2: starting regionserver, logging to /home/hadoop/hbase-1.0.0/logs/hbase-hadoop-regionserver-HS2.out
HS1: starting regionserver, logging to /home/hadoop/hbase-1.0.0/logs/hbase-hadoop-regionserver-HS1.out
HM: starting regionserver, logging to /home/hadoop/hbase-1.0.0/logs/hbase-hadoop-regionserver-HM.out

Check the processes on the master/slave nodes

Master node

[hadoop@HM ~]$ jps
6949 Jps
6296 HMaster
18691 RunJar
18596 RunJar
1653 ResourceManager
1481 SecondaryNameNode
1284 NameNode
1997 QuorumPeerMain

Slave nodes

[hadoop@HS1 ~]$ jps
19937 HRegionServer
18338 QuorumPeerMain
18087 DataNode
18210 NodeManager
20278 Jps
[hadoop@HS2 ~]$ jps
19675 HRegionServer
18137 QuorumPeerMain
20034 Jps
17943 NodeManager
17850 DataNode

Access HBase from a browser

http://192.168.100.xxx:16030/master-status

Stop the HBase cluster

[hadoop@HM ~]$ /home/hadoop/hbase-1.0.0/bin/stop-hbase.sh 
stopping hbase....................

HBase shell operations

Enter the HBase shell

[hadoop@HM ~]$ /home/hadoop/hbase-1.0.0/bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.0.0, r6c98bff7b719efdb16f71606f3b7d8229445eb81, Sat Feb 14 19:49:22 PST 2015

hbase(main):001:0> 

Check HBase status

hbase(main):001:0> status
2 servers, 0 dead, 1.0000 average load

Check the HBase version

hbase(main):002:0> version
1.0.0, r6c98bff7b719efdb16f71606f3b7d8229445eb81, Sat Feb 14 19:49:22 PST 2015

The help command

hbase(main):003:0> help
HBase Shell, version 1.0.0, r6c98bff7b719efdb16f71606f3b7d8229445eb81, Sat Feb 14 19:49:22 PST 2015
Type 'help "COMMAND"', (e.g. 'help "get"' -- the quotes are necessary) for help on a specific command.
Commands are grouped. Type 'help "COMMAND_GROUP"', (e.g. 'help "general"') for help on a command group.

COMMAND GROUPS:
  Group name: general
  Commands: status, table_help, version, whoami

  Group name: ddl
  Commands: alter, alter_async, alter_status, create, describe, disable, disable_all, drop, drop_all, enable, enable_all, exists, get_table, is_disabled, is_enabled, list, show_filters

  Group name: namespace
  Commands: alter_namespace, create_namespace, describe_namespace, drop_namespace, list_namespace, list_namespace_tables

  Group name: dml
  Commands: append, count, delete, deleteall, get, get_counter, incr, put, scan, truncate, truncate_preserve

  Group name: tools
  Commands: assign, balance_switch, balancer, catalogjanitor_enabled, catalogjanitor_run, catalogjanitor_switch, close_region, compact, compact_rs, flush, major_compact, merge_region, move, split, trace, unassign, wal_roll, zk_dump

  Group name: replication
  Commands: add_peer, append_peer_tableCFs, disable_peer, enable_peer, list_peers, list_replicated_tables, remove_peer, remove_peer_tableCFs, set_peer_tableCFs, show_peer_tableCFs

  Group name: snapshots
  Commands: clone_snapshot, delete_all_snapshot, delete_snapshot, list_snapshots, restore_snapshot, snapshot

  Group name: configuration
  Commands: update_all_config, update_config

  Group name: security
  Commands: grant, revoke, user_permission

  Group name: visibility labels
  Commands: add_labels, clear_auths, get_auths, list_labels, set_auths, set_visibility

SHELL USAGE:
Quote all names in HBase Shell such as table and column names.  Commas delimit
command parameters.  Type <RETURN> after entering a command to run it.
Dictionaries of configuration used in the creation and alteration of tables are
Ruby Hashes. They look like this:

  {'key1' => 'value1', 'key2' => 'value2', ...}

and are opened and closed with curley-braces.  Key/values are delimited by the
'=>' character combination.  Usually keys are predefined constants such as
NAME, VERSIONS, COMPRESSION, etc.  Constants do not need to be quoted.  Type
'Object.constants' to see a (messy) list of all constants in the environment.

If you are using binary keys or values and need to enter them in the shell, use
double-quote'd hexadecimal representation. For example:

  hbase> get 't1', "key\x03\x3f\xcd"
  hbase> get 't1', "key\003\023\011"
  hbase> put 't1', "test\xef\xff", 'f1:', "\x01\x33\x40"

The HBase shell is the (J)Ruby IRB with the above HBase-specific commands added.
For more on the HBase Shell, see http://hbase.apache.org/book.html
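As a quick smoke test of the cluster, the standard HBase quickstart sequence creates a table, writes a cell, reads it back, and drops the table (standard HBase shell commands; the per-command response lines are omitted here):

```
hbase(main):004:0> create 'test', 'cf'
hbase(main):005:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):006:0> scan 'test'
hbase(main):007:0> get 'test', 'row1'
hbase(main):008:0> disable 'test'
hbase(main):009:0> drop 'test'
```

A table must be disabled before it can be dropped, hence the disable step before drop.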

Exit

hbase(main):004:0> exit

HBase startup warning

After starting HBase, the log /home/hadoop/hbase-1.0.0/logs/hbase-hadoop-master-HM.out shows:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-1.0.0/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Fix

$ /home/hadoop/hbase-1.0.0/bin/stop-hbase.sh
$ cd /home/hadoop/hbase-1.0.0/lib/
$ mv slf4j-log4j12-1.7.7.jar slf4j-log4j12-1.7.7.jar.bak   or   $ rm slf4j-log4j12-1.7.7.jar

Note: rename or delete /home/hadoop/hbase-1.0.0/lib/slf4j-log4j12-1.7.7.jar on all of HM, HS1, and HS2. This SLF4J multiple-bindings warning does not affect HBase operation, so it can also simply be ignored.

Reposted from: https://my.oschina.net/siiiso/blog/846676
