Hadoop + HBase Cluster

It is best to configure hostname-to-IP mappings in each server's /etc/hosts; all the configuration files below use hostnames.

192.168.2.79  webdev

192.168.5.11  TEST-A

192.168.5.12  TEST-B    (added later)
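As a quick sanity check, the mappings can be looked up with a small awk helper (the lookup function and the inlined sample file below are illustrative, not part of the cluster setup):

```shell
# Hypothetical helper: print the IP recorded for a hostname in a hosts-style file
lookup() { awk -v h="$2" '$2 == h {print $1}' "$1"; }

hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.2.79  webdev
192.168.5.11  TEST-A
192.168.5.12  TEST-B
EOF

lookup "$hosts_file" TEST-A    # prints 192.168.5.11
```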

 

Download the latest HBase release:

http://labs.renren.com/apache-mirror/hadoop/hbase/hbase-0.20.3/

 

Installation steps:

http://hadoop.apache.org/hbase/docs/r0.20.3/api/overview-summary.html#overview_description

 

On 2.79:

cp hbase-0.20.3.tar.gz /home/iic/

cd /home/iic

gzip -d hbase-0.20.3.tar.gz

tar xvf hbase-0.20.3.tar

cd hbase-0.20.3

chmod 700 bin/*

vi conf/hbase-env.sh: export JAVA_HOME=/home/bmb/jdk1.6.0_16

 

Modify conf/hadoop-env.sh on 2.79 to add the HBase libraries to the classpath:

 

export HBASE_HOME=/home/iic/hbase-0.20.3

(Keep the leading $HADOOP_CLASSPATH: here; otherwise Hive will fail to start.)
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_HOME/hbase-0.20.3.jar:$HBASE_HOME/hbase-0.20.3-test.jar:$HBASE_HOME/conf:${HBASE_HOME}/lib/zookeeper-3.3.0.jar
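Given how many jars this line appends, it is worth checking which ZooKeeper jar actually ends up on the classpath; this matters later, when a ZooKeeper version mismatch breaks MapReduce. A sketch, using this cluster's paths:

```shell
# List every zookeeper jar on the assembled classpath, one per line
HBASE_HOME=/home/iic/hbase-0.20.3
HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HBASE_HOME/hbase-0.20.3.jar:$HBASE_HOME/hbase-0.20.3-test.jar:$HBASE_HOME/conf:$HBASE_HOME/lib/zookeeper-3.3.0.jar
echo "$HADOOP_CLASSPATH" | tr ':' '\n' | grep zookeeper
# prints /home/iic/hbase-0.20.3/lib/zookeeper-3.3.0.jar
```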

 

On 5.11 (extract and set permissions as on 2.79):

scp hbase-0.20.3.tar iic@192.168.5.11:/home/iic/

vi conf/hbase-env.sh: export JAVA_HOME=/home/iic/jdk1.6.0_16

 

 

 

1: Increase the open-file ulimit:

ulimit -n 2048
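Note that ulimit -n only affects the current shell session. To make the limit survive logins, the usual approach is a pair of entries in /etc/security/limits.conf (the iic user and the 32768 value below are assumptions; adjust as needed):

```shell
# Would go in /etc/security/limits.conf (commented so this sketch stays runnable):
#   iic  soft  nofile  32768
#   iic  hard  nofile  32768
# Verify the limit currently in effect:
ulimit -n
```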

2: The clocks on all machines must be synchronized:

date -s 10:54:12     

date -s 100412
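Setting clocks by hand is error-prone, so it helps to measure the skew before and after. A sketch with the timestamps hard-coded; on the real cluster the remote one would come from something like `ssh iic@TEST-A date +%s`:

```shell
# Clock skew in seconds between two epoch timestamps (positive = local ahead)
skew() { echo $(( $1 - $2 )); }

local_ts=1271041252          # would be: $(date +%s)
remote_ts=1271041249         # would be: $(ssh iic@TEST-A date +%s)
skew "$local_ts" "$remote_ts"    # prints 3
```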

3: Modify ${HBASE_HOME}/conf/hbase-site.xml to point at the 2.79 Hadoop cluster (HBase must create the /hbase directory automatically; do not create it by hand):

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://webdev:9000/hbase</value>
    <description>The directory shared by region servers.
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
    </description>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>webdev,TEST-A</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
    For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
    By default this is set to localhost for local and pseudo-distributed modes
    of operation. For a fully-distributed setup, this should be set to a full
    list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
    this is the list of servers which we will start/stop ZooKeeper on.
    </description>
  </property>

4: Modify ${HBASE_HOME}/conf/regionservers on 2.79:

webdev

TEST-A

TEST-B

5: HBase depends on a ZooKeeper quorum (this example uses the default setup, in which HBase starts an embedded ZooKeeper, rather than a separate ZooKeeper cluster).

The HBASE_MANAGES_ZK variable in ${HBASE_HOME}/conf/hbase-env.sh defaults to true; it tells HBase whether to start/stop the ZooKeeper quorum servers alongside the rest of the servers.
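For completeness, the relevant hbase-env.sh line looks like this; leaving it at true keeps the embedded ZooKeeper, while false would hand control to an external quorum:

```shell
# In ${HBASE_HOME}/conf/hbase-env.sh: let HBase manage its own ZooKeeper
export HBASE_MANAGES_ZK=true
echo "$HBASE_MANAGES_ZK"    # prints true
```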

 

6: Let HBase see Hadoop's HDFS client configuration

 

 

Add a pointer to your HADOOP_CONF_DIR to CLASSPATH in hbase-env.sh. Add a copy of hdfs-site.xml (or hadoop-site.xml) to ${HBASE_HOME}/conf, or if only a small set of HDFS client configurations, add them to hbase-site.xml.

 

This step matters: if, for example, Hadoop sets a replication factor of 2 and you skip the step above, HBase will not follow Hadoop's setting; HBase's own default replication factor is 3.

cd /home/iic/hbase-0.20.3

vi conf/hbase-env.sh

export HBASE_CLASSPATH=/home/iic/hadoop-0.20.2

 

Start the HBase cluster:

cd /home/iic/hbase-0.20.3

bin/start-hbase.sh

After starting, the master log hbase-iic-master-webdev.log showed this error:

Caused by: java.lang.IllegalArgumentException: Wrong FS: hdfs://192.168.2.79:9000/hbase, expected: hdfs://webdev:9000
Fix: change hbase-site.xml to use the hostname webdev instead of the IP.
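The root cause of the "Wrong FS" error is a raw IP where the NameNode's hostname is expected. A hypothetical grep-based check for this mistake (the check_rootdir helper is illustrative; the sample value reproduces the bad config from the error above):

```shell
# Flag hbase.rootdir values that use a raw IP instead of the NameNode hostname
check_rootdir() {      # $1 = path to an hbase-site.xml
  if grep -Eq '<value>hdfs://[0-9]+\.' "$1"; then
    echo "WARNING: raw IP in hbase.rootdir"
  else
    echo "OK"
  fi
}

bad=$(mktemp)
printf '<value>hdfs://192.168.2.79:9000/hbase</value>\n' > "$bad"
check_rootdir "$bad"    # prints the warning
```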

After a restart it still failed:

Tried changing the hbase.zookeeper.quorum IPs "192.168.2.79,192.168.5.11" to the hostnames "webdev,TEST-A",

and added the mappings to /etc/hosts on both machines:

192.168.2.79 webdev
192.168.5.11 TEST-A

After restarting, the ZooKeeper log on 5.11 showed the error: org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists

 

It finally turned out that 5.11's /etc/hosts contained the following line:

127.0.0.1 TEST-A localhost.localdomain localhost

Because of that line, 2.79's master.log contained:

 Updated ZNode /hbase/rs/1271048379831 with data 127.0.0.1:60020
 Updated ZNode /hbase/rs/1271048380554 with data 192.168.2.79:60020

so when 5.11 tried to register TEST-A again, an entry with the same "data 127.0.0.1:60020" already existed.

Changed /etc/hosts to:

127.0.0.1 localhost.localdomain localhost
192.168.5.11    TEST-A

 

After restarting, the RegionServer ran normally, but 5.11's ZooKeeper log still showed an error:

Got exception when processing sessionid:0x124827089980006 type:create cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown n/a
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists

This error does not affect HBase's operation. A temporary workaround is to run a single ZK server; clearing the HDFS /hbase directory regenerates the ZooKeeper node information:

bin/hadoop fs -rmr /hbase

Change <value>webdev,TEST-A</value> to <value>webdev</value> so that only a single quorum server remains.

However, after running for a while, the same error appeared again.


Upgrading ZooKeeper: replace the zookeeper-3.2.2.jar bundled with hbase-0.20.3 with the latest zookeeper-3.3.0.jar, then clear the old state:

rm -rf /tmp/hbase-iic/zookeeper/

bin/hadoop fs -rmr /hbase

This produced: "2010-04-13 09:40:02,685 INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x127f4d2acc70001 type:create cxid:0x4 zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a Error Path:/hbase Error:KeeperErrorCode = NodeExists for /hbase"

Judging from the log, this exception is logged at INFO level, so it is probably normal and does not affect HBase.

But the ZooKeeper upgrade brought a new problem: Hadoop could no longer run MapReduce jobs (ZooKeeper was swapped without restarting Hadoop, and hadoop-env.sh still referenced the old version):

java.io.IOException: java.io.IOException: Could not find requested method, the usual cause is a version mismatch between client and server.

 

In the NameNode process shown by ps -ef|grep hadoop, the classpath pointed at /home/iic/hbase-0.20.3/lib/zookeeper-3.2.2.jar (org.apache.hadoop.hdfs.server.namenode.NameNode). Fix Hadoop's configuration to reference the new jar:

vi conf/hadoop-env.sh

 

 

 

View the Master:

http://192.168.2.79:60010/master.jsp

 

View the Region Servers:

http://webdev:60030/regionserver.jsp

 

http://test-a:60030/regionserver.jsp

 

View the ZK tree:

http://192.168.2.79:60010/zk.jsp

 

 

 

Batch-kill Region Server processes:

kill -9 `ps -ef |grep hbase |grep -v grep |awk '{print $2}' `

kill -9 `ps -ef |grep hbase.regionserver |grep -v grep  |awk '{print $2}' `
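To see what those pipelines actually feed into kill -9, here is the PID extraction rehearsed on canned ps -ef output (the PIDs and truncated command lines are made up):

```shell
# Two fake ps -ef lines: a RegionServer (PID 12345) and a ZooKeeper peer (PID 12399)
ps_output='iic  12345      1  0 09:40 ?  00:00:05 java -cp ... org.apache.hadoop.hbase.regionserver.HRegionServer
iic  12399  12345  0 09:40 ?  00:00:01 java -cp ... org.apache.zookeeper.server.quorum.QuorumPeerMain'

# Same filter as above: keep RegionServer lines, drop the grep itself, print PID ($2)
echo "$ps_output" | grep hbase.regionserver | grep -v grep | awk '{print $2}'
# prints 12345; `kill -9` receives exactly those PIDs
```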

 

 

bin/hbase-daemon.sh stop zookeeper

bin/hbase-daemon.sh start zookeeper

 

 

----------------------------------------------------------------

Test whether HBase is installed correctly:

 

bin/hbase shell

create 'scores', 'grade', 'course'

list

describe 'scores'

put 'scores', 'Tom', 'grade:', '1'

put 'scores', 'Tom', 'course:math', '87'

put 'scores', 'Tom', 'course:art', '97'

put 'scores', 'Jerry', 'grade:', '2'

put 'scores', 'Jerry', 'course:math', '100'

put 'scores', 'Jerry', 'course:art', '80'

get 'scores', 'Tom'

 

get 'scores', 'Jerry'

 

scan 'scores'

scan 'scores', ['course:']

 

----------------------------------------------------------------

HBase client project: D:\7g\Projects\BMB\Hadoop-Projects\Hbase-Learning

The project includes its own hbase-site.xml, which overrides three parameters: hbase.master, hbase.rootdir, and hbase.zookeeper.quorum.
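Assuming the cluster layout above, the client-side overrides would look roughly like this (60000 is HBase 0.20's default master port; treat the values as a sketch to adapt, not a verified snippet from the project):

```xml
<!-- Client-side hbase-site.xml overrides (values assume this cluster's layout) -->
<property>
  <name>hbase.master</name>
  <value>webdev:60000</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://webdev:9000/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>webdev,TEST-A</value>
</property>
```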

 

 

 

 
