Setting Up an HBase Distributed Cluster

HBase Cluster Installation

An HBase cluster is built on top of a Hadoop cluster and a ZooKeeper cluster, so set up Hadoop and ZooKeeper first.
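Before installing HBase it is worth confirming that both prerequisite clusters are healthy. A minimal sanity check (not part of the original steps), assuming the Hadoop and ZooKeeper scripts are on the hadoop user's PATH:

jps                  # should list the HDFS/YARN daemons, and QuorumPeerMain on the ZooKeeper nodes
zkServer.sh status   # each ZooKeeper node should report Mode: leader or Mode: follower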

1 Extract the hbase-1.4.11-bin.tar.gz package

Create the installation directory and extract the archive into it:

mkdir -p /home/hadoop/hbase

tar -zxvf hbase-1.4.11-bin.tar.gz  -C  /home/hadoop/hbase/

2 Edit hbase-env.sh

export JAVA_HOME=/home/hadoop/java/jdk1.8.0_192/

If you want to use the ZooKeeper bundled with HBase, just uncomment the # export HBASE_MANAGES_ZK=true line in this file. Here we use our own ZooKeeper cluster, so set it to false:

export HBASE_MANAGES_ZK=false

vi  .bash_profile

# HBase configuration

export HBASE_HOME=/home/hadoop/hbase/hbase-1.4.11

export PATH=$PATH:$HBASE_HOME/bin

source  .bash_profile
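A quick way to confirm the environment change took effect (a sanity check added here, not in the original steps):

echo $HBASE_HOME     # should print /home/hadoop/hbase/hbase-1.4.11
hbase version        # should print the HBase 1.4.11 version banner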

3 Edit hbase-site.xml

<configuration>

  <!-- ZooKeeper client port -->

  <property>

    <name>hbase.zookeeper.property.clientPort</name>

    <value>2181</value>

  </property>

  <!-- ZooKeeper quorum node list -->

  <property>

    <name>hbase.zookeeper.quorum</name>

    <value>node-1,node-2,node-3</value>

    <description>Comma separated list of servers in the ZooKeeper quorum.

    </description>

  </property>

  <!-- ZooKeeper data directory -->

  <property>

    <name>hbase.zookeeper.property.dataDir</name>

    <value>/usr/local/zookeeper/data</value>

    <description>

    Note: this ZooKeeper data directory is shared with the Hadoop HA setup, i.e. it must match the dataDir configured in zoo.cfg.

    Property from ZooKeeper config zoo.cfg.

    The directory where the snapshot is stored.

    </description>

  </property>

 

<!-- The value of hbase.rootdir must correspond to the fs.defaultFS (formerly fs.default.name) value in Hadoop's conf/core-site.xml. -->

 <property>

    <name>hbase.rootdir</name>

    <value>hdfs://mycluster:8020/hbase/hbasedata</value>

    <description>The directory shared by RegionServers.

                 The official docs repeatedly stress that this directory must not be created in advance; HBase creates it itself. If it already exists, HBase performs a migration and errors can follow.

                 As for the port, some setups use 8020 and some use 9000; check the settings in $HADOOP_HOME/etc/hadoop/hdfs-site.xml. In this lab the port comes from

                 dfs.namenode.rpc-address.mycluster.nn1 and dfs.namenode.rpc-address.mycluster.nn2, which are configured with port 8020.

    </description>

  </property>

  <property>

    <name>hbase.cluster.distributed</name>

    <value>true</value>

    <description>Distributed cluster setting; set it to true here. For a single-node setup, set it to false.

      The mode the cluster will be in. Possible values are

      false: standalone and pseudo-distributed setups with managed ZooKeeper

      true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)

    </description>

  </property>

</configuration>

The properties above are explained as follows:

hbase.rootdir: the directory where HBase stores its data. Because HBase stores data on HDFS, this must be an HDFS path, and the port in the path must match the port in Hadoop's fs.defaultFS. Once configured, HBase writes its data into this directory; it does not need to be created manually, HBase creates it automatically at startup.

hbase.cluster.distributed: set to true to run in fully distributed mode.

hbase.zookeeper.quorum: the ZooKeeper nodes that HBase depends on; list every node of the ZooKeeper cluster here.

hbase.zookeeper.property.dataDir: the directory where ZooKeeper keeps its data (snapshots, logs, etc.); it must be the dataDir configured in zoo.cfg.

4 Edit the regionservers file

centoshadoop1

centoshadoop2

centoshadoop3

centoshadoop4

The regionservers file lists all the servers that run the HRegionServer process. It is configured much like Hadoop's slaves file: each line names one server. When HBase starts, it reads this file and starts an HRegionServer process on every server listed; when HBase stops, it stops them as well.

5 Copy HBase to the other nodes

scp -r hbase/  hadoop@centoshadoop4:/home/hadoop/

scp -r .bash_profile hadoop@centoshadoop4:~
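The commands above only copy to centoshadoop4. A sketch that distributes to all of the remaining nodes in one go, assuming the hostnames centoshadoop2-4 used throughout this article and passwordless SSH for the hadoop user:

for node in centoshadoop2 centoshadoop3 centoshadoop4; do
  scp -r /home/hadoop/hbase "hadoop@${node}:/home/hadoop/"
  scp ~/.bash_profile "hadoop@${node}:~/"
done

Afterwards, run source ~/.bash_profile on each node so the new PATH takes effect.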

 

6 Raise the ulimit limits

 

Edit limits.conf

HBase opens a large number of file handles and processes at the same time, exceeding the default Linux limits, which can lead to errors such as the following:

2020-02-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException

2020-02-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901

So edit /etc/security/limits.conf and append nofile (number of open files) and nproc (number of processes) entries at the end to raise those limits. The * wildcard below applies the limits to all users; you can instead restrict the entries to the user that runs HBase (hadoop in this setup).

vi /etc/security/limits.conf

 

* soft nofile 65536

* hard nofile 65536

* soft nproc  65536

* hard nproc  65536

 

scp -r  /etc/security/limits.conf root@centoshadoop2:/etc/security/

After the change, reboot the server for it to take effect:

reboot

Distribute the file to every machine and reboot each of them as well, as in the sketch below.
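A sketch of pushing limits.conf to the remaining nodes and verifying the new limits after logging back in (hostnames as used elsewhere in this article):

for node in centoshadoop2 centoshadoop3 centoshadoop4; do
  scp /etc/security/limits.conf "root@${node}:/etc/security/"
done

# after rebooting and logging back in as the hadoop user:
ulimit -n    # open files, should now be 65536
ulimit -u    # max user processes, should now be 65536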

 

 

7 Start and test (the cluster only needs to be started from node 1)

cd  /home/hadoop/hbase/hbase-1.4.11

bin/start-hbase.sh

running master, logging to /home/hadoop/hbase/hbase-1.4.11/logs/hbase-hadoop-master-centoshadoop1.out

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

centoshadoop1: running regionserver, logging to /home/hadoop/hbase/hbase-1.4.11/bin/../logs/hbase-hadoop-regionserver-centoshadoop1.out

centoshadoop2: running regionserver, logging to /home/hadoop/hbase/hbase-1.4.11/bin/../logs/hbase-hadoop-regionserver-centoshadoop2.out

centoshadoop4: running regionserver, logging to /home/hadoop/hbase/hbase-1.4.11/bin/../logs/hbase-hadoop-regionserver-centoshadoop4.out

centoshadoop3: running regionserver, logging to /home/hadoop/hbase/hbase-1.4.11/bin/../logs/hbase-hadoop-regionserver-centoshadoop3.out

centoshadoop1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

centoshadoop1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

centoshadoop2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

centoshadoop2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

centoshadoop4: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

centoshadoop4: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

centoshadoop3: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

centoshadoop3: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

After startup, check the Java processes on each node:

[hadoop@centoshadoop1 hbase-1.4.11]$ jps

3825 NameNode

4162 JournalNode

5634 Jps

4293 ResourceManager

3081 DFSZKFailoverController

3945 DataNode

4409 NodeManager

4924 RunJar

5277 HMaster

5421 HRegionServer

 

HBase provides a web UI. Point a browser at port 16010 on the node running HMaster to view the cluster's status:

http://192.168.227.140:16010
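Besides the web UI, the cluster can also be checked from the HBase shell. A minimal check, run from the HBase install directory on any node:

bin/hbase shell
hbase(main):001:0> status
# should show one active master and the RegionServers listed in regionservers as live; type exit to leave the shell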

 

Stopping HBase reports that the pid does not exist

When stopping a node, HBase may complain that the pid file cannot be found. Fix it by setting a dedicated pid directory in hbase-env.sh:

export HBASE_PID_DIR=/home/hadoop/hbase
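The pid directory must exist and be writable by the hadoop user on every node, and the setting only applies to daemons started after the change. A sketch of propagating it and restarting, assuming the same install path on each node:

for node in centoshadoop2 centoshadoop3 centoshadoop4; do
  scp conf/hbase-env.sh "hadoop@${node}:/home/hadoop/hbase/hbase-1.4.11/conf/"
done
bin/stop-hbase.sh
bin/start-hbase.sh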

 

Note: when configuring the hbase.rootdir property, copy HDFS's core-site.xml and hdfs-site.xml into HBase's conf (or lib) directory; otherwise the RegionServers cannot resolve the mycluster logical name.

cd /home/hadoop/hadoop-ha/hadoop/hadoop-2.8.5/etc/hadoop

scp -r core-site.xml  hadoop@centoshadoop1:/home/hadoop/hbase/hbase-1.4.11/conf/

scp -r hdfs-site.xml  hadoop@centoshadoop1:/home/hadoop/hbase/hbase-1.4.11/conf/
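These two files must be present in the conf directory on every RegionServer node, not just on centoshadoop1, otherwise the other nodes still cannot resolve mycluster. A sketch of pushing them to the rest of the cluster, assuming the same install path on each node:

for node in centoshadoop2 centoshadoop3 centoshadoop4; do
  scp core-site.xml hdfs-site.xml "hadoop@${node}:/home/hadoop/hbase/hbase-1.4.11/conf/"
done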

 

Otherwise the HRegionServer on that node fails to start, or starts and then disappears. After copying the files, restart HBase:

cd  /home/hadoop/hbase/hbase-1.4.11

bin/start-hbase.sh

Alternative startup order:

On each RegionServer node, run: bin/hbase-daemon.sh start regionserver

On the master node, run: bin/hbase-daemon.sh start master

 

A warning is printed at startup:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

Fix:

Because JDK 8 is used, comment out the following two lines in HBase's hbase-env.sh:

vi /home/hadoop/hbase/hbase-1.4.11/conf/hbase-env.sh

# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+

#export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

#export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

 

Blocked ports keep the slave nodes from showing up in the web UI

WARN  [regionserver/centoshadoop3/192.168.227.142:16020] regionserver.HRegionServer: reportForDuty failed; sleeping and then retrying.

firewall-cmd --zone=public --add-port=16000/tcp --permanent

firewall-cmd --zone=public --add-port=16020/tcp --permanent

firewall-cmd --reload

The HMaster listens on port 16000 and the RegionServers on port 16020; the nodes communicate over these ports.

If they are blocked, the logs show that the RegionServers cannot reach the master, and those nodes do not appear in the web UI.
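The firewall rules have to be applied on every node, not just one, and the open ports can be verified afterwards. A short sketch using the same hostnames as above:

for node in centoshadoop1 centoshadoop2 centoshadoop3 centoshadoop4; do
  ssh root@${node} 'firewall-cmd --zone=public --add-port=16000/tcp --permanent;
                    firewall-cmd --zone=public --add-port=16020/tcp --permanent;
                    firewall-cmd --reload;
                    firewall-cmd --zone=public --list-ports'
done
# --list-ports should include 16000/tcp and 16020/tcp on each node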

 

 

 

 
