Prerequisites
A cluster of three CentOS 7 machines
Hadoop 3.1.3 installed as a cluster (see the Hadoop 3.x cluster installation tutorial)
A ZooKeeper cluster installed (see the ZooKeeper cluster installation tutorial)
Steps
Check version compatibility
HBase vs. JDK version compatibility
HBase vs. Hadoop version compatibility
Cluster plan
Node Name | Master | ZooKeeper | RegionServer |
---|---|---|---|
node2 | yes | yes | yes |
node3 | backup | yes | yes |
node4 | no | yes | yes |
Download, extract, and configure environment variables
Run the following on node2.
Download hbase-2.4.11-bin.tar.gz from the official site.
Extract it:
[hadoop@node2 installfile]$ tar -zxvf hbase-2.4.11-bin.tar.gz -C ~/soft
Configure environment variables:
[hadoop@node2 installfile]$ sudo nano /etc/profile.d/my_env.sh
Append the following at the end of the file:
#HBASE_HOME
export HBASE_HOME=/home/hadoop/soft/hbase-2.4.11
export PATH=$PATH:$HBASE_HOME/bin
Apply the environment variables:
[hadoop@node2 installfile]$ source /etc/profile
Verify
[hadoop@node2 installfile]$ hbase version
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/soft/hbase-2.4.11/lib/client-facing-thirdparty/slf4j-reload4j-1.7.33.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/soft/hadoop-3.1.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
HBase 2.4.11
Source code repository git://buildbox/home/apurtell/build/hbase revision=7e672a0da0586e6b7449310815182695bc6ae193
Compiled by apurtell on Tue Mar 15 10:31:00 PDT 2022
From source with checksum ff045651054080f63b7c121441563515273f455696f9391e0c3af056af16c0d8f41bc7fef7a92969be215e0621833bcc35fe0bc31a2e8e5f12997cfafb9b1752
Update environment variables on the other machines
On node3:
[hadoop@node3 ~]$ sudo nano /etc/profile.d/my_env.sh
Add the following:
#HBASE_HOME
export HBASE_HOME=/home/hadoop/soft/hbase-2.4.11
export PATH=$PATH:$HBASE_HOME/bin
[hadoop@node3 ~]$ source /etc/profile
On node4:
[hadoop@node4 ~]$ sudo nano /etc/profile.d/my_env.sh
Add the following:
#HBASE_HOME
export HBASE_HOME=/home/hadoop/soft/hbase-2.4.11
export PATH=$PATH:$HBASE_HOME/bin
[hadoop@node4 ~]$ source /etc/profile
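To confirm the variable is visible on every node without logging in to each one, a quick loop over ssh can help (a sketch; it assumes passwordless ssh between the nodes, which the Hadoop tutorial in the prerequisites already set up):

```shell
# Print HBASE_HOME as seen on each node; an empty value or "unreachable"
# means that node still needs its environment configured.
for host in node2 node3 node4; do
  echo -n "$host: "
  ssh -o ConnectTimeout=3 "$host" 'source /etc/profile; echo "$HBASE_HOME"' \
    2>/dev/null || echo "unreachable"
done
```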
Configure hbase-env.sh
Back on node2, in the HBase conf directory:
[hadoop@node2 conf]$ nano hbase-env.sh
Modify the following settings (HBASE_MANAGES_ZK=false tells HBase to use the external ZooKeeper cluster instead of managing its own):
export JAVA_HOME=/home/hadoop/soft/jdk1.8.0_212
export HBASE_MANAGES_ZK=false
Configure hbase-site.xml
[hadoop@node2 conf]$ nano hbase-site.xml
Set the configuration as follows:
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://node2:9820/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>node2,node3,node4</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/home/hadoop/soft/zookeeper-3.5.7/zkData</value>
    </property>
    <property>
        <name>hbase.tmp.dir</name>
        <value>/home/hadoop/soft/hbase-2.4.11/tmp</value>
    </property>
    <!-- Must be set in distributed mode, otherwise the HMaster often fails to start -->
    <property>
        <name>hbase.unsafe.stream.capability.enforce</name>
        <value>false</value>
    </property>
</configuration>
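One thing worth double-checking here: hbase.rootdir must sit under the same NameNode address (host and port) as fs.defaultFS in Hadoop's core-site.xml, or the HMaster will not find HDFS. A small sketch of such a check follows; the inline sample stands in for the real core-site.xml, which in this tutorial's layout lives at /home/hadoop/soft/hadoop-3.1.3/etc/hadoop/core-site.xml:

```shell
# Check that hbase.rootdir sits under the fs.defaultFS of core-site.xml.
# Demonstrated on an inline sample; point CORE_SITE at the real file.
CORE_SITE=$(mktemp)
cat > "$CORE_SITE" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node2:9820</value>
  </property>
</configuration>
EOF
defaultfs=$(grep -A1 '<name>fs.defaultFS</name>' "$CORE_SITE" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
rootdir="hdfs://node2:9820/hbase"   # the hbase.rootdir value above
if [ "${rootdir#$defaultfs}" != "$rootdir" ]; then
  echo "OK: $rootdir is under $defaultfs"
else
  echo "MISMATCH: $rootdir does not start with $defaultfs"
fi
rm -f "$CORE_SITE"
```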
Configure regionservers
[hadoop@node2 conf]$ nano regionservers
Delete the existing localhost entry and add the following:
node2
node3
node4
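The same edit can be done non-interactively, which is handy in scripts (run from the HBase conf directory):

```shell
# Overwrite conf/regionservers with the three region server hosts
# (equivalent to editing the file in nano).
printf '%s\n' node2 node3 node4 > regionservers
cat regionservers
```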
Configure the backup master
[hadoop@node2 conf]$ nano backup-masters
The content is:
node3
Symlink the Hadoop configuration files into the HBase conf directory
[hadoop@node2 conf]$ ln -s /home/hadoop/soft/hadoop-3.1.3/etc/hadoop/core-site.xml core-site.xml
[hadoop@node2 conf]$ ln -s /home/hadoop/soft/hadoop-3.1.3/etc/hadoop/hdfs-site.xml hdfs-site.xml
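A quick way to confirm the links resolve (run from the HBase conf directory; on Linux, readlink -e only prints a path when the target actually exists):

```shell
# Report where each symlink points; a broken or missing link is flagged.
for f in core-site.xml hdfs-site.xml; do
  if target=$(readlink -e "$f"); then
    echo "$f -> $target"
  else
    echo "$f: missing or broken link"
  fi
done
```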
Distribute HBase to the other nodes
[hadoop@node2 conf]$ cd ~/soft
[hadoop@node2 soft]$ xsync hbase-2.4.11
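Note that xsync is not a standard command; it is a small rsync wrapper from the Hadoop tutorial referenced in the prerequisites. If you don't have it, a minimal stand-in could look like this (a sketch; xsync_min is a hypothetical name, and it assumes passwordless ssh plus rsync installed on every node):

```shell
# xsync_min: rsync each given path to the same parent directory on
# node3 and node4. Set DRY_RUN=1 to print the commands instead of running them.
xsync_min() {
  local host path parent
  for host in node3 node4; do
    echo "==== $host ===="
    for path in "$@"; do
      parent=$(dirname "$(readlink -f "$path")")
      if [ -n "$DRY_RUN" ]; then
        echo rsync -av "$path" "$host:$parent/"
      else
        rsync -av "$path" "$host:$parent/"
      fi
    done
  done
}

# Example (dry run, just prints what would be copied):
DRY_RUN=1 xsync_min ~/soft/hbase-2.4.11
```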
Start the cluster
Start ZooKeeper
[hadoop@node2 hbase-2.4.11]$ zk.sh start
---------- zookeeper node2 starting ------------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/soft/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
---------- zookeeper node3 starting ------------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/soft/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
---------- zookeeper node4 starting ------------
ZooKeeper JMX enabled by default
Using config: /home/hadoop/soft/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
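zk.sh is likewise a custom wrapper from the ZooKeeper tutorial, not part of the ZooKeeper distribution. To confirm the ensemble is actually healthy, you can ask each server for its role; a 3-node ensemble should report one leader and two followers (a sketch; assumes passwordless ssh and the ZooKeeper path used in this tutorial):

```shell
# Query each ZooKeeper server's role via zkServer.sh status.
ZK_BIN=/home/hadoop/soft/zookeeper-3.5.7/bin/zkServer.sh
for host in node2 node3 node4; do
  echo "---- $host ----"
  ssh -o ConnectTimeout=3 "$host" "$ZK_BIN status" 2>/dev/null || echo "unreachable"
done
```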
Start Hadoop
[hadoop@node2 hbase-2.4.11]$ start-all.sh
Start HBase
Start everything at once:
[hadoop@node2 hbase-2.4.11]$ start-hbase.sh
Or start the daemons separately:
[hadoop@node2 hbase-2.4.11]$ hbase-daemon.sh start master
[hadoop@node2 hbase-2.4.11]$ hbase-daemons.sh start regionserver
Verify
Run the jps command on node2, node3, and node4.
Verify with jps:
[hadoop@node2 conf]$ jps
2337 DataNode
7491 HMaster
8243 Jps
7733 HRegionServer
1978 QuorumPeerMain
2203 NameNode
[hadoop@node3 conf]$ jps
6660 HMaster
7032 Jps
1785 DataNode
6443 HRegionServer
1676 QuorumPeerMain
[hadoop@node4 conf]$ jps
3878 HRegionServer
1719 QuorumPeerMain
1943 SecondaryNameNode
4169 Jps
1821 DataNode
Running jps on each machine separately is tedious; a script can query the jps processes of node2, node3, and node4 in one go.
In the ~/bin directory on node2, create a script named jpsall
with the following content:
#!/bin/bash
for host in node2 node3 node4
do
    echo =============== $host ===============
    ssh $host jps
done
Make it executable:
[hadoop@node2 bin]$ chmod +x jpsall
Distribute it to the other machines:
[hadoop@node2 bin]$ xsync ~/bin/jpsall
Verify with jpsall:
[hadoop@node2 bin]$ jpsall
=============== node2 ===============
2337 DataNode
7491 HMaster
8243 Jps
7733 HRegionServer
1978 QuorumPeerMain
2203 NameNode
=============== node3 ===============
6660 HMaster
7032 Jps
1785 DataNode
6443 HRegionServer
1676 QuorumPeerMain
=============== node4 ===============
3878 HRegionServer
1719 QuorumPeerMain
1943 SecondaryNameNode
4169 Jps
1821 DataNode
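Process listings show the daemons are alive, but not that the cluster actually serves requests. A quick read/write smoke test through the HBase shell covers that (the table name smoke_test and column family cf are arbitrary example names; hbase shell -n runs the shell non-interactively):

```shell
# Commands for an end-to-end smoke test: create a table, write a cell,
# read it back, then clean up.
SMOKE_CMDS=$(cat <<'EOF'
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:greeting', 'hello'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
EOF
)
# Feed them to the shell in non-interactive mode (requires the cluster to be up).
if command -v hbase >/dev/null 2>&1; then
  echo "$SMOKE_CMDS" | hbase shell -n
else
  echo "hbase not on PATH; start the cluster and re-run"
fi
```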
View the Web UI
With the default ports, the active Master's web UI is at http://node2:16010 (the backup Master on node3 serves the same UI), and each RegionServer's UI is on port 16030.
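A quick reachability check from the command line (a sketch, assuming the default info ports: 16010 is hbase.master.info.port in HBase 2.x, and RegionServers serve theirs on 16030):

```shell
# Probe the Master and one RegionServer web UI; -sf makes curl silent
# and return a non-zero status on connection or HTTP errors.
for url in http://node2:16010/master-status http://node2:16030/rs-status; do
  if curl -sf -o /dev/null "$url"; then
    echo "up:   $url"
  else
    echo "down: $url"
  fi
done
```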
Stop the cluster
Stop HBase
[hadoop@node2 hbase-2.4.11]$ stop-hbase.sh
Or stop the daemons separately:
[hadoop@node2 hbase-2.4.11]$ hbase-daemon.sh stop master
[hadoop@node2 hbase-2.4.11]$ hbase-daemons.sh stop regionserver
Stop ZooKeeper
[hadoop@node2 conf]$ zk.sh stop
Stop Hadoop
[hadoop@node2 conf]$ stop-all.sh
Done. Enjoy!