We can finally set up the HBase cluster. The installation has two main parts: installing ZooKeeper and installing HBase.
Deploy the ZooKeeper cluster
Do the configuration on menber3 (10.0.0.48):
cd /usr/local
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
tar -zxvf zookeeper-3.4.14.tar.gz
cd zookeeper-3.4.14/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
zoo.cfg contents:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/var/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=menber1:2888:3888
server.2=menber2:2888:3888
server.3=menber3:2888:3888
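The timeout settings in zoo.cfg are expressed as multiples of tickTime. A quick sketch of the effective timeouts implied by the values above:

```shell
# tickTime is the base unit in milliseconds; initLimit and syncLimit are
# multiples of it, as configured in zoo.cfg above.
tickTime=2000
initLimit=10
syncLimit=5
init_timeout=$((tickTime * initLimit))   # window for a follower's initial sync with the leader
sync_timeout=$((tickTime * syncLimit))   # max lag between a request and its acknowledgement
echo "init timeout: ${init_timeout} ms, sync timeout: ${sync_timeout} ms"
```

So with these defaults a follower has 20 seconds to complete its initial synchronization before the leader gives up on it.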
Set the ZooKeeper id on menber3 to 3:
mkdir -p /var/zookeeper
echo "3" > /var/zookeeper/myid
Copy the same setup to menber2 and menber1:
scp -r /usr/local/zookeeper-3.4.14 root@menber2:/usr/local
scp -r /usr/local/zookeeper-3.4.14 root@menber1:/usr/local
Set the ZooKeeper id to 2 on menber2 and to 1 on menber1 (write the number into /var/zookeeper/myid on each host).
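Each node's myid must agree with its server.N entry in zoo.cfg. A hypothetical helper (not part of ZooKeeper) can derive the id from the config so the two never drift apart:

```shell
# Hypothetical helper: derive this node's myid from the server.N lines in zoo.cfg.
host=menber2   # on a real node you would use: host=$(hostname -s)
cfg="server.1=menber1:2888:3888
server.2=menber2:2888:3888
server.3=menber3:2888:3888"
# Pull out the N from the server.N line that names this host.
myid=$(printf '%s\n' "$cfg" | sed -n "s/^server\.\([0-9]*\)=${host}:.*/\1/p")
echo "$myid"   # this value would be written to /var/zookeeper/myid on that host
```

A mismatched myid is one of the most common causes of a node failing to join the ensemble.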
Set the environment variables on all three machines, then start ZooKeeper:
vi /etc/profile
# Add the following lines
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.14
export PATH=$ZOOKEEPER_HOME/bin:$PATH
source /etc/profile
Then run the following on each of the three machines, starting them one at a time and checking each node's status:
zkServer.sh start
zkServer.sh status
You should see that one node is the leader and the other two are followers, which means the ensemble is working properly.
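The health check above can be sketched as follows. `zkServer.sh status` prints a "Mode:" line on each node; for a healthy 3-node ensemble the three modes collected should be exactly one leader and two followers (the sample output below is an assumption for illustration):

```shell
# Example Mode values collected from the three nodes (one per line).
modes="follower
leader
follower"
# Count how many nodes report each role.
leaders=$(printf '%s\n' "$modes" | grep -c '^leader$')
followers=$(printf '%s\n' "$modes" | grep -c '^follower$')
echo "leaders=$leaders followers=$followers"
```

If no node reports leader, check that all three myid files match zoo.cfg and that ports 2888/3888 are reachable between the hosts.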
Set up the HBase cluster
Download HBase:
cd /usr/local
wget https://mirror.bit.edu.cn/apache/hbase/2.2.5/hbase-2.2.5-bin.tar.gz
tar -zxvf hbase-2.2.5-bin.tar.gz
Edit hbase-site.xml:
vi /usr/local/hbase-2.2.5/conf/hbase-site.xml
# contents:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://menber3:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>centos48,centos49,centos50</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>9084</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>
## Note: hbase.rootdir must match the Hadoop configuration, i.e. the fs.defaultFS setting in /usr/local/hadoop-3.2.0/etc/hadoop/core-site.xml. Likewise, hbase.zookeeper.quorum must list the three ZooKeeper hosts set up above.
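A quick sanity check for this constraint, as a sketch using the values from this article (the fs.defaultFS value is assumed to be hdfs://menber3:8020, matching the rootdir above):

```shell
# hbase.rootdir must point into the HDFS namespace named by fs.defaultFS.
fs_defaultFS="hdfs://menber3:8020"
hbase_rootdir="hdfs://menber3:8020/hbase"
case "$hbase_rootdir" in
  "$fs_defaultFS"/*) result=match ;;      # rootdir lives under the default FS: OK
  *)                 result=mismatch ;;   # HMaster would fail to initialize
esac
echo "$result"
```

If the scheme, host, or port differ, the HMaster will fail on startup when it cannot reach its root directory.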
Edit hbase-env.sh:
vi /usr/local/hbase-2.2.5/conf/hbase-env.sh
# JDK path
export JAVA_HOME=/usr/java/jdk1.8.0_131
# Do not use HBase's bundled ZooKeeper
export HBASE_MANAGES_ZK=false
Configure the region servers:
vi /usr/local/hbase-2.2.5/conf/regionservers
menber3
menber2
menber1
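Roughly speaking, start-hbase.sh reads conf/regionservers line by line and starts a RegionServer on each listed host over ssh (via hbase-daemons.sh). A simplified sketch that only prints the commands instead of running them:

```shell
# Simplified sketch of how the regionservers file drives cluster startup;
# the real start-hbase.sh delegates to hbase-daemons.sh.
regionservers="menber3
menber2
menber1"
cmds=$(printf '%s\n' "$regionservers" | while read -r h; do
  echo "ssh $h hbase-daemon.sh start regionserver"
done)
printf '%s\n' "$cmds"
```

This is why passwordless ssh from the master to every host in this file must already be working.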
# Distribute HBase to menber2 and menber1
scp -r /usr/local/hbase-2.2.5 root@menber2:/usr/local
scp -r /usr/local/hbase-2.2.5 root@menber1:/usr/local
Edit the environment variables (do this on all three machines):
vi /etc/profile
# Add the following lines
export HBASE_HOME=/usr/local/hbase-2.2.5
export PATH=$HBASE_HOME/bin:$PATH
source /etc/profile
Start/stop HBase:
start-hbase.sh
stop-hbase.sh
Check the HBase cluster status
On menber3 you should see both the HMaster and HRegionServer processes. Run jps:
111489 ResourceManager
95537 Jps
110912 NameNode
111075 DataNode
67938 QuorumPeerMain
71671 HRegionServer
76326 HMaster
104184 jar
62910 Master
111646 NodeManager
On menber2 and menber1 you should see the HRegionServer process. Run jps:
83344 NodeManager
34737 QuorumPeerMain
56436 Jps
37303 HRegionServer
30520 Worker
83215 DataNode
This shows the HBase cluster is working properly.
Access the web UI (on the hbase.master.info.port configured above):
http://192.168.43.43:9084