I've recently been setting up a big data environment to learn how various components work, so here is a brief record of the configuration files used to set up HBase.
Of course, HBase requires a ZooKeeper environment first. Since my previous setups have all been distributed, I'll build a distributed (clustered) environment here as well.
Setting up the ZooKeeper cluster
First, upload ZooKeeper to the servers.
Extract the archive into /usr/local/:
tar -zxf zookeeper-3.4.14.tar.gz -C /usr/local/
cd /usr/local/zookeeper-3.4.14/conf
cp zoo_sample.cfg zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper-3.4.14/gavin
dataLogDir=/usr/local/zookeeper-3.4.14/gavinlog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.119.141:2888:3888
server.2=192.168.119.142:2888:3888
server.3=192.168.119.143:2888:3888
The dataDir and dataLogDir directories here must be created in advance. Each node also needs a myid file inside dataDir containing only the number from its own server.N line (1, 2, or 3); without it the quorum will not start.
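The directory-and-myid step can be sketched as below. A scratch directory stands in for dataDir purely to illustrate the IP-to-id mapping implied by the server.N lines above; on a real node you would write to /usr/local/zookeeper-3.4.14/gavin (and create gavinlog alongside it) instead.

```shell
# Illustrative sketch: generate the myid file each quorum member needs.
# WORK stands in for dataDir; on a real host, write into the actual dataDir.
WORK=$(mktemp -d)
id=1
for node in 192.168.119.141 192.168.119.142 192.168.119.143; do
  mkdir -p "$WORK/$node"           # real host: mkdir -p dataDir dataLogDir
  echo "$id" > "$WORK/$node/myid"  # must match this host's server.N number
  id=$((id + 1))
done
cat "$WORK/192.168.119.142/myid"   # prints 2
```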
scp -r /usr/local/zookeeper-3.4.14/ slave1:/usr/local/
scp -r /usr/local/zookeeper-3.4.14/ slave2:/usr/local/
Copy the directory to each of the other servers in turn.
Then log in to each server, go into ZooKeeper's bin directory, and run:
./zkServer.sh start|stop|status
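The start command has to be run on every node; a loop like the following is one way to do it, assuming passwordless ssh and the same install path everywhere (the host IPs are the ones from zoo.cfg). Once all three are running, zkServer.sh status should report one node as leader and the other two as followers.

```shell
# Start ZooKeeper on every quorum member (assumes passwordless ssh).
for host in 192.168.119.141 192.168.119.142 192.168.119.143; do
  ssh "$host" /usr/local/zookeeper-3.4.14/bin/zkServer.sh start
done
# Check roles once all three are up; expect one "leader", two "follower":
for host in 192.168.119.141 192.168.119.142 192.168.119.143; do
  ssh "$host" /usr/local/zookeeper-3.4.14/bin/zkServer.sh status
done
```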
Next comes the HBase configuration.
tar -zxf hbase-1.2.4-bin.tar.gz -C /usr/local/
cd /usr/local/hbase-1.2.4/conf
Edit the hbase-site.xml configuration file:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-->
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:8020/hbase</value>
</property>
<property>
<name>hbase.master</name>
<value>master</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>192.168.119.141,192.168.119.142,192.168.119.143</value>
</property>
<property>
<name>zookeeper.session.timeout</name>
<value>60000000</value>
</property>
<property>
<name>dfs.support.append</name>
<value>true</value>
</property>
</configuration>
Edit hbase-env.sh; only a few settings need changing (HBASE_MANAGES_ZK=false tells HBase to use our external ZooKeeper instead of its bundled one):
export JAVA_HOME=/opt/jdk1.8.0_181
export HBASE_CLASSPATH=/usr/local/hadoop-2.7.3/etc/hadoop
export HBASE_MANAGES_ZK=false
Then configure the regionservers file:
192.168.119.141
192.168.119.142
192.168.119.143
You can put either the hostnames from your /etc/hosts or the raw IPs here; both work, so it's personal preference.
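If you go the hostname route, the mapping lives in /etc/hosts on every node. The hostname assignments below are an assumption, pieced together from the master/slave1/slave2 names used elsewhere in this post (.141 is assumed to be the master, since the HBase web UI is reached at 192.168.119.141:16010):

```text
# /etc/hosts entries on every node (hypothetical mapping)
192.168.119.141  master
192.168.119.142  slave1
192.168.119.143  slave2
```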
Go into HBase's bin directory and start it:
./start-hbase.sh
If everything went well, you can verify the processes on each node with jps.
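What a healthy node's jps output should contain can be sketched with a simulated check. The sample output below is an assumption modelled on a node running all three roles (the master here is also listed in regionservers, so it runs a regionserver too); on the real cluster you would pipe actual `jps` output instead of the stand-in variable.

```shell
# Simulated jps check; sample_jps_output stands in for real `jps` output.
sample_jps_output="2301 HMaster
2415 HRegionServer
2117 QuorumPeerMain
2600 Jps"
# A fully started master node should show all three daemons;
# slave nodes show HRegionServer and QuorumPeerMain only.
for proc in HMaster HRegionServer QuorumPeerMain; do
  echo "$sample_jps_output" | grep -q "$proc" && echo "$proc is up"
done
```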
Visit http://192.168.119.141:16010/master-status
If the master status page loads, the installation was successful.