Pseudo-Distributed HBase Setup, with Phoenix Plugin Installation


I've recently been building an end-to-end real-time data warehouse, and to work more efficiently I set up Hadoop, ZooKeeper, Redis, and HBase on a single VM. This post covers the steps and pitfalls of setting up a single-node pseudo-distributed HBase: simple, but satisfying.

1. Download hbase-1.3.1-bin.tar.gz and extract it to the installation directory

There are plenty of download mirrors for hbase-1.3.1-bin.tar.gz online, so I won't list them here; straight to the point.

#tar -zxvf hbase-1.3.1-bin.tar.gz 
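The command above extracts into the current directory; to land the tree in a chosen install directory, `tar -C` can be used. A minimal sketch (the tarball path and install directory are example values; when `TARBALL` is unset the snippet builds a tiny placeholder archive so it can run as a dry run):

```shell
# Extract the HBase tarball into an install directory of your choice.
INSTALL_DIR="${INSTALL_DIR:-$(mktemp -d)}"
if [ -z "${TARBALL:-}" ]; then
  # no real tarball given: build a tiny placeholder archive for a dry run
  scratch=$(mktemp -d)
  mkdir -p "$scratch/hbase-1.3.1/conf"
  TARBALL="$scratch/hbase-1.3.1-bin.tar.gz"
  tar -czf "$TARBALL" -C "$scratch" hbase-1.3.1
fi
# -C switches into the install directory before extracting
tar -zxf "$TARBALL" -C "$INSTALL_DIR"
```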

2. Edit the configuration file hbase-env.sh

Modify or add the following lines in hbase-env.sh:

#vi hbase-1.3.1/conf/hbase-env.sh

export JAVA_HOME=/home/jdk1.8.0_45
export HADOOP_HOME=/home/hadoop/hadoop-3.2.1
export HBASE_HOME=/home/hadoop/bigdata/hbase-1.3.1
export HBASE_CLASSPATH=/home/hadoop/hadoop-3.2.1/etc/hadoop
export HBASE_PID_DIR=/home/hadoop/bigdata/hbase-1.3.1/pids
export ZK_HOME=/home/hadoop/bigdata/zookeeper-3.4.14

export HBASE_MANAGES_ZK=false # do not use HBase's bundled ZooKeeper -- watch out here, otherwise HMaster will fail to start
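The edits above can be scripted idempotently, so re-running the setup doesn't duplicate lines. A sketch (`HBASE_ENV` would normally point at `hbase-1.3.1/conf/hbase-env.sh`; it defaults to a scratch file here so the snippet is safe to run as-is):

```shell
# Idempotently append export lines to hbase-env.sh.
HBASE_ENV="${HBASE_ENV:-$(mktemp)}"

add_export() {
  # $1 = variable name, $2 = value; only append when not yet configured
  grep -q "^export $1=" "$HBASE_ENV" || echo "export $1=$2" >> "$HBASE_ENV"
}

add_export JAVA_HOME /home/jdk1.8.0_45
add_export HADOOP_HOME /home/hadoop/hadoop-3.2.1
add_export HBASE_MANAGES_ZK false   # external ZooKeeper, see note above
```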

3. Edit the configuration file hbase-site.xml

Modify or add the following lines in hbase-site.xml:

#vi hbase-1.3.1/conf/hbase-site.xml
<configuration>

	<!-- HBase root directory on HDFS -->
	<property>
	 <name>hbase.rootdir</name>
	 <value>hdfs://hadoop:9000/hbase</value>
	 <description>The directory shared by region servers.</description>
	</property>
	
	<!-- master port -->
	<property>
		<name>hbase.master.port</name>
		<value>16000</value>
	</property>
	 
	<property>
		<name>hbase.master.info.port</name>
		<value>16010</value>
	</property>
	
	 <!-- regionserver port -->
	<property>
		<name>hbase.regionserver.port</name>
		<value>16201</value>
	</property>
	 
	<property>
		<name>hbase.regionserver.info.port</name>
		<value>16301</value>
	</property>
	
	<property>
	 <name>hbase.zookeeper.property.clientPort</name>
	 <value>2181</value>
	 <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.
	 </description>
	</property>
	<!-- session timeout -->
	<property>
	 <name>zookeeper.session.timeout</name>
	 <value>120000</value>
	</property>
	
	<!-- ZooKeeper host; must match /etc/hosts -->
	<property>
	 <name>hbase.zookeeper.quorum</name>
	 <value>hadoop</value>
	</property>
	
	<!-- ZooKeeper data directory; must match the zk config file -->
	  <property>
	     <name>hbase.zookeeper.property.dataDir</name>
	     <value>/home/hadoop/bigdata/zkdata</value>
	  </property>
	  
	<!-- local temp directory; use an absolute path here -->
	<property>
	 <name>hbase.tmp.dir</name>
	 <value>/home/hadoop/bigdata/hbase-1.3.1/tmp</value>
	</property>
	<!-- false = standalone mode, true = distributed; pseudo-distributed also uses true -->
	<property>
	 <name>hbase.cluster.distributed</name>
	 <value>true</value>
	</property>

 </configuration>
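A malformed hbase-site.xml (a stray or unclosed tag) is a common reason HMaster fails to start, so it's worth a quick well-formedness check before moving on. A sketch using Python's stdlib XML parser (`SITE` is an example path from this article):

```shell
# Quick well-formedness check for hbase-site.xml; also lists the
# configured property names for an eyeball check.
SITE="${SITE:-hbase-1.3.1/conf/hbase-site.xml}"

check_site() {
  python3 - "$1" <<'EOF'
import sys, xml.etree.ElementTree as ET
# parsing fails loudly on malformed XML
root = ET.parse(sys.argv[1]).getroot()
for prop in root.findall("property"):
    print(prop.findtext("name"))
EOF
}
# typical use: check_site "$SITE"
```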

4. Other configuration and notes

That completes the HBase configuration. One thing to watch: in the Hadoop and ZooKeeper config files, replace localhost with the hostname mapped in /etc/hosts.

#vi hdfs-site.xml

    <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>hadoop:50090</value>
	</property>
	
	<property>
		<name>dfs.http.address</name>
		<value>hadoop:50070</value>
		<description>
		The address and the base port where the dfs namenode web ui will listen on.
		If the port is 0 then the server will start on a free port.
		</description>
	</property>

#vi core-site.xml

	<!-- note: fs.defaultFS belongs in core-site.xml (not hdfs-site.xml) and needs the hdfs:// scheme -->
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://hadoop:9000</value>
	</property>

#vi zoo.cfg

tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/home/hadoop/bigdata/zkdata
# the port at which the clients will connect
clientPort=2181
# server.N entries use the peer and election ports, not the client port
server.1=hadoop:2888:3888
dataLogDir=/home/hadoop/bigdata/zkdatalog
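One step that's easy to forget: every `server.N` line in zoo.cfg must be matched by a `myid` file in `dataDir` containing just `N`, or the peer will refuse to start. A sketch (the article's dataDir is /home/hadoop/bigdata/zkdata; `ZK_DATA` defaults to a scratch directory here so the snippet runs standalone):

```shell
# Create the myid file that pairs with server.1 in zoo.cfg.
ZK_DATA="${ZK_DATA:-$(mktemp -d)}"
mkdir -p "$ZK_DATA"
echo 1 > "$ZK_DATA/myid"   # "1" matches the N in server.1=hadoop:...
```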

Finally, add HBase to the environment variables:

#vi /etc/profile

export HBASE_HOME=/home/hadoop/bigdata/hbase-1.3.1
export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HBASE_HOME}/bin:$PATH

Apply the environment variables:
#source /etc/profile

Start HBase:
#start-hbase.sh

Check with the jps command that the processes started:
#jps
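For this single-host stack, jps should show the HDFS, ZooKeeper, and HBase daemons side by side. A small sketch that scans jps output for the expected process names (the list assumes HDFS + external ZooKeeper + HBase all on this one host, as set up above):

```shell
# Check a jps listing for the daemons a pseudo-distributed stack needs.
check_daemons() {
  # $1 = output of `jps`; prints each missing daemon, returns nonzero if any
  missing=0
  for d in NameNode DataNode QuorumPeerMain HMaster HRegionServer; do
    case "$1" in *"$d"*) ;; *) echo "missing: $d"; missing=1 ;; esac
  done
  return $missing
}
# typical use: check_daemons "$(jps)"
```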
That completes the installation. Log in to the HBase shell and let the show begin:

$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/bigdata/hbase-1.3.1/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-3.2.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.3.1, r930b9a55528fe45d8edce7af42fef2d35e77677a, Thu Apr  6 19:36:54 PDT 2017

hbase(main):001:0> list
TABLE                                                             
0 row(s) in 0.2030 seconds

=> []
hbase(main):002:0> 

5. Phoenix plugin installation

Official site: http://phoenix.apache.org/index.html
5.1 Download the Phoenix build matching your HBase version
5.2 After extracting, copy the client and server jars into HBase's lib directory:
phoenix-4.14.2-HBase-1.3-client.jar
phoenix-4.14.2-HBase-1.3-server.jar
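Step 5.2 can be sketched as a small copy loop (paths follow this article's layout; the defaults below point at scratch directories so the snippet runs standalone):

```shell
# Copy the Phoenix client and server jars into HBase's lib directory.
PHOENIX_HOME="${PHOENIX_HOME:-$(mktemp -d)}"
HBASE_LIB="${HBASE_LIB:-$(mktemp -d)}"

install_phoenix_jars() {
  # glob both jars; -f guards against a non-matching glob pattern
  for jar in "$PHOENIX_HOME"/phoenix-*-client.jar \
             "$PHOENIX_HOME"/phoenix-*-server.jar; do
    [ -f "$jar" ] && cp "$jar" "$HBASE_LIB"/
  done
}
```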
5.3 Configure environment variables

export PHOENIX_HOME=/home/hadoop/bigdata/apache-phoenix-4.14.2-HBase-1.3-bin
export PHOENIX_CLASSPATH=$PHOENIX_HOME
export PATH=$PATH:$PHOENIX_HOME/bin

5.4 Restart HBase

stop-hbase.sh
start-hbase.sh

5.5 Log in to Phoenix


$ sqlline.py hadoop:2181
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:hadoop:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:hadoop:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/bigdata/apache-phoenix-4.14.2-HBase-1.3-bin/phoenix-4.14.2-HBase-1.3-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-3.2.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
20/09/09 15:52:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 4.14)
Driver: PhoenixEmbeddedDriver (version 4.14)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
133/133 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:hadoop:2181> !tables
+------------+--------------+-------------+---------------+------+
| TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  |  TABLE_TYPE   | REMA |
+------------+--------------+-------------+---------------+------+
|            | SYSTEM       | CATALOG     | SYSTEM TABLE  |      |
|            | SYSTEM       | FUNCTION    | SYSTEM TABLE  |      |
|            | SYSTEM       | LOG         | SYSTEM TABLE  |      |
|            | SYSTEM       | SEQUENCE    | SYSTEM TABLE  |      |
|            | SYSTEM       | STATS       | SYSTEM TABLE  |      |
+------------+--------------+-------------+---------------+------+
0: jdbc:phoenix:hadoop:2181> 

That wraps up the HBase and Phoenix setup. Questions and discussion are welcome in the comments. Here's to the future master in you!
