Setting Up Phoenix

Environment

It is recommended to use the Apache distribution of HBase; the CDH build may cause problems.

For a description of the errors seen with a non-Apache HBase build, see this Stack Overflow question:
https://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo

Download

Download address (a mirror of the official Apache releases):
https://mirrors.tuna.tsinghua.edu.cn/apache/phoenix/

Installation

1. Upload the Phoenix tarball to a node and extract it:

tar -xzvf apache-phoenix-4.10.0-HBase-1.2-bin.tar.gz -C /opt/

2. Copy phoenix-4.10.0-HBase-1.2-server.jar into $HBASE_HOME/lib on every HBase node:

cd /opt/beh/core/apache-phoenix-4.10.0-HBase-1.2-bin
cp phoenix-4.10.0-HBase-1.2-server.jar /opt/hbase/lib
scp phoenix-4.10.0-HBase-1.2-server.jar test102:/opt/hbase/lib
scp phoenix-4.10.0-HBase-1.2-server.jar test103:/opt/hbase/lib
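On a larger cluster, the per-host scp commands above can be generalized into a loop. A minimal sketch, assuming a hypothetical HOSTS list (test102/test103 are the example nodes from this walkthrough); DRY_RUN=1 only prints the commands, set it to 0 to actually copy:

```shell
# Hypothetical node list -- replace with your region-server hosts.
HOSTS="test102 test103"
JAR=phoenix-4.10.0-HBase-1.2-server.jar
DEST=/opt/hbase/lib
DRY_RUN=1   # 1 = print the commands only, 0 = run them

for h in $HOSTS; do
    cmd="scp $JAR $h:$DEST"
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$cmd"     # dry run: show what would be copied
    else
        $cmd            # actually distribute the jar
    fi
done
```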

3. Restart HBase:

stop-hbase.sh
start-hbase.sh

4. Log in and connect to Phoenix:

cd /opt/apache-phoenix-4.10.0-HBase-1.2-bin/bin
./sqlline.py localhost

Output like the following indicates a successful installation:

[hadoop@test101 bin]$ ./sqlline.py localhost
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:localhost none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:localhost
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/phoenix-4.10.0-HBase-1.2-bin/phoenix-4.10.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
17/08/14 17:02:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/08/14 17:02:26 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
Connected to: Phoenix (version 4.10)
Driver: PhoenixEmbeddedDriver (version 4.10)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
91/91 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:localhost> 
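Once connected, a quick smoke test at the sqlline prompt confirms that Phoenix can create and query tables (the table name smoke_test is illustrative):

```sql
CREATE TABLE IF NOT EXISTS smoke_test (id INTEGER PRIMARY KEY, name VARCHAR);
UPSERT INTO smoke_test VALUES (1, 'phoenix');
SELECT * FROM smoke_test;
DROP TABLE smoke_test;
```

Note that Phoenix uses UPSERT rather than the standard INSERT for writing rows.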

Startup Error

Starting Phoenix fails with the following exception:

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks

Solution:
Add the following property to HBase's configuration file hbase-site.xml:

<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>

Distribute the updated file to all nodes and restart HBase.

hbase.table.sanity.checks controls HBase's table-descriptor sanity checks. When the parameter is true, the following checks are performed:

1. Check max file size (hbase.hregion.max.filesize); the minimum is 2 MB.
2. Check flush size (hbase.hregion.memstore.flush.size); the minimum is 1 MB.
3. Check that coprocessors and other specified plugin classes can be loaded.
4. Check that compression can be loaded.
5. Check that encryption can be loaded.
6. Verify the compaction policy.
7. Check that there is at least one column family (CF).
8. Check blockSize.
9. Check versions.
10. Check minVersions <= maxVersions.
11. Check replication scope.
12. Check the data replication factor; it can be 0 (the default) when the user has not set it explicitly, in which case the file system's default replication factor is used.
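The first two checks are simple floor comparisons against the configured sizes. A minimal shell sketch of that logic (illustrative only, not the actual HBase source):

```shell
# Hard minimums enforced by checks 1 and 2 above.
MIN_FILE_SIZE=$((2 * 1024 * 1024))   # 2 MB floor for hbase.hregion.max.filesize
MIN_FLUSH_SIZE=$((1024 * 1024))      # 1 MB floor for hbase.hregion.memstore.flush.size

# sanity_check <max_file_size_bytes> <flush_size_bytes>
# Prints PASS if both configured values meet their minimums,
# otherwise prints which check failed and returns nonzero.
sanity_check() {
    max_file_size=$1
    flush_size=$2
    [ "$max_file_size" -ge "$MIN_FILE_SIZE" ] || { echo "FAIL: max file size"; return 1; }
    [ "$flush_size" -ge "$MIN_FLUSH_SIZE" ]   || { echo "FAIL: flush size"; return 1; }
    echo "PASS"
}

# HBase defaults: 10 GB region max file size, 128 MB memstore flush size.
sanity_check 10737418240 134217728
```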