I. Overview
(1) Phoenix (the name means "phoenix" — a rather lovely one) is a project open-sourced by salesforce.com and later donated to the Apache Foundation. It acts as Java middleware: it exposes a JDBC interface through which HBase tables can be queried and manipulated with SQL.
(2) Hadoop, HBase, and ZooKeeper must be installed before installing Phoenix.
(3) The Phoenix version must match the HBase version. For example, my HBase version is hbase-1.2.6, so the matching Phoenix package is apache-phoenix-4.12.0-HBase-1.2-bin.tar.gz.
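If you are not sure which HBase line you are running, check it before picking a Phoenix release. A minimal check, assuming the hbase command is on your PATH:

[root@master ~]# hbase version          # prints e.g. "HBase 1.2.6"
[root@master ~]# echo $HBASE_HOME       # shows which HBase installation the scripts use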
II. Installation
1. Download the matching Phoenix release
http://archive.apache.org/dist/phoenix/
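For example, the 4.12.0-HBase-1.2 binary package can be fetched straight into the install directory; a sketch assuming the usual archive layout (browse the URL above and adjust the path if the download fails):

[root@master phoenix]# cd /opt/softWare/phoenix
[root@master phoenix]# wget http://archive.apache.org/dist/phoenix/apache-phoenix-4.12.0-HBase-1.2/bin/apache-phoenix-4.12.0-HBase-1.2-bin.tar.gz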
2. Upload the tarball and extract it
[root@master phoenix]# pwd
/opt/softWare/phoenix
[root@master phoenix]# tar -zxvf apache-phoenix-4.12.0-HBase-1.2-bin.tar.gz
[root@master phoenix]# ls
apache-phoenix-4.12.0-HBase-1.2-bin apache-phoenix-4.12.0-HBase-1.2-bin.tar.gz
[root@master phoenix]#
3. Copy configuration files
(1) Copy Hadoop's core-site.xml and hdfs-site.xml into the conf directory of every HBase node.
(2) Copy phoenix-core-4.12.0-HBase-1.2.jar and phoenix-4.12.0-HBase-1.2-server.jar from apache-phoenix-4.12.0-HBase-1.2-bin/ into the lib directory of every HBase node.
(3) Copy HBase's hbase-site.xml, together with core-site.xml and hdfs-site.xml from Hadoop/etc/hadoop, into apache-phoenix-4.12.0-HBase-1.2-bin/bin/, replacing the configuration files Phoenix ships with. A sketch of the copy commands follows this list.
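A minimal sketch of steps (1)-(3), assuming Hadoop lives in /opt/softWare/hadoop/hadoop-2.7.3, HBase lives in /opt/softWare/hbase/hbase-1.2.6 on every node, and slaves1/slaves2 are the other HBase nodes (adjust paths and hostnames to your cluster):

PHOENIX_HOME=/opt/softWare/phoenix/apache-phoenix-4.12.0-HBase-1.2-bin
HADOOP_CONF=/opt/softWare/hadoop/hadoop-2.7.3/etc/hadoop
HBASE_HOME=/opt/softWare/hbase/hbase-1.2.6

# (1) Hadoop configs into every HBase node's conf/
for h in master slaves1 slaves2; do
  scp $HADOOP_CONF/core-site.xml $HADOOP_CONF/hdfs-site.xml $h:$HBASE_HOME/conf/
done

# (2) Phoenix jars into every HBase node's lib/
for h in master slaves1 slaves2; do
  scp $PHOENIX_HOME/phoenix-core-4.12.0-HBase-1.2.jar \
      $PHOENIX_HOME/phoenix-4.12.0-HBase-1.2-server.jar $h:$HBASE_HOME/lib/
done

# (3) Cluster configs into Phoenix's bin/, replacing the bundled ones
cp $HBASE_HOME/conf/hbase-site.xml $HADOOP_CONF/core-site.xml $HADOOP_CONF/hdfs-site.xml $PHOENIX_HOME/bin/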
4. Adjust permissions
Make psql.py and sqlline.py under apache-phoenix-4.12.0-HBase-1.2-bin/bin/ executable (here their permissions are set to 777):
[root@master bin]# chmod 777 psql.py
[root@master bin]# chmod 777 sqlline.py
5. Restart HBase
Note: ZooKeeper and Hadoop must be started before HBase.
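A sketch of a typical restart sequence, assuming the ZooKeeper, Hadoop, and HBase bin directories are on PATH and ZooKeeper runs standalone on each node (adapt to however your cluster is normally started):

stop-hbase.sh          # stop HBase first
zkServer.sh start      # on every ZooKeeper node, if not already running
start-dfs.sh           # start HDFS
start-hbase.sh         # start HBase last so it picks up the new jars and configs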
6. Log in to Phoenix
[root@master bin]# ./sqlline.py master,slaves1,slaves2:2181
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:master,slaves1,slaves2:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:master,slaves1,slaves2:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/softWare/phoenix/apache-phoenix-4.12.0-HBase-1.2-bin/phoenix-4.12.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/softWare/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
20/03/16 10:09:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 4.12)
Driver: PhoenixEmbeddedDriver (version 4.12)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
92/92 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:master,slaves1,slaves2:2181>
7. Query
0: jdbc:phoenix:master,slaves1,slaves2:2181> !table
+------------+--------------+-------------+---------------+----------+------------+----------------------------+-----------------+----+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | REMARKS | TYPE_NAME | SELF_REFERENCING_COL_NAME | REF_GENERATION | IN |
+------------+--------------+-------------+---------------+----------+------------+----------------------------+-----------------+----+
| | SYSTEM | CATALOG | SYSTEM TABLE | | | | | |
| | SYSTEM | FUNCTION | SYSTEM TABLE | | | | | |
| | SYSTEM | SEQUENCE | SYSTEM TABLE | | | | | |
| | SYSTEM | STATS | SYSTEM TABLE | | | | | |
+------------+--------------+-------------+---------------+----------+------------+----------------------------+-----------------+----+
0: jdbc:phoenix:master,slaves1,slaves2:2181>
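To confirm that reads and writes work end to end, a small test table can be created from the same prompt. A minimal sketch (the table name test and its columns are made up for illustration; autocommit is already on, as shown in the connection banner):

0: jdbc:phoenix:master,slaves1,slaves2:2181> CREATE TABLE IF NOT EXISTS test (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR);
0: jdbc:phoenix:master,slaves1,slaves2:2181> UPSERT INTO test (id, name) VALUES (1, 'phoenix');
0: jdbc:phoenix:master,slaves1,slaves2:2181> SELECT * FROM test;
0: jdbc:phoenix:master,slaves1,slaves2:2181> !quit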
III. Pitfalls encountered
1. Startup command pitfall
Correct command:
[root@master bin]# ./sqlline.py master,slaves1,slaves2:2181
Incorrect command (Phoenix expects the ZooKeeper quorum as a comma-separated list of hosts followed by a single port, so the port must not be repeated after every host):
[root@master bin]# ./sqlline.py master:2181,slaves1:2181,slaves2:2181
2. Error when starting Phoenix
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:748) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
Solution:
Add the following property to hbase-site.xml under hbase/conf:
<property>
    <name>hbase.table.sanity.checks</name>
    <value>false</value>
</property>
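The property has to be present on every HBase node, and HBase must be restarted to pick it up. A minimal sketch, assuming HBase lives in /opt/softWare/hbase/hbase-1.2.6 and slaves1/slaves2 are the other nodes:

HBASE_HOME=/opt/softWare/hbase/hbase-1.2.6
scp $HBASE_HOME/conf/hbase-site.xml slaves1:$HBASE_HOME/conf/
scp $HBASE_HOME/conf/hbase-site.xml slaves2:$HBASE_HOME/conf/
stop-hbase.sh && start-hbase.sh     # restart so the new setting takes effect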