Contents
- Preface
- Installation steps
- Problems encountered and their solutions
Preface
The customer's production environment runs CDH 5.12.2 Enterprise, with Kafka 0.9, HBase 1.2, Spark 1.6 plus Spark 2.1, Phoenix 4.14.0, and JDK 1.7.0_67. Our own development environment does not match it exactly — we mostly test applications on HDP and vanilla Apache distributions — and the configuration differences, Kerberos authentication in particular, made our applications very unstable in production. So we had no choice but to install the community edition of CDH in-house for basic testing, to reproduce a phoenix 4.14 + hbase 1.2 + spark streaming 2.1.3 environment. Annoyingly, installing via CDH parcels would have been much simpler, but downloading CDH parcels now requires registration on the CDH website (and it is not even clear whether community users can download at all), so the only remaining option was to fetch apache-phoenix-4.14.0-cdh5.13.2-bin.tar.gz from the Phoenix website.
The version installed at our company is CDH 5.16.2, as shown in the screenshot below.
Download: http://phoenix.apache.org/download.html
In my experience, the installation procedure for other Phoenix versions should be similar. Neighboring CDH releases are generally close to one another, and the manifest.json file published for each CDH version on the official site lists the component versions and their constraints. From manifest.json you can see which CDH platforms a parcel is compatible with, as shown in the figure below:
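As a sketch of that compatibility check, the snippet below pulls the component versions out of a manifest.json. The field names used here (top-level `parcels`, each entry with `parcelName` and a `components` list) match the manifest files I have seen on the CDH parcel repository, but treat them as an assumption and adjust to your actual file:

```python
import json

def list_components(manifest_text):
    # Map each parcel name to its {component: version} table.
    # Field names ("parcels", "parcelName", "components") are assumed
    # to match the CDH parcel repository's manifest.json layout.
    manifest = json.loads(manifest_text)
    result = {}
    for parcel in manifest.get("parcels", []):
        comps = {c["name"]: c["version"] for c in parcel.get("components", [])}
        result[parcel.get("parcelName", "?")] = comps
    return result

# Tiny hand-written sample in the assumed shape:
sample = '''{"parcels": [{"parcelName": "CDH-5.16.2-1.cdh5.16.2.p0.8-el7.parcel",
 "components": [{"name": "hbase", "version": "1.2.0-cdh5.16.2"},
                {"name": "zookeeper", "version": "3.4.5-cdh5.16.2"}]}]}'''
print(list_components(sample))
```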
Enough preamble; let's get on with the installation.
Installation steps
- 1. Download apache-phoenix-4.14.0-cdh5.13.2-bin.tar.gz and upload it to the Linux host (in this article it goes under /opt)
[root@hadoop60 opt]# tar -zxf apache-phoenix-4.14.0-cdh5.13.2-bin.tar.gz
[root@hadoop60 opt]# mv apache-phoenix-4.14.0-cdh5.13.2-bin /opt/cloudera/APACHE_PHOENIX
[root@hadoop60 opt]# cd cloudera
[root@hadoop60 cloudera]# ll
total 20
drwxrwxr-x 5 it it 4096 Jun 5 2018 APACHE_PHOENIX
drwxr-xr-x 9 cloudera-scm cloudera-scm 4096 Feb 29 16:01 csd
drwxr-xr-x 2 cloudera-scm cloudera-scm 4096 Feb 29 16:07 parcel-cache
drwxr-xr-x 2 cloudera-scm cloudera-scm 4096 Feb 29 16:02 parcel-repo
drwxr-xr-x 6 cloudera-scm cloudera-scm 4096 Feb 29 16:08 parcels
[root@hadoop60 cloudera]# chown -R cloudera-scm:cloudera-scm APACHE_PHOENIX
- 2. Put phoenix-4.14.0-cdh5.13.2-server.jar into the hbase/lib directory on the current host
The jar needs to end up in /opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hbase/lib; here a symlink is used rather than a physical copy:
[root@hadoop60 cloudera]# cd APACHE_PHOENIX/
[root@hadoop60 APACHE_PHOENIX]# ln -s /opt/cloudera/APACHE_PHOENIX/phoenix-4.14.0-cdh5.13.2-server.jar /opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hbase/lib/phoenix-4.14.0-cdh5.13.2-server.jar
[root@hadoop60 APACHE_PHOENIX]# cd /opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hbase/lib
[root@hadoop60 lib]# chown -R cloudera-scm:cloudera-scm phoenix-4.14.0-cdh5.13.2-server.jar
- 3. Copy phoenix-4.14.0-cdh5.13.2-server.jar into the hbase/lib directory on every host running a RegionServer
In fact, push it to every machine involved with HBase:
[root@hadoop60 APACHE_PHOENIX]# scp phoenix-4.14.0-cdh5.13.2-server.jar root@hadoop61:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hbase/lib
phoenix-4.14.0-cdh5.13.2-server.jar 100% 37MB 150.5MB/s 00:00
[root@hadoop60 APACHE_PHOENIX]# scp phoenix-4.14.0-cdh5.13.2-server.jar root@hadoop62:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hbase/lib
phoenix-4.14.0-cdh5.13.2-server.jar 100% 37MB 159.3MB/s 00:00
[root@hadoop60 APACHE_PHOENIX]# scp phoenix-4.14.0-cdh5.13.2-server.jar root@hadoop63:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hbase/lib
phoenix-4.14.0-cdh5.13.2-server.jar 100% 37MB 138.2MB/s 00:00
[root@hadoop60 APACHE_PHOENIX]# scp phoenix-4.14.0-cdh5.13.2-server.jar root@hadoop64:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hbase/lib
phoenix-4.14.0-cdh5.13.2-server.jar 100% 37MB 145.1MB/s 00:00
[root@hadoop60 APACHE_PHOENIX]# scp phoenix-4.14.0-cdh5.13.2-server.jar root@hadoop65:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hbase/lib
phoenix-4.14.0-cdh5.13.2-server.jar 100% 37MB 152.7MB/s 00:00
[root@hadoop60 APACHE_PHOENIX]#
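Typing the same scp for every host gets tedious on larger clusters. As a minimal sketch, the snippet below just generates the command list for review (the hostnames are the ones from this article; substitute your own). It deliberately prints rather than executes, so nothing is pushed until you have checked the output:

```python
# Assumed values, taken from this article's cluster; replace with yours.
JAR = "phoenix-4.14.0-cdh5.13.2-server.jar"
DEST = "/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hbase/lib"
HOSTS = ["hadoop61", "hadoop62", "hadoop63", "hadoop64", "hadoop65"]

def scp_commands(jar=JAR, dest=DEST, hosts=HOSTS):
    # One "scp jar root@host:dest" command per RegionServer host.
    return ["scp %s root@%s:%s" % (jar, h, dest) for h in hosts]

for cmd in scp_commands():
    print(cmd)
```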
- 4. Add the following to HBase's hbase-site.xml, then restart the HBase service
<property>
<name>hbase.regionserver.wal.codec</name>
<value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
<property>
<name>phoenix.schema.isNamespaceMappingEnabled</name>
<value>true</value>
</property>
<property>
<name>phoenix.schema.mapSystemTablesToNamespace</name>
<value>true</value>
</property>
<property>
<name>phoenix.functions.allowUserDefinedFunctions</name>
<value>true</value>
<description>enable UDF functions</description>
</property>
Make the change through the Cloudera Manager UI; editing files by hand gives no guarantee that every host actually picks it up.
The HMaster's hbase-site.xml needs the same change; I'm not aware of a single setting that covers everything in one go,
so it is safest to add these properties to every hbase-site.xml involved. After making the change, remember to restart the HBase service.
- 5. Copy hbase-site.xml into the phoenix/bin directory
[root@hadoop60 APACHE_PHOENIX]# cd /opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hbase/conf
[root@hadoop60 conf]# cp -f hbase-site.xml /opt/cloudera/APACHE_PHOENIX/bin/
cp: overwrite /opt/cloudera/APACHE_PHOENIX/bin/hbase-site.xml y # enter y here
[root@hadoop60 conf]#
Make absolutely sure every hbase-site.xml is current and that Cloudera Manager has really written the change out to the XML file; occasionally a configuration change is made in the UI but never propagated to the file.
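To catch that "changed in the UI but never written to the file" case, a quick sanity check like the one below can be run against the hbase-site.xml on each host. It only uses the Python standard library; the required property values are the ones from step 4:

```python
import xml.etree.ElementTree as ET

# The properties step 4 is supposed to have added.
REQUIRED = {
    "hbase.regionserver.wal.codec":
        "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec",
    "phoenix.schema.isNamespaceMappingEnabled": "true",
    "phoenix.schema.mapSystemTablesToNamespace": "true",
}

def missing_properties(xml_text):
    # Return the required properties that are absent or have a wrong value.
    root = ET.fromstring(xml_text)
    found = {p.findtext("name"): p.findtext("value")
             for p in root.iter("property")}
    return {k: v for k, v in REQUIRED.items() if found.get(k) != v}

# Example: a file where only the WAL codec was written out.
sample = """<configuration>
  <property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
  </property>
</configuration>"""
print(missing_properties(sample))  # the two namespace properties are missing
```

In practice you would read the real file, e.g. `missing_properties(open("/etc/hbase/conf/hbase-site.xml").read())`, and expect an empty dict.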
- 6. Test Phoenix
Switch to the Phoenix installation directory and enter the bin directory:
[root@hadoop60 bin]# ./sqlline.py
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix: none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/APACHE_PHOENIX/phoenix-4.14.0-cdh5.13.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/03/01 01:13:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 4.14)
Driver: PhoenixEmbeddedDriver (version 4.14)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
133/133 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:> !tables
+------------+--------------+-------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+----------------+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | REMARKS | TYPE_NAME | SELF_REFERENCING_COL_NAME | REF_GENERATION | INDEX_STATE | IMMUTABLE_ROWS | SALT_BUCKETS | MULTI_TENANT |
+------------+--------------+-------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+----------------+
| | SYSTEM | CATALOG | SYSTEM TABLE | | | | | | false | null | false |
| | SYSTEM | FUNCTION | SYSTEM TABLE | | | | | | false | null | false |
| | SYSTEM | LOG | SYSTEM TABLE | | | | | | true | 32 | false |
| | SYSTEM | SEQUENCE | SYSTEM TABLE | | | | | | false | null | false |
| | SYSTEM | STATS | SYSTEM TABLE | | | | | | false | null | false |
+------------+--------------+-------------+---------------+----------+------------+----------------------------+-----------------+--------------+-----------------+---------------+----------------+
0: jdbc:phoenix:>
Exceptions encountered during setup
0: jdbc:phoenix:> !table
Error: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.CATALOG (state=42M03,code=1012)
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.CATALOG
at org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:577)
at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
at org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:228)
at org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:206)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:482)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1793)
at org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.getTables(PhoenixDatabaseMetaData.java:1194)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.Reflector.invoke(Reflector.java:75)
at sqlline.Commands.metadata(Commands.java:194)
at sqlline.Commands.tables(Commands.java:332)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
The exception above was originally caused by one missing configuration property; adding it resolved the problem. There are plenty of answers online — some tell you to disable namespace mapping, others to delete the SYSTEM tables — and they are bad advice: when several projects share one HBase cluster, that is an easy way for different teams to trip over each other. I wanted Phoenix's namespace mapping anyway, and from HBase's point of view, enabling or disabling it comes down to a colon versus a dot in the table names.
With namespace mapping enabled, it looks like this:
[root@hadoop60 conf]# hbase shell
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
20/03/01 00:39:03 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.0-cdh5.16.2, rUnknown, Mon Jun 3 03:50:03 PDT 2019
hbase(main):001:0> list
TABLE
SYSTEM:CATALOG
SYSTEM:FUNCTION
SYSTEM:LOG
SYSTEM:MUTEX
SYSTEM:SEQUENCE
SYSTEM:STATS
6 row(s) in 0.2150 seconds
=> ["SYSTEM:CATALOG", "SYSTEM:FUNCTION", "SYSTEM:LOG", "SYSTEM:MUTEX", "SYSTEM:SEQUENCE", "SYSTEM:STATS"]
Without NamespaceMapping you see a dot instead, with table names like SYSTEM.CATALOG.
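The colon-versus-dot distinction can be sketched as a tiny helper (an illustration of the naming convention only, not Phoenix's actual implementation): with mapping enabled, the Phoenix schema becomes an HBase namespace, so HBase renders the name as `SCHEMA:TABLE`; with mapping disabled, the whole `SCHEMA.TABLE` string is itself the HBase table name.

```python
def hbase_table_name(schema, table, namespace_mapping=True):
    # Enabled: schema is an HBase namespace -> "SCHEMA:TABLE".
    # Disabled: the dotted Phoenix name is the HBase table name -> "SCHEMA.TABLE".
    sep = ":" if namespace_mapping else "."
    return "%s%s%s" % (schema, sep, table) if schema else table

print(hbase_table_name("SYSTEM", "CATALOG", True))   # SYSTEM:CATALOG
print(hbase_table_name("SYSTEM", "CATALOG", False))  # SYSTEM.CATALOG
```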
Solution
Add the following to hbase-site.xml through Cloudera Manager (manually editing the cluster configuration by hand is not recommended):
<property>
<name>phoenix.schema.mapSystemTablesToNamespace</name>
<value>true</value>
</property>