Introduction to Phoenix
- The native HBase API is fairly cumbersome to use. Apache Phoenix provides a JDBC interface for working with HBase tables.
POM dependency
```xml
<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-core</artifactId>
    <version>5.0.0-HBase-2.0</version>
</dependency>
```
JDBC connection without Kerberos authentication
- Sample code
```java
public static void main(String[] args) throws SQLException, IOException {
    Connection conn = null;
    try {
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        // host and port are the ZooKeeper quorum host and client port
        String url = "jdbc:phoenix:slave6:2181";
        conn = DriverManager.getConnection(url);
        System.out.println(conn);
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("select * from \"test\".\"student\"");
        while (rs.next()) {
            System.out.print("id:" + rs.getString("ID"));
            System.out.println(",name:" + rs.getString("NAME"));
        }
    } catch (ClassNotFoundException | SQLException e) {
        e.printStackTrace();
    } finally {
        if (conn != null) {
            System.out.println("连接通道关闭");
            conn.close();
        }
    }
}
```
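The example above only closes the Connection in the finally block. As a small variant (a sketch, not part of the original article, reusing the same URL and table), try-with-resources closes the ResultSet, Statement and Connection automatically, even when the query throws:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixSimpleQuery {
    public static void main(String[] args) throws Exception {
        // Same Phoenix thick driver and ZooKeeper address as above
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        String url = "jdbc:phoenix:slave6:2181";
        // try-with-resources closes ResultSet, Statement and Connection in reverse order
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from \"test\".\"student\"")) {
            while (rs.next()) {
                System.out.println("id:" + rs.getString("ID") + ",name:" + rs.getString("NAME"));
            }
        }
    }
}
```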
JDBC connection to a cluster with Kerberos authentication
This approach is fairly tedious to configure and tends to run into a lot of bugs. As an aside, Kerberos really is a pain.
Prepare test data
- Connect from the Phoenix command line and insert test data (a JDBC sketch for creating such a table follows the sqlline output)
```
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:thin:url=http://slave6:8765;serialization=PROTOBUF;authentication=SPNEGO none none org.apache.phoenix.queryserver.client.Driver
Connecting to jdbc:phoenix:thin:url=http://slave6:8765;serialization=PROTOBUF;authentication=SPNEGO
Connected to: Apache Phoenix (version unknown version)
Driver: Phoenix Remote JDBC Driver (version unknown version)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
141/141 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:thin:url=http://slave6:8765> select * from "test"."student";
+-------+-------+------+-------------+
|  ID   | NAME  | AGE  |    DATE     |
+-------+-------+------+-------------+
| 1001  | 张三  | 21   | 2020-09-18  |
| 1002  | 李四  | 22   | 2020-09-18  |
| 1003  | 王五  | 28   | 2021-06-01  |
| 1004  | 赵六  | 31   | 2020-10-07  |
+-------+-------+------+-------------+
```
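For reference, the "test"."student" table can also be created and populated through the same JDBC driver. This is a hypothetical sketch: the article does not show the actual DDL, so the schema below (VARCHAR ID and NAME, INTEGER AGE, DATE DATE) and the CREATE SCHEMA step are assumptions based on the query output, and it assumes any required Kerberos login has already been done as described later in this section.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PreparePhoenixTestData {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:slave6:2181");
             Statement stmt = conn.createStatement()) {
            // Needed when namespace mapping is enabled (see connection exception 3 below)
            stmt.executeUpdate("CREATE SCHEMA IF NOT EXISTS \"test\"");
            // Assumed column types; the real DDL is not shown in the article
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS \"test\".\"student\" ("
                    + "\"ID\" VARCHAR NOT NULL PRIMARY KEY, "
                    + "\"NAME\" VARCHAR, \"AGE\" INTEGER, \"DATE\" DATE)");
            stmt.executeUpdate("UPSERT INTO \"test\".\"student\" VALUES "
                    + "('1001', '张三', 21, TO_DATE('2020-09-18', 'yyyy-MM-dd'))");
            stmt.executeUpdate("UPSERT INTO \"test\".\"student\" VALUES "
                    + "('1002', '李四', 22, TO_DATE('2020-09-18', 'yyyy-MM-dd'))");
            // Phoenix JDBC connections do not auto-commit by default
            conn.commit();
        }
    }
}
```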
- Sample code
```java
public static void main(String[] args) throws SQLException, IOException {
    Connection conn = null;
    // Connect to the Hadoop environment and perform Kerberos authentication
    Configuration conf = null;
    try {
        // Use the native Hadoop Kerberos authentication
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "slave2");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("hadoop.security.authentication", "Kerberos");
        conf.set("hbase.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        String user = "cs";
        String keyPath = "C:\\Users\\hu'shao'nan\\Desktop\\kerberosfiles\\cs.keytab";
        UserGroupInformation.loginUserFromKeytab(user, keyPath);

        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        String url = "jdbc:phoenix:slave6:2181";
        conn = DriverManager.getConnection(url);
        System.out.println(conn);
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("select * from \"test\".\"student\"");
        while (rs.next()) {
            System.out.print("id:" + rs.getString("ID"));
            System.out.println(",name:" + rs.getString("NAME"));
        }
    } catch (ClassNotFoundException | SQLException e) {
        e.printStackTrace();
    } finally {
        if (conn != null) {
            System.out.println("连接通道关闭");
            conn.close();
        }
    }
}
```
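A possible variant (an assumption, not shown in the article): if other code in the same JVM also touches UserGroupInformation, it can be clearer to hold an explicit UGI and run the JDBC work inside doAs(). This sketch reuses conf, user and keyPath from the example above and additionally needs java.security.PrivilegedExceptionAction:

```java
// Assumes UserGroupInformation.setConfiguration(conf) has already been called
UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user, keyPath);
ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
    try (Connection c = DriverManager.getConnection("jdbc:phoenix:slave6:2181");
         Statement s = c.createStatement();
         ResultSet r = s.executeQuery("select * from \"test\".\"student\"")) {
        while (r.next()) {
            System.out.println("id:" + r.getString("ID") + ",name:" + r.getString("NAME"));
        }
    }
    return null;
});
```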
Connection exception 1
- Error message
```
Caused by: java.lang.IllegalArgumentException: Can't get Kerberos realm
    at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:65)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:275)
    at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:311)
    at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.kerberosAuthentication(DFSUtil.java:88)
    at com.alibaba.datax.plugin.reader.hdfsreader.DFSUtil.<init>(DFSUtil.java:81)
    at com.alibaba.datax.plugin.reader.hdfsreader.HdfsReader$Job.init(HdfsReader.java:51)
    at com.alibaba.datax.core.job.JobContainer.initJobReader(JobContainer.java:673)
    at com.alibaba.datax.core.job.JobContainer.init(JobContainer.java:303)
    at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:113)
```
- Cause: the krb5.conf file required for Kerberos authentication cannot be found
- Fix: pass the path of the krb5.conf file to the JVM as a system property
System.setProperty("java.security.krb5.conf","C:\\Users\\admin\\Desktop\\kerberosfiles\\krb5.conf"); System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
Connection exception 2
- Error message
```
Kerberos principal does not have the expected format: cs@EXAMPLE.COM
```
- Cause: the Kerberos principal used when calling the service is invalid. This principal is specified in the HBase or Hive configuration files and is not the same as the principal used for user (keytab) authentication, for example:
```xml
<property>
    <name>hbase.master.kerberos.principal</name>
    <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
<property>
    <name>hbase.regionserver.kerberos.principal</name>
    <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
```
- Fix:
1. Put the corresponding configuration file, e.g. hbase-site.xml for an HBase connection, in the resources root directory; the relevant settings are then read automatically from the classpath (a small verification sketch follows this list).
2. Set the corresponding properties in code:

```java
// The preceding code is the same as above
Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
String url = "jdbc:phoenix:slave6:2181";
Properties properties = new Properties();
// Kerberos principals of the HBase services
properties.setProperty("hbase.master.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");
properties.setProperty("hbase.regionserver.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");
// Kerberos principal of the Phoenix service
properties.setProperty("phoenix.queryserver.kerberos.principal", "phoenix/slave6@EXAMPLE.COM");
conn = DriverManager.getConnection(url, properties);
```
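For option 1, HBaseConfiguration.create() loads hbase-site.xml from the classpath, so a quick way to confirm the file is actually being picked up is to read one of the principals back. A small verification sketch (an addition, not from the article; assumes hbase-site.xml sits under src/main/resources):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CheckHBaseSite {
    public static void main(String[] args) {
        // hbase-site.xml under src/main/resources ends up on the classpath and is loaded here
        Configuration conf = HBaseConfiguration.create();
        // A null value means hbase-site.xml is not being found on the classpath
        System.out.println("hbase.master.kerberos.principal = "
                + conf.get("hbase.master.kerberos.principal"));
        System.out.println("hbase.regionserver.kerberos.principal = "
                + conf.get("hbase.regionserver.kerberos.principal"));
    }
}
```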
Connection exception 3
- Error message
```
Inconsistent namespace mapping properties. Cannot initiate connection as SYSTEM:CATALOG is found but client does not have phoenix.schema.isNamespaceMappingEnabled enabled
```
- Cause: the required settings have not been added to hbase-site.xml on the HBase server side or the client side
- Fix:
- On the cluster: add the configuration on the server or the client side through Cloudera Manager.
- For local testing: add hbase-site.xml under the project's resources root directory with the following settings:
```xml
<property>
    <name>phoenix.schema.isNamespaceMappingEnabled</name>
    <value>true</value>
    <description>Enable namespace mapping</description>
</property>
<property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
    <description>Support for secondary indexes</description>
</property>
```
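If editing the client-side hbase-site.xml is inconvenient, the same client setting can usually be passed as a connection property instead. A sketch, not taken from the article; the values must still match the server configuration, and phoenix.schema.mapSystemTablesToNamespace typically needs to agree as well:

```java
Properties props = new Properties();
// Must match the server-side hbase-site.xml, otherwise the
// "Inconsistent namespace mapping properties" error remains
props.setProperty("phoenix.schema.isNamespaceMappingEnabled", "true");
props.setProperty("phoenix.schema.mapSystemTablesToNamespace", "true");
Connection conn = DriverManager.getConnection("jdbc:phoenix:slave6:2181", props);
```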
Connection exception 4
- Error message
```
java.sql.SQLException: java.lang.NoClassDefFoundError: com/google/protobuf/LiteralByteString
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1390)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1351)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1538)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2721)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
    at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2569)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at hbase.Phoenix_Connection.main(Phoenix_Connection.java:49)
Caused by: java.lang.NoClassDefFoundError: com/google/protobuf/LiteralByteString
```
- Cause: Protobuf (Protocol Buffers) is a data description language developed by Google for serializing structured data, used for data storage, communication protocols and so on. HBase uses the Protobuf class library internally, and here the JVM cannot find the class com/google/protobuf/LiteralByteString on the classpath.
- Fix:
- Add the corresponding dependency to the POM file
```xml
<dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>3.6.1</version>
</dependency>
<!-- If the problem persists, fall back to the lower version -->
<dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>2.5.0</version>
</dependency>
```
- Replace the HBase client dependency: hbase-shaded-client is an artifact in which the community has relocated (shaded) the dependencies commonly bundled with HBase. In the POM file the existing hbase-client dependency can be replaced with hbase-shaded-client (not tried myself; the first approach already solved the problem for me).
```xml
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-shaded-client</artifactId>
    <version>2.0.0</version>
</dependency>
```
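Before changing dependencies, it can help to check which Protobuf jar is actually on the classpath. A diagnostic sketch (an addition, not from the article):

```java
public class ProtobufClasspathCheck {
    public static void main(String[] args) {
        try {
            Class<?> cls = Class.forName("com.google.protobuf.LiteralByteString");
            System.out.println("LiteralByteString found in: "
                    + cls.getProtectionDomain().getCodeSource().getLocation());
        } catch (ClassNotFoundException e) {
            System.out.println("com.google.protobuf.LiteralByteString is not on the classpath");
        }
        // Shows which protobuf-java jar the rest of the Protobuf API comes from
        System.out.println("ByteString loaded from: " + com.google.protobuf.ByteString.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}
```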
Connection succeeded
```
org.apache.phoenix.jdbc.PhoenixConnection@6999cd39
id:1001,name:张三
id:1002,name:李四
id:1003,name:王五
id:1004,name:赵六
连接通道关闭
Process finished with exit code 0
```