apache-phoenix-5.0.0-HBase-2.0: Installation and Basic Usage
Table of Contents
- apache-phoenix-5.0.0-HBase-2.0: Installation and Basic Usage
- Installation Steps:
- 1. Unpack the downloaded release and place it on the master node:
- 2. Copy phoenix-core-5.0.0-HBase-2.0.jar and phoenix-5.0.0-HBase-2.0-server.jar from the Phoenix installation directory to the HBase lib directory on every node
- 3. Copy `hbase-site.xml` from the conf directory of the HBase installation to the bin directory of the Phoenix installation:
- 4. Copy core-site.xml and hdfs-site.xml from the Hadoop configuration directory /opt/apps/hadoop-3.1.2/etc/hadoop to the bin directory of the Phoenix installation:
- 5. Edit the environment variables
- 6. Adjust permissions (in the /opt/apps/apache-phoenix-5.0.0-HBase-2.0-bin/bin directory):
- 7. Start ZooKeeper, Hadoop, and HBase
- 8. Start Phoenix
- 9. Exit (!exit)
- 10. Map an existing HBase table with CREATE TABLE
- 11. Using Phoenix in IDEA
HBase version: hbase-2.0.6
Phoenix version: phoenix-5.0.0-HBase-2.0
Cluster layout
Node | HDFS | ZooKeeper | HBase |
---|---|---|---|
d01 | NameNode | yes | HMaster |
d02 | DataNode | yes | HRegionServer |
d03 | DataNode | yes | HRegionServer |
Installation Steps:
1. Unpack the downloaded release and place it on the master node:
/opt/apps/apache-phoenix-5.0.0-HBase-2.0-bin
# rename the directory
mv apache-phoenix-5.0.0-HBase-2.0-bin phoenix-5.0.0-HBase-2.0
2. Copy phoenix-core-5.0.0-HBase-2.0.jar and phoenix-5.0.0-HBase-2.0-server.jar from the Phoenix installation directory to the HBase lib directory on every node
cp phoenix-core-5.0.0-HBase-2.0.jar phoenix-5.0.0-HBase-2.0-server.jar /opt/apps/hbase-2.0.5/lib/
Use scp to copy the jars to the other nodes:
cd /opt/apps/hbase-2.0.5/lib/
scp phoenix-core-5.0.0-HBase-2.0.jar phoenix-5.0.0-HBase-2.0-server.jar d02:`pwd`
scp phoenix-core-5.0.0-HBase-2.0.jar phoenix-5.0.0-HBase-2.0-server.jar d03:`pwd`
3. Copy hbase-site.xml from the conf directory of the HBase installation to the bin directory of the Phoenix installation:
cp hbase-site.xml /opt/apps/phoenix-5.0.0-HBase-2.0/bin/
4. Copy core-site.xml and hdfs-site.xml from the Hadoop configuration directory /opt/apps/hadoop-3.1.2/etc/hadoop to the bin directory of the Phoenix installation:
cp core-site.xml hdfs-site.xml /opt/apps/phoenix-5.0.0-HBase-2.0/bin/
5. Edit the environment variables
vi /etc/profile
Add the following (afterwards run `source /etc/profile` so the new variables take effect):
#phoenix
export PHOENIX_HOME=/opt/apps/apache-phoenix-5.0.0-HBase-2.0-bin
export PHOENIX_CLASSPATH=$PHOENIX_HOME
export PATH=$PATH:$PHOENIX_HOME/bin
6. Adjust permissions (in the /opt/apps/apache-phoenix-5.0.0-HBase-2.0-bin/bin directory):
chmod 777 psql.py
chmod 777 sqlline.py
7. Start ZooKeeper, Hadoop, and HBase (HBase last, since it depends on ZooKeeper and HDFS)
8. Start Phoenix
sqlline.py d01,d02,d03:2181
Output:
[root@d01 bin]$ sqlline.py d01,d02,d03:2181
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:d01,d02,d03:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:d01,d02,d03:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apps/apache-phoenix-5.0.0-HBase-2.0-bin/phoenix-5.0.0-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-3.1.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
19/08/26 22:31:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 5.0)
Driver: PhoenixEmbeddedDriver (version 5.0)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
133/133 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:d01,d02,d03:2181>
9. Exit (!exit)
The first time it was quite slow and would not exit; only after waiting a while and re-running the command did it exit successfully.
0: jdbc:phoenix:d01,d02,d03:2181> !exit
10. Map an existing HBase table with CREATE TABLE
Newer versions of Phoenix encode column names by default, so a mapping table cannot resolve the columns of the existing HBase table. Add column_encoded_bytes=0 to the statement, otherwise the columns will not be mapped. For example:
create table "DEMO"(pk varchar primary key, "f1"."name" varchar) column_encoded_bytes=0
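Once the mapping exists, the table can be read over JDBC like any other Phoenix table. The following is a minimal sketch, assuming the underlying HBase table DEMO (column family f1, qualifier name) already contains rows and reusing the ZooKeeper quorum from step 8; note that identifiers quoted in the DDL, such as "name", stay lower-case and must be quoted in queries as well, while the unquoted pk was upper-cased to PK.

import java.sql.*;

public class PhoenixMappedTableQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        // same ZooKeeper quorum as used with sqlline above
        String url = "jdbc:phoenix:d01,d02,d03:2181";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // "name" was quoted in the DDL, so it must be quoted here too
             ResultSet rs = stmt.executeQuery("select PK, \"name\" from \"DEMO\"")) {
            while (rs.next()) {
                // read columns by position to avoid case-sensitivity surprises
                System.out.println(rs.getString(1) + " -> " + rs.getString(2));
            }
        }
    }
}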
11. Using Phoenix in IDEA
<!-- https://mvnrepository.com/artifact/org.apache.phoenix/phoenix -->
<dependency>
<groupId>org.apache.phoenix</groupId>
<artifactId>phoenix</artifactId>
<version>5.0.0-HBase-2.0</version>
<type>pom</type>
</dependency>
Test code for connecting to HBase:
import java.sql.*;

public class PhoenixTest {
    public static void main(String[] args) {
        try {
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
            // ZooKeeper address(es): a single host or several separated by ",",
            // given as hostnames or IPs
            String url = "jdbc:phoenix:d01,d02,d03:2181";
            Connection conn = DriverManager.getConnection(url);
            Statement statement = conn.createStatement();
            long time = System.currentTimeMillis();
            ResultSet rs = statement.executeQuery("select * from test");
            while (rs.next()) {
                String myName = rs.getString("name"); // column name in the table
                System.out.println("myName=" + myName);
            }
            long timeUsed = System.currentTimeMillis() - time;
            System.out.println("time " + timeUsed + "ms");
            // close the connection
            rs.close();
            statement.close();
            conn.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
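The example above only reads data. Writes in Phoenix go through UPSERT, which replaces both INSERT and UPDATE. The sketch below assumes a table TEST with columns ID (primary key) and NAME, matching the query above; since Phoenix JDBC connections do not auto-commit unless that has been enabled, the mutation only reaches HBase after commit().

import java.sql.*;

public class PhoenixUpsertExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        String url = "jdbc:phoenix:d01,d02,d03:2181";
        try (Connection conn = DriverManager.getConnection(url);
             // UPSERT inserts the row, or overwrites it if the primary key already exists
             PreparedStatement ps = conn.prepareStatement(
                     "upsert into test (id, name) values (?, ?)")) {
            ps.setString(1, "1001");
            ps.setString(2, "test-user");
            ps.executeUpdate();
            // without commit() (or auto-commit) the mutation never reaches HBase
            conn.commit();
        }
    }
}

After running it, the read example above should return the new row.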