HBase Standalone Installation
The test machine used in this walkthrough:
CentOS 7
192.168.1.101 master
Shut down any external ZooKeeper before starting. Standalone HBase launches its own bundled ZooKeeper, so an external ZK holding the same port causes a conflict; this is a very easy pit to fall into. An external ZK is only used in distributed mode, and only after explicit configuration.
Check whether the ZK port is already occupied, and kill whatever holds it:
netstat -antp | fgrep 2181
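In typical `netstat -antp` output the last column is `PID/program`, which is what you need in order to kill the conflicting process. A small sketch of extracting the PID, using a hypothetical sample line (field positions assumed from the usual `netstat -antp` layout):

```shell
# Hypothetical sample line from `netstat -antp | fgrep 2181`; field 7 is PID/program
line='tcp        0      0 0.0.0.0:2181            0.0.0.0:*               LISTEN      1234/java'

# Extract the PID before the slash
pid=$(echo "$line" | awk '{print $7}' | cut -d/ -f1)
echo "$pid"   # prints 1234
```

In a real cleanup you would then run `kill "$pid"` for each listener found on 2181.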
Set the hostname in advance; otherwise remote clients (API connections) will fail to find the services, because HBase registers all of its internal ZooKeeper paths by hostname. This is a major pitfall.
hostname master
vi /etc/hosts
192.168.1.101 master
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
A remote client also needs this mapping in its hosts file; on Windows, edit C:\Windows\System32\drivers\etc\hosts:
192.168.1.101 master
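Before running a remote client, it is worth verifying that the hostname actually resolves on the client machine. A minimal JDK-only sketch; it uses "localhost" so it runs anywhere, and you would substitute "master" once the hosts entry above is in place:

```java
import java.net.InetAddress;

public class HostCheck {
    public static void main(String[] args) throws Exception {
        // Substitute "master" here after editing the hosts file;
        // "localhost" keeps the demo runnable on any machine.
        String host = "localhost";
        InetAddress addr = InetAddress.getByName(host); // throws UnknownHostException if unresolvable
        System.out.println("resolved " + host);
    }
}
```

If this throws `UnknownHostException` for "master", the HBase client will fail the same way.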
wget http://mirror.bit.edu.cn/apache/hbase/1.4.9/hbase-1.4.9-bin.tar.gz
tar -zxvf hbase-1.4.9-bin.tar.gz
cd hbase-1.4.9/conf
vi hbase-env.sh
export JAVA_HOME=/mnt/java-1.8.0
vi hbase-1.4.9/conf/hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:/mnt/hadoop/HBase/HFiles</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/mnt/hadoop/HBase/zookeeper</value>
</property>
<!-- Use the machine hostname as the ZooKeeper quorum address -->
<property>
<name>hbase.zookeeper.quorum</name>
<value>master</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
</configuration>
Go into the bin directory of the extracted hbase-1.4.9 and start HBase:
./start-hbase.sh
starting master, logging to /mnt/hbase-1.4.9/bin/../logs/hbase-root-master-master.out
To stop HBase:
./stop-hbase.sh
# enter the hbase shell
./hbase shell
Once HBase is up, open http://192.168.1.101:16010/ (or http://master:16010/ from a machine with the hosts entry) in a browser to view the HBase Web UI.
Java API connection
Maven dependencies
<!--HBASE START-->
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>1.4.9</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-server</artifactId>
<version>1.4.9</version>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-common</artifactId>
<version>1.4.9</version>
</dependency>
<!--HBASE END-->
Java API example
package com.bamboo.demo;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;

public class HbaseTest {
    /**
     * HBase configuration
     */
    static Configuration config = null;
    private static Connection connection = null;
    private static Table table = null;

    public static void init() throws Exception {
        System.setProperty("hadoop.home.dir", "d:/"); // winutils workaround when running the client from Windows
        config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "192.168.1.101"); // ZooKeeper address
        config.set("hbase.zookeeper.property.clientPort", "2181"); // ZooKeeper port
        connection = ConnectionFactory.createConnection(config);
        table = connection.getTable(TableName.valueOf("dept_test"));
    }

    public static void main(String[] args) throws Exception {
        init();
        try {
            showTable();
            // createTable();
        } finally {
            close();
        }
    }

    /**
     * Create the dept_test table with column families info and subdept
     *
     * @throws Exception
     */
    public static void createTable() throws Exception {
        // Table administration handle
        Admin admin = connection.getAdmin();
        // Table descriptor
        HTableDescriptor desc = new HTableDescriptor(table.getName());
        // Column family descriptors, added to the table
        desc.addFamily(new HColumnDescriptor("info")); // column family
        desc.addFamily(new HColumnDescriptor("subdept")); // column family
        // Create the table
        admin.createTable(desc);
        System.out.println("Table created successfully!");
    }

    public static void showTable() throws Exception {
        Admin admin = connection.getAdmin();
        HTableDescriptor tableDescriptor = admin.getTableDescriptor(table.getName());
        System.out.println("Results:");
        // getNameAsString() avoids decoding the raw byte[] name with the platform default charset
        System.out.println("Table name: " + tableDescriptor.getNameAsString());
        for (HColumnDescriptor d : tableDescriptor.getColumnFamilies()) {
            System.out.println("Column family: " + d.getNameAsString());
        }
    }

    public static void close() throws Exception {
        table.close();
        connection.close();
    }
}
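HBase hands table and column-family names back as raw byte[]. Decoding them with the platform default charset (plain `new String(bytes)`) can mis-decode on JVMs whose default is not UTF-8, so an explicit charset is safer. A JDK-only sketch of the decode (the byte[] here simulates a name returned by HBase):

```java
import java.nio.charset.StandardCharsets;

public class NameDecode {
    public static void main(String[] args) {
        // Simulate a table name coming back from HBase as raw bytes
        byte[] name = "dept_test".getBytes(StandardCharsets.UTF_8);
        // Decode with an explicit charset rather than the platform default
        String decoded = new String(name, StandardCharsets.UTF_8);
        System.out.println(decoded); // prints dept_test
    }
}
```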
Console output
22:41:56.261 [main] DEBUG org.apache.hadoop.hbase.ipc.BlockingRpcConnection - Connecting to master/192.168.1.101:43115
Results:
Table name: dept_test
Column family: info
Column family: subdept
22:41:56.400 [main] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - Closing master protocol: MasterService
22:41:56.400 [main] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation - Closing zookeeper sessionid=0x16813f9aaff0011
22:41:56.400 [main] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x16813f9aaff0011
22:41:56.400 [main] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x16813f9aaff0011
22:41:56.401 [main-SendThread(192.168.1.101:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x16813f9aaff0011, packet:: clientPath:null serverPath:null finished:false header:: 5,-11 replyHeader:: 5,334,0 request:: null response:: null
22:41:56.402 [main-SendThread(192.168.1.101:2181)] DEBUG org.apache.zookeeper.ClientCnxn - An exception was thrown while closing send thread for session 0x16813f9aaff0011 : Unable to read additional data from server sessionid 0x16813f9aaff0011, likely server has closed socket
22:41:56.402 [main] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x16813f9aaff0011
22:41:56.402 [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x16813f9aaff0011 closed
22:41:56.402 [main] DEBUG org.apache.hadoop.hbase.ipc.AbstractRpcClient - Stopping rpc client
22:41:56.402 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x16813f9aaff0011
Process finished with exit code 0
On the server, connect with the hbase shell client to verify that the table was created:
./hbase shell
hbase(main):001:0> list
TABLE
dept_test
user_info
2 row(s) in 0.5350 seconds
=> ["dept_test", "user_info"]
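Beyond `list`, rows can be inserted and read back from the shell with `put`, `get`, and `scan`. A hypothetical session against the table created above (row key and values are illustrative; output omitted):

```
hbase(main):002:0> put 'dept_test', 'row1', 'info:name', 'engineering'
hbase(main):003:0> get 'dept_test', 'row1'
hbase(main):004:0> scan 'dept_test'
```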
Troubleshooting
clientPath:null serverPath:null
This appears when the HBase server and the Java client have not configured the hostname, so ZooKeeper cannot resolve the service paths, which are registered by hostname.