First, edit the C:\Windows\System32\drivers\etc\hosts file on the Windows client and add the IP addresses of the Hadoop cluster nodes:
192.168.180.137 master
192.168.180.135 slave-1
192.168.180.134 slave-2
I use Maven to manage jar dependencies, so add the HBase dependency to pom.xml:
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>1.3.1</version>
    <type>pom</type>
</dependency>
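If pulling in the full hbase aggregate pom brings in more than you need, a common alternative is to depend only on the client module (same 1.3.1 version; this artifact carries the HTable/Put/Get classes used below):

```xml
<!-- client-only alternative to the aggregate hbase pom -->
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.3.1</version>
</dependency>
```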
Start the Hadoop cluster and HBase (not covered here), and go straight to the code.
Connect to the HBase database running on the Linux system. The ZooKeeper address 192.168.180.137 and port 2181 must match what is configured in hbase-site.xml on the Linux side, or the client will not be able to reach HBase:
// Build the client configuration; the quorum host and client port
// must match hbase-site.xml on the server side
Configuration cfg = HBaseConfiguration.create();
cfg.set("hbase.zookeeper.quorum", "192.168.180.137");
cfg.set("hbase.zookeeper.property.clientPort", "2181");
HBaseAdmin admin = new HBaseAdmin(cfg);
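For reference, the matching server-side hbase-site.xml entries would look roughly like this (the property names are standard HBase configuration keys; the host value is this cluster's master node):

```xml
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.180.137</value>
</property>
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
</property>
```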
First, try creating a student table:
// Create a table with the given table name and column family
public void createTable(String tableName, String columnFamily)
        throws Exception {
    if (admin.tableExists(tableName)) {
        System.out.println(tableName + " already exists, skipping creation.");
    } else {
        HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tableName));
        tableDesc.addFamily(new HColumnDescriptor(columnFamily));
        admin.createTable(tableDesc);
        System.out.println("Table created successfully!");
    }
}
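The getAllTables() call that is commented out in the test below could be sketched roughly like this, using the same 1.x HBaseAdmin field (a sketch only; it needs a running cluster):

```java
// Sketch: print the names of all tables via HBaseAdmin (HBase 1.x API)
public void getAllTables() throws Exception {
    HTableDescriptor[] tables = admin.listTables();
    for (HTableDescriptor table : tables) {
        System.out.println(table.getNameAsString());
    }
}
```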
Test it:
public static void main(String[] args) {
    try {
        HbaseTest hbase = new HbaseTest();
        hbase.createTable("student", "fam1");
        //hbase.getAllTables();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Here we can see that a student table now exists in the HBase database on the Hadoop cluster.
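You can confirm this from the HBase shell on the server; a typical session looks roughly like the following (exact formatting varies by HBase version):

```
hbase(main):001:0> list
TABLE
student
```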
Next is an example of inserting data, which mainly uses HBase's Put class:
package Hbase.HbaseTest;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class InsertData {
    public static void main(String[] args) throws IOException {
        // Build the configuration; quorum host and client port
        // must match hbase-site.xml on the server
        Configuration config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "192.168.180.137");
        config.set("hbase.zookeeper.property.clientPort", "2181");
        // Instantiating HTable class
        HTable hTable = new HTable(config, "emp");
        // Instantiating Put class; accepts a row key
        Put p = new Put(Bytes.toBytes("row1"));
        // addColumn() accepts column family, qualifier, value
        p.addColumn(Bytes.toBytes("personal data"), Bytes.toBytes("name"), Bytes.toBytes("raju"));
        p.addColumn(Bytes.toBytes("personal data"), Bytes.toBytes("city"), Bytes.toBytes("hyderabad"));
        p.addColumn(Bytes.toBytes("professional data"), Bytes.toBytes("designation"), Bytes.toBytes("manager"));
        p.addColumn(Bytes.toBytes("professional data"), Bytes.toBytes("salary"), Bytes.toBytes("50000"));
        // Saving the Put instance to the HTable
        hTable.put(p);
        System.out.println("data inserted");
        // Closing HTable
        hTable.close();
    }
}
Next is a query example, which mainly uses the get() method; here we query the name and city values from the personal data column family:
package Hbase.HbaseTest;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class RetriveData {
    public static void main(String[] args) throws IOException {
        // Build the configuration; quorum host and client port
        // must match hbase-site.xml on the server
        Configuration config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "192.168.180.137");
        config.set("hbase.zookeeper.property.clientPort", "2181");
        // Instantiating HTable class
        HTable table = new HTable(config, "emp");
        // Build a Get for the row key we want to read
        Get g = new Get(Bytes.toBytes("row1"));
        // Reading the data
        Result result = table.get(g);
        // Reading values from the Result object
        byte[] value = result.getValue(Bytes.toBytes("personal data"), Bytes.toBytes("name"));
        byte[] value1 = result.getValue(Bytes.toBytes("personal data"), Bytes.toBytes("city"));
        // Printing the values
        String name = Bytes.toString(value);
        String city = Bytes.toString(value1);
        System.out.println("name: " + name + " city: " + city);
        // Closing HTable
        table.close();
    }
}
Comparing the console output against the same query run from the HBase shell, we can see that the retrieved values match what is stored in the HBase database.
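The shell-side comparison mentioned above can be done with a get on the same row; a session would look roughly like this (timestamps elided, layout approximate):

```
hbase(main):002:0> get 'emp', 'row1'
COLUMN                          CELL
 personal data:city             timestamp=..., value=hyderabad
 personal data:name             timestamp=..., value=raju
 professional data:designation  timestamp=..., value=manager
 professional data:salary       timestamp=..., value=50000
```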