【Prerequisite】HBase is already installed and working, i.e. you can operate it normally through the hbase shell.
【HBase environment】Linux (CentOS)
【Java API】Run from Eclipse on Windows 7
【Example】This article uses a simple scan over an HBase table to verify that everything is set up correctly.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static Configuration conf;

    static {
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("hbase.zookeeper.quorum", "hadoop1");
        conf.set("hbase.master", "hadoop1:60000");
    }

    public static void main(String[] args) throws IOException {
        queryByCondition("hbasetable");
    }

    public static void queryByCondition(String tableName) throws IOException {
        HTablePool pool = new HTablePool(conf, 5);
        HTableInterface table = pool.getTable(tableName);
        Scan s = new Scan();
        ResultScanner rs = table.getScanner(s);
        try {
            for (Result r : rs) {
                for (KeyValue kv : r.raw()) {
                    // Decode with Bytes.toString: calling toString() on a byte[]
                    // prints the array reference, not the cell content.
                    String rowKey = Bytes.toString(kv.getRow());
                    String column = Bytes.toString(kv.getFamily());
                    String value = Bytes.toString(kv.getValue());
                    System.out.println("key:" + rowKey + " column:" + column + " value:" + value);
                }
            }
        } finally {
            rs.close();
            table.close();
        }
    }
}
【Problem】The simple code above kept throwing an exception (the exact stack trace is omitted here). In most cases the cause is that the HBase Java API needs to reverse-resolve IP addresses to hostnames, and reverse resolution is a DNS function.
1. Run the command: nslookup *.*.*.*   [where *.*.*.* is your own IP]
   The output shows: ** server can't find *.*.*.*.in-addr.arpa: NXDOMAIN
2. This tells us that DNS reverse resolution is failing.
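The same reverse lookup the HBase client performs can be reproduced with plain JDK calls. A minimal sketch — the IP 192.168.132.149 below is the example address used later in this article; substitute your own:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ReverseLookupCheck {

    // Performs the PTR-style reverse lookup the HBase client depends on.
    // If reverse DNS is not configured, getCanonicalHostName() falls back
    // to returning the textual IP address instead of a hostname.
    public static String reverseLookup(String ip) throws UnknownHostException {
        InetAddress addr = InetAddress.getByName(ip);
        return addr.getCanonicalHostName();
    }

    public static void main(String[] args) throws UnknownHostException {
        String ip = "192.168.132.149"; // example IP from this article
        String name = reverseLookup(ip);
        if (name.equals(ip)) {
            System.out.println("reverse lookup failed: got the bare IP back");
        } else {
            System.out.println("reverse lookup OK: " + ip + " -> " + name);
        }
    }
}
```

If this prints the bare IP back, the JVM sees the same NXDOMAIN condition that nslookup reported.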
【Solution】
1. Since reverse resolution fails, we set up our own DNS server using BIND. If you are not familiar with BIND, it is worth reading up on it first.
2. First check whether the BIND packages are already present on the Linux system:
[root@hadoop1 ~]# rpm -qa | grep bind
bind-9.3.6-25.P1.el5_11.2
ypbind-1.19-12.el5
bind-libs-9.3.6-25.P1.el5_11.2
bind-utils-9.3.6-25.P1.el5_11.2
bind-chroot-9.3.6-25.P1.el5_11.2
bind-devel-9.3.6-25.P1.el5_11.2
bind-devel-9.3.6-25.P1.el5_11.2
bind-libs-9.3.6-25.P1.el5_11.2
Make sure all the packages above are present; if any are missing, install them:
yum install -y bind bind-chroot bind-utils bind-devel bind-libs
3. Once the required packages are installed, start the service: service named start
4. Edit the configuration files [this is the key step for both forward and reverse DNS resolution].
Note: the goal is to tie the IP address to the hostname hadoop1.
4.1 Edit named.conf
In the directory /var/named/chroot/etc there is no named.conf; what exists is named.caching-nameserver.conf,
so make a copy: cp named.caching-nameserver.conf named.conf
[root@hadoop1 etc]# cat named.conf
//
// named.caching-nameserver.conf
//
// Provided by Red Hat caching-nameserver package to configure the
// ISC BIND named(8) DNS server as a caching only nameserver
// (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// DO NOT EDIT THIS FILE - use system-config-bind or an editor
// to create named.conf - edits to this file will be lost on
// caching-nameserver package upgrade.
//
options {
listen-on port 53 {any;};
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
// Those options should be used carefully because they disable port
// randomization
// query-source port 53;
// query-source-v6 port 53;
allow-query {any;};
allow-query-cache { localhost; };
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
view localhost_resolver {
match-clients {any;};
match-destinations {any;};
recursion yes;
include "/etc/named.zones";
};
The lines that need changing in the file above are: listen-on port 53 {any;}; (listen on all interfaces), allow-query {any;}; (accept queries from any client), match-clients {any;}; and match-destinations {any;}; in the view, and the include of /etc/named.zones. Note that named.zones does not exist yet, so we create it next.
4.2 cp named.rfc1912.zones named.zones
Add the following zone definitions to named.zones:
zone "hadoop1" IN {
type master;
file "hadoop1.zone";
allow-update { none; };
};
zone "168.192.in-addr.arpa" IN {
type master;
file "192.168.zone";
allow-update { none; };
};
4.3 Create hadoop1.zone and 192.168.zone
Change into the directory /var/named/chroot/var/named
cp named.zero hadoop1.zone
hadoop1.zone is configured as follows:
[root@hadoop1 named]# cat hadoop1.zone
$TTL 86400
@ IN SOA hadoop1. root (
42 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
@ IN NS hadoop1.
@ IN A 192.168.132.149
cp named.local 192.168.zone, then edit it as follows:
[root@hadoop1 named]# cat 192.168.zone
$TTL 86400
@ IN SOA hadoop1. root.hadoop1. (
42 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
@ IN NS hadoop1.
149.132 IN PTR hadoop1.   ; trailing dot required, otherwise the zone origin 168.192.in-addr.arpa is appended
5. After completing all the configuration above, restart the DNS service:
service named restart
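Once named is back up, both resolution directions can be sanity-checked from the JVM as well. A minimal sketch, assuming the hostname hadoop1 used throughout this article:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsRoundTripCheck {

    // Returns true if the hostname has a forward (A record) mapping.
    public static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) throws UnknownHostException {
        String host = "hadoop1"; // hostname used in this article
        if (resolves(host)) {
            InetAddress addr = InetAddress.getByName(host);
            // Forward lookup succeeded; now exercise the PTR record.
            System.out.println("forward: " + host + " -> " + addr.getHostAddress());
            System.out.println("reverse: " + addr.getHostAddress() + " -> " + addr.getCanonicalHostName());
        } else {
            System.out.println("forward lookup failed; check /etc/resolv.conf and the zone files");
        }
    }
}
```

If the reverse line prints the bare IP instead of hadoop1, recheck the PTR record in 192.168.zone.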
6. A second set of important points:
6.1 Make sure all the files above have the appropriate read/write permissions.
6.2 Remember to add the nameserver to /etc/resolv.conf (note the spelling: resolv.conf):
nameserver ip
6.3 Configure the hostname-to-IP mapping in /etc/hosts.
6.4 Configure the same mapping in the Windows 7 hosts file.
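Putting items 6.2 through 6.4 together, the client-side resolver settings might look like this — assuming the DNS server runs on hadoop1 at 192.168.132.149, the example address used in the zone files above:

```
# /etc/resolv.conf on the Linux side
nameserver 192.168.132.149

# /etc/hosts on Linux, and the hosts file on Windows 7
# (C:\Windows\System32\drivers\etc\hosts)
192.168.132.149   hadoop1
```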
7. Done.