Connecting to a standalone HBase remotely from Java

This article describes how to set up a standalone HBase instance on one Linux machine and access it through the Java API from another machine.

Server-side setup:
1. Version information
java: 1.8.0_172
hbase: 1.2.6

2. Set JAVA_HOME
Append the following to ~/.bash_profile:


JAVA_HOME=/usr/java/jdk1.8.0_172-amd64
export JAVA_HOME

Run the following command to make the setting take effect immediately:
source ~/.bash_profile

3. Set the hostname
hostname docker05

Note that a hostname set this way lasts only until the next reboot; persist it in /etc/hostname (or your distribution's equivalent) if needed.

4. Download HBase: [url=https://www.apache.org/dyn/closer.lua/hbase/]click to download[/url]
5. Extract the downloaded archive
$ tar xzvf hbase-1.2.6-bin.tar.gz


6. Edit the files under conf to allow remote connections
6.1 Enter the conf directory
cd ./hbase-1.2.6/conf

6.2 Edit hbase-site.xml. The most important change is the hbase.zookeeper.quorum property: it defaults to localhost, which can only be reached from the server itself, so it must be changed to the server's hostname before remote clients can connect.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-->
<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///root/data/hbase/data</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/root/data/hbase/zookeeper</value>
</property>
<property>
<name>hbase.client.retries.number</name>
<value>5</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<!-- Note: change this to your own hostname -->
<value>docker05</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
<description>
Controls whether HBase will check for stream capabilities (hflush/hsync).

Disable this if you intend to run on LocalFileSystem, denoted by a rootdir
with the 'file://' scheme, but be mindful of the NOTE below.

WARNING: Setting this to false blinds you to potential data loss and
inconsistent system state in the event of process and/or node failures. If
HBase is complaining of an inability to use hsync or hflush it's most
likely not a false positive.
</description>
</property>
</configuration>

6.3 Edit the regionservers file
Delete localhost and replace it with the hostname.

[img]http://dl2.iteye.com/upload/attachment/0129/9616/2dd6e787-b629-336c-8c41-c121abaadbf6.png[/img]

7. Start HBase
cd ./hbase-1.2.6/bin

Then run
./start-hbase.sh

Normally HBase is now up; you can check by running jps and verifying that an HMaster process is listed.

[img]http://dl2.iteye.com/upload/attachment/0129/9622/11d4dbd9-e141-3239-88eb-39efaf7cc12e.png[/img]

Client-side setup
1. Configure the hostname
The client machine must also be able to resolve the server's hostname; add an entry such as 192.168.0.172 docker05 to /etc/hosts.
2. Copy hbase-site.xml from the server into the project so that it sits at the root of the classpath (the project root here; src/main/resources in a standard Maven layout).
3. The project layout looks like this:


[img]http://dl2.iteye.com/upload/attachment/0129/9626/bc4c244c-e01b-323a-b366-97ceaca5645f.png[/img]

4. The project's pom.xml (note that hbase-client 1.3.1 here does not exactly match the 1.2.6 server; versions within the same 1.x line are generally wire-compatible, but matching them exactly is safer)

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>falcon.chengf</groupId>
<artifactId>simple-hbase-test</artifactId>
<version>0.0.1-SNAPSHOT</version>

<dependencies>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>1.3.1</version>
</dependency>
</dependencies>
</project>
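As an alternative to copying hbase-site.xml into the project, the connection properties can also be set programmatically on the client. A minimal sketch, assuming the docker05 hostname and the default ZooKeeper port 2181 from the server setup above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuorumConfig {
    public static void main(String[] args) {
        // Start from the default HBase client configuration
        Configuration config = HBaseConfiguration.create();
        // Point the client at the server's ZooKeeper quorum;
        // the hostname must be resolvable on the client (see the /etc/hosts entry)
        config.set("hbase.zookeeper.quorum", "docker05");
        config.set("hbase.zookeeper.property.clientPort", "2181");
        System.out.println(config.get("hbase.zookeeper.quorum")); // prints "docker05"
    }
}
```

This config object can then be passed to ConnectionFactory.createConnection(config) exactly as in the test class below, with no hbase-site.xml on the classpath.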

5. Test class

package simple.hbase.test;

import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;

/**
 * @author chengaofeng
 * @date 2018-05-31 16:12:35
 * @version V1.0
 */
public class HbaseTest {
    public static void main(String[] args) throws IOException {
        // Load the server's hbase-site.xml from the classpath
        Configuration config = HBaseConfiguration.create();
        InputStream input = HbaseTest.class.getResourceAsStream("/hbase-site.xml");
        config.addResource(input);
        try (Connection connection = ConnectionFactory.createConnection(config);
             Admin admin = connection.getAdmin()) {

            // Describe a table named "chengf" with a single column family "columns"
            HTableDescriptor table = new HTableDescriptor(TableName.valueOf("chengf"));
            table.addFamily(new HColumnDescriptor("columns").setCompressionType(Algorithm.NONE));

            System.out.print("Creating table. ");
            // Drop any existing table of the same name first
            if (admin.tableExists(table.getTableName())) {
                admin.disableTable(table.getTableName());
                admin.deleteTable(table.getTableName());
            }
            admin.createTable(table);
            System.out.println(" create table ok.");
        }
    }
}
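Once the table exists, reading and writing rows goes through a Table handle obtained from the same Connection. A minimal sketch against the chengf table created above; the row key "row1", qualifier "name", and value "chengf" are made up for illustration:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HbaseReadWriteTest {
    public static void main(String[] args) throws IOException {
        Configuration config = HBaseConfiguration.create();
        config.addResource(HbaseReadWriteTest.class.getResourceAsStream("/hbase-site.xml"));
        try (Connection connection = ConnectionFactory.createConnection(config);
             Table table = connection.getTable(TableName.valueOf("chengf"))) {

            // Write one cell: row "row1", family "columns", qualifier "name"
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("columns"), Bytes.toBytes("name"), Bytes.toBytes("chengf"));
            table.put(put);

            // Read the cell back
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("columns"), Bytes.toBytes("name"));
            System.out.println(Bytes.toString(value));
        }
    }
}
```

Like the test class, this needs a running server and the hbase-site.xml on the classpath; the Table handle is lightweight and should be closed after use (handled by try-with-resources here), while the Connection is heavyweight and meant to be shared.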


6. After running it, the console output looks like this:
[img]http://dl2.iteye.com/upload/attachment/0129/9624/d5e3fc4c-5895-3151-b319-9bf385490902.png[/img]
7. Log in to the HBase server; you can see that the chengf table was created successfully.

[img]http://dl2.iteye.com/upload/attachment/0129/9620/b4cd6777-3ed5-3f96-b41b-614ae333a263.png[/img]

8. Problems encountered
Initially I only set the server's hostname and did not edit hbase-site.xml and regionservers. As a result, HBase could only be operated through the shell on the server itself, and connections from the client machine kept failing with errors like org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: localhost/127.0.0.1:60020. After updating hbase-site.xml and regionservers and restarting HBase, the connection worked.
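A quick way to catch this class of problem before connecting is to print the quorum the client actually picked up from its configuration. A small sketch, assuming hbase-site.xml is on the classpath as in the test class:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuorumCheck {
    public static void main(String[] args) {
        Configuration config = HBaseConfiguration.create();
        config.addResource(QuorumCheck.class.getResourceAsStream("/hbase-site.xml"));
        // If this prints "localhost", the server's edited hbase-site.xml
        // was not copied to the client (or was not found on the classpath)
        System.out.println(config.get("hbase.zookeeper.quorum"));
    }
}
```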