
HBase in Action (4): Using Java to Operate a Distributed HBase Cluster

    The HBase development and test program runs in IDEA on Windows 10, while Hadoop, HBase, and the rest of the cluster are deployed in VMware virtual machines running Linux. The local IDEA project on Windows connects to the HBase cluster in the virtual machines and operates on it.

    1. Edit the HOSTS file under C:\Windows\System32\drivers\etc and add the cluster hostnames:

192.168.189.1 master
192.168.189.2 worker1
192.168.189.3 worker2
192.168.189.4 worker3
    
Microsoft Windows [Version 10.0.16299.371]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\Users\lenovo>ping master

Pinging master [192.168.189.1] with 32 bytes of data:
Reply from 192.168.189.1: bytes=32 time<1ms TTL=64
Reply from 192.168.189.1: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.1:
    Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
Control-C
^C
C:\Users\lenovo>ping worker1

Pinging worker1 [192.168.189.2] with 32 bytes of data:
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.2:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\lenovo>ping worker3

Pinging worker3 [192.168.189.4] with 32 bytes of data:
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.4:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\lenovo>
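
If the hostnames resolve correctly, the cluster can also be reached by name from Java code. The following minimal sketch (the class name HostCheck is illustrative, not part of the project) resolves each hostname through the HOSTS file:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostCheck {
    public static void main(String[] args) {
        // Hostnames assumed to be mapped in the Windows HOSTS file edited above.
        String[] hosts = {"master", "worker1", "worker2", "worker3"};
        for (String host : hosts) {
            try {
                InetAddress address = InetAddress.getByName(host);
                System.out.println(host + " -> " + address.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(host + " could not be resolved: " + e.getMessage());
            }
        }
    }
}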

2. Create a new Maven project and write the pom.xml so Maven downloads the HBase dependencies.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>noc_hbase_test</groupId>
    <artifactId>noc_hbase_test</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <scala.version>2.11.8</scala.version>
        <spark.version>2.2.1</spark.version>
        <jedis.version>2.8.2</jedis.version>
        <fastjson.version>1.2.14</fastjson.version>
        <jetty.version>9.2.5.v20141112</jetty.version>
        <container.version>2.17</container.version>
        <java.version>1.8</java.version>
        <hbase.version>1.2.0</hbase.version>
    </properties>


    <repositories>
        <repository>
            <id>scala-tools.org</id>
            <name>Scala-Tools Maven2 Repository</name>
            <url>http://scala-tools.org/repo-releases</url>
        </repository>
    </repositories>

    <pluginRepositories>
        <pluginRepository>
            <id>scala-tools.org</id>
            <name>Scala-Tools Maven2 Repository</name>
            <url>http://scala-tools.org/repo-releases</url>
        </pluginRepository>
    </pluginRepositories>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase -->
        <!-- HBase dependencies -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-common</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.0</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.0</version>
        </dependency>


    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <classifier>dist</classifier>
                    <appendAssemblyId>true</appendAssemblyId>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>${java.version}</source>
                    <target>${java.version}</target>
                </configuration>
            </plugin>

            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                    <recompileMode>incremental</recompileMode>
                    <useZincServer>true</useZincServer>
                    <args>
                        <arg>-unchecked</arg>
                        <arg>-deprecation</arg>
                        <arg>-feature</arg>
                    </args>
                    <jvmArgs>
                        <jvmArg>-Xms1024m</jvmArg>
                        <jvmArg>-Xmx1024m</jvmArg>
                    </jvmArgs>
                    <javacArgs>
                        <javacArg>-source</javacArg>
                        <javacArg>${java.version}</javacArg>
                        <javacArg>-target</javacArg>
                        <javacArg>${java.version}</javacArg>
                        <javacArg>-Xlint:all,-serial,-path</javacArg>
                    </javacArgs>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.antlr</groupId>
                <artifactId>antlr4-maven-plugin</artifactId>
                <version>4.3</version>
                <executions>
                    <execution>
                        <id>antlr</id>
                        <goals>
                            <goal>antlr4</goal>
                        </goals>
                        <phase>none</phase>
                    </execution>
                </executions>
                <configuration>
                    <outputDirectory>src/test/java</outputDirectory>
                    <listener>true</listener>
                    <treatWarningsAsErrors>true</treatWarningsAsErrors>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

3. Place the Hadoop and HBase configuration files from the Linux environment, hbase-site.xml and hdfs-site.xml, in the IDEA project.

hbase-site.xml

<configuration>

    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>

    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>

    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>192.168.189.1:2181,192.168.189.2:2181,192.168.189.3:2181</value>
    </property>

    <property>
        <name>hbase.master.info.port</name>
        <value>60010</value>
    </property>

</configuration>

hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp/dfs/data</value>
    </property>
</configuration>
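
Once hbase-site.xml is on the project's classpath (for example under src/main/resources; the exact location depends on the project layout), HBaseConfiguration.create() loads it automatically, so the cluster settings do not have to be hard-coded. A minimal sketch to verify this, with the illustrative class name HbaseConfigCheck:

package HbaseTest;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HbaseConfigCheck {
    public static void main(String[] args) {
        // create() merges hbase-default.xml and any hbase-site.xml found on the classpath.
        Configuration conf = HBaseConfiguration.create();
        System.out.println("hbase.rootdir          = " + conf.get("hbase.rootdir"));
        System.out.println("hbase.zookeeper.quorum = " + conf.get("hbase.zookeeper.quorum"));
    }
}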
HBase test code (HbaseMyTest.java):
package HbaseTest;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

import java.io.IOException;

public class HbaseMyTest {
    // Shared by HbaseUtils: the HBase configuration, connection, and Admin handle.
    public static Configuration configuration;
    public static Connection connection;
    public static Admin admin;

    public static void main(String[] args) throws IOException {
        listTables();
    }

    // Connects to the cluster and prints the name of every table in HBase.
    public static void listTables() throws IOException {
        HbaseUtils.init();
        HTableDescriptor[] hTableDescriptors = admin.listTables();
        for (HTableDescriptor hTableDescriptor : hTableDescriptors) {
            System.out.println("HBase table name queried from the local IDEA program: " + hTableDescriptor.getNameAsString());
        }
        HbaseUtils.close();
    }
}
HbaseUtils.java:

package HbaseTest;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.ConnectionFactory;

import java.io.IOException;

public class HbaseUtils {
    // Builds the client configuration and opens the connection and Admin handle.
    public static void init() {
        HbaseMyTest.configuration = HBaseConfiguration.create();
        // Explicitly set the ZooKeeper client port, ZooKeeper quorum, and HMaster address of the cluster.
        HbaseMyTest.configuration.set("hbase.zookeeper.property.clientPort", "2181");
        HbaseMyTest.configuration.set("hbase.zookeeper.quorum", "192.168.189.1,192.168.189.2,192.168.189.3");
        HbaseMyTest.configuration.set("hbase.master", "192.168.189.1:60000");

        try {
            HbaseMyTest.connection = ConnectionFactory.createConnection(HbaseMyTest.configuration);
            HbaseMyTest.admin = HbaseMyTest.connection.getAdmin();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Releases the Admin handle and the connection.
    public static void close() {
        try {
            if (null != HbaseMyTest.admin)
                HbaseMyTest.admin.close();
            if (null != HbaseMyTest.connection)
                HbaseMyTest.connection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
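
Listing tables only exercises the Admin API; the same connection also supports reads and writes. The sketch below is illustrative (the class name HbasePutGetTest and the table name test_table are not part of the original project): it creates a table with one column family, writes a single cell, and reads it back with the HBase 1.x client API.

package HbaseTest;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.IOException;

public class HbasePutGetTest {
    public static void main(String[] args) throws IOException {
        HbaseUtils.init();
        TableName tableName = TableName.valueOf("test_table"); // illustrative table name

        // Create the table with one column family "cf" if it does not exist yet.
        if (!HbaseMyTest.admin.tableExists(tableName)) {
            HTableDescriptor descriptor = new HTableDescriptor(tableName);
            descriptor.addFamily(new HColumnDescriptor("cf"));
            HbaseMyTest.admin.createTable(descriptor);
        }

        Table table = HbaseMyTest.connection.getTable(tableName);
        try {
            // Write one cell: row "row1", column cf:name, value "hbase".
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes("hbase"));
            table.put(put);

            // Read the cell back and print its value.
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("name"));
            System.out.println("cf:name = " + Bytes.toString(value));
        } finally {
            table.close();
            HbaseUtils.close();
        }
    }
}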

The run output lists the name of each table in the HBase cluster, one per line (screenshot of the IDEA console omitted).





