Installing and Using HBase (with the ZooKeeper Dependency Installation Steps)

Installing HBase requires that a Hadoop cluster and ZooKeeper are already installed and running correctly.
Hadoop is assumed to be installed already; this section covers installing ZooKeeper.
Download ZooKeeper: http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.5.5/
Download the binary package: apache-zookeeper-3.5.5-bin.tar.gz
Extract it: tar zxf apache-zookeeper-3.5.5-bin.tar.gz -C /opt
Edit the configuration file:

vim /opt/apache-zookeeper-3.5.5-bin/conf/zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
# enable the four-letter-word admin commands
4lw.commands.whitelist=*

# cluster member list
server.1=hadoop.master:2888:3888
server.2=hadoop.slave1:2888:3888
server.3=hadoop.slave2:2888:3888
  • initLimit

In cluster mode, a ZooKeeper ensemble consists of multiple zk processes: one leader, the rest followers.
When a follower first connects to the leader, a substantial amount of data may be transferred, especially if the follower's data lags far behind the leader's. initLimit sets the maximum time allowed for this initial synchronization after the connection is established.

  • syncLimit

The maximum time allowed for a message exchange (request and response) between a follower and the leader.

  • tickTime

tickTime is the base unit for the two timeouts above. For example, with tickTime=2000 and initLimit=10 as configured here, the initial-sync timeout is 2000 ms * 10 = 20 seconds.

  • server.id=host:port1:port2

id is a number identifying the zk process; the same number must be the content of the myid file in the dataDir directory.
host is the hostname or IP address the zk process runs on; port1 is the port followers use to exchange data with the leader, and port2 is the port used for leader election.

  • dataDir

Same meaning as in standalone mode, except that in cluster mode the directory must also contain a myid file. That file holds a single line whose content is a number between 1 and 255: the id from the server.id entries above, identifying this zk process.

Suppose we deploy one zk process on each of three machines to form an ensemble.
All three processes use the same zoo.cfg shown above.
Then start the zk process on each of the three machines, which brings the ensemble up:

./zkServer.sh start
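
If passwordless SSH between the nodes is already set up (typical for a Hadoop cluster), the same start command can be run on all three machines in one loop; a small sketch, assuming the install path used above:

for h in hadoop.master hadoop.slave1 hadoop.slave2; do
    ssh "$h" /opt/apache-zookeeper-3.5.5-bin/bin/zkServer.sh start
done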

If jps shows a QuorumPeerMain process, the server started successfully.
The following command connects a client to the ensemble:

./zkCli.sh -server hadoop.master:2181,hadoop.slave1:2181,hadoop.slave2:2181

In our case this failed with "connection refused": jps showed the process was not running, and the log contained the following error:

2019-10-14 14:43:28,487 [myid:] - ERROR [main:QuorumPeerMain@89] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /opt/apache-zookeeper-3.5.5-bin/bin/../conf/zoo.cfg
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:154)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:113)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
Caused by: java.lang.IllegalArgumentException: myid file is missing
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.checkValidity(QuorumPeerConfig.java:734)
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.setupQuorumPeerConfig(QuorumPeerConfig.java:605)
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:420)
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:150)
	... 2 more
Invalid config, exiting abnormally

Cause: each node in the ensemble reads the myid file to identify itself; if the file is missing, the process cannot start.
Fix: create a myid file on every machine.
Note: the myid value must be unique per machine.

cd /tmp/zookeeper   # the dataDir from zoo.cfg
echo 1 > myid       # on hadoop.master; use "echo 2" on hadoop.slave1 and "echo 3" on hadoop.slave2, matching the server.N lines
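
After the myid files exist, restart the processes. Because zoo.cfg whitelisted the four-letter commands above, each node can also be health-checked over its client port with nc; a quick sketch, hostnames as above:

/opt/apache-zookeeper-3.5.5-bin/bin/zkServer.sh restart
# "imok" means the node is serving requests
echo ruok | nc hadoop.master 2181
# reports whether this node is the leader or a follower
/opt/apache-zookeeper-3.5.5-bin/bin/zkServer.sh status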

Next step:

Installing HBase

The three VMs hadoop.master, hadoop.slave1, and hadoop.slave2 already have Hadoop and ZooKeeper installed.
Download HBase from http://hbase.apache.org/downloads.html
After downloading, extract the archive:

tar zxf hbase-2.2.1-bin.tar.gz -C /opt

Add HBase to the environment variables:

# HBase environment variables
export HBASE_HOME=/opt/hbase-2.2.1
export PATH=$PATH:$HBASE_HOME/bin
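
These two lines belong in a shell profile (e.g. /etc/profile or ~/.bashrc, whichever your cluster uses); re-source it and verify that the PATH change took effect:

source /etc/profile   # or open a new shell
hbase version         # should report HBase 2.2.1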

Configuring HBase
The configuration files live in /opt/hbase-2.2.1/conf.
(1) Configure hbase-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_221             # Java installation path
export HBASE_CLASSPATH=/opt/hbase-2.2.1/conf        # HBase classpath
export HBASE_MANAGES_ZK=false                       # do NOT let HBase manage ZooKeeper; we run our own ensemble

(2) Configure hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop.master:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop.master,hadoop.slave1,hadoop.slave2</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>

hbase.zookeeper.quorum lists the hostnames of the ZooKeeper nodes.
hbase.rootdir is HBase's storage root, set here to /hbase under the HDFS root; its host:port must match the NameNode's.
hbase.cluster.distributed set to true runs HBase in fully distributed (cluster) mode.
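
The host:port in hbase.rootdir must match fs.defaultFS from Hadoop's core-site.xml, or HMaster will not be able to reach HDFS. A quick way to check, assuming Hadoop's bin directory is on the PATH:

# print the NameNode URI that hbase.rootdir must point at
hdfs getconf -confKey fs.defaultFS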

(3) Configure conf/regionservers:

# list the nodes that should run a RegionServer
hadoop.slave1
hadoop.slave2

Per the official documentation, also copy Hadoop's hdfs-site.xml into the ${HBASE_HOME}/conf directory, as sketched below.
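
A minimal sketch of that copy, plus pushing the configured HBase directory out to the other nodes (the Hadoop path /opt/hadoop is an assumption here; adjust it to your layout):

# copy Hadoop's HDFS client settings into HBase's conf dir
cp /opt/hadoop/etc/hadoop/hdfs-site.xml /opt/hbase-2.2.1/conf/
# distribute the configured HBase install to the slaves
scp -r /opt/hbase-2.2.1 hadoop.slave1:/opt/
scp -r /opt/hbase-2.2.1 hadoop.slave2:/opt/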

Starting HBase
Hadoop and ZooKeeper must be running before HBase is started.
HBase itself only needs to be started from the master node:

./start-hbase.sh

Use the jps command to check the Java processes:
the master node should show HMaster,
the slave nodes should show HRegionServer.
If both appear, startup succeeded.
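
The master web UI offers another check; HBase 2.x serves it on port 16010 by default:

# an HTTP response here means the HMaster UI is up
curl -sI http://hadoop.master:16010 | head -1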

Using HBase

Enter the HBase shell:

hbase shell

Check the cluster status:

hbase(main):005:0> status

List all current tables:

hbase(main):001:0> list

Create a table
(table name hbase_table; the remaining arguments are column family names):

hbase(main):002:0> create 'hbase_table', 'mate_data', 'action'

View the table details:

hbase(main):004:0> desc 'hbase_table'

Add a column family:

hbase(main):007:0> alter 'hbase_table', {NAME => 'new', VERSIONS => '2'}

Delete a column family:

hbase(main):009:0> alter 'hbase_table', {NAME => 'new', METHOD => 'delete'}

Delete a table:

# first disable the table
hbase(main):016:0> disable 'hbase_table'
# then drop it
hbase(main):017:0> drop 'hbase_table'
# verify it is gone
hbase(main):018:0> list

Write data into the table and read it back:

# add a "name" column to the mate_data column family (columns can be added on the fly); the last argument is the value
hbase(main):011:0> put 'hbase_table', '1001', 'mate_data:name', 'zhangsan'
# read one row
hbase(main):015:0> get 'hbase_table','1001'
# scan the whole table
hbase(main):015:0> scan 'hbase_table'
# count the rows
hbase(main):016:0> count 'hbase_table'
# delete a cell
hbase(main):021:0> delete 'hbase_table','1001','mate_data:name'
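
For scripting, the same shell commands can be piped into hbase shell from bash, for example:

# run a single command non-interactively
echo "count 'hbase_table'" | hbase shell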

Java example: operating HBase from code
Maven dependency:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>2.2.1</version>
</dependency>

Java code:

package com.nqwl.hbase;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FilterList.Operator;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

/*
 * Test class for basic HBase operations.
 */
public class HbaseTest {
	    Configuration conf = null;
	    Connection conn = null;

	    HbaseTest() {
	        // point the client at the cluster's ZooKeeper quorum
	        conf = HBaseConfiguration.create();
	        conf.set("hbase.zookeeper.quorum", "hadoop.master,hadoop.slave1,hadoop.slave2");
	        conf.set("hbase.zookeeper.property.clientPort", "2181");
	        try {
	            conn = ConnectionFactory.createConnection(conf);
	        } catch (IOException e) {
	            e.printStackTrace();
	        }
	    }

	    public static void main(String[] args) throws IOException {
	        HbaseTest hbase = new HbaseTest();
	        hbase.scanTable();
	        hbase.conn.close();
	    }
	 
	    public void createTable() throws IOException {
	        Admin admin = conn.getAdmin();
	        if (!admin.isTableAvailable(TableName.valueOf("hbase_table"))) {
	            TableName tableName = TableName.valueOf("hbase_table");
	            // table descriptor builder
	            TableDescriptorBuilder tdb = TableDescriptorBuilder.newBuilder(tableName);
	            // column family descriptor builder
	            ColumnFamilyDescriptorBuilder cdb = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("user"));
	            // build the column family descriptor
	            ColumnFamilyDescriptor cfd = cdb.build();
	            // add the column family to the table descriptor
	            tdb.setColumnFamily(cfd);
	            // build the table descriptor
	            TableDescriptor td = tdb.build();
	            // create the table
	            //admin.addColumnFamily(tableName, cfd); // adds a column family to an existing table
	            admin.createTable(td);
	        }
	        // close the admin handle
	        admin.close();
	    }
	    // single put
	    public void insertOneData() throws IOException {
	        // create a Put; "hgs_000" is the row key
	        Put put = new Put(Bytes.toBytes("hgs_000"));
	        // arguments: column family, column qualifier, value
	        put.addColumn(Bytes.toBytes("testfm"), Bytes.toBytes("name"), Bytes.toBytes("hgs"));
	        TableName tableName = TableName.valueOf("hbase_table");
	        // get the table
	        Table table = conn.getTable(tableName);
	        // execute the put
	        table.put(put);
	        table.close();
	    }
	    // batch insert: several puts in one call
	    public void insertManyData() throws IOException {
	        Table table = conn.getTable(TableName.valueOf("hbase_table"));
	        List<Put> puts = new ArrayList<Put>();
	        Put put1 = new Put(Bytes.toBytes("hgs_001"));
	        put1.addColumn(Bytes.toBytes("testfm"),Bytes.toBytes("name") , Bytes.toBytes("wd"));
	         
	        Put put2 = new Put(Bytes.toBytes("hgs_001"));
	        put2.addColumn(Bytes.toBytes("testfm"),Bytes.toBytes("age") , Bytes.toBytes("25"));
	         
	        Put put3 = new Put(Bytes.toBytes("hgs_001"));
	        put3.addColumn(Bytes.toBytes("testfm"),Bytes.toBytes("weight") , Bytes.toBytes("60kg"));
	         
	        Put put4 = new Put(Bytes.toBytes("hgs_001"));
	        put4.addColumn(Bytes.toBytes("testfm"),Bytes.toBytes("sex") , Bytes.toBytes("男"));
	        puts.add(put1);
	        puts.add(put2);
	        puts.add(put3);
	        puts.add(put4);    
	        table.put(puts);
	        table.close();
	}
	    // insert several columns for the same row
	    public void singleRowInsert() throws IOException {
	        Table table = conn.getTable(TableName.valueOf("hbase_table"));
	         
	        Put put1 = new Put(Bytes.toBytes("hgs_005"));
	         
	        put1.addColumn(Bytes.toBytes("testfm"),Bytes.toBytes("name") , Bytes.toBytes("cm"));     
	        put1.addColumn(Bytes.toBytes("testfm"),Bytes.toBytes("age") , Bytes.toBytes("22"));      
	        put1.addColumn(Bytes.toBytes("testfm"),Bytes.toBytes("weight") , Bytes.toBytes("88kg"));
	        put1.addColumn(Bytes.toBytes("testfm"),Bytes.toBytes("sex") , Bytes.toBytes("男"));   
	         
	        table.put(put1);
	        table.close();
	    }
	    // update: HBase only ever appends new cell versions; reads return the latest version
	    public void updateData() throws IOException {
	        Table table = conn.getTable(TableName.valueOf("hbase_table"));
	        Put put1 = new Put(Bytes.toBytes("hgs_002"));
	        put1.addColumn(Bytes.toBytes("testfm"),Bytes.toBytes("weight") , Bytes.toBytes("63kg"));
	        table.put(put1);
	        table.close();
	    }
	     
	    // delete data
	    public void deleteData() throws IOException {
	        Table table = conn.getTable(TableName.valueOf("hbase_table"));
	        // the constructor argument is the row key
	        // delete the latest version of a single column
	        Delete delete1 = new Delete(Bytes.toBytes("hgs_000"));
	        delete1.addColumn(Bytes.toBytes("testfm"), Bytes.toBytes("weight"));
	        // delete all versions of several columns
	        Delete delete2 = new Delete(Bytes.toBytes("hgs_001"));
	        delete2.addColumns(Bytes.toBytes("testfm"), Bytes.toBytes("age"));
	        delete2.addColumns(Bytes.toBytes("testfm"), Bytes.toBytes("sex"));
	        // delete an entire column family from one row
	        Delete delete3 = new Delete(Bytes.toBytes("hgs_002"));
	        delete3.addFamily(Bytes.toBytes("testfm"));
	         
	        // delete a whole row
	        Delete delete4 = new Delete(Bytes.toBytes("hgs_003"));
	        table.delete(delete1);
	        table.delete(delete2);
	        table.delete(delete3);
	        table.delete(delete4);
	        table.close();
	    }
	     
	    // queries

	    public void querySingleRow() throws IOException {
	        Table table = conn.getTable(TableName.valueOf("hbase_table"));
	        // fetch a single row
	        Get get = new Get(Bytes.toBytes("hgs_000"));
	        Result set = table.get(get);
	        Cell[] cells  = set.rawCells();
	        for(Cell cell : cells) {
	            System.out.println(Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength())+"::"+
	                            Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
	        }
	        table.close();
	        //Bytes.toInt(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("password")))
	         
	    }
	    // full table scan
	    public void scanTable() throws IOException {
	        Table table = conn.getTable(TableName.valueOf("hbase_table"));
	        Scan scan = new Scan();
	        //scan.addFamily(Bytes.toBytes("info"));
	        //scan.addColumn(Bytes.toBytes("info"), Bytes.toBytes("password"));
	        //scan.setStartRow(Bytes.toBytes("wangsf_0"));
	        //scan.setStopRow(Bytes.toBytes("wangwu"));
	        ResultScanner rsacn = table.getScanner(scan);
	        for(Result rs:rsacn) {
	            String rowkey = Bytes.toString(rs.getRow());
	            System.out.println("row key :"+rowkey);
	            Cell[] cells  = rs.rawCells();
	            for(Cell cell : cells) {
	                System.out.println(Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength())+"::"+
	                                Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
	            }
	            System.out.println("-----------------------------------------");
	        }
	    }
	    // filters

	    // single-column value filter
	    public void singColumnFilter() throws IOException {
	        Table table = conn.getTable(TableName.valueOf("hbase_table"));
	        Scan scan = new Scan();
	        // arguments: column family, column qualifier, compare operator, value
	        SingleColumnValueFilter filter =  new SingleColumnValueFilter( Bytes.toBytes("testfm"),  Bytes.toBytes("name"),
	                 CompareOperator.EQUAL,  Bytes.toBytes("wd")) ;
	        scan.setFilter(filter);
	        ResultScanner scanner = table.getScanner(scan);
	        for(Result rs:scanner) {
	            String rowkey = Bytes.toString(rs.getRow());
	            System.out.println("row key :"+rowkey);
	            Cell[] cells  = rs.rawCells();
	            for(Cell cell : cells) {
	                System.out.println(Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength())+"::"+
	                                Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
	            }
	            System.out.println("-----------------------------------------");
	        }
	    }
	    // row key filter (regex match on the row key)
	    public void rowkeyFilter() throws IOException {
	        Table table = conn.getTable(TableName.valueOf("hbase_table"));
	        Scan scan = new Scan();
	        RowFilter filter = new RowFilter(CompareOperator.EQUAL,new RegexStringComparator("^hgs_00*"));
	        scan.setFilter(filter);
	        ResultScanner scanner  = table.getScanner(scan);
	        for(Result rs:scanner) {
	            String rowkey = Bytes.toString(rs.getRow());
	            System.out.println("row key :"+rowkey);
	            Cell[] cells  = rs.rawCells();
	            for(Cell cell : cells) {
	                System.out.println(Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength())+"::"+
	                                Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
	            }
	            System.out.println("-----------------------------------------");
	        }
	    }
	    // column qualifier prefix filter
	    public void columnPrefixFilter() throws IOException {
	        Table table = conn.getTable(TableName.valueOf("hbase_table"));
	        Scan scan = new Scan();
	        ColumnPrefixFilter filter = new ColumnPrefixFilter(Bytes.toBytes("name"));
	        scan.setFilter(filter);
	        ResultScanner scanner  = table.getScanner(scan);
	        for(Result rs:scanner) {
	            String rowkey = Bytes.toString(rs.getRow());
	            System.out.println("row key :"+rowkey);
	            Cell[] cells  = rs.rawCells();
	            for(Cell cell : cells) {
	                System.out.println(Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength())+"::"+
	                                Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
	            }
	            System.out.println("-----------------------------------------");
	        }
	    }
	     
	    // combining filters with a FilterList
	    public void FilterSet() throws IOException {
	        Table table = conn.getTable(TableName.valueOf("hbase_table"));
	        Scan scan = new Scan();
	        FilterList list = new FilterList(Operator.MUST_PASS_ALL);
	        SingleColumnValueFilter filter1 =  new SingleColumnValueFilter( Bytes.toBytes("testfm"),  Bytes.toBytes("age"),
	                CompareOperator.GREATER,  Bytes.toBytes("23")) ;
	        ColumnPrefixFilter filter2 = new ColumnPrefixFilter(Bytes.toBytes("weig"));
	        list.addFilter(filter1);
	        list.addFilter(filter2);
	         
	        scan.setFilter(list);
	        ResultScanner scanner  = table.getScanner(scan);
	        for(Result rs:scanner) {
	            String rowkey = Bytes.toString(rs.getRow());
	            System.out.println("row key :"+rowkey);
	            Cell[] cells  = rs.rawCells();
	            for(Cell cell : cells) {
	                System.out.println(Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength())+"::"+
	                                Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
	            }
	            System.out.println("-----------------------------------------");
	        }
	         
	    }
}
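
To try the class from the project root, one option is the Maven exec plugin; a sketch that assumes the standard Maven layout and that the machine running it can resolve hadoop.master and the other hostnames (e.g. via /etc/hosts):

# compile and run HbaseTest (exec-maven-plugin resolved by prefix)
mvn -q compile exec:java -Dexec.mainClass=com.nqwl.hbase.HbaseTest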

Pitfalls encountered
If you see this error:

ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
	at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2775)
	at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:1134)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)

Cause: Hadoop is in safe mode.
Fix: hadoop dfsadmin -safemode leave (takes Hadoop out of safe mode)
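
Before forcing it off, it is worth checking whether HDFS is actually still in safe mode; it normally exits on its own once enough blocks have reported in:

hdfs dfsadmin -safemode get    # prints "Safe mode is ON" or "... is OFF"
hdfs dfsadmin -safemode leave  # the modern (hdfs) spelling of the fix above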

If hbase-env.sh reports "export HBASE_MANAGES_ZK=false: command not found", or complains that a directory does not exist:
Workaround: comment out the line export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"; the root cause is unclear.

If you see this error:

java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component 
failures, but the underlying filesystem does not support doing so. Please check the config value of 
'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' 
points to a FileSystem mount that can provide it.

Fix: add the following to hbase-site.xml:

<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>