Big Data Learning, Day 20

How to Understand HBase

The more-or-less official definition of HBase:

HBase is a highly reliable, high-performance, column-oriented, scalable distributed storage system. Its goal is to store and process very large data sets, and it lets you build large structured-storage clusters on inexpensive PC servers.

Highly reliable: HBase stores its data on HDFS, which replicates it.

High performance: it rides on the Hadoop distributed platform, so computation is distributed and fast.

Column-oriented: HBase is a NoSQL database that stores data column-wise, whereas MySQL stores data row-wise.

Scalable: as a distributed cluster it is easy to expand, and worker nodes can be added or removed with little effort.

P.S. The difference between row-oriented and column-oriented storage (see: "Understand row vs. column storage in one minute"):

(1) In row-oriented storage, a missing value defaults to null or 0 and still takes up space;

     in column-oriented storage, a missing value is simply absent.

(2) Say a table holds 5 rows x 10 columns and we need to read 3 specific columns:

     row-oriented storage must read 5 full rows, touching all 10 columns;

     column-oriented storage reads by column, touching only the 3 requested columns.

One of the main advantages of column-oriented storage is that it can drastically reduce the system's I/O, especially for queries over massive data, and I/O has always been one of a system's main bottlenecks.
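
The I/O difference is easy to see in a toy sketch (hypothetical Java, not HBase code; the class and method names are invented for illustration): store the same 5 x 10 table once as rows and once as columns, then count how many values must be touched to project 3 columns.

```java
public class StorageLayoutDemo {
    // row-oriented: to project a few columns we still scan every full row
    static int valuesReadRowWise(int[][] rowStore, int[] wantedCols) {
        int read = 0;
        for (int[] row : rowStore) {
            read += row.length;          // the whole row comes off "disk"
        }
        return read;
    }

    // column-oriented: only the requested column "files" are touched
    static int valuesReadColumnWise(int[][] colStore, int[] wantedCols) {
        int read = 0;
        for (int c : wantedCols) {
            read += colStore[c].length;  // one column per file
        }
        return read;
    }

    public static void main(String[] args) {
        int[][] rowStore = new int[5][10];   // 5 rows x 10 cols, stored by row
        int[][] colStore = new int[10][5];   // same table, stored by column
        int[] wanted = {1, 4, 7};            // the 3 requested columns
        System.out.println("row-wise values read:    " + valuesReadRowWise(rowStore, wanted));
        System.out.println("column-wise values read: " + valuesReadColumnWise(colStore, wanted));
    }
}
```

Row-wise the projection touches all 50 values; column-wise it touches only 15, which is exactly the I/O saving described above.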

HBase vs. Hive vs. Hadoop

See also: "Hive: concepts, principles, and its relationship to Hadoop and databases"

  1. Distinguishing by concept

(1) HBase is a key/value distributed storage system that depends on Zookeeper and HDFS;

     HBase serves requests in real time against its own storage and does not run MapReduce;

     HBase partitions data into tables, and tables are divided into column families, each of which groups columns of one kind together for querying;

     every key/value pair in HBase is called a cell. A cell is a byte array; in {key, value}, the key comprises the rowkey, column family, column qualifier, and timestamp, while the value holds the row's actual data;

     in HBase, a row is a collection of key-value mappings uniquely identified by its rowkey.

     Before HBase starts, Zookeeper and HDFS must be brought up across the cluster.
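
The cell-key structure above can be sketched in a few lines (an illustrative model with invented names, not HBase's actual byte-level encoding): the key is (rowkey, column family, qualifier, timestamp), and a "row" is simply the set of cells sharing one rowkey.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CellSketch {
    // one cell's key = {rowkey, column family, qualifier, timestamp}
    static String key(String row, String family, String qualifier, long ts) {
        return row + "/" + family + ":" + qualifier + "/" + ts;
    }

    public static void main(String[] args) {
        Map<String, String> cells = new LinkedHashMap<>();
        cells.put(key("u001", "info", "age", 1L), "20");
        cells.put(key("u001", "info", "sex", 1L), "boy");
        cells.put(key("u002", "info", "age", 1L), "19");

        // a "row" is the collection of key-value mappings under one rowkey
        long rowCells = cells.keySet().stream()
                .filter(k -> k.startsWith("u001/")).count();
        System.out.println("cells in row u001: " + rowCells);
    }
}
```

Row u001 owns two of the three cells; the rowkey prefix is what groups them into a row.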

(2) Hive is a data warehouse built on top of the Hadoop infrastructure, based on MapReduce, HDFS, and YARN;

     Hive wraps HQL statements into corresponding MapReduce programs, so data stored on HDFS can be queried through HQL;

     in effect, each HQL statement is translated into a prepackaged MapReduce job that runs and queries the data in HDFS;

     Hive relies on the YARN framework for resource scheduling;

     before Hive starts, HDFS and YARN must be brought up across the cluster.

  2. Distinguishing by use case

(1) MySQL suits real-time create/read/update/delete on dynamic data;

(2) Hive is for data analysis: it suits offline analytical queries over static data and is not suitable for real-time queries;

(3) HBase suits real-time queries over big data.

  3. From a database point of view

Relational (SQL) databases: MySQL, Oracle

Non-relational (NoSQL) databases: Redis, MongoDB, HBase

SQL-like: Hive

See also: "Relational vs. non-relational databases vs. the Hive data warehouse"

  4. Summary

(1) Hadoop is the base framework, and Hive and HBase are two different technologies built on it:

     Hive is a SQL-like warehousing tool; at heart it is an HQL wrapper around MapReduce that runs MapReduce jobs, and it suits analysis of static data;

     HBase is a NoSQL database built on Hadoop that stores key-value data and suits real-time queries over big data.

(2) Hive and HBase do not conflict and can be used together: the former for statistical queries, the latter for real-time queries;

(3) data in Hive can be written into HBase, and data in HBase can be written into Hive.

The HBase Architecture in Detail

1. HBase system architecture

(system architecture diagrams omitted)

2. HRegionServer architecture

(HRegionServer architecture diagram omitted)

WAL: the Write-Ahead Log, a file on HDFS (called HLog in earlier versions) that holds data not yet persisted.
Every write is guaranteed to reach this log before the MemStore is updated and, eventually, an HFile is written. With this scheme, if an HRegionServer crashes we can still read the log back and replay all the operations, so no data is lost. The log rolls periodically: new files are created and old ones deleted (entries already persisted to HFiles can be discarded).
WAL files live under /hbase/WALs/${HRegionServer_Name} (before 0.94, under /hbase/.logs/). Normally an HRegionServer has a single WAL instance, so all WAL writes on that server are serialized (much as log4j serializes its writes), which naturally limits performance. HBase 1.0 therefore added parallel WAL writes (MultiWAL) via HBASE-5699, implemented with multiple HDFS pipelines, one per HRegion.
BlockCache: a read cache. It keeps frequently read data in memory to improve read performance; when the cache is full, the least recently used data is evicted.
MemStore: a write cache holding data not yet written to disk. The data is sorted before it is flushed. Each column family in a Region has its own HStore, and each HStore has one MemStore and zero or more HFiles.
HFiles: the files on disk that store the sorted rows of data (KeyValues).
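
The interplay of WAL, MemStore, and HFiles can be mimicked in a few lines (a simplified sketch with invented names; real HBase does all of this per HStore, over byte arrays, with far more machinery):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class WritePathSketch {
    final List<String> wal = new ArrayList<>();               // append-only write-ahead log
    final TreeMap<String, String> memStore = new TreeMap<>(); // sorted in-memory write cache
    final List<TreeMap<String, String>> hFiles = new ArrayList<>(); // flushed, sorted "files"
    final int flushThreshold;

    WritePathSketch(int flushThreshold) { this.flushThreshold = flushThreshold; }

    void put(String rowKey, String value) {
        wal.add(rowKey + "=" + value);       // 1. log first, so a crash can be replayed
        memStore.put(rowKey, value);         // 2. then update the MemStore (kept sorted)
        if (memStore.size() >= flushThreshold) {
            hFiles.add(new TreeMap<>(memStore)); // 3. flush the sorted data as an "HFile"
            memStore.clear();
        }
    }

    public static void main(String[] args) {
        WritePathSketch region = new WritePathSketch(2);
        region.put("r2", "b");
        region.put("r1", "a");               // reaches the threshold, triggers a flush
        region.put("r3", "c");
        System.out.println("WAL entries: " + region.wal.size());
        System.out.println("HFiles flushed: " + region.hFiles.size());
        System.out.println("first HFile keys: " + region.hFiles.get(0).keySet());
    }
}
```

Note that the flushed "HFile" comes out sorted (r1 before r2) even though the puts arrived out of order, because the MemStore keeps its contents ordered, just as the text describes.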

  3. How HMaster and HRegionServer work

HRegionServer coming online

HMaster tracks the state of each HRegionServer through Zookeeper.

When an HRegionServer comes online, it first creates a file for itself in Zookeeper's server directory and takes an exclusive lock on that file.

Because HMaster subscribes to the server directory, Zookeeper notifies it in real time whenever a file is added or removed there, so HMaster learns immediately when an HRegionServer comes online.

HRegionServer going offline

When an HRegionServer goes offline, its connection to Zookeeper drops, and Zookeeper releases the exclusive lock on the file representing that server.

HMaster polls the locks on the files in the server directory. When it finds that a region server has lost its exclusive lock (or when several consecutive attempts to communicate with the HRegionServer fail), HMaster tries to acquire a read-write lock on that file. Once it succeeds, one of two things must be true:

the HRegionServer's connection to Zookeeper has been severed, or
the HRegionServer has crashed.

  Either way, HMaster deletes the file representing that server from the server directory and reassigns all of that server's regions to other live servers. If the HRegionServer lost its lock only because of a brief network partition and soon reconnects to Zookeeper, then as long as its file has not been deleted it keeps trying to reacquire the lock; once it succeeds, it resumes serving.

HMaster coming online

When HMaster starts, it:

  1. obtains the lock in Zookeeper that represents the active HMaster, preventing any other Master from becoming active;
  2. scans the server directory in Zookeeper to get the list of HRegionServers;
  3. communicates with the servers from step 2 to learn the existing assignment of regions to region servers;

    (my understanding: as covered earlier, a region's encoded name is the directory name of that region's files under HBase's data directory on HDFS, so HMaster can scan those directories to discover regions, and from each region's name look up the corresponding HRegionServer in hbase:meta)

  4. scans the hbase:meta table, records the regions not yet assigned, and adds them to the list of regions awaiting assignment.

HMaster going offline

Because HMaster takes no part in client I/O, client operations are unaffected when it goes offline. Only metadata operations suffer: tables cannot be created, schemas cannot be altered, load balancing stops, and regions cannot be merged (splits still happen, since a split involves only the HRegionServer). Client I/O continues, so a brief HMaster outage has little impact on the cluster.

Summary: HMaster and the HRegionServers register themselves by creating Ephemeral (temporary) nodes in Zookeeper.
Concretely, HMaster creates its node under /hbase/master by default and each HRegionServer creates one under /hbase/rs/*;
after creation, the relationship is maintained via heartbeats.
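
The ephemeral-node mechanism can be simulated without a real Zookeeper (an illustrative sketch; the class and paths are invented): a server's node exists only while its session is alive, and a watcher, playing the role of HMaster, is notified of every change.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class EphemeralRegistrySketch {
    // path -> owning session id; a node vanishes when its session ends
    private final Map<String, Long> nodes = new LinkedHashMap<>();
    private final List<Consumer<String>> watchers = new ArrayList<>();

    void watch(Consumer<String> onChange) { watchers.add(onChange); }

    void createEphemeral(String path, long sessionId) {
        nodes.put(path, sessionId);                       // server registers itself
        watchers.forEach(w -> w.accept("created " + path));
    }

    void sessionExpired(long sessionId) {
        // heartbeats stopped: delete the session's ephemeral nodes, notify watchers
        nodes.entrySet().removeIf(e -> {
            if (e.getValue() == sessionId) {
                watchers.forEach(w -> w.accept("deleted " + e.getKey()));
                return true;
            }
            return false;
        });
    }

    public static void main(String[] args) {
        EphemeralRegistrySketch zk = new EphemeralRegistrySketch();
        List<String> masterInbox = new ArrayList<>();
        zk.watch(masterInbox::add);                   // "HMaster" subscribes to changes
        zk.createEphemeral("/hbase/rs/node2", 101L);  // region server comes online
        zk.sessionExpired(101L);                      // heartbeats stop -> node removed
        System.out.println(masterInbox);
    }
}
```

The master's inbox ends up with both the creation and the deletion event, which is exactly how HMaster learns of region servers coming and going without polling them directly.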

HBase Components

The HBase implementation has three main functional components:

–(1) a library of functions linked into every client;

–(2) one Master server;

–(3) many Region servers.

•The Master server manages and maintains the partition metadata of HBase tables, maintains the list of Region servers, assigns Regions, and balances load.

•Each Region server stores and maintains the Regions assigned to it and handles client read and write requests.

•Clients do not read data through the Master: once they know where a Region is stored, they read directly from the Region server.

•Clients do not depend on the Master at all; they obtain Region locations through Zookeeper, and most clients never talk to the Master. This design keeps the Master's load very small.

1. The Master server:

•The Master is chiefly responsible for table and Region management:

–handling user operations that create, delete, alter, or query tables;

–balancing load across the different Region servers;

–redistributing Regions after splits or merges;

–migrating the Regions of failed Region servers.

2. Region servers
The Region server is the core module of HBase; it maintains the Regions assigned to it and answers user read and write requests.
Locating a Region:

•The metadata table, also called the .META. table, stores the mapping between Regions and Region servers.
•When an HBase table grows very large, the .META. table itself is split into multiple Regions.
•The root table, also called the -ROOT- table, records where all the metadata lives.
•-ROOT- has exactly one Region, whose name is hard-coded in the program.
•A Zookeeper file records the location of the -ROOT- table.

Clients locate data through "three-level addressing":

•To speed up addressing, the client caches location information, which also means cache invalidation must be handled.

•During addressing the client only asks the Zookeeper servers; it never needs to contact the Master server.
(addressing diagram omitted)
The names and roles of the three levels in HBase's addressing hierarchy:
(summary table omitted)

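The three levels (Zookeeper -> -ROOT- -> .META. -> region server) can be sketched as three lookups, with a client-side cache short-circuiting them on repeat queries (a hypothetical structure with invented names, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

public class ThreeLevelAddressing {
    // level 0: the Zookeeper file points at the single -ROOT- region's server
    static final String ZK_ROOT_LOCATION = "rootServer";
    // level 1: -ROOT- maps each .META. region to its server
    static final Map<String, String> ROOT = Map.of("meta-region-1", "metaServer1");
    // level 2: .META. maps each user region to its region server
    static final Map<String, String> META = Map.of("user-region-A", "rs-node2");

    static final Map<String, String> clientCache = new HashMap<>();
    static int lookups = 0;   // how many full 3-level walks we performed

    static String locate(String userRegion) {
        String cached = clientCache.get(userRegion);
        if (cached != null) return cached;        // cache hit: no addressing round-trips
        lookups++;                                // cache miss: walk all three levels
        String rootLocation = ZK_ROOT_LOCATION;   // 1. ask Zookeeper where -ROOT- lives
        String metaServer = ROOT.get("meta-region-1"); // 2. read -ROOT- for .META.
        String regionServer = META.get(userRegion);    // 3. read .META. for the region
        clientCache.put(userRegion, regionServer);
        return regionServer;
    }

    public static void main(String[] args) {
        System.out.println(locate("user-region-A")); // full 3-level walk
        System.out.println(locate("user-region-A")); // served from the client cache
        System.out.println("addressing walks: " + lookups);
    }
}
```

The second call never leaves the client cache, which is why the text stresses caching (and cache invalidation) as the key to making three-level addressing fast.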
The Client in the HBase architecture

1. The access entry point for the whole HBase cluster;
2. talks to HMaster and the HRegionServers via HBase's RPC mechanism;

3. communicates with HMaster for administrative operations;
4. communicates with HRegionServers for data reads and writes;

5. provides the interfaces for accessing HBase and maintains caches to speed up access.
The coordination service: Zookeeper
Zookeeper's roles are:

1. guaranteeing that at any moment the cluster has exactly one active HMaster;
2. storing the addressing entry point for all HRegions;

3. monitoring HRegionServers coming online and going offline in real time and notifying HMaster immediately;
4. storing HBase's schema and table metadata;

5. the Zookeeper Quorum stores the address of the -ROOT- table and of HMaster.
The master node: HMaster
HMaster's main functions:

1. HMaster is not a single point of failure: HBase can start several HMasters, and Zookeeper's Master Election mechanism guarantees one Master is always running. HMaster is mainly responsible for managing tables and Regions.
Starting multiple HMasters:
With hbase-daemons.sh, the steps are: 1) edit backup-masters in the hbase/conf directory; 2) put the backup host name(s) in it; 3) save, then run: bin/hbase-daemons.sh start master-backup.

2. managing users' create/delete/alter/query operations on tables;
3. balancing load across the HRegionServers and adjusting the distribution of Regions (the shell's "tools" command group is in fact entirely the Master's work);

4. placing the new Regions after a Region split;
5. migrating the Regions of an HRegionServer that has gone down.
The Region node: HRegionServer
HRegionServer's functions:

1. maintaining HRegions, handling their I/O requests, and reading and writing data on the HDFS file system;
2. splitting HRegions that have grown too large during operation;

3. client access to data does not involve the Master (addressing goes through Zookeeper and the HRegionServers; reads and writes go to the HRegionServers). HMaster maintains only table and Region metadata, so its load is very low.

The relationship between HBase and Zookeeper

1. HBase depends on Zookeeper
Both HMaster and the RegionServers interact with Zookeeper: when a RegionServer comes online Zookeeper learns of it and tells HMaster, and when one goes offline or disconnects Zookeeper likewise learns of it and tells HMaster. HMaster in turn manages the RegionServers and also writes Region data to HDFS.

2. By default, HBase manages its Zookeeper instance itself, e.g. starting and stopping Zookeeper;

3. HMaster and the HRegionServers register with Zookeeper at startup;

4. the introduction of Zookeeper means HMaster is no longer a single point of failure.

The HBase Read and Write Paths in Detail

(1) Read path: the client consults Zookeeper and then the appropriate RegionServer. HBase first uses the rowkey to find the right region (shard), then, for each requested column family (Store), goes to that Store's directory on disk and reads the data.

(2) Write path: the client consults Zookeeper and then the appropriate RegionServer, writing per Store: the data is first recorded in the HLog and then in the in-memory MemStore; once certain conditions are met, it is flushed in HFile format into a StoreFile and persisted to HDFS.

All HLogs produced along the way are saved to HDFS, so data can be recovered after a server dies.
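
The first half of the read path (rowkey -> region by key range -> the Store for the requested column family) can be sketched like this (a simplified model; the region names and store paths are invented for illustration):

```java
import java.util.Map;
import java.util.TreeMap;

public class ReadPathSketch {
    // regions keyed by start rowkey; floorEntry picks the region covering a key
    static final TreeMap<String, String> regionsByStartKey = new TreeMap<>(Map.of(
            "a", "region-1", "m", "region-2"));

    // per region, one Store directory per column family
    static final Map<String, Map<String, String>> stores = Map.of(
            "region-1", Map.of("info", "/hbase/data/t1/region-1/info"),
            "region-2", Map.of("info", "/hbase/data/t1/region-2/info"));

    static String locateStore(String rowKey, String columnFamily) {
        // 1. the rowkey's range determines the region (shard)
        String region = regionsByStartKey.floorEntry(rowKey).getValue();
        // 2. the column family determines the Store inside that region
        return stores.get(region).get(columnFamily);
    }

    public static void main(String[] args) {
        System.out.println(locateStore("bob", "info")); // falls in region-1 ("a" <= bob < "m")
        System.out.println(locateStore("tom", "info")); // falls in region-2 (tom >= "m")
    }
}
```

`TreeMap.floorEntry` is doing the same job as HBase's sorted region boundaries: the largest start key not exceeding the rowkey identifies the owning region.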

HBase Cluster Setup Steps

Fully distributed HBase installation:
1. Preparation
    1. networking
    2. hosts
    3. ssh
       ssh-keygen
       ssh-copy-id -i .ssh/id_rsa.pub node1
    4. time: the clocks on all nodes must agree
       date -s '2018-12-24 16:23:11'
       or use a time server:
       yum install ntpdate
       ntpdate ntp1.aliyun.com
    5. JDK version
2. Unpack and configure
    1. hbase-env.sh
       JAVA_HOME
       HBASE_MANAGES_ZK=false
    2. hbase-site.xml

		<property>
		  <name>hbase.rootdir</name>
		  <value>hdfs://bjsxt/hbase</value>
		</property>
		<property>
		  <name>hbase.cluster.distributed</name>
		  <value>true</value>
		</property>
		<property>
		  <name>hbase.zookeeper.quorum</name>
		  <value>node1,node2,node3</value>
		</property>

    3. regionservers
       node2
       node3
       node4
    4. backup-masters
       node4
    5. copy hdfs-site.xml into HBase's conf directory
3. start-hbase.sh

Basic HBase Shell Operations

Operation: shell command
Create a table: create 'table_name', 'cf_name1'[, ...]
Add a record: put 'table_name', 'rowkey', 'cf_name:column', 'value'
Read a record: get 'table_name', 'rowkey', 'cf_name:column'
Count the records in a table: count 'table_name'
Delete a record: delete 'table_name', 'rowkey', 'cf_name:column'
Drop a table (it must be disabled first): step 1, disable 'table_name'; step 2, drop 'table_name'
Scan all records: scan 'table_name'

Basic HBase Java API Operations

package org.admln.hbase;

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class OperateTable {
// declare the static HBase configuration
private static Configuration conf = null;
static {
    conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "slave1");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
}

// create a table
public static void createTable(String tableName, String[] columnFamilys)
        throws Exception {
    // create an HBase admin client
    HBaseAdmin hAdmin = new HBaseAdmin(conf);

    if (hAdmin.tableExists(tableName)) {
        System.out.println("Table already exists");
        System.exit(0);
    } else {
        // build a descriptor for the new table
        HTableDescriptor tableDesc = new HTableDescriptor(
                TableName.valueOf(tableName));
        // add the column families to the descriptor
        for (String columnFamily : columnFamilys) {
            tableDesc.addFamily(new HColumnDescriptor(columnFamily));
        }
        // create the table from the descriptor
        hAdmin.createTable(tableDesc);
        System.out.println("Table created successfully");
    }
}

// delete a table
public static void deleteTable(String tableName) throws Exception {
    // create an HBase admin client
    HBaseAdmin hAdmin = new HBaseAdmin(conf);

    if (hAdmin.tableExists(tableName)) {
        // disable the table first
        hAdmin.disableTable(tableName);
        // then delete it
        hAdmin.deleteTable(tableName);
        System.out.println("Table deleted successfully");

    } else {
        System.out.println("Table to delete does not exist");
        System.exit(0);
    }
}

// add one cell of data
public static void addRow(String tableName, String row,
        String columnFamily, String column, String value) throws Exception {
    HTable table = new HTable(conf, tableName);
    Put put = new Put(Bytes.toBytes(row));
    // arguments: column family, column qualifier, value
    put.add(Bytes.toBytes(columnFamily), Bytes.toBytes(column),
            Bytes.toBytes(value));
    table.put(put);
}

// delete one row
public static void delRow(String tableName, String row) throws Exception {
    HTable table = new HTable(conf, tableName);
    Delete del = new Delete(Bytes.toBytes(row));
    table.delete(del);
}

// delete multiple rows
public static void delMultiRows(String tableName, String[] rows)
        throws Exception {
    HTable table = new HTable(conf, tableName);
    List<Delete> list = new ArrayList<Delete>();

    for (String row : rows) {
        Delete del = new Delete(Bytes.toBytes(row));
        list.add(del);
    }

    table.delete(list);
}

// get row
public static void getRow(String tableName, String row) throws Exception {
    HTable table = new HTable(conf, tableName);
    Get get = new Get(Bytes.toBytes(row));
    Result result = table.get(get);
    // print the result
    for (KeyValue rowKV : result.raw()) {
        System.out.print("Row Name: " + new String(rowKV.getRow()) + " ");
        System.out.print("Timestamp: " + rowKV.getTimestamp() + " ");
        System.out.print("column Family: " + new String(rowKV.getFamily())
                + " ");
        System.out.print("Row Name:  " + new String(rowKV.getQualifier())
                + " ");
        System.out.println("Value: " + new String(rowKV.getValue()) + " ");
    }
}

// get all records
public static void getAllRows(String tableName) throws Exception {
    HTable table = new HTable(conf, tableName);
    Scan scan = new Scan();
    ResultScanner results = table.getScanner(scan);
    // print the results
    for (Result result : results) {
        for (KeyValue rowKV : result.raw()) {
            System.out.print("Row Name: " + new String(rowKV.getRow())
                    + " ");
            System.out.print("Timestamp: " + rowKV.getTimestamp() + " ");
            System.out.print("column Family: "
                    + new String(rowKV.getFamily()) + " ");
            System.out.print("Row Name:  "
                    + new String(rowKV.getQualifier()) + " ");
            System.out.println("Value: " + new String(rowKV.getValue())
                    + " ");
        }
    }
}

// main
public static void main(String[] args) {
    try {
        String tableName = "users2";

        // Step 1: create the table "users2"
        String[] columnFamilys = { "info", "course" };
        OperateTable.createTable(tableName, columnFamilys);

        // Step 2: add data to the table
        // first row
        OperateTable.addRow(tableName, "tht", "info", "age", "20");
        OperateTable.addRow(tableName, "tht", "info", "sex", "boy");
        OperateTable.addRow(tableName, "tht", "course", "china", "97");
        OperateTable.addRow(tableName, "tht", "course", "math", "128");
        OperateTable.addRow(tableName, "tht", "course", "english", "85");
        // second row
        OperateTable.addRow(tableName, "xiaoxue", "info", "age", "19");
        OperateTable.addRow(tableName, "xiaoxue", "info", "sex", "boy");
        OperateTable.addRow(tableName, "xiaoxue", "course", "china", "90");
        OperateTable.addRow(tableName, "xiaoxue", "course", "math", "120");
        OperateTable.addRow(tableName, "xiaoxue", "course", "english", "90");
        // third row
        OperateTable.addRow(tableName, "qingqing", "info", "age", "18");
        OperateTable.addRow(tableName, "qingqing", "info", "sex", "girl");
        OperateTable.addRow(tableName, "qingqing", "course", "china", "100");
        OperateTable.addRow(tableName, "qingqing", "course", "math", "100");
        OperateTable.addRow(tableName, "qingqing", "course", "english", "99");
        // Step 3: get one row
        System.out.println("Get one row");
        OperateTable.getRow(tableName, "tht");
        // Step 4: get all rows
        System.out.println("Get all rows");
        OperateTable.getAllRows(tableName);
        // Step 5: delete one row
        System.out.println("Delete one row");
        OperateTable.delRow(tableName, "tht");
        OperateTable.getAllRows(tableName);
        // Step 6: delete multiple rows
        System.out.println("Delete multiple rows");
        String[] rows = { "xiaoxue", "qingqing" };
        OperateTable.delMultiRows(tableName, rows);
        OperateTable.getAllRows(tableName);
        // Step 7: delete the table
        System.out.println("Delete table");
        OperateTable.deleteTable(tableName);

    } catch (Exception err) {
        err.printStackTrace();
    }
}
}


A sample run produces the following output (ZooKeeper client log lines interleaved):

2014-11-11 14:09:00,368 WARN  util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-11-11 14:09:00,455 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2014-11-11 14:09:00,455 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:host.name=WZJ-20140910JYZ
2014-11-11 14:09:00,455 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.version=1.7.0_72
2014-11-11 14:09:00,455 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.vendor=Oracle Corporation
2014-11-11 14:09:00,455 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.home=C:\Program Files (x86)\Java\jre7
2014-11-11 14:09:00,455 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.class.path= ...(omitted)
2014-11-11 14:09:00,456 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
2014-11-11 14:09:00,456 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.compiler=<NA>
2014-11-11 14:09:00,456 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.name=Windows 7
2014-11-11 14:09:00,456 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.arch=x86
2014-11-11 14:09:00,456 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.version=6.1
2014-11-11 14:09:00,456 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.name=Administrator
2014-11-11 14:09:00,456 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.home=C:\Users\Administrator
2014-11-11 14:09:00,456 INFO  zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.dir=D:\eclipseWorkspace32\hbase
2014-11-11 14:09:00,458 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=slave1:2181 sessionTimeout=90000 watcher=hconnection-0x21801b
2014-11-11 14:09:00,482 INFO  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0x21801b connecting to ZooKeeper ensemble=slave1:2181
2014-11-11 14:09:00,498 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server slave1/192.168.126.135:2181. Will not attempt to authenticate using SASL (unknown error)
2014-11-11 14:09:00,500 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to slave1/192.168.126.135:2181, initiating session
2014-11-11 14:09:00,513 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server slave1/192.168.126.135:2181, sessionid = 0x1499d5df1d6002d, negotiated timeout = 90000
2014-11-11 14:09:00,680 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=slave1:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x21801b
2014-11-11 14:09:00,683 INFO  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x21801b connecting to ZooKeeper ensemble=slave1:2181
2014-11-11 14:09:00,683 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server slave1/192.168.126.135:2181. Will not attempt to authenticate using SASL (unknown error)
2014-11-11 14:09:00,684 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to slave1/192.168.126.135:2181, initiating session
2014-11-11 14:09:00,686 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server slave1/192.168.126.135:2181, sessionid = 0x1499d5df1d6002e, negotiated timeout = 90000
2014-11-11 14:09:00,711 INFO  Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-11-11 14:09:01,078 INFO  zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1499d5df1d6002e closed
2014-11-11 14:09:01,078 INFO  zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
2014-11-11 14:09:01,248 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=slave1:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x21801b
2014-11-11 14:09:01,250 INFO  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x21801b connecting to ZooKeeper ensemble=slave1:2181
2014-11-11 14:09:01,251 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server slave1/192.168.126.135:2181. Will not attempt to authenticate using SASL (unknown error)
2014-11-11 14:09:01,252 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to slave1/192.168.126.135:2181, initiating session
2014-11-11 14:09:01,254 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server slave1/192.168.126.135:2181, sessionid = 0x1499d5df1d6002f, negotiated timeout = 90000
2014-11-11 14:09:01,263 INFO  zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1499d5df1d6002f closed
2014-11-11 14:09:01,263 INFO  zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
Table created successfully
Get one row
Row Name: tht Timestamp: 1415686141316 column Family: course Row Name:  china Value: 97 
Row Name: tht Timestamp: 1415686141343 column Family: course Row Name:  english Value: 85 
Row Name: tht Timestamp: 1415686141340 column Family: course Row Name:  math Value: 128 
Row Name: tht Timestamp: 1415686141303 column Family: info Row Name:  age Value: 20 
Row Name: tht Timestamp: 1415686141313 column Family: info Row Name:  sex Value: boy 
Get all rows
Row Name: qingqing Timestamp: 1415686141375 column Family: course Row Name:  china Value: 100 
Row Name: qingqing Timestamp: 1415686141381 column Family: course Row Name:  english Value: 99 
Row Name: qingqing Timestamp: 1415686141378 column Family: course Row Name:  math Value: 100 
Row Name: qingqing Timestamp: 1415686141362 column Family: info Row Name:  age Value: 18 
Row Name: qingqing Timestamp: 1415686141365 column Family: info Row Name:  sex Value: girl 
Row Name: tht Timestamp: 1415686141316 column Family: course Row Name:  china Value: 97 
Row Name: tht Timestamp: 1415686141343 column Family: course Row Name:  english Value: 85 
Row Name: tht Timestamp: 1415686141340 column Family: course Row Name:  math Value: 128 
Row Name: tht Timestamp: 1415686141303 column Family: info Row Name:  age Value: 20 
Row Name: tht Timestamp: 1415686141313 column Family: info Row Name:  sex Value: boy 
Row Name: xiaoxue Timestamp: 1415686141353 column Family: course Row Name:  china Value: 90 
Row Name: xiaoxue Timestamp: 1415686141359 column Family: course Row Name:  english Value: 90 
Row Name: xiaoxue Timestamp: 1415686141355 column Family: course Row Name:  math Value: 120 
Row Name: xiaoxue Timestamp: 1415686141347 column Family: info Row Name:  age Value: 19 
Row Name: xiaoxue Timestamp: 1415686141350 column Family: info Row Name:  sex Value: boy 
Delete one row
Row Name: qingqing Timestamp: 1415686141375 column Family: course Row Name:  china Value: 100 
Row Name: qingqing Timestamp: 1415686141381 column Family: course Row Name:  english Value: 99 
Row Name: qingqing Timestamp: 1415686141378 column Family: course Row Name:  math Value: 100 
Row Name: qingqing Timestamp: 1415686141362 column Family: info Row Name:  age Value: 18 
Row Name: qingqing Timestamp: 1415686141365 column Family: info Row Name:  sex Value: girl 
Row Name: xiaoxue Timestamp: 1415686141353 column Family: course Row Name:  china Value: 90 
Row Name: xiaoxue Timestamp: 1415686141359 column Family: course Row Name:  english Value: 90 
Row Name: xiaoxue Timestamp: 1415686141355 column Family: course Row Name:  math Value: 120 
Row Name: xiaoxue Timestamp: 1415686141347 column Family: info Row Name:  age Value: 19 
Row Name: xiaoxue Timestamp: 1415686141350 column Family: info Row Name:  sex Value: boy 
Delete multiple rows
Delete table
2014-11-11 14:09:01,449 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=slave1:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x21801b
2014-11-11 14:09:01,451 INFO  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x21801b connecting to ZooKeeper ensemble=slave1:2181
2014-11-11 14:09:01,452 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server slave1/192.168.126.135:2181. Will not attempt to authenticate using SASL (unknown error)
2014-11-11 14:09:01,452 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to slave1/192.168.126.135:2181, initiating session
2014-11-11 14:09:01,454 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server slave1/192.168.126.135:2181, sessionid = 0x1499d5df1d60030, negotiated timeout = 90000
2014-11-11 14:09:01,459 INFO  zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1499d5df1d60030 closed
2014-11-11 14:09:01,459 INFO  zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
2014-11-11 14:09:01,460 INFO  client.HBaseAdmin (HBaseAdmin.java:call(908)) - Started disable of users
2014-11-11 14:09:01,479 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=slave1:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x21801b
2014-11-11 14:09:01,481 INFO  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x21801b connecting to ZooKeeper ensemble=slave1:2181
2014-11-11 14:09:01,481 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server slave1/192.168.126.135:2181. Will not attempt to authenticate using SASL (unknown error)
2014-11-11 14:09:01,482 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to slave1/192.168.126.135:2181, initiating session
2014-11-11 14:09:01,486 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server slave1/192.168.126.135:2181, sessionid = 0x1499d5df1d60031, negotiated timeout = 90000
2014-11-11 14:09:01,500 INFO  zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1499d5df1d60031 closed
2014-11-11 14:09:01,500 INFO  zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
2014-11-11 14:09:01,601 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=slave1:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x21801b
2014-11-11 14:09:01,603 INFO  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x21801b connecting to ZooKeeper ensemble=slave1:2181
2014-11-11 14:09:01,604 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server slave1/192.168.126.135:2181. Will not attempt to authenticate using SASL (unknown error)
2014-11-11 14:09:01,604 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to slave1/192.168.126.135:2181, initiating session
2014-11-11 14:09:01,606 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server slave1/192.168.126.135:2181, sessionid = 0x1499d5df1d60032, negotiated timeout = 90000
2014-11-11 14:09:01,614 INFO  zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1499d5df1d60032 closed
2014-11-11 14:09:01,614 INFO  zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
2014-11-11 14:09:01,816 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=slave1:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x21801b
2014-11-11 14:09:01,823 INFO  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x21801b connecting to ZooKeeper ensemble=slave1:2181
2014-11-11 14:09:01,828 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server slave1/192.168.126.135:2181. Will not attempt to authenticate using SASL (unknown error)
2014-11-11 14:09:01,831 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to slave1/192.168.126.135:2181, initiating session
2014-11-11 14:09:01,837 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server slave1/192.168.126.135:2181, sessionid = 0x1499d5df1d60033, negotiated timeout = 90000
2014-11-11 14:09:01,856 INFO  zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1499d5df1d60033 closed
2014-11-11 14:09:01,856 INFO  zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
2014-11-11 14:09:02,157 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=slave1:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x21801b
2014-11-11 14:09:02,161 INFO  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x21801b connecting to ZooKeeper ensemble=slave1:2181
2014-11-11 14:09:02,163 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server slave1/192.168.126.135:2181. Will not attempt to authenticate using SASL (unknown error)
2014-11-11 14:09:02,164 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to slave1/192.168.126.135:2181, initiating session
2014-11-11 14:09:02,168 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server slave1/192.168.126.135:2181, sessionid = 0x1499d5df1d60034, negotiated timeout = 90000
2014-11-11 14:09:02,185 INFO  zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1499d5df1d60034 closed
2014-11-11 14:09:02,185 INFO  zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
2014-11-11 14:09:02,686 INFO  zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=slave1:2181 sessionTimeout=90000 watcher=catalogtracker-on-hconnection-0x21801b
2014-11-11 14:09:02,691 INFO  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x21801b connecting to ZooKeeper ensemble=slave1:2181
2014-11-11 14:09:02,691 INFO  zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server slave1/192.168.126.135:2181. Will not attempt to authenticate using SASL (unknown error)
2014-11-11 14:09:02,693 INFO  zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(849)) - Socket connection established to slave1/192.168.126.135:2181, initiating session
2014-11-11 14:09:02,696 INFO  zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1207)) - Session establishment complete on server slave1/192.168.126.135:2181, sessionid = 0x1499d5df1d60035, negotiated timeout = 90000
2014-11-11 14:09:02,721 INFO  zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x1499d5df1d60035 closed
2014-11-11 14:09:02,722 INFO  zookeeper.ClientCnxn (ClientCnxn.java:run(509)) - EventThread shut down
2014-11-11 14:09:02,723 INFO  client.HBaseAdmin (HBaseAdmin.java:disableTable(963)) - Disabled users
2014-11-11 14:09:02,874 INFO  client.HBaseAdmin (HBaseAdmin.java:deleteTable(695)) - Deleted users
Table deleted successfully