HBase backup and restore use cases: two approaches, CopyTable and Export

Method 1: CopyTable

Create tables

hbase(main):001:0> create 't1',{NAME => 'f1', VERSIONS => 2},{NAME => 'f2', VERSIONS => 2}

hbase(main):001:0> create 't2',{NAME => 'f1', VERSIONS => 2},{NAME => 'f2', VERSIONS => 2}

Insert data

hbase(main):001:0> put 't1','rowkey001','f1:col1','value01'

hbase(main):001:0> put 't2','rowkey001','f1:col1','value01'


View the data

hbase(main):001:0> scan 't1'
ROW                                COLUMN+CELL                                                                                        
 rowkey001                         column=f1:col1, timestamp=1478682595754, value=value07                                             
 rowkey001                         column=f2:col1, timestamp=1478770632449, value=value08                                             
 rowkey101                         column=f2:col1, timestamp=1478770707865, value=value08                                             
 rowkey103                         column=f2:col1, timestamp=1478770719691, value=value08                                             
 rowkey104                         column=f2:col1, timestamp=1478770723336, value=value08                                             
 rowkey105                         column=f2:col1, timestamp=1478770727871, value=value08                                             
 rowkey106                         column=f2:col1, timestamp=1478770731871, value=value08                                             
 rowkey107                         column=f2:col1, timestamp=1478770735883, value=value08                                             
 rowkey108                         column=f2:col1, timestamp=1478770740942, value=value08                                             
 rowkey109                         column=f2:col1, timestamp=1478770745509, value=value08                                             
9 row(s) in 0.2800 seconds


Create the destination table for the backup, with the same column families as t1:

hbase(main):001:0> create 'newtable',{NAME => 'f1', VERSIONS => 2},{NAME => 'f2', VERSIONS => 2}

Back up table t1's data into newtable

[hadoop@masternode1 ~]$  $HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=newtable --peer.adr=slavenode1:2181:/hbase t1
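CopyTable also accepts time-range and family filters for partial copies. A minimal dry-run sketch (it only assembles and prints the command line, so no cluster is needed) using the table, peer, and timestamp values from the example above; `--starttime` is a real CopyTable option, the specific timestamp is just illustrative:

```shell
#!/bin/sh
# Dry-run sketch: assemble the CopyTable command line without running it.
# The flags (--new.name, --peer.adr, --starttime) are real CopyTable options;
# the host, table, and timestamp values are taken from the example above.
TABLE="t1"
DEST="newtable"
PEER="slavenode1:2181:/hbase"
START=1478682595754   # copy only cells written at or after this timestamp
CMD="$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable"
CMD="$CMD --new.name=$DEST --peer.adr=$PEER --starttime=$START $TABLE"
echo "$CMD"
```

CopyTable additionally supports `--endtime` and `--families` to narrow the copy further.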


hbase(main):004:0> put 't1','rowkey111','f1:col1','value100'
0 row(s) in 0.0700 seconds

hbase(main):005:0> put 't1','rowkey112','f1:col1','value100'



Drop and recreate t1 before restoring; the column families must match the backup:

create 't1',{NAME => 'f1', VERSIONS => 2},{NAME => 'f2', VERSIONS => 2}

Restore the data back into t1

$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=t1 --peer.adr=slavenode1:2181:/hbase  newtable
Method 2: Export and Import



Back up with the built-in Export tool (the trailing arguments are the maximum number of versions per cell and the start timestamp)

[hadoop@masternode1 ~]$ $HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.Export 't1' /opt/hadoop/contentBackup20161111 1 123456789

    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=152776
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=63
        HDFS: Number of bytes written=894
        HDFS: Number of read operations=4
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=4733
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=4733
        Total vcore-seconds taken by all map tasks=4733
        Total megabyte-seconds taken by all map tasks=4846592
    Map-Reduce Framework
        Map input records=11
        Map output records=11
        Input split bytes=63
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=59
        CPU time spent (ms)=2210
        Physical memory (bytes) snapshot=186703872
        Virtual memory (bytes) snapshot=943046656
        Total committed heap usage (bytes)=201326592
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=894
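Export's positional arguments follow the pattern `Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]`, so in the command above `1` keeps one version per cell and `123456789` is the start timestamp. A dry-run sketch that builds an Export command with a date-stamped backup path (the path naming is this article's convention, not an HBase requirement):

```shell
#!/bin/sh
# Dry-run sketch: build an Export command with a date-stamped output dir.
# Usage pattern: Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
# The date-stamped path mirrors this article's naming convention.
TABLE="t1"
STAMP=$(date +%Y%m%d)                      # e.g. 20161111
OUTDIR="/opt/hadoop/contentBackup$STAMP"
VERSIONS=1            # keep at most one version per cell in the backup
CMD="$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.Export"
CMD="$CMD $TABLE $OUTDIR $VERSIONS"
echo "$CMD"
```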

Restore with Import

$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.Import t1 /opt/hadoop/contentBackup20161111

2016-11-11 18:00:44,669 INFO  [main] mapreduce.Job: Running job: job_1478852519609_0006
2016-11-11 18:00:44,674 INFO  [main] mapreduce.Job: Job job_1478852519609_0006 running in uber mode : false
2016-11-11 18:00:44,674 INFO  [main] mapreduce.Job:  map 100% reduce 0%
2016-11-11 18:00:44,679 INFO  [main] mapreduce.Job: Job job_1478852519609_0006 completed successfully
2016-11-11 18:00:44,688 INFO  [main] mapreduce.Job: Counters: 30
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=152339
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1022
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=3
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
    Job Counters
        Launched map tasks=1
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=4841
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=4841
        Total vcore-seconds taken by all map tasks=4841
        Total megabyte-seconds taken by all map tasks=4957184
    Map-Reduce Framework
        Map input records=11
        Map output records=11
        Input split bytes=128
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=60
        CPU time spent (ms)=1920
        Physical memory (bytes) snapshot=182181888
        Virtual memory (bytes) snapshot=919445504
        Total committed heap usage (bytes)=201326592
    File Input Format Counters
        Bytes Read=894
    File Output Format Counters
        Bytes Written=0
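Note that Import does not create the destination table; it must already exist with matching column families. A minimal restore sketch (dry run, it only prints the commands), assuming the t1 schema created at the top of this article:

```shell
#!/bin/sh
# Dry-run sketch: Import does not create the destination table, so a restore
# is two steps: (re)create the table, then run Import over the backup dir.
TABLE="t1"
BACKUP="/opt/hadoop/contentBackup20161111"   # directory written by Export above
# DDL we would feed to `hbase shell` (schema from the create at the top):
echo "create '$TABLE',{NAME => 'f1', VERSIONS => 2},{NAME => 'f2', VERSIONS => 2}"
IMPORT_CMD="$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.Import $TABLE $BACKUP"
echo "$IMPORT_CMD"
```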

[hadoop@masternode1 ~]$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.5, r239b80456118175b340b2e562a5568b5c744252e, Sun May  8 20:29:26 PDT 2016

hbase(main):001:0> scan 't1'
ROW                                COLUMN+CELL                                                                                        
 rowkey001                         column=f1:col1, timestamp=1478682595754, value=value07                                             
 rowkey001                         column=f2:col1, timestamp=1478770632449, value=value08                                             
 rowkey101                         column=f2:col1, timestamp=1478770707865, value=value08                                             
 rowkey103                         column=f2:col1, timestamp=1478770719691, value=value08                                             
 rowkey104                         column=f2:col1, timestamp=1478770723336, value=value08                                             
 rowkey105                         column=f2:col1, timestamp=1478770727871, value=value08                                             
 rowkey106                         column=f2:col1, timestamp=1478770731871, value=value08                                             
 rowkey107                         column=f2:col1, timestamp=1478770735883, value=value08                                             
 rowkey108                         column=f2:col1, timestamp=1478770740942, value=value08                                             
 rowkey109                         column=f2:col1, timestamp=1478770745509, value=value08                                             
 rowkey111                         column=f1:col1, timestamp=1478853976976, value=value100                                            
 rowkey112                         column=f1:col1, timestamp=1478854000088, value=value100                                            
11 row(s) in 0.2670 seconds

