Simple HBase Migration

1. Preparation

1.1 Make sure HBase is installed and running

 

1.2 Check the HBase status

 

[root@namenode ~]# hbase shell

15/07/13 10:42:47 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 1.0.0-cdh5.4.3, rUnknown, Wed Jun 24 19:34:50 PDT 2015

 

hbase(main):001:0> status

3 servers, 0 dead, 1.0000 average load

 

hbase(main):002:0>

Output like the above means HBase is available.
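A couple of extra sanity checks can also be run before the migration. A quick sketch, assuming the hbase CLI is on the PATH (output omitted):

hbase version                 # confirm the client build, e.g. 1.0.0-cdh5.4.3

echo "list" | hbase shell     # non-interactively list the existing tables

hbase hbck                    # report table/region consistency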

1.3 Create a test table

hbase(main):069:0> create 'member','member_id','address','info'  

0 row(s) in 1.3860 seconds
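Optionally, confirm in the shell that the three column families were created as intended (a sketch; output omitted):

describe 'member'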

1.4 Insert sample data

hbase(main):032:0> put'member','scutshuxue','info:age','24'

0 row(s) in 0.0790 seconds

 

hbase(main):034:0> put'member','scutshuxue','info:birthday','1987-06-17'

0 row(s) in 0.0060 seconds

hbase(main):036:0> put'member','scutshuxue','info:company','alibaba'

0 row(s) in 0.0220 seconds

hbase(main):038:0> put'member','scutshuxue','address:contry','china'

0 row(s) in 0.0090 seconds

hbase(main):040:0> put'member','scutshuxue','address:province','zhejiang'

0 row(s) in 0.0070 seconds

hbase(main):042:0> put'member','scutshuxue','address:city','hangzhou'

0 row(s) in 0.0070 seconds

hbase(main):048:0> put'member','xiaofeng','info:birthday','1987-4-17'

0 row(s) in 0.0060 seconds

hbase(main):050:0> put'member','xiaofeng','info:favorite','movie'

0 row(s) in 0.0060 seconds

hbase(main):052:0> put'member','xiaofeng','info:company','alibaba'

0 row(s) in 0.0050 seconds

hbase(main):054:0> put'member','xiaofeng','address:contry','china'

0 row(s) in 0.0070 seconds

hbase(main):056:0> put'member','xiaofeng','address:province','guangdong'

0 row(s) in 0.0080 seconds

hbase(main):058:0> put'member','xiaofeng','address:city','jieyang'

0 row(s) in 0.0070 seconds

hbase(main):060:0> put'member','xiaofeng','address:town','xianqiao'

0 row(s) in 0.0060 seconds

 

1.5 Retrieve the data

hbase(main):061:0> scan 'member'

ROW                              COLUMN+CELL                                                                               

 scutshuxue                      column=address:city, timestamp=1436753702560, value=hangzhou                              

 scutshuxue                      column=address:contry, timestamp=1436753702509, value=china                               

 scutshuxue                      column=address:province, timestamp=1436753702534, value=zhejiang                          

 scutshuxue                      column=info:age, timestamp=1436753702377, value=24                                        

 scutshuxue                      column=info:birthday, timestamp=1436753702430, value=1987-06-17                           

 scutshuxue                      column=info:company, timestamp=1436753702472, value=alibaba                               

 xiaofeng                        column=address:city, timestamp=1436753702760, value=jieyang                               

 xiaofeng                        column=address:contry, timestamp=1436753702703, value=china                               

 xiaofeng                        column=address:province, timestamp=1436753702729, value=guangdong                         

 xiaofeng                        column=address:town, timestamp=1436753702786, value=xianqiao                              

 xiaofeng                        column=info:birthday, timestamp=1436753702612, value=1987-4-17                            

 xiaofeng                        column=info:company, timestamp=1436753702678, value=alibaba                               

 xiaofeng                        column=info:favorite, timestamp=1436753702644, value=movie                                

2 row(s) in 0.0870 seconds

1.6 Check the migration environment

Log in to the server where HBase runs (usually the namenode) and check the existing directories in HDFS:

[root@namenode ~]# hadoop fs -ls /

Found 3 items

drwxr-xr-x   - hbase hbase               0 2015-07-13 10:02 /hbase

drwxrwxrwt   - hdfs  supergroup          0 2015-07-13 10:12 /tmp

drwxr-xr-x   - hdfs  supergroup          0 2015-07-13 10:12 /user

1.7 Export the HBase table

hbase org.apache.hadoop.hbase.mapreduce.Export member /tmp/member

Here member is the table name and /tmp/member is the output directory in HDFS.
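Export also takes optional arguments for the number of cell versions to keep and a start/end timestamp, which is useful for incremental backups, and the result can be verified with a directory listing. A sketch (the output directory, version count, and timestamps below are placeholder values):

# Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]

hbase org.apache.hadoop.hbase.mapreduce.Export member /tmp/member_incr 1 1436700000000 1436760000000

# confirm the exported SequenceFiles landed in HDFS

hadoop fs -ls /tmp/member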

2. Data restore

2.1 Drop the original table

hbase(main):063:0> disable 'member'

0 row(s) in 1.3760 seconds

 

hbase(main):065:0> drop 'member'

0 row(s) in 0.8590 seconds

 

hbase(main):066:0> list

TABLE                                                                                                                      

0 row(s) in 0.0100 seconds

As shown above, the member table no longer exists.

 

2.2 Re-create the table structure

Data produced by Export must be loaded back with Import, but the target table structure has to be created first.

hbase(main):069:0> create 'member','member_id','address','info'   

0 row(s) in 1.3860 seconds

 

Before the import, the newly created table is empty:

hbase(main):071:0> scan 'member'

ROW                              COLUMN+CELL                                                                               

0 row(s) in 0.0220 seconds

Now run the Import against the exported directory:

[root@namenode ~]# hbase org.apache.hadoop.hbase.mapreduce.Import member /tmp/member

After the import finishes, scan the table again:

hbase(main):072:0> scan 'member'

ROW                              COLUMN+CELL                                                                                

 scutshuxue                      column=address:city, timestamp=1436753702560, value=hangzhou                              

 scutshuxue                      column=address:contry, timestamp=1436753702509, value=china                               

 scutshuxue                      column=address:province, timestamp=1436753702534, value=zhejiang                          

 scutshuxue                      column=info:age, timestamp=1436753702377, value=24                                         

 scutshuxue                      column=info:birthday, timestamp=1436753702430, value=1987-06-17                           

 scutshuxue                      column=info:company, timestamp=1436753702472, value=alibaba                               

 xiaofeng                        column=address:city, timestamp=1436753702760, value=jieyang                               

 xiaofeng                        column=address:contry, timestamp=1436753702703, value=china                               

 xiaofeng                        column=address:province, timestamp=1436753702729, value=guangdong                         

 xiaofeng                        column=address:town, timestamp=1436753702786, value=xianqiao                              

 xiaofeng                        column=info:birthday, timestamp=1436753702612, value=1987-4-17                            

 xiaofeng                        column=info:company, timestamp=1436753702678, value=alibaba                               

 xiaofeng                        column=info:favorite, timestamp=1436753702644, value=movie                                

2 row(s) in 0.0570 seconds

The data has been restored.
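For larger tables a full scan is impractical; comparing row counts before and after the import is a lighter check. A minimal sketch, using tools that ship with HBase (run on both sides and compare the results):

hbase org.apache.hadoop.hbase.mapreduce.RowCounter member

or, inside the HBase shell:

count 'member'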

 

2.3 For migration to a different cluster, copy the exported files to the target host

[root@namenode ~]# hadoop fs -get /tmp/bak /Downloads/new

[root@namenode ~]# cd /Downloads/new/

[root@namenode new]# ll

total 4

drwxr-xr-x 2 root root 4096 Jul 13 11:05 bak

[root@namenode new]# cd bak/

[root@namenode bak]# ll

total 4

-rw-r--r-- 1 root root 771 Jul 13 11:05 part-m-00000

-rw-r--r-- 1 root root   0 Jul 13 11:05 _SUCCESS

 

Copy the directory to the target machine by whatever means the OS provides (scp, rsync, and so on), then put it back into HDFS:

[root@namenode bak]# hadoop fs -copyFromLocal /Downloads/new/bak/  /tmp/new

[root@namenode bak]# hadoop fs -ls /tmp/

Found 6 items

drwxrwxrwx   - hdfs   supergroup          0 2015-07-13 11:11 /tmp/.cloudera_health_monitoring_canary_files

drwxr-xr-x   - root   supergroup          0 2015-07-13 11:03 /tmp/bak

drwx-wx-wx   - hive   supergroup          0 2015-07-13 10:06 /tmp/hive

drwxrwxrwt   - mapred hadoop              0 2015-07-13 10:03 /tmp/logs

drwxr-xr-x   - root   supergroup          0 2015-07-13 10:34 /tmp/member

drwxr-xr-x   - root   supergroup          0 2015-07-13 11:11 /tmp/new
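If the source and target clusters can reach each other directly over the network, hadoop distcp can copy the export between the two HDFS instances without the intermediate local copy. A minimal sketch, assuming the target NameNode listens at target-nn:8020 (hostname and port are placeholders):

hadoop distcp hdfs://namenode:8020/tmp/member hdfs://target-nn:8020/tmp/member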

 

2.4 For the remaining restore steps, follow Section 2 above
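Concretely, on the target cluster the restore repeats the Section 2 commands against the copied directory. A sketch, assuming /tmp/new is the directory uploaded above:

echo "create 'member','member_id','address','info'" | hbase shell

hbase org.apache.hadoop.hbase.mapreduce.Import member /tmp/new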

Source: ITPUB blog, http://blog.itpub.net/637517/viewspace-1766829/
