Manually Dumping and Splitting HBase WALs

1. Examine the test table
  hbase(main):016:0> scan 'test'
  ROW COLUMN+CELL
   row-01 column=cf1:id, timestamp=1442020353563, value=1
   row-01 column=cf1:name, timestamp=1442020382276, value=aaa
   row-02 column=cf1:id, timestamp=1442020360143, value=2
   row-02 column=cf1:name, timestamp=1442020388494, value=bbb
   row-03 column=cf1:id, timestamp=1442020364496, value=3
   row-03 column=cf1:name, timestamp=1442020393616, value=ccc
   row-04 column=cf1:id, timestamp=1442020369002, value=4
   row-04 column=cf1:name, timestamp=1442020398557, value=ddd
   row-05 column=cf1:id, timestamp=1442020373493, value=5
   row-05 column=cf1:name, timestamp=1442020404131, value=eee
  5 row(s) in 0.0550 seconds
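For context, the table and rows shown above could have been created with shell commands along these lines (a sketch only; the original post does not show the load step, and only the table name, column family, and values are taken from the scan output):

  hbase(main):001:0> create 'test', 'cf1'
  hbase(main):002:0> put 'test', 'row-01', 'cf1:id', '1'
  hbase(main):003:0> put 'test', 'row-01', 'cf1:name', 'aaa'
  hbase(main):004:0> put 'test', 'row-02', 'cf1:id', '2'
  ...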
2. Examine the HBase WAL files in HDFS
  grid@master1:~$ hadoop fs -lsr /hbase
  .....................................
  drwxr-xr-x - grid supergroup 0 2015-09-12 01:08 /hbase/.logs
  drwxr-xr-x - grid supergroup 0 2015-09-12 01:08 /hbase/.logs/slave1,60020,1442020081861
  -rw-r--r-- 3 grid supergroup 0 2015-09-12 01:08 /hbase/.logs/slave1,60020,1442020081861/slave1%2C60020%2C1442020081861.1442020084844
  drwxr-xr-x - grid supergroup 0 2015-09-12 01:08 /hbase/.logs/slave2,60020,1442020091201
  -rw-r--r-- 3 grid supergroup 0 2015-09-12 01:08 /hbase/.logs/slave2,60020,1442020091201/slave2%2C60020%2C1442020091201.1442020094247
  drwxr-xr-x - grid supergroup 0 2015-09-12 01:08 /hbase/.logs/slave3,60020,1442020081746
  -rw-r--r-- 3 grid supergroup 0 2015-09-12 01:08 /hbase/.logs/slave3,60020,1442020081746/slave3%2C60020%2C1442020081746.1442020084743
  .....................................
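Each region server writes its WAL into a directory named /hbase/.logs/<hostname>,<port>,<startcode>/. To inspect only those directories rather than recursing over the whole /hbase tree, the listing can be narrowed (a sketch using the same -lsr form as above):

  grid@master1:~$ hadoop fs -lsr /hbase/.logs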
3. Check what the org.apache.hadoop.hbase.regionserver.wal.HLog class can do
  grid@master1:~$ hbase org.apache.hadoop.hbase.regionserver.wal.HLog
  Usage: HLog <ARGS>
  Arguments:
   --dump Dump textual representation of passed one or more files
           For example: HLog --dump hdfs://example.com:9000/hbase/.logs/MACHINE/LOGFILE
   --split Split the passed directory of WAL logs
           For example: HLog --split hdfs://example.com:9000/hbase/.logs/DIR
The tool provides two functions, dump and split, shown in general form below.
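In general form, the two modes are invoked as follows (a sketch with placeholder paths; --dump also accepts extra options such as -p to print cell values and -w to filter by row key, which are used in step 5):

  hbase org.apache.hadoop.hbase.regionserver.wal.HLog --dump hdfs://<namenode>:9000/hbase/.logs/<server>/<wal-file> [-p] [-w '<row-key>']
  hbase org.apache.hadoop.hbase.regionserver.wal.HLog --split hdfs://<namenode>:9000/hbase/.logs/<server-directory>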
4. Dump the WALs
  grid@master1:~$ hbase org.apache.hadoop.hbase.regionserver.wal.HLog --dump hdfs://master1:9000/hbase/.logs/slave3,60020,1442020081746/slave3%2C60020%2C1442020081746.1442020084743
  Sequence 99 from region 1028785192 in table .META.
    Action:
      row: mytab,,1442016078397.1a941894743e7186ab65f8c462d8e7f2.
      column: info:server
      at time: Sat Sep 12 01:08:11 UTC 2015
    Action:
      row: mytab,,1442016078397.1a941894743e7186ab65f8c462d8e7f2.
      column: info:serverstartcode
      at time: Sat Sep 12 01:08:11 UTC 2015
  Sequence 100 from region 1028785192 in table .META.
    Action:
      row: mytab,90,1442016078397.c9393e0daec64352e7c8e541c8b2dce3.
      column: info:server
      at time: Sat Sep 12 01:08:11 UTC 2015
    Action:
      row: mytab,90,1442016078397.c9393e0daec64352e7c8e541c8b2dce3.
      column: info:serverstartcode
      at time: Sat Sep 12 01:08:11 UTC 2015
  ....................................................................
  grid@master1:~$ hbase org.apache.hadoop.hbase.regionserver.wal.HLog --dump hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201/slave2%2C60020%2C1442020091201.1442020094247
  Sequence 68 from region 70236052 in table -ROOT-
    Action:
      row: .META.,,1
      column: info:server
      at time: Sat Sep 12 01:08:10 UTC 2015
    Action:
      row: .META.,,1
      column: info:serverstartcode
      at time: Sat Sep 12 01:08:10 UTC 2015
  Sequence 95 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: row-01
      column: cf1:id
      at time: Sat Sep 12 01:12:33 UTC 2015
  Sequence 96 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: row-02
      column: cf1:id
      at time: Sat Sep 12 01:12:40 UTC 2015
  Sequence 97 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: row-03
      column: cf1:id
      at time: Sat Sep 12 01:12:44 UTC 2015
  Sequence 98 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: row-04
      column: cf1:id
      at time: Sat Sep 12 01:12:49 UTC 2015
  ....................................................................
  Sequence 105 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: METAROW
      column: METAFAMILY:
      at time: Sat Sep 12 01:13:39 UTC 2015
The WAL on slave3 (which hosts the .META. region) contains .META. updates for the mytab regions; the WAL on slave2 contains the -ROOT- update plus the edits written to the test table; and the WAL on slave1 contains nothing. This reflects region assignment: each region server writes a single WAL shared by all the regions it hosts, so a server's WAL only holds edits for the regions it serves. Region assignment can be confirmed from the shell, as sketched below.
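To see which server hosts which region (and therefore whose WAL an edit lands in), the catalog table can be scanned from the HBase shell, for example (a sketch against the 0.94-era .META. catalog):

  hbase(main):001:0> scan '.META.', {COLUMNS => ['info:server']}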
5. Filter by row with the -w option (adding -p to print cell values)
  grid@master1:~$ hbase org.apache.hadoop.hbase.regionserver.wal.HLog --dump hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201/slave2%2C60020%2C1442020091201.1442020094247 -p -w 'row-01'
  Sequence 95 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: row-01
      column: cf1:id
      at time: Sat Sep 12 01:12:33 UTC 2015
      value: 1
  Sequence 100 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: row-01
      column: cf1:name
      at time: Sat Sep 12 01:13:02 UTC 2015
      value: aaa
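As a cross-check, the "at time" values in the dump are simply the cell timestamps from the scan in step 1, which are milliseconds since the Unix epoch. Dropping the millisecond digits and converting row-01's cf1:id timestamp (1442020353563) with GNU date gives the same instant (a sketch):

  grid@master1:~$ date -u -d @1442020353
  Sat Sep 12 01:12:33 UTC 2015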
6. Manually split the WAL
  grid@master1:~$ hbase org.apache.hadoop.hbase.regionserver.wal.HLog --split hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201
  15/09/12 01:41:09 INFO wal.HLogSplitter: Splitting 1 hlog(s) in hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201
  15/09/12 01:41:09 DEBUG wal.HLogSplitter: Writer thread Thread[WriterThread-0,5,main]: starting
  15/09/12 01:41:09 DEBUG wal.HLogSplitter: Writer thread Thread[WriterThread-1,5,main]: starting
  15/09/12 01:41:09 DEBUG wal.HLogSplitter: Writer thread Thread[WriterThread-2,5,main]: starting
  15/09/12 01:41:09 INFO wal.HLogSplitter: Splitting hlog 1 of 1: hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201/slave2%2C60020%2C1442020091201.1442020094247, length=0
  15/09/12 01:41:09 WARN wal.HLogSplitter: File hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201/slave2%2C60020%2C1442020091201.1442020094247 might be still open, length is 0
  15/09/12 01:41:09 INFO util.FSHDFSUtils: Recovering lease on dfs file hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201/slave2%2C60020%2C1442020091201.1442020094247
  15/09/12 01:41:09 INFO util.FSHDFSUtils: recoverLease=false, attempt=0 on file=hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201/slave2%2C60020%2C1442020091201.1442020094247 after 2ms
  15/09/12 01:41:12 INFO util.FSHDFSUtils: recoverLease=true, attempt=1 on file=hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201/slave2%2C60020%2C1442020091201.1442020094247 after 3005ms
  15/09/12 01:41:12 DEBUG wal.HLogSplitter: Pushed=12 entries from hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201/slave2%2C60020%2C1442020091201.1442020094247
  15/09/12 01:41:12 INFO wal.HLogSplitter: Waiting for split writer threads to finish
  15/09/12 01:41:12 INFO util.FSUtils: FileSystem doesn't support getDefaultReplication
  15/09/12 01:41:12 INFO util.FSUtils: FileSystem doesn't support getDefaultBlockSize
  15/09/12 01:41:12 INFO util.FSUtils: FileSystem doesn't support getDefaultReplication
  15/09/12 01:41:12 INFO util.FSUtils: FileSystem doesn't support getDefaultBlockSize
  15/09/12 01:41:12 DEBUG wal.SequenceFileLogWriter: using new createWriter -- HADOOP-6840
  15/09/12 01:41:12 DEBUG wal.SequenceFileLogWriter: using new createWriter -- HADOOP-6840
  15/09/12 01:41:12 DEBUG wal.SequenceFileLogWriter: Path=hdfs://master1:9000/hbase/test/5cd0e6afea7d7843a5d28d922d6193f0/recovered.edits/0000000000000000095.temp, syncFs=true, hflush=false, compression=false
  15/09/12 01:41:12 DEBUG wal.SequenceFileLogWriter: Path=hdfs://master1:9000/hbase/-ROOT-/70236052/recovered.edits/0000000000000000068.temp, syncFs=true, hflush=false, compression=false
  15/09/12 01:41:12 DEBUG wal.HLogSplitter: Creating writer path=hdfs://master1:9000/hbase/-ROOT-/70236052/recovered.edits/0000000000000000068.temp region=70236052
  15/09/12 01:41:12 DEBUG wal.HLogSplitter: Creating writer path=hdfs://master1:9000/hbase/test/5cd0e6afea7d7843a5d28d922d6193f0/recovered.edits/0000000000000000095.temp region=5cd0e6afea7d7843a5d28d922d6193f0
  15/09/12 01:41:12 INFO wal.HLogSplitter: Split writers finished
  15/09/12 01:41:12 INFO wal.HLogSplitter: Closed path hdfs://master1:9000/hbase/test/5cd0e6afea7d7843a5d28d922d6193f0/recovered.edits/0000000000000000095.temp (wrote 11 edits in 304ms)
  15/09/12 01:41:12 INFO wal.HLogSplitter: Closed path hdfs://master1:9000/hbase/-ROOT-/70236052/recovered.edits/0000000000000000068.temp (wrote 1 edits in 305ms)
  15/09/12 01:41:12 DEBUG wal.HLogSplitter: Rename hdfs://master1:9000/hbase/test/5cd0e6afea7d7843a5d28d922d6193f0/recovered.edits/0000000000000000095.temp to hdfs://master1:9000/hbase/test/5cd0e6afea7d7843a5d28d922d6193f0/recovered.edits/0000000000000000105
  15/09/12 01:41:12 DEBUG wal.HLogSplitter: Rename hdfs://master1:9000/hbase/-ROOT-/70236052/recovered.edits/0000000000000000068.temp to hdfs://master1:9000/hbase/-ROOT-/70236052/recovered.edits/0000000000000000068
  15/09/12 01:41:12 DEBUG wal.HLogSplitter: Archived processed log hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201/slave2%2C60020%2C1442020091201.1442020094247 to hdfs://master1:9000/hbase/.oldlogs/slave2%2C60020%2C1442020091201.1442020094247
  15/09/12 01:41:12 INFO wal.HLogSplitter: hlog file splitting completed in 3576 ms for hdfs://master1:9000/hbase/.logs/slave2,60020,1442020091201
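The final DEBUG line shows the processed WAL being archived to /hbase/.oldlogs; this can be verified with a listing (a sketch):

  grid@master1:~$ hadoop fs -ls /hbase/.oldlogs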
Pay particular attention to the lines above where the splitter creates recovered.edits writers under each affected region, renames the .temp files to their final sequence-numbered names, and archives the processed WAL to /hbase/.oldlogs.
7. Re-examine the HBase files in HDFS
  grid@master1:~$ hadoop fs -lsr /hbase
  ..............................................................
  drwxr-xr-x - grid supergroup 0 2015-09-11 23:05 /hbase/-ROOT-/70236052/info
  -rw-r--r-- 3 grid supergroup 800 2015-09-11 23:05 /hbase/-ROOT-/70236052/info/17a565df18804731b4eb38bdcc95c262
  -rw-r--r-- 3 grid supergroup 1596 2015-09-11 19:21 /hbase/-ROOT-/70236052/info/924e206e22754690b08c8ec8bbf84692
  drwxr-xr-x - grid supergroup 0 2015-09-12 01:41 /hbase/-ROOT-/70236052/recovered.edits
  -rw-r--r-- 2 grid supergroup 303 2015-09-12 01:41 /hbase/-ROOT-/70236052/recovered.edits/0000000000000000068
  .................................................................
  drwxr-xr-x - grid supergroup 0 2015-09-12 01:41 /hbase/.logs
  drwxr-xr-x - grid supergroup 0 2015-09-12 01:08 /hbase/.logs/slave1,60020,1442020081861
  -rw-r--r-- 3 grid supergroup 0 2015-09-12 01:08 /hbase/.logs/slave1,60020,1442020081861/slave1%2C60020%2C1442020081861.1442020084844
  drwxr-xr-x - grid supergroup 0 2015-09-12 01:08 /hbase/.logs/slave3,60020,1442020081746
  -rw-r--r-- 3 grid supergroup 0 2015-09-12 01:08 /hbase/.logs/slave3,60020,1442020081746/slave3%2C60020%2C1442020081746.1442020084743
  .................................................................
  drwxr-xr-x - grid supergroup 0 2015-09-12 01:15 /hbase/test/5cd0e6afea7d7843a5d28d922d6193f0/cf1
  -rw-r--r-- 3 grid supergroup 954 2015-09-12 01:15 /hbase/test/5cd0e6afea7d7843a5d28d922d6193f0/cf1/6e4df2f70b504d0d87507b7152e2de39
  drwxr-xr-x - grid supergroup 0 2015-09-12 01:41 /hbase/test/5cd0e6afea7d7843a5d28d922d6193f0/recovered.edits
  -rw-r--r-- 2 grid supergroup 1408 2015-09-12 01:41 /hbase/test/5cd0e6afea7d7843a5d28d922d6193f0/recovered.edits/0000000000000000105
The WAL directory for slave2 is gone from /hbase/.logs, and new recovered.edits files have appeared under the -ROOT- region and the test region. Each file is named after the highest sequence id it contains (68 and 105 respectively), which is why the .temp file created as 0000000000000000095 in step 6 was renamed to 0000000000000000105 on close.
8. Examine the contents of the recovered.edits files
  grid@master1:~$ hbase org.apache.hadoop.hbase.regionserver.wal.HLog --dump hdfs://master1:9000/hbase/test/5cd0e6afea7d7843a5d28d922d6193f0/recovered.edits/0000000000000000105
  Sequence 95 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: row-01
      column: cf1:id
      at time: Sat Sep 12 01:12:33 UTC 2015
  Sequence 96 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: row-02
      column: cf1:id
      at time: Sat Sep 12 01:12:40 UTC 2015
  Sequence 97 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: row-03
      column: cf1:id
      at time: Sat Sep 12 01:12:44 UTC 2015
  ......................................................................
  Sequence 105 from region 5cd0e6afea7d7843a5d28d922d6193f0 in table test
    Action:
      row: METAROW
      column: METAFAMILY:
      at time: Sat Sep 12 01:13:39 UTC 2015
  grid@master1:~$ hbase org.apache.hadoop.hbase.regionserver.wal.HLog --dump hdfs://master1:9000/hbase/-ROOT-/70236052/recovered.edits/0000000000000000068
  Sequence 68 from region 70236052 in table -ROOT-
    Action:
      row: .META.,,1
      column: info:server
      at time: Sat Sep 12 01:08:10 UTC 2015
    Action:
      row: .META.,,1
      column: info:serverstartcode
      at time: Sat Sep 12 01:08:10 UTC 2015
As the dumps show, the contents match what the WAL held before splitting. When these regions are next opened, HBase replays the recovered.edits files into the MemStore, which is what happens automatically when WALs are split after a region server failure.

Source: ITPUB blog, http://blog.itpub.net/12219480/viewspace-1797992/ (please credit the original when reposting).
