Phoenix-on-HBase pitfalls (1): disk usage soars after deploying Phoenix

After starting Phoenix with `sqlline.py hostname:2181`, disk usage climbs rapidly.

Symptom:
Check disk usage with `df -h`.

After starting Phoenix, disk space usage climbed sharply. The HMaster and HRegionServer logs showed (roughly):
WAL log splitting failed, returning error

2013-09-09 11:23:05,863 DEBUG org.apache.hadoop.hbase.regionserver.HRegionServer: NotServingRegionException; Region is not online: -ROOT-,,0
2013-09-09 11:23:08,874 DEBUG org.apache.hadoop.hbase.regionserver.HRegionServer: NotServingRegionException; Region is not online: -ROOT-,,0
2013-09-09 11:23:11,898 DEBUG org.apache.hadoop.hbase.regionserver.HRegionServer: NotServingRegionException; Region is not online: -ROOT-,,0
2013-09-09 11:24:15,344 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=2.05 MB, free=247.44 MB, max=249.48 MB, blocks=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0, evictions=0, evicted=0, evictedPerRun=NaN
2013-09-09 11:24:19,977 ERROR org.apache.hadoop.hbase.regionserver.wal.HLog: Can't open after 300 attempts and 300518ms  for hdfs://opentsdb:8020/hbase/.logs/opentsdb,60020,1378358082016-splitting/opentsdb,60020,1378358082016.1378397697610
2013-09-09 11:24:19,978 INFO org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Processed 0 edits across 0 regions threw away edits for 0 regions; log file=hdfs://opentsdb:8020/hbase/.logs/opentsdb,60020,1378358082016-splitting/opentsdb,60020,1378358082016.1378397697610 is corrupted = false progress failed = false
2013-09-09 11:24:19,978 WARN org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of hdfs://opentsdb:8020/hbase/.logs/opentsdb,60020,1378358082016-splitting/opentsdb,60020,1378358082016.1378397697610 failed, returning error
Analysis:

The HRegionServer failed to split the WAL (write-ahead log), possibly because files on HDFS are damaged. Check with the `hadoop fsck /hbase` command:

FSCK started by hdfs (auth:SIMPLE) from /xxx.xxx.xxx for path / at Fri Jul 26 14:37:29 CST 2019
....................................................................................................
....................................................................................................
....................................................................................................
....................................................................................................
................................................................................................
/.../c9ddcb18-51c0-4fa7-bdab-daa5079bc094/rdd-1061487/_partitioner: Under replicated BP-770033680-172.20.36.31-1562142528050:blk_1073943459_202678. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
. 
/.../c9ddcb18-51c0-4fa7-bdab-daa5079bc094/rdd-1061487/part-00000: Under replicated BP-770033680-172.20.36.31-1562142528050:blk_1073943450_202669. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
. 
/.../c9ddcb18-51c0-4fa7-bdab-daa5079bc094/rdd-1061487/part-00001: Under replicated BP-770033680-172.20.36.31-1562142528050:blk_1073943456_202675. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
. 
/.../c9ddcb18-51c0-4fa7-bdab-daa5079bc094/rdd-1061487/part-00002: Under replicated BP-770033680-172.20.36.31-1562142528050:blk_1073943452_202671. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).

 

...

Total size: 64294925737 B (Total open files size: 672396467 B)   (total size of files under /)
Total dirs:	2743
Total files:	2439
Total symlinks:	0 (Files currently being written: 22)
Total blocks (validated):	2752 (avg. block size 23362981 B) (Total open file blocks (not validated): 26)
Minimally replicated blocks:	2752 (100.0 %)
Over-replicated blocks:	0 (0.0 %)
Under-replicated blocks:	22 (0.7994186 %)   (blocks with fewer replicas than the configured target)
Mis-replicated blocks:	0 (0.0 %)
Default replication factor:	3
Average block replication:	2.9920058
Corrupt blocks:	0
Missing replicas:	22 (0.26647288 %)   (number of missing replicas)
Number of data-nodes:	3
Number of racks:	1
FSCK ended at Fri Jul 26 14:37:29 CST 2019 in 48 milliseconds
Note this line in the output:

Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s)

Solution:

What I did this time:

  • Edit hdfs-site.xml and set dfs.replication to the number of DataNodes
  • Symlink (ln -s) hdfs-site.xml into HBase's conf directory
  • Restart Hadoop and HBase
  • Get the affected files from HDFS down to local disk
  • Delete the files on HDFS
  • Put the local copies back up

For other approaches, see:

https://www.jianshu.com/p/135dd6846618
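The get/delete/put cycle from the steps above can be sketched as a small shell function (a minimal sketch only: the example path is hypothetical and error handling is omitted):

```shell
# Sketch of the recovery cycle: copy the under-replicated file out of HDFS,
# remove it, and re-upload it so the write pipeline creates fresh replicas.
recover_file() {
  local broken_path="$1"
  local tmp
  tmp="$(mktemp -d)"
  hdfs dfs -get "$broken_path" "$tmp/"    # 1. download to local disk
  hdfs dfs -rm "$broken_path"             # 2. delete the bad copy on HDFS
  hdfs dfs -put "$tmp/$(basename "$broken_path")" "$broken_path"  # 3. re-upload
}

# Example (hypothetical path):
# recover_file /hbase/some/under-replicated/file
```

Re-uploading writes the blocks through the normal write pipeline, so each block ends up with the full target replica count again.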

1. Upload a file:

[hadoop@hadoop001 ~]$ hdfs dfs -mkdir /blockrecover
[hadoop@hadoop001 ~]$ hdfs dfs -put data/genome-scores.csv /blockrecover
[hadoop@hadoop001 ~]$ hdfs dfs -ls /blockrecover                                                            
Found 1 items
-rw-r--r--   3 hadoop hadoop  323544381 2019-08-21 18:48 /blockrecover/genome-scores.csv

Check filesystem health:

[hadoop@hadoop001 ~]$ hdfs fsck /
Connecting to namenode via http://hadoop001:50070/fsck?ugi=hadoop&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.174.121 for path / at Thu Aug 22 15:44:41 CST 2019
.Status: HEALTHY
 Total size:    323544381 B
 Total dirs:    10
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      3 (avg. block size 107848127 B)
 Minimally replicated blocks:   3 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Thu Aug 22 15:44:41 CST 2019 in 4 milliseconds

The filesystem under path '/' is HEALTHY
2. Delete one replica of one block directly on a DN node (replication factor 3)
Delete the block file and its meta file:

[hadoop@hadoop002 subdir0]$ ll
total 318444
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 22 15:27 blk_1073741825
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 22 15:27 blk_1073741825_1001.meta
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 21 18:48 blk_1073741826
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 21 18:48 blk_1073741826_1002.meta
-rw-rw-r-- 1 hadoop hadoop  55108925 Aug 21 18:48 blk_1073741827
-rw-rw-r-- 1 hadoop hadoop    430547 Aug 21 18:48 blk_1073741827_1003.meta
[hadoop@hadoop002 subdir0]$ rm -rf blk_1073741825 blk_1073741825_1001.meta
[hadoop@hadoop002 subdir0]$ ll
total 186344
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 21 18:48 blk_1073741826
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 21 18:48 blk_1073741826_1002.meta
-rw-rw-r-- 1 hadoop hadoop  55108925 Aug 21 18:48 blk_1073741827
-rw-rw-r-- 1 hadoop hadoop    430547 Aug 21 18:48 blk_1073741827_1003.meta

Restart HDFS to simulate the damage, then check with fsck:

[hadoop@hadoop001 ~]$ hdfs fsck /
Connecting to namenode via http://hadoop002:50070/fsck?ugi=hadoop&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.174.121 for path / at Thu Aug 22 15:49:54 CST 2019
.
/blockrecover/genome-scores.csv:  Under replicated BP-1685056456-192.168.174.121-1566207286072:blk_1073741825_1001. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
Status: HEALTHY
 Total size:    323544381 B
 Total dirs:    10
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      3 (avg. block size 107848127 B)
 Minimally replicated blocks:   3 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       1 (33.333332 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     2.6666667
 Corrupt blocks:                0
 Missing replicas:              1 (11.111111 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Thu Aug 22 15:49:54 CST 2019 in 26 milliseconds

The filesystem under path '/' is HEALTHY
3. Manual repair

[hadoop@hadoop001 ~]$ hdfs | grep debug

There is no output at all mentioning a debug option. So the hdfs command help does not list debug, yet the `hdfs debug` subcommand does exist — keep that in mind.

# Repair command:

[hadoop@hadoop001 ~]$ hdfs debug  recoverLease  -path /blockrecover/genome-scores.csv -retries 10

Checking directly on the DN node, the block file and its meta file have been restored:

[hadoop@hadoop002 subdir0]$ ll
total 318444
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 22 15:50 blk_1073741825
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 22 15:50 blk_1073741825_1001.meta
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 21 18:48 blk_1073741826
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 21 18:48 blk_1073741826_1002.meta
-rw-rw-r-- 1 hadoop hadoop  55108925 Aug 21 18:48 blk_1073741827
-rw-rw-r-- 1 hadoop hadoop    430547 Aug 21 18:48 blk_1073741827_1003.meta

4. Automatic repair
A damaged block replica goes unnoticed by the DN until its next directory scan; that scan runs every 6 hours:
dfs.datanode.directoryscan.interval : 21600

The block is not recovered until the DN sends its next block report to the NN; that report also runs every 6 hours:
dfs.blockreport.intervalMsec : 21600000

Recovery only starts once the NN receives the block report.
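To confirm what those two intervals are on a given cluster, the values can be read with `hdfs getconf` (a minimal sketch; assumes the HDFS client on PATH picks up the cluster configuration):

```shell
# Print the two intervals that gate automatic replica recovery.
show_repair_intervals() {
  local scan_s report_ms
  scan_s=$(hdfs getconf -confKey dfs.datanode.directoryscan.interval)  # seconds
  report_ms=$(hdfs getconf -confKey dfs.blockreport.intervalMsec)      # milliseconds
  echo "directoryscan: ${scan_s}s, blockreport: $((report_ms / 1000))s"
}
```

In the worst case a damaged replica can therefore sit unrepaired for up to the sum of the two intervals.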
5. Write dirty data into one replica of a block to corrupt it

[hadoop@hadoop002 subdir0]$ ll
total 318448
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 22 17:34 blk_1073741828
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 22 17:34 blk_1073741828_1004.meta
-rw-rw-r-- 1 hadoop hadoop 134217728 Aug 22 17:35 blk_1073741829
-rw-rw-r-- 1 hadoop hadoop   1048583 Aug 22 17:35 blk_1073741829_1005.meta
-rw-rw-r-- 1 hadoop hadoop  55108925 Aug 22 17:35 blk_1073741830
-rw-rw-r-- 1 hadoop hadoop    430547 Aug 22 17:35 blk_1073741830_1006.meta
[hadoop@hadoop002 subdir0]$ echo "gfdgfdgdf" >> blk_1073741830

Restart HDFS to simulate the damage, then check with fsck:

[hadoop@hadoop001 logs]$ hdfs fsck /
Connecting to namenode via http://hadoop001:50070/fsck?ugi=hadoop&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.174.121 for path / at Fri Aug 23 09:05:39 CST 2019
.
/blockrecover/genome-scores.csv:  Under replicated BP-1685056456-192.168.174.121-1566207286072:blk_1073741830_1006. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
Status: HEALTHY
 Total size:    323544381 B
 Total dirs:    10
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      3 (avg. block size 107848127 B)
 Minimally replicated blocks:   3 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       1 (33.333332 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     2.6666667
 Corrupt blocks:                0
 Missing replicas:              1 (11.111111 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Fri Aug 23 09:05:39 CST 2019 in 59 milliseconds

The filesystem under path '/' is HEALTHY
Locate the damaged replica of the block:

[hadoop@hadoop001 ~]$ hdfs fsck /blockrecover/genome-scores.csv -files -locations -blocks -racks 
Connecting to namenode via http://hadoop001:50070/fsck?ugi=hadoop&files=1&locations=1&blocks=1&racks=1&path=%2Fblockrecover%2Fgenome-scores.csv
FSCK started by hadoop (auth:SIMPLE) from /192.168.174.121 for path /blockrecover/genome-scores.csv at Fri Aug 23 09:57:52 CST 2019
/blockrecover/genome-scores.csv 323544381 bytes, 3 block(s):  Under replicated BP-1685056456-192.168.174.121-1566207286072:blk_1073741830_1006. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
0. BP-1685056456-192.168.174.121-1566207286072:blk_1073741828_1004 len=134217728 Live_repl=3 [/default-rack/192.168.174.123:50010, /default-rack/192.168.174.121:50010, /default-rack/192.168.174.122:50010]
1. BP-1685056456-192.168.174.121-1566207286072:blk_1073741829_1005 len=134217728 Live_repl=3 [/default-rack/192.168.174.123:50010, /default-rack/192.168.174.121:50010, /default-rack/192.168.174.122:50010]
2. BP-1685056456-192.168.174.121-1566207286072:blk_1073741830_1006 len=55108925 Live_repl=2 [/default-rack/192.168.174.123:50010, /default-rack/192.168.174.121:50010]

Status: HEALTHY
 Total size:    323544381 B
 Total dirs:    0
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      3 (avg. block size 107848127 B)
 Minimally replicated blocks:   3 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       1 (33.333332 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     2.6666667
 Corrupt blocks:                0
 Missing replicas:              1 (11.111111 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Fri Aug 23 09:57:52 CST 2019 in 2 milliseconds

The filesystem under path '/blockrecover/genome-scores.csv' is HEALTHY
Result: this output does not pinpoint which replica is the damaged one.

Fix:

Manually locate the damaged replica on the DataNode, delete the corresponding block file and meta file, then run
hdfs debug recoverLease -path /blockrecover/genome-scores.csv -retries 10

Alternatively, get the file from HDFS to local disk, delete it on HDFS, and then upload it again.
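The manual-repair steps above can be sketched as one function (a sketch only; the block ID and DataNode data directory in the example are hypothetical):

```shell
# Delete a damaged replica's block and meta files on the DataNode,
# then trigger lease recovery so HDFS re-replicates from a healthy copy.
repair_replica() {
  local blk="$1" dn_data_dir="$2" hdfs_path="$3"
  # Remove the block file and its checksum .meta file
  find "$dn_data_dir" -name "${blk}*" -delete
  # Ask the NameNode to recover the file's lease and re-replicate
  hdfs debug recoverLease -path "$hdfs_path" -retries 10
}

# Example (hypothetical DataNode data directory):
# repair_replica blk_1073741830 /data/dfs/dn/current/finalized/subdir0 /blockrecover/genome-scores.csv
```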