After part of the WAL was deleted, HBase was started normally. HBase first tried to read the WAL and logged errors repeatedly; after struggling for roughly 20 minutes it finally determined that the log was missing, gave up replaying the WAL, and completed startup normally. Since no data was being written at the time the WAL was deleted, no data was lost. If the deletion had happened right after a WAL entry was written but before the data file was flushed, a small amount of data would presumably be lost, though the client would be notified that the write failed. Overall, HBase's fault tolerance holds up quite well.
The detailed error log is as follows:
1. Logs from when the errors first appeared
2013-12-20 23:39:32,786 WARN [SplitLogWorker-hadoop03,60020,1387553065927] hdfs.DFSClient: Last block locations unavailable. Datanodes might not have reported blocks completely. Will retry for 2 times
2013-12-20 23:39:36,797 WARN [SplitLogWorker-hadoop03,60020,1387553065927] hdfs.DFSClient: Last block locations unavailable. Datanodes might not have reported blocks completely. Will retry for 1 times
2013-12-20 23:39:40,798 WARN [SplitLogWorker-hadoop03,60020,1387553065927] wal.HLogFactory: Lease should have recovered. This is not expected. Will retry
java.io.IOException: Could not obtain the last block locations.
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1958)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1936)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:731)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:165)
at org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1499)
at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:76)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1486)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1479)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1474)
at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:69)
at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.reset(SequenceFileLogReader.java:174)
at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.initReader(SequenceFileLogReader.java:183)
at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:68)
at org.apache.hadoop.hbase.regionserver.wal.HLogFacto