Preface
Today, while starting HBase in a CDH environment, the HBase Master failed to come up. Checking the HMaster logs showed that one HBase Master's log was normal, while the other kept printing SplitLogManager-related messages.
Error log
2020-06-20 20:00:54,345 WARN org.apache.hadoop.hbase.master.SplitLogManager: error while splitting logs in [hdfs://nameservice1/hbase/WALs/zfnode05.esgyn.cn,60020,1592556004866-splitting] installed = 1 but only 0 done
2020-06-20 20:00:54,345 WARN org.apache.hadoop.hbase.master.SplitLogManager: error while splitting logs in [hdfs://nameservice1/hbase/WALs/zfnode07.esgyn.cn,60020,1592556014755-splitting] installed = 1 but only 0 done
2020-06-20 20:00:54,346 WARN org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Failed serverName=zfnode05.esgyn.cn,60020,1592556004866, state=SERVER_CRASH_SPLIT_LOGS; retry
java.io.IOException: error or interrupted while splitting logs in [hdfs://nameservice1/hbase/WALs/zfnode05.esgyn.cn,60020,1592556004866-splitting] Task = installed = 1 done = 0 error = 0
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:291)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:436)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:409)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:326)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.splitLogs(ServerCrashProcedure.java:449)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:257)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:75)
at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:498)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495)
2020-06-20 20:00:54,346 WARN org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Failed serverName=zfnode07.esgyn.cn,60020,1592556014755, state=SERVER_CRASH_SPLIT_LOGS; retry
java.io.IOException: error or interrupted while splitting logs in [hdfs://nameservice1/hbase/WALs/zfnode07.esgyn.cn,60020,1592556014755-splitting] Task = installed = 1 done = 0 error = 0
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:291)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:436)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:409)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:326)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.splitLogs(ServerCrashProcedure.java:449)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:257)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:75)
at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:498)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495)
2020-06-20 20:00:54,355 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2020-06-20 20:00:54,352 INFO org.apache.zookeeper.ZooKeeper: Session: 0x372d1899c99001f closed
2020-06-20 20:00:54,352 WARN org.apache.hadoop.hbase.master.SplitLogManager: Stopped while waiting for log splits to be completed
2020-06-20 20:00:54,360 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server zfnode02.esgyn.cn,60000,1592654144721; zookeeper connection closed.
2020-06-20 20:00:54,360 WARN org.apache.hadoop.hbase.master.SplitLogManager: error while splitting logs in [hdfs://nameservice1/hbase/WALs/zfnode08.esgyn.cn,60020,1592556004592-splitting] installed = 1 but only 0 done
2020-06-20 20:00:54,360 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: master/ZFnode02.esgyn.cn/10.19.41.22:60000 exiting
2020-06-20 20:00:54,361 WARN org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Failed serverName=zfnode08.esgyn.cn,60020,1592556004592, state=SERVER_CRASH_SPLIT_LOGS; retry
java.io.IOException: error or interrupted while splitting logs in [hdfs://nameservice1/hbase/WALs/zfnode08.esgyn.cn,60020,1592556004592-splitting] Task = installed = 1 done = 0 error = 0
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:291)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:436)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:409)
at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:326)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.splitLogs(ServerCrashProcedure.java:449)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:257)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:75)
at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:498)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495)
Looking next at the RegionServer logs, the error was as follows:
2020-06-20 19:49:00,847 WARN org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination: transisition task /hbase/splitWAL/WALs%2Fzfnode04.esgyn.cn%2C60020%2C1592556004686-splitting%2Fzfnode04.esgyn.cn%252C60020%252C1592556004686.null0.1592639729895 to RESIGNED zfnode03.esgyn.cn,60020,1592652946422 failed because of version mismatch
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion for /hbase/splitWAL/WALs%2Fzfnode04.esgyn.cn%2C60020%2C1592556004686-splitting%2Fzfnode04.esgyn.cn%252C60020%252C1592556004686.null0.1592639729895
at org.apache.zookeeper.KeeperException.create(KeeperException.java:115)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1266)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:422)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:818)
at org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination.endTask(ZkSplitLogWorkerCoordination.java:595)
at org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler.process(WALSplitterHandler.java:96)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
As the error messages above indicate, the problem lies in hdfs://nameservice1/hbase/WALs/zfnode07.esgyn.cn,60020,1592556014755-splitting (and the other -splitting directories listed).
Cause
This problem occurs when a corrupted WAL cannot be split, which in turn causes HBase to shut down.

Possible causes of the exception:
- HDFS file corruption. The HDFS cluster can be checked with the hdfs fsck command.
- A previous HBase split operation failed, leaving stale -splitting directories behind under the WAL directory.
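A quick health check for the first case can be sketched as below. The /hbase root dir is the CDH default; adjust it if your cluster uses a custom hbase.rootdir. The guard keeps the sketch runnable on a machine without the hdfs client installed.

```shell
# Check HDFS for corrupt or missing blocks under the HBase WAL directory.
# A healthy run ends with "Status: HEALTHY"; corrupt WAL files show up
# as CORRUPT or MISSING blocks in the report.
if command -v hdfs >/dev/null 2>&1; then
  hdfs fsck /hbase/WALs -files -blocks
else
  echo "hdfs client not on PATH; on a cluster node run: hdfs fsck /hbase/WALs -files -blocks"
fi
```

If fsck reports corruption, the affected files need to be repaired or removed before HBase can split the WALs successfully.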
Solution
- Delete the leftover splitting files:
hadoop fs -rmr /hbase/WALs/*-splitting
- Restart the HBase cluster.
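Before deleting anything on the real cluster, it is worth verifying that the `*-splitting` glob matches only the stale directories and not any live per-RegionServer WAL directory. The sketch below simulates the cleanup against a local directory (the node names are made up) so the pattern is easy to sanity-check; the equivalent HDFS commands are shown in the trailing comments.

```shell
# Local simulation of the cleanup glob. Only directories whose names end
# in "-splitting" should match; live WAL directories must be untouched.
WALS=/tmp/demo-hbase/WALs
mkdir -p "$WALS/zfnode05.example,60020,1592556004866-splitting" \
         "$WALS/zfnode07.example,60020,1592556014755-splitting" \
         "$WALS/zfnode02.example,60020,1592654144721"   # a live WAL dir
ls -d "$WALS"/*-splitting   # dry run: lists only the two -splitting dirs
rm -r "$WALS"/*-splitting   # delete them; the live WAL dir is left alone

# On the real cluster the equivalent is:
#   hadoop fs -ls /hbase/WALs                  # inspect first (dry run)
#   hadoop fs -rm -r /hbase/WALs/*-splitting   # modern form of -rmr
```

Note that `hadoop fs -rmr` (used above) is the older, deprecated spelling of `hadoop fs -rm -r`; both delete recursively.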