Error output:
ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2932)
at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:1084)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Error in the log file:
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1606)
at org.apache.hadoop.ipc.Client.call(Client.java:1435)
... 31 more
2020-09-07 20:59:20,963 ERROR [Thread-15] master.HMaster: ***** ABORTING master hadoop102,16000,1599482835095: Unhandled exception. Starting shutdown. *****
java.net.ConnectException: Call From hadoop102/192.168.6.102 to hadoop102:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
at org.apache.hadoop.ipc.Client.call(Client.java:1491)
at org.apache.hadoop.ipc.Client.call(Client.java:1388)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy17.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:785)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy18.setSafeMode(Unknown Source)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:372)
at com.sun.proxy.$Proxy19.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2051)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1475)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1459)
at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:292)
at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:698)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:241)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:823)
at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2241)
at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:567)
at java.lang.Thread.run(Thread.java:748)
2020-09-07 21:06:54,869 INFO [Thread-15] util.FSUtils: Waiting for dfs to exit safe mode...
2020-09-07 21:07:04,874 INFO [Thread-15] util.FSUtils: Waiting for dfs to exit safe mode...
2020-09-07 21:07:14,878 INFO [Thread-15] util.FSUtils: Waiting for dfs to exit safe mode...
2020-09-07 21:07:24,884 INFO [Thread-15] util.FSUtils: Waiting for dfs to exit safe mode...
2020-09-07 21:07:34,889 INFO [Thread-15] util.FSUtils: Waiting for dfs to exit safe mode...
Problem analysis: when HDFS enters safe mode, HBase cannot start; the HMaster keeps printing "Waiting for dfs to exit safe mode..." until HDFS leaves safe mode.
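Before restarting anything, it helps to confirm the NameNode's safe mode state with `hdfs dfsadmin -safemode get`. A minimal sketch of gating HBase startup on that output (the helper name `safe_to_start_hbase` is ours, not part of Hadoop or HBase):

```shell
# Decide from the output of `hdfs dfsadmin -safemode get` whether it is
# safe to start HBase. The function name and structure are our own sketch.
safe_to_start_hbase() {
  # $1: output of `hdfs dfsadmin -safemode get`, e.g. "Safe mode is ON"
  case "$1" in
    *"Safe mode is OFF"*) return 0 ;;  # NameNode is writable, HBase can start
    *)                    return 1 ;;  # still in safe mode (or unknown state)
  esac
}

# Typical use on a node that can reach the NameNode:
# status="$(hdfs dfsadmin -safemode get)"
# safe_to_start_hbase "$status" && start-hbase.sh
```

`hdfs dfsadmin -safemode wait` can also be used to block until the NameNode leaves safe mode on its own, which is the right choice when safe mode is just normal startup block-report catch-up rather than a stuck state.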
Exiting safe mode:
1. Use the fsck command to check for corrupt blocks:
./bin/hdfs fsck /
2. On the NameNode host, use the dfsadmin command to leave safe mode:
./bin/hdfs dfsadmin -safemode leave
3. Use fsck to delete files whose blocks are lost (fsck requires a path argument):
./bin/hdfs fsck / -delete
4. Restart the HDFS services.
5. Restart HBase.
6. If inconsistencies remain, run ./bin/hbase hbck -repair (HBase 1.x; in HBase 2.x, hbck is read-only and repairs are done with the separate HBCK2 tool).
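Steps 1-4 above can be condensed into one hedged script. The function name `recover_hbase_from_safemode` is ours, and it assumes the `hdfs` launcher is on PATH (e.g. $HADOOP_HOME/bin); restarting the daemons is left as a manual reminder because the exact start/stop scripts vary by deployment:

```shell
# Sketch of the recovery steps above; run on the NameNode host.
recover_hbase_from_safemode() {
  hdfs fsck /                     # 1. report corrupt or missing blocks
  hdfs dfsadmin -safemode leave   # 2. force the NameNode out of safe mode
  hdfs fsck / -delete             # 3. delete files whose blocks are unrecoverable
  echo "now restart the HDFS daemons, then HBase"   # 4-5. manual restarts
}
```

Note that `-safemode leave` only removes the symptom: if blocks are genuinely missing (for example, DataNodes are down), fix the underlying storage problem first, or the NameNode may re-enter safe mode.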
HBase provides the hbck command to check for various inconsistencies, including hbase:meta inconsistencies. It verifies that the state of the data held in memory by the Master and RegionServers agrees with the state of the data on HDFS.
hbck can not only detect inconsistencies but also repair them.
In production, run hbck regularly so that inconsistencies are caught early, while they are still easy to fix.
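A hedged sketch of such a routine check-only run (the wrapper name `check_hbase_consistency` is ours; it assumes the `hbase` launcher is on PATH and that hbck exits non-zero when it finds inconsistencies, as HBase 1.x hbck does):

```shell
# Run hbck in its default read-only mode and report the result.
check_hbase_consistency() {
  if hbase hbck; then
    echo "hbck: consistent"
  else
    echo "hbck: inconsistencies found, investigate before running -repair"
  fi
}
```

Keeping the check separate from the repair is deliberate: `-repair` rewrites region state, so it should only be run after the read-only report has been reviewed.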