Summary: starting Hive fails with the following error:
Cannot create directory /tmp/hive. Name node is in safe mode.
The reported blocks 451 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 454.
The minimum number of live datanodes is not required. Safe mode will be turned off automatically once the thresholds have been reached. NamenodeHostName:nwh120
Caused by: org.apache.hadoop.ipc.RemoteException: Cannot create directory /tmp/hive/lqs/eb088783-3bce-4d6d-9f90-62faf733a1c0. Name node is in safe mode.
The reported blocks 451 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 454.
The minimum number of live datanodes is not required. Safe mode will be turned off automatically once the thresholds have been reached. NamenodeHostName:nwh120
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1545) ~[hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1491) ~[hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1388) ~[hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233) ~[hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) ~[hadoop-common-3.1.3.jar:?]
at com.sun.proxy.$Proxy29.mkdirs(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:657) ~[hadoop-hdfs-client-3.1.3.jar:?]
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_212]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_212]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) ~[hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) ~[hadoop-common-3.1.3.jar:?]
at com.sun.proxy.$Proxy30.mkdirs(Unknown Source) ~[?:?]
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2420) ~[hadoop-hdfs-client-3.1.3.jar:?]
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2396) ~[hadoop-hdfs-client-3.1.3.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1319) ~[hadoop-hdfs-client-3.1.3.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1316) ~[hadoop-hdfs-client-3.1.3.jar:?]
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1333) ~[hadoop-hdfs-client-3.1.3.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1308) ~[hadoop-hdfs-client-3.1.3.jar:?]
at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:786) ~[hive-exec-3.1.2.jar:3.1.2]
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:721) ~[hive-exec-3.1.2.jar:3.1.2]
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:627) ~[hive-exec-3.1.2.jar:3.1.2]
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:586) ~[hive-exec-3.1.2.jar:3.1.2]
at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:130) ~[hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.cli.CLIService.init(CLIService.java:115) ~[hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.CompositeService.init(CompositeService.java:59) ~[hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:230) ~[hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1036) ~[hive-service-3.1.2.jar:3.1.2]
Cause: HDFS is in safe mode. Per the message, only 451 of 454 blocks have been reported, 2 short of the 0.9990 threshold; since those blocks are gone, the NameNode can never reach the threshold and will not leave safe mode on its own. Force it out of safe mode, then delete the files whose blocks are missing (by path).
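The 0.9990 figure in the error is the dfs.namenode.safemode.threshold-pct setting (default 0.999f). Before forcing anything, it is worth confirming the current state first; a quick check (the outputs shown are what you would typically see while the NameNode is stuck):

[lqs@nwh120 logs]$ hdfs dfsadmin -safemode get
Safe mode is ON
[lqs@nwh120 logs]$ hdfs getconf -confKey dfs.namenode.safemode.threshold-pct
0.999f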
Exit safe mode:
[lqs@nwh120 logs]$ hadoop dfsadmin -safemode leave
WARNING: Use of this script to execute dfsadmin is deprecated.
WARNING: Attempting to execute replacement "hdfs dfsadmin" instead.
Safe mode is OFF
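The warning is harmless: the hadoop dfsadmin entry point is deprecated in Hadoop 3.x, and as the warning itself suggests, the equivalent hdfs command avoids it:

[lqs@nwh120 logs]$ hdfs dfsadmin -safemode leave
Safe mode is OFF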
After leaving safe mode, refresh the HDFS NameNode web UI; it will flag the files with missing blocks, and deleting those files clears the problem.
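The same cleanup can be done from the command line with fsck. A minimal sketch; /path/with/missing/blocks is a hypothetical placeholder for whatever paths fsck or the web UI actually reports, and note that -delete permanently removes those files:

[lqs@nwh120 logs]$ hdfs fsck / -list-corruptfileblocks          # list files with missing/corrupt blocks
[lqs@nwh120 logs]$ hdfs fsck /path/with/missing/blocks -delete  # delete the affected files (requires safe mode to be off)
[lqs@nwh120 logs]$ hdfs fsck /                                  # re-check; it should now report the filesystem as HEALTHY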