org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=fsimage..)

This post describes a problem on a big-data platform where the standby NameNode failed to start because of a corrupted image file. It was resolved by stopping the services, copying the fsimage file over from the active NameNode, and restarting, and it stresses the need for caution when handling fsimage files in a production environment.

Environment: a 4-node big-data platform configured with an active and a standby NameNode.
Symptom: the standby NameNode service fails to start; the active NameNode and all DataNodes are healthy.
The HDFS startup log shows that the image file on the standby NameNode is at fault; the error reads:
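A quick way to surface such entries is to grep the standby NameNode's log. This is a minimal sketch; the log path in the example is an assumption, so adjust it to your own `$HADOOP_HOME/logs` layout:

```shell
# Show the last few image-related failure entries from a NameNode log file.
scan_nn_log() {
  grep -E 'FSImage|Failed to (load image|start namenode)' "$1" | tail -n 20
}

# Example invocation (hypothetical log path; adjust to your installation):
# scan_nn_log /var/log/hadoop/hadoop-hdfs-namenode-hadoop1.log
```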

org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=fsimage_0000000000496112425).

Fix: stop the platform's services, copy fsimage_0000000000496112425 from the active NameNode to the same directory on the standby NameNode, then restart. The service comes back up normally.
Be extremely careful whenever you touch fsimage_xxxxxxxx files in a production environment!
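The recovery steps above can be sketched as shell commands. The storage directory and standby hostname below are hypothetical placeholders, not values from this cluster; substitute your own `dfs.namenode.name.dir` and hosts:

```shell
# Pick the newest fsimage in a NameNode storage directory by transaction ID.
# The zero-padded transaction IDs make a plain lexicographic sort correct.
latest_fsimage() {
  ls "$1" 2>/dev/null | grep -E '^fsimage_[0-9]+$' | sort | tail -n 1
}

NAME_DIR=/hadoop/dfs/name/current   # hypothetical dfs.namenode.name.dir
STANDBY_HOST=nn2                    # hypothetical standby NameNode host

# 1. Stop HDFS cluster-wide before touching any metadata:
#      stop-dfs.sh
# 2. On the healthy (active) NameNode, locate the latest image:
#      img=$(latest_fsimage "$NAME_DIR")
# 3. Copy the image and its checksum file to the same path on the standby:
#      scp "$NAME_DIR/$img"     "$STANDBY_HOST:$NAME_DIR/"
#      scp "$NAME_DIR/$img.md5" "$STANDBY_HOST:$NAME_DIR/"
# 4. Restart HDFS (start-dfs.sh) and confirm the standby starts cleanly.
```

Copying the matching `.md5` file alongside the image matters, since the NameNode verifies the image checksum on load.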

25/10/24 00:40:34 WARN namenode.FSNamesystem: Encountered exception loading fsimage
java.io.FileNotFoundException: No valid image files found
	at org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector.getLatestImages(FSImageTransactionalStorageInspector.java:165)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:671)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:322)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1052)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
25/10/24 00:40:34 INFO mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@hadoop1:50070
25/10/24 00:40:34 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
25/10/24 00:40:34 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
25/10/24 00:40:34 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
25/10/24 00:40:34 ERROR namenode.NameNode: Failed to start namenode.
java.io.FileNotFoundException: No valid image files found
	at org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector.getLatestImages(FSImageTransactionalStorageInspector.java:165)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:671)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:322)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1052)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
25/10/24 00:40:34 INFO util.ExitUtil: Exiting with status 1: java.io.FileNotFoundException: No valid image files found
25/10/24 00:40:34 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.10.161
************************************************************/