Notes on Handling an HDFS JournalNode (JN) Migration Incident

The cluster runs CDH 6.3.2, and one of the three HDFS JournalNodes (JNs) needed to be migrated to another node. While the JN migration was underway in the CDH admin console, I accidentally deleted the JN data directory that the migration task had just created on the target node, which set off a chain of problems. First, the JN error log from after the data directory was deleted:

2021-06-19 11:43:09,759 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JournalNode
STARTUP_MSG:   host = prod-bigdata-pc6/10.5.2.136
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.0.0-cdh6.3.2
...
STARTUP_MSG:   java = 1.8.0_181
************************************************************/
2021-06-19 11:43:09,824 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-06-19 11:43:09,955 ERROR org.apache.hadoop.hdfs.qjournal.server.JournalNode: Failed to start JournalNode.
org.apache.hadoop.util.DiskChecker$DiskErrorException: Cannot create directory: /data/dfs/jn
        at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:98)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:77)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNode.validateAndCreateJournalDir(JournalNode.java:167)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNode.start(JournalNode.java:189)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNode.run(JournalNode.java:177)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNode.main(JournalNode.java:371)
2021-06-19 11:43:09,977 ERROR org.apache.hadoop.hdfs.qjournal.server.JournalNode: Failed to start journalnode.
org.apache.hadoop.util.DiskChecker$DiskErrorException: Cannot create directory: /data/dfs/jn
        at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:98)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:77)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNode.validateAndCreateJournalDir(JournalNode.java:167)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNode.start(JournalNode.java:189)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNode.run(JournalNode.java:177)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNode.main(JournalNode.java:371)
2021-06-19 11:43:09,980 INFO org.apache.hadoop.util.ExitUtil: Exiting with status -1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Cannot create directory: /data/dfs/jn
2021-06-19 11:43:09,984 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JournalNode at prod-bigdata-pc6/10.5.2.136
************************************************************/

From this log we can see that the JN failed to start because it could not create the /data/dfs/jn directory. I then created the data directory by hand (a sketch follows) and restarted the JN, but it still failed to start; the log from that attempt appears after the sketch.
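
A minimal sketch of that manual step, assuming CDH defaults (the JN runs as the hdfs user; the group and mode below are assumptions, so verify them against a healthy JN node with ls -ld):

    # Recreate the JN data directory named in the error above.
    mkdir -p /data/dfs/jn
    # Assumed ownership/mode; match a healthy JN node's settings (group may be hdfs or hadoop).
    chown -R hdfs:hadoop /data/dfs/jn
    chmod 700 /data/dfs/jn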

2021-06-19 11:48:47,827 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JournalNode
STARTUP_MSG:   host = prod-bigdata-pc6/10.5.2.136
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.0.0-cdh6.3.2
...
2021-06-19 11:48:48,912 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8485
2021-06-19 11:48:48,963 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2021-06-19 11:48:48,964 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8485: starting
2021-06-19 11:48:50,241 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNode: Initializing journal in directory /data/dfs/jn/hdfs
2021-06-19 11:48:50,331 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/dfs/jn/hdfs/in_use.lock acquired by nodename 6213@prod-bigdata-pc6
2021-06-19 11:48:50,452 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Starting SyncJournal daemon for journal hdfs
2021-06-19 11:48:50,454 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8485, call Call#2 Retry#0 org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.getEditLogManifest from 10.5.2.138:44085
org.apache.hadoop.hdfs.qjournal.protocol.JournalNotFormattedException: Journal Storage Directory /data/dfs/jn/hdfs not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:500)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:682)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getEditLogManifest(JournalNodeRpcServer.java:217)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getEditLogManifest(QJournalProtocolServerSideTranslatorPB.java:228)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27411)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
2021-06-19 11:48:50,454 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 8485, call Call#0 Retry#0 org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.getEditLogManifest from 10.5.2.135:36889
org.apache.hadoop.hdfs.qjournal.protocol.JournalNotFormattedException: Journal Storage Directory /data/dfs/jn/hdfs not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:500)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:682)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getEditLogManifest(JournalNodeRpcServer.java:217)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getEditLogManifest(QJournalProtocolServerSideTranslatorPB.java:228)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27411)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
...
2021-06-19 12:09:46,157 ERROR org.apache.hadoop.hdfs.qjournal.server.JournalNode: RECEIVED SIGNAL 15: SIGTERM
2021-06-19 12:09:46,161 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JournalNode at prod-bigdata-pc6/10.5.2.136
************************************************************/

The error here is that the JN data directory has not been formatted, and the other JNs logged the same thing on startup:

2021-06-19 11:43:09,775 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JournalNode
STARTUP_MSG:   host = prod-bigdata-pc5/10.5.2.135
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.0.0-cdh6.3.2
...
2021-06-19 11:47:15,655 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNode: Initializing journal in directory /data/dfs/jn/hdfs
2021-06-19 11:47:15,673 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/dfs/jn/hdfs/in_use.lock acquired by nodename 5259@prod-bigdata-pc8
2021-06-19 11:47:15,761 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Starting SyncJournal daemon for journal hdfs
2021-06-19 11:47:15,763 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 8485, call Call#1 Retry#0 org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.getEditLogManifest from 10.5.2.135:34476
org.apache.hadoop.hdfs.qjournal.protocol.JournalNotFormattedException: Journal Storage Directory /data/dfs/jn/hdfs not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:500)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:682)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getEditLogManifest(JournalNodeRpcServer.java:217)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getEditLogManifest(QJournalProtocolServerSideTranslatorPB.java:228)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27411)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
2021-06-19 11:47:41,303 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 8485, call Call#4 Retry#0 org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.getEditLogManifest from 10.5.2.135:35594
org.apache.hadoop.hdfs.qjournal.protocol.JournalNotFormattedException: Journal Storage Directory /data/dfs/jn/hdfs not formatted
...
2021-06-19 12:09:47,069 ERROR org.apache.hadoop.hdfs.qjournal.server.JournalNode: RECEIVED SIGNAL 15: SIGTERM
2021-06-19 12:09:47,080 INFO org.apache.hadoop.hdfs.qjournal.server.JournalNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JournalNode at prod-bigdata-pc8/10.5.2.138
************************************************************/
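
As an aside, "not formatted" here means, roughly, that the JN's journal storage directory lacks a valid VERSION file under <edits dir>/<nameservice>/current; the nameservice in this cluster is hdfs, as the log paths show. A quick check on each JN node:

    ls -l /data/dfs/jn/hdfs/current/
    cat /data/dfs/jn/hdfs/current/VERSION   # missing at this point, hence JournalNotFormattedException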

The NameNode also reported errors on startup. The log:

2021-06-19 11:47:13,409 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = prod-bigdata-pc5/10.5.2.135
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.0.0-cdh6.3.2
...
STARTUP_MSG:   java = 1.8.0_181
************************************************************/
2021-06-19 11:47:13,480 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-06-19 11:47:13,554 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2021-06-19 11:47:13,657 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2021-06-19 11:47:13,744 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2021-06-19 11:47:13,745 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2021-06-19 11:47:13,765 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://hdfs
2021-06-19 11:47:13,770 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use hdfs to access this namenode/service.
2021-06-19 11:47:14,113 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hdfs/prod-bigdata-pc5@kunlun.prod using keytab file hdfs.keytab. Keytab auto renewal enabled : false
2021-06-19 11:47:14,144 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2021-06-19 11:47:14,172 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: HTTP/prod-bigdata-pc5@kunlun.prod
2021-06-19 11:47:14,172 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://prod-bigdata-pc5:9870
2021-06-19 11:47:14,187 INFO org.eclipse.jetty.util.log: Logging initialized @1775ms
2021-06-19 11:47:14,280 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2021-06-19 11:47:14,284 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2021-06-19 11:47:14,292 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2021-06-19 11:47:14,294 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2021-06-19 11:47:14,294 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2021-06-19 11:47:14,294 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2021-06-19 11:47:14,315 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2021-06-19 11:47:14,317 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2021-06-19 11:47:14,320 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to fsck
2021-06-19 11:47:14,321 INFO org.apache.hadoop.http.HttpServer2: Adding Kerberos (SPNEGO) filter to imagetransfer
2021-06-19 11:47:14,326 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 9870
2021-06-19 11:47:14,327 INFO org.eclipse.jetty.server.Server: jetty-9.3.25.v20180904, build timestamp: 2018-09-05T05:11:46+08:00, git hash: 3ce520221d0240229c862b122d2b06c12a625732
2021-06-19 11:47:14,373 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@45cff11c{/logs,file:///data/log/hadoop-hdfs/,AVAILABLE}
2021-06-19 11:47:14,373 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@4bff1903{/static,file:///data/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop-hdfs/webapps/static/,AVAILABLE}
2021-06-19 11:47:14,437 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Using keytab hdfs.keytab, for principal HTTP/prod-bigdata-pc5@kunlun.prod
2021-06-19 11:47:14,442 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Using keytab hdfs.keytab, for principal HTTP/prod-bigdata-pc5@kunlun.prod
2021-06-19 11:47:14,447 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@3d08f3f5{/,file:///data/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/lib/hadoop-hdfs/webapps/hdfs/,AVAILABLE}{/hdfs}
2021-06-19 11:47:14,454 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@7b139eab{HTTP/1.1,[http/1.1]}{prod-bigdata-pc5:9870}
2021-06-19 11:47:14,454 INFO org.eclipse.jetty.server.Server: Started @2042ms
2021-06-19 11:47:14,551 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2021-06-19 11:47:14,606 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edit logging is async:true
2021-06-19 11:47:14,618 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: KeyProvider: null
2021-06-19 11:47:14,620 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2021-06-19 11:47:14,620 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2021-06-19 11:47:14,620 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hdfs/prod-bigdata-pc5@kunlun.prod (auth:KERBEROS)
2021-06-19 11:47:14,620 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2021-06-19 11:47:14,620 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2021-06-19 11:47:14,621 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Determined nameservice ID: hdfs
2021-06-19 11:47:14,621 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: true
2021-06-19 11:47:14,934 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2021-06-19 11:47:14,951 WARN org.apache.hadoop.hdfs.util.CombinedHostsFileReader: /var/run/cloudera-scm-agent/process/4982-hdfs-NAMENODE/dfs_all_hosts.txt has invalid JSON format.Try the old format without top-level token defined.
2021-06-19 11:47:14,982 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2021-06-19 11:47:14,983 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2021-06-19 11:47:14,992 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2021-06-19 11:47:14,992 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2021 Jun 19 11:47:14
2021-06-19 11:47:14,999 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2021-06-19 11:47:14,999 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2021-06-19 11:47:15,001 INFO org.apache.hadoop.util.GSet: 2.0% max memory 3.9 GB = 79.8 MB
2021-06-19 11:47:15,001 INFO org.apache.hadoop.util.GSet: capacity      = 2^23 = 8388608 entries
2021-06-19 11:47:15,045 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable = true
2021-06-19 11:47:15,046 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=3des
2021-06-19 11:47:15,059 INFO org.apache.hadoop.conf.Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2021-06-19 11:47:15,060 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2021-06-19 11:47:15,060 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 1
2021-06-19 11:47:15,060 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2021-06-19 11:47:15,060 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2021-06-19 11:47:15,060 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2021-06-19 11:47:15,060 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2021-06-19 11:47:15,060 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 20
2021-06-19 11:47:15,060 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2021-06-19 11:47:15,060 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2021-06-19 11:47:15,061 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2021-06-19 11:47:15,085 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
2021-06-19 11:47:15,102 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2021-06-19 11:47:15,103 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2021-06-19 11:47:15,103 INFO org.apache.hadoop.util.GSet: 1.0% max memory 3.9 GB = 39.9 MB
2021-06-19 11:47:15,103 INFO org.apache.hadoop.util.GSet: capacity      = 2^22 = 4194304 entries
2021-06-19 11:47:15,113 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? true
2021-06-19 11:47:15,113 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: POSIX ACL inheritance enabled? true
2021-06-19 11:47:15,113 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2021-06-19 11:47:15,114 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2021-06-19 11:47:15,119 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: Loaded config captureOpenFiles: true, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true
2021-06-19 11:47:15,125 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2021-06-19 11:47:15,125 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2021-06-19 11:47:15,125 INFO org.apache.hadoop.util.GSet: 0.25% max memory 3.9 GB = 10.0 MB
2021-06-19 11:47:15,125 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2021-06-19 11:47:15,135 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2021-06-19 11:47:15,135 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2021-06-19 11:47:15,135 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2021-06-19 11:47:15,138 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2021-06-19 11:47:15,139 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2021-06-19 11:47:15,141 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2021-06-19 11:47:15,141 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2021-06-19 11:47:15,142 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 3.9 GB = 1.2 MB
2021-06-19 11:47:15,142 INFO org.apache.hadoop.util.GSet: capacity      = 2^17 = 131072 entries
2021-06-19 11:47:15,148 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Using INode attribute provider: org.apache.sentry.hdfs.SentryINodeAttributesProvider
2021-06-19 11:47:15,165 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/dfs/nn/in_use.lock acquired by nodename 21692@prod-bigdata-pc5
2021-06-19 11:47:16,411 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: prod-bigdata-pc6/10.5.2.136:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-06-19 11:47:17,413 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: prod-bigdata-pc6/10.5.2.136:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-06-19 11:47:18,415 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: prod-bigdata-pc6/10.5.2.136:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-06-19 11:47:19,417 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: prod-bigdata-pc6/10.5.2.136:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-06-19 11:47:20,419 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: prod-bigdata-pc6/10.5.2.136:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-06-19 11:47:21,258 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 6002 ms (timeout=20000 ms) for a response for selectInputStreams. Exceptions so far: [10.5.2.138:8485: Journal Storage Directory /data/dfs/jn/hdfs not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:500)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:682)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getEditLogManifest(JournalNodeRpcServer.java:217)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getEditLogManifest(QJournalProtocolServerSideTranslatorPB.java:228)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27411)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
, 10.5.2.135:8485: Journal Storage Directory /data/dfs/jn/hdfs not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:500)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:682)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getEditLogManifest(JournalNodeRpcServer.java:217)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getEditLogManifest(QJournalProtocolServerSideTranslatorPB.java:228)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27411)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
]
2021-06-19 11:47:21,421 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: prod-bigdata-pc6/10.5.2.136:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-06-19 11:47:22,262 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 7006 ms (timeout=20000 ms) for a response for selectInputStreams. Exceptions so far: [10.5.2.138:8485: Journal Storage Directory /data/dfs/jn/hdfs not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:500)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:682)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getEditLogManifest(JournalNodeRpcServer.java:217)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getEditLogManifest(QJournalProtocolServerSideTranslatorPB.java:228)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:27411)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:929)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
2021-06-19 11:47:25,448 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2021-06-19 11:47:25,448 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/data/dfs/nn/current/fsimage_0000000000006467288, cpktTxId=0000000000006467288)
2021-06-19 11:47:25,521 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 7325 INodes.
2021-06-19 11:47:25,671 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2021-06-19 11:47:25,671 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 6467288 from /data/dfs/nn/current/fsimage_0000000000006467288
2021-06-19 11:47:25,676 INFO org.apache.hadoop.conf.Configuration.deprecation: No unit for dfs.namenode.checkpoint.period(3600) assuming SECONDS
2021-06-19 11:47:25,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=true, isRollingUpgrade=false)
2021-06-19 11:47:25,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem write lock held for 10528 ms via
java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)
org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:263)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1604)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1111)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:950)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:929)
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
        Number of suppressed write-lock reports: 0
...
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
2021-06-19 11:48:01,357 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [10.5.2.135:8485, 10.5.2.136:8485, 10.5.2.138:8485], stream=null))
2021-06-19 11:48:01,360 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at prod-bigdata-pc5/10.5.2.135
************************************************************/

At this point, looking inside the data directories of the two JNs that were not being migrated, we found their contents had been wiped as well, which is why they too reported the not-formatted error. So we copied the VERSION file from the old JN node's data directory into the <nameservice>/current directory (/data/dfs/jn/hdfs/current here) on all three JN nodes. Mind the file ownership: the VERSION file must belong to the hdfs user (a sketch of the copy follows the log below), or you may hit errors like this:

2021-06-19 13:33:33,062 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/dfs/nn/in_use.lock acquired by nodename 27540@prod-bigdata-pc5
2021-06-19 13:33:33,558 WARN org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input streams from QJM to [10.5.2.135:8485, 10.5.2.136:8485, 10.5.2.138:8485]. Skipping.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
10.5.2.136:8485: /data/dfs/jn/hdfs/current/VERSION (Permission denied)
        at java.io.RandomAccessFile.open0(Native Method)
        at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
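
The fix is simply to make the copied VERSION file belong to the hdfs user. A sketch for each of the three JN nodes, where the source path is a placeholder for wherever the old JN's files were preserved:

    cp /path/to/saved/VERSION /data/dfs/jn/hdfs/current/VERSION
    chown hdfs:hadoop /data/dfs/jn/hdfs/current/VERSION   # group may be hdfs on your install
    chmod 644 /data/dfs/jn/hdfs/current/VERSION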

Then restart the JNs; this time they come up normally. Start the previously active NN, and it also starts fine (HA is enabled; leave the former standby down for now). However, since the JN data directories were wiped, some edit logs are inevitably missing, and the standby NN would fail to fetch them from the JNs on startup. So copy the edit logs from the active NN node into the JN data directories (no JN or NN restart needed; a direct copy works), as sketched below.
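
A sketch of that final copy, using this cluster's paths from the logs above (the active NN, prod-bigdata-pc5, is itself a JN here; the scp targets are the other two JN hosts):

    # On the active NN host: copy the finalized edit segments into the local
    # JN's journal directory (edits_0* skips edits_inprogress_*).
    cp /data/dfs/nn/current/edits_0* /data/dfs/jn/hdfs/current/
    chown hdfs:hadoop /data/dfs/jn/hdfs/current/edits_0*
    # Push them to the remote JNs as well, then fix ownership there too:
    scp /data/dfs/nn/current/edits_0* prod-bigdata-pc6:/data/dfs/jn/hdfs/current/
    scp /data/dfs/nn/current/edits_0* prod-bigdata-pc8:/data/dfs/jn/hdfs/current/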
