NameNode edits sync and failover log

The role of EditLogTailer in Hadoop HA:
https://www.cnblogs.com/yu-wang/p/15661308.html
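In the log below, the "Roll Edit Log from 192.168.228.227" entries recur roughly every two minutes: the standby NameNode's EditLogTailer periodically asks the active NameNode to roll its edit log (the period is `dfs.ha.log-roll.period`, 120 s by default) so it can tail finalized segments. As a rough sanity check, a throwaway script (an illustrative sketch, not part of the Hadoop tooling) can parse those timestamps and confirm the cadence:

```python
from datetime import datetime

# Timestamps of "Roll Edit Log from ..." lines copied from the log below.
roll_lines = [
    "2024-09-20 09:30:08,475",
    "2024-09-20 09:32:12,530",
    "2024-09-20 09:34:13,361",
    "2024-09-20 09:36:15,773",
]

def parse_ts(s: str) -> datetime:
    # Hadoop's log4j timestamps use a comma before the milliseconds.
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S,%f")

ts = [parse_ts(s) for s in roll_lines]
intervals = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
print(intervals)  # each interval is a bit over 120 s
```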

2024-09-20 09:28:08,107 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846411 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846411-0000000000003846412
2024-09-20 09:28:08,107 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846413
2024-09-20 09:30:08,475 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:30:08,475 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:30:08,475 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846413, 3846413
2024-09-20 09:30:08,476 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 12 41
2024-09-20 09:30:08,568 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 97 49
2024-09-20 09:30:09,017 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846413 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846413-0000000000003846414
2024-09-20 09:30:09,017 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846415
2024-09-20 09:31:55,492 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 32 51
2024-09-20 09:31:55,546 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1074296087_555263, replicas=192.168.228.229:50010, 192.168.228.230:50010, 192.168.228.228:50010 for /druid/segments/APP_NETWORK_DATA_HOUR/f890703ce15042caba4e6dedbb2d3592/15_index.zip
2024-09-20 09:31:55,622 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /druid/segments/APP_NETWORK_DATA_HOUR/f890703ce15042caba4e6dedbb2d3592/15_index.zip is closed by DFSClient_NONMAPREDUCE_-571280516_435
2024-09-20 09:31:56,710 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1074296088_555264, replicas=192.168.228.229:50010, 192.168.228.228:50010, 192.168.228.226:50010 for /druid/segments/APP_NETWORK_DATA_HOUR/2244a759674f413e8a707b7ccda3689a/7_index.zip
2024-09-20 09:31:56,809 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /druid/segments/APP_NETWORK_DATA_HOUR/2244a759674f413e8a707b7ccda3689a/7_index.zip is closed by DFSClient_NONMAPREDUCE_-571280516_435
2024-09-20 09:31:57,608 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1074296089_555265, replicas=192.168.228.229:50010, 192.168.228.228:50010, 192.168.228.230:50010 for /druid/segments/APP_NETWORK_DATA_HOUR/a3eea1ff706247a59d6c55a4aef932f7/4_index.zip
2024-09-20 09:31:57,671 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /druid/segments/APP_NETWORK_DATA_HOUR/a3eea1ff706247a59d6c55a4aef932f7/4_index.zip is closed by DFSClient_NONMAPREDUCE_-571280516_435
2024-09-20 09:32:12,530 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:32:12,530 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:32:12,530 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846415, 3846439
2024-09-20 09:32:12,779 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 26 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 3 Number of syncs: 23 SyncTimes(ms): 597 260
2024-09-20 09:32:12,801 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846415 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846415-0000000000003846440
2024-09-20 09:32:12,801 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846441
2024-09-20 09:33:15,128 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 23 34
2024-09-20 09:33:15,188 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1074296090_555266, replicas=192.168.228.229:50010, 192.168.228.226:50010, 192.168.228.228:50010 for /druid/segments/APP_NETWORK_DATA_HOUR/9434bd7c70954f2b97dc881631f268ee/1_index.zip
2024-09-20 09:33:23,150 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /druid/segments/APP_NETWORK_DATA_HOUR/9434bd7c70954f2b97dc881631f268ee/1_index.zip is closed by DFSClient_NONMAPREDUCE_-571280516_435
2024-09-20 09:34:13,361 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:34:13,361 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:34:13,361 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846441, 3846449
2024-09-20 09:34:13,687 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 10 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 1 Number of syncs: 9 SyncTimes(ms): 432 99
2024-09-20 09:34:13,720 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846441 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846441-0000000000003846450
2024-09-20 09:34:13,720 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846451
2024-09-20 09:36:15,773 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:36:15,773 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:36:15,773 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846451, 3846451
2024-09-20 09:36:15,774 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 32 42
2024-09-20 09:36:15,806 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 56 50
2024-09-20 09:36:15,830 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846451 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846451-0000000000003846452
2024-09-20 09:36:15,830 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846453
2024-09-20 09:38:19,122 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:38:19,130 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:38:19,130 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846453, 3846453
2024-09-20 09:38:19,138 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 8 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 382 50
2024-09-20 09:38:19,182 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 8 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 418 57
2024-09-20 09:38:19,396 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846453 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846453-0000000000003846454
2024-09-20 09:38:19,396 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846455
2024-09-20 09:40:20,053 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:40:20,053 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:40:20,053 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846455, 3846455
2024-09-20 09:40:20,054 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 9 41
2024-09-20 09:40:20,079 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 21 54
2024-09-20 09:40:20,106 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846455 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846455-0000000000003846456
2024-09-20 09:40:20,106 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846457
2024-09-20 09:42:20,928 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:42:20,928 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:42:20,928 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846457, 3846457
2024-09-20 09:42:20,929 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 11 40
2024-09-20 09:42:20,952 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 26 48
2024-09-20 09:42:20,975 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846457 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846457-0000000000003846458
2024-09-20 09:42:20,975 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846459
2024-09-20 09:44:21,422 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:44:21,422 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:44:21,422 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846459, 3846459
2024-09-20 09:44:21,423 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 14 37
2024-09-20 09:44:22,026 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 609 45
2024-09-20 09:44:22,048 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846459 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846459-0000000000003846460
2024-09-20 09:44:22,048 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846461
2024-09-20 09:44:22,554 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 1130ms to send a batch of 1 edits (17 bytes) to remote journal 192.168.228.227:8485
2024-09-20 09:46:23,616 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:46:23,616 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:46:23,616 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846461, 3846461
2024-09-20 09:46:23,616 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 281 37
2024-09-20 09:46:23,808 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 465 45
2024-09-20 09:46:23,827 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846461 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846461-0000000000003846462
2024-09-20 09:46:23,827 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846463
2024-09-20 09:48:24,442 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:48:24,442 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:48:24,442 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846463, 3846463
2024-09-20 09:48:24,453 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 11 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 13 44
2024-09-20 09:48:24,522 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 11 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 74 51
2024-09-20 09:48:24,564 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846463 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846463-0000000000003846464
2024-09-20 09:48:24,564 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846465
2024-09-20 09:50:25,097 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:50:25,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:50:25,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846465, 3846465
2024-09-20 09:50:25,097 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 12 52
2024-09-20 09:50:25,120 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 27 60
2024-09-20 09:50:25,139 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846465 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846465-0000000000003846466
2024-09-20 09:50:25,139 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846467
2024-09-20 09:52:25,860 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.228.227
2024-09-20 09:52:25,860 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2024-09-20 09:52:25,860 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3846467, 3846467
2024-09-20 09:52:25,870 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 11 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 11 46
2024-09-20 09:52:27,035 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 1164ms to send a batch of 1 edits (17 bytes) to remote journal 192.168.228.227:8485
2024-09-20 09:52:27,058 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 11 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 1176 68
2024-09-20 09:52:27,081 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_inprogress_0000000000003846467 -> /app/tingyun/base/base-hadoop/data/hdfs/nn/current/edits_0000000000003846467-0000000000003846468
2024-09-20 09:52:27,081 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3846469
2024-09-20 09:52:29,044 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Took 3173ms to send a batch of 1 edits (17 bytes) to remote journal 192.168.228.228:8485
2024-09-20 09:52:54,081 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 27001 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:52:55,087 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 28007 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:52:56,089 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 29008 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:52:57,090 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 30009 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:52:58,090 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 31010 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:52:59,091 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 32011 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:00,092 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 33012 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:01,094 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 34014 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:02,095 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 35015 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:03,096 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 36016 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:04,097 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 37017 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:05,099 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 38018 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:06,100 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 39019 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:07,100 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 40020 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:08,101 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 41021 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:09,102 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 42022 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:10,104 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 43023 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:11,104 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 44024 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:12,105 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 45025 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:13,107 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 46026 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:14,108 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 47027 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:15,108 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 48028 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:16,109 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 49029 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:17,110 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 50030 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:18,112 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 51031 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:19,113 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 52033 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:20,114 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 53034 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:21,115 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 54035 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:22,116 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 55036 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:23,118 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 56037 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:24,118 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 57038 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:25,119 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 58039 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:26,120 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 59040 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:27,122 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 60041 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:28,123 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 61042 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:29,123 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 62043 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:30,124 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 63044 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:31,125 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 64045 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:32,127 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 65046 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:33,127 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 66047 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:34,128 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 67048 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:35,129 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 68049 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:36,131 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 69050 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:37,132 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 70051 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:38,132 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 71052 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:39,133 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 72053 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:40,134 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 73054 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:41,136 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 74055 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:42,137 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 75056 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:43,137 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 76057 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:44,138 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 77058 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:45,142 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 78061 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:46,142 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 79062 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:47,143 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 80063 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:48,145 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 81064 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:49,145 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 82065 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:50,146 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 83066 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:51,147 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 84067 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:52,149 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 85068 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:53,150 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 86069 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:54,150 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 87070 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:55,151 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 88071 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
2024-09-20 09:53:56,152 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 89072 ms (timeout=90000 ms) for a response for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]
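The block of "Waited … ms" messages above shows the active NameNode stalling on startLogSegment(3846469): of the three JournalNodes, only 192.168.228.230:8485 (the NameNode's own host) ever responds, so the 2-of-3 quorum is never reached and the 90 s timeout eventually fires. A small parser (a hypothetical helper for log triage, not a Hadoop API) makes the pattern easy to extract:

```python
import re

# Matches QuorumJournalManager wait messages like the ones above.
WAIT_RE = re.compile(
    r"Waited (\d+) ms \(timeout=(\d+) ms\) for a response for "
    r"(\w+)\((\d+)\)\. Succeeded so far: \[([^\]]*)\]"
)

def parse_quorum_wait(line: str):
    """Return (waited_ms, timeout_ms, op, txid, responders) or None."""
    m = WAIT_RE.search(line)
    if not m:
        return None
    waited, timeout, op, txid, hosts = m.groups()
    responders = [h.strip() for h in hosts.split(",") if h.strip()]
    return int(waited), int(timeout), op, int(txid), responders

sample = ("2024-09-20 09:53:56,152 WARN org.apache.hadoop.hdfs.qjournal.client."
          "QuorumJournalManager: Waited 89072 ms (timeout=90000 ms) for a response "
          "for startLogSegment(3846469). Succeeded so far: [192.168.228.230:8485]")
waited, timeout, op, txid, responders = parse_quorum_wait(sample)
# With 3 JournalNodes a quorum needs 2 responses; here only 1 ever succeeds.
print(waited, timeout, op, txid, responders)
```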
2024-09-20 09:53:57,081 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: starting log segment 3846469 failed for required journal (JournalAndStream(mgr=QJM to [192.168.228.228:8485, 192.168.228.227:8485, 192.168.228.230:8485], stream=null))
java.io.IOException: Timed out waiting 90000ms for a quorum of nodes to respond.
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:138)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.startLogSegment(QuorumJournalManager.java:435)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.startLogSegment(JournalSet.java:108)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$3.apply(JournalSet.java:225)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:400)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:222)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1336)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1303)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1339)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:4481)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1279)
        at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:145)
        at org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12836)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:498)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1038)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1938)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2855)
2024-09-20 09:53:57,250 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: Error: starting log segment 3846469 failed for required journal (JournalAndStream(mgr=QJM to [192.168.228.228:8485, 192.168.228.227:8485, 192.168.228.230:8485], stream=null))
2024-09-20 09:53:57,292 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: FSImageSaver clean checkpoint: txid = 3718741 when meet shutdown.
2024-09-20 09:53:57,299 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at SY-TINGYUN-DBMS05/192.168.228.230
************************************************************/
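The FATAL above is the key event: `startLogSegment(3846469)` got an acknowledgement from only one of the three JournalNodes within the 90000 ms timeout, and QJM is a "required" journal, so the active NameNode aborted. As a rough sketch (illustrative names, not Hadoop's actual API), the quorum rule behind `AsyncLoggerSet.waitForWriteQuorum` is a strict majority of the configured JournalNode set:

```python
# Hedged sketch of the QJM write-quorum rule (illustrative, not the real
# org.apache.hadoop.hdfs.qjournal code): a write or startLogSegment call
# succeeds only if strictly more than half of the JournalNodes acknowledge
# it before the timeout expires.

def has_write_quorum(num_journal_nodes: int, num_acks: int) -> bool:
    """A quorum is a strict majority of the JournalNode set."""
    return num_acks > num_journal_nodes // 2

# In this incident: 3 JournalNodes configured, but only 192.168.228.230:8485
# responded within 90000 ms -> 1 ack, no quorum, so FSEditLog calls
# ExitUtil and the NameNode shuts itself down.
print(has_write_quorum(3, 1))  # False: 1 of 3 is not a majority
print(has_write_quorum(3, 2))  # True: 2 of 3 would have been enough
```

This is why losing two of three JournalNodes (or having them too slow to respond) kills the active NameNode outright rather than letting it keep writing edits.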
2024-09-20 09:54:00,683 WARN org.apache.hadoop.ha.FailoverController: Unable to gracefully make NameNode at SY-TINGYUN-DBMS05/192.168.228.230:8020 standby (unable to connect)
java.net.ConnectException: Call From SY-TINGYUN-DBMS04/192.168.228.227 to SY-TINGYUN-DBMS05:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:827)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:757)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1564)
        at org.apache.hadoop.ipc.Client.call(Client.java:1506)
        at org.apache.hadoop.ipc.Client.call(Client.java:1403)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
        at com.sun.proxy.$Proxy9.transitionToStandby(Unknown Source)
        at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToStandby(HAServiceProtocolClientSideTranslatorPB.java:113)
        at org.apache.hadoop.ha.FailoverController.tryGracefulFence(FailoverController.java:172)
        at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:529)
        at org.apache.hadoop.ha.ZKFailoverController.fenceOldActive(ZKFailoverController.java:520)
        at org.apache.hadoop.ha.ZKFailoverController.access$1100(ZKFailoverController.java:62)
        at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.fenceOldActive(ZKFailoverController.java:943)
        at org.apache.hadoop.ha.ActiveStandbyElector.fenceOldActive(ActiveStandbyElector.java:991)
        at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:888)
        at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:473)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:610)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:508)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:702)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:823)
        at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:423)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1621)
        at org.apache.hadoop.ipc.Client.call(Client.java:1450)
        ... 15 more
2024-09-20 09:54:00,684 INFO org.apache.hadoop.ha.NodeFencer: ====== Beginning Service Fencing Process... ======
2024-09-20 09:54:00,684 INFO org.apache.hadoop.ha.NodeFencer: Trying method 1/1: org.apache.hadoop.ha.ShellCommandFencer(/bin/true)
2024-09-20 09:54:00,691 INFO org.apache.hadoop.ha.ShellCommandFencer: Launched fencing command '/bin/true' with pid 2785709
2024-09-20 09:54:00,698 INFO org.apache.hadoop.ha.NodeFencer: ====== Fencing successful by method org.apache.hadoop.ha.ShellCommandFencer(/bin/true) ======
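The fencing sequence above follows a fixed order: the ZKFC first attempts a graceful `transitionToStandby` RPC against the old active, and only when that fails (here: "Connection refused", because the old NameNode process had already exited) does it run the configured `dfs.ha.fencing.methods` in order, stopping at the first one that succeeds. A minimal sketch of that control flow (illustrative, not Hadoop's real `FailoverController`/`NodeFencer` API):

```python
# Hedged sketch of the ZKFC fencing order seen in this log (illustrative
# names, not the real org.apache.hadoop.ha classes).

def fence_old_active(graceful_standby, fencing_methods):
    """Return which step fenced the old active, or raise if all fail."""
    try:
        graceful_standby()            # RPC to the old active's 8020 port
        return "graceful"
    except ConnectionError:
        pass                          # old NN already dead: Connection refused
    for name, method in fencing_methods:
        if method():                  # e.g. shell(/bin/true) exits with 0
            return name
    raise RuntimeError("all fencing methods failed; cannot fail over")

def refused():
    raise ConnectionError("Connection refused")

# In this log: graceful fencing failed, then shell(/bin/true) trivially
# succeeded, so fencing was declared done.
print(fence_old_active(refused, [("shell(/bin/true)", lambda: True)]))
```

Note that `shell(/bin/true)` always "succeeds" without actually isolating anything; it is only safe here because QJM itself prevents a fenced-out writer from committing edits.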
2024-09-20 09:54:00,699 INFO org.apache.hadoop.ha.ActiveStandbyElector: Writing znode /hadoop-ha/tingyun/ActiveBreadCrumb to indicate that the local node is the most recent active...
2024-09-20 09:54:00,709 INFO org.apache.hadoop.ha.ZKFailoverController: Trying to make NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020 active...
2024-09-20 09:55:00,743 ERROR org.apache.hadoop.ha.ZKFailoverController: Couldn't make NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020 active
java.net.SocketTimeoutException: Call From SY-TINGYUN-DBMS04/192.168.228.227 to SY-TINGYUN-DBMS04:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:23594 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:827)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:777)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1564)
        at org.apache.hadoop.ipc.Client.call(Client.java:1506)
        at org.apache.hadoop.ipc.Client.call(Client.java:1403)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
        at com.sun.proxy.$Proxy9.transitionToActive(Unknown Source)
        at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToActive(HAServiceProtocolClientSideTranslatorPB.java:101)
        at org.apache.hadoop.ha.HAServiceProtocolHelper.transitionToActive(HAServiceProtocolHelper.java:48)
        at org.apache.hadoop.ha.ZKFailoverController.becomeActive(ZKFailoverController.java:397)
        at org.apache.hadoop.ha.ZKFailoverController.access$900(ZKFailoverController.java:62)
        at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.becomeActive(ZKFailoverController.java:924)
        at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:894)
        at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:473)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:610)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:508)
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:23594 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:569)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1865)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1202)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1098)
2024-09-20 09:55:00,744 WARN org.apache.hadoop.ha.ActiveStandbyElector: Exception handling the winning of election
org.apache.hadoop.ha.ServiceFailedException: Couldn't transition to active
        at org.apache.hadoop.ha.ZKFailoverController.becomeActive(ZKFailoverController.java:416)
        at org.apache.hadoop.ha.ZKFailoverController.access$900(ZKFailoverController.java:62)
        at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.becomeActive(ZKFailoverController.java:924)
        at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:894)
        at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:473)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:610)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:508)
Caused by: java.net.SocketTimeoutException: Call From SY-TINGYUN-DBMS04/192.168.228.227 to SY-TINGYUN-DBMS04:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:23594 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:827)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:777)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1564)
        at org.apache.hadoop.ipc.Client.call(Client.java:1506)
        at org.apache.hadoop.ipc.Client.call(Client.java:1403)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
        at com.sun.proxy.$Proxy9.transitionToActive(Unknown Source)
        at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToActive(HAServiceProtocolClientSideTranslatorPB.java:101)
        at org.apache.hadoop.ha.HAServiceProtocolHelper.transitionToActive(HAServiceProtocolHelper.java:48)
        at org.apache.hadoop.ha.ZKFailoverController.becomeActive(ZKFailoverController.java:397)
        ... 6 more
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:23594 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:569)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1865)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1202)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1098)
2024-09-20 09:55:00,744 INFO org.apache.hadoop.ha.ActiveStandbyElector: Trying to re-establish ZK session
2024-09-20 09:55:00,766 INFO org.apache.zookeeper.ZooKeeper: Session: 0x101df6ba4310178 closed
2024-09-20 09:55:01,766 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=SY-TINGYUN-DBMS03:2181,SY-TINGYUN-DBMS04:2181,SY-TINGYUN-DBMS05:2181 sessionTimeout=10000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@58571fa9
2024-09-20 09:55:01,768 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server SY-TINGYUN-DBMS04/192.168.228.227:2181. Will not attempt to authenticate using SASL (unknown error)
2024-09-20 09:55:01,768 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to SY-TINGYUN-DBMS04/192.168.228.227:2181, initiating session
2024-09-20 09:55:01,779 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server SY-TINGYUN-DBMS04/192.168.228.227:2181, sessionid = 0x201df6b65db015b, negotiated timeout = 10000
2024-09-20 09:55:01,780 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
2024-09-20 09:55:01,780 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x101df6ba4310178
2024-09-20 09:55:01,841 INFO org.apache.hadoop.ha.ActiveStandbyElector: Checking for any old active which needs to be fenced...
2024-09-20 09:55:01,842 INFO org.apache.hadoop.ha.ActiveStandbyElector: Old node exists: 0a0774696e6779756e12036e6e311a1153592d54494e4759554e2d44424d53303420d43e28d33e
2024-09-20 09:55:01,842 INFO org.apache.hadoop.ha.ActiveStandbyElector: But old node has our own data, so don't need to fence it.
2024-09-20 09:55:01,842 INFO org.apache.hadoop.ha.ActiveStandbyElector: Writing znode /hadoop-ha/tingyun/ActiveBreadCrumb to indicate that the local node is the most recent active...
2024-09-20 09:55:01,852 INFO org.apache.hadoop.ha.ZKFailoverController: Trying to make NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020 active...
2024-09-20 09:55:47,820 WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to monitor health of NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:20692 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020] Call From SY-TINGYUN-DBMS04/192.168.228.227 to SY-TINGYUN-DBMS04:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:20692 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
2024-09-20 09:55:47,820 INFO org.apache.hadoop.ha.HealthMonitor: Entering state SERVICE_NOT_RESPONDING
2024-09-20 09:56:01,902 ERROR org.apache.hadoop.ha.ZKFailoverController: Couldn't make NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020 active
java.net.SocketTimeoutException: Call From SY-TINGYUN-DBMS04/192.168.228.227 to SY-TINGYUN-DBMS04:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:20680 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:827)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:777)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1564)
        at org.apache.hadoop.ipc.Client.call(Client.java:1506)
        at org.apache.hadoop.ipc.Client.call(Client.java:1403)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
        at com.sun.proxy.$Proxy9.transitionToActive(Unknown Source)
        at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToActive(HAServiceProtocolClientSideTranslatorPB.java:101)
        at org.apache.hadoop.ha.HAServiceProtocolHelper.transitionToActive(HAServiceProtocolHelper.java:48)
        at org.apache.hadoop.ha.ZKFailoverController.becomeActive(ZKFailoverController.java:397)
        at org.apache.hadoop.ha.ZKFailoverController.access$900(ZKFailoverController.java:62)
        at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.becomeActive(ZKFailoverController.java:924)
        at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:894)
        at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:473)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:610)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:508)
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:20680 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:569)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1865)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1202)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1098)
2024-09-20 09:56:01,903 WARN org.apache.hadoop.ha.ActiveStandbyElector: Exception handling the winning of election
org.apache.hadoop.ha.ServiceFailedException: Couldn't transition to active
        at org.apache.hadoop.ha.ZKFailoverController.becomeActive(ZKFailoverController.java:416)
        at org.apache.hadoop.ha.ZKFailoverController.access$900(ZKFailoverController.java:62)
        at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.becomeActive(ZKFailoverController.java:924)
        at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:894)
        at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:473)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:610)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:508)
Caused by: java.net.SocketTimeoutException: Call From SY-TINGYUN-DBMS04/192.168.228.227 to SY-TINGYUN-DBMS04:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:20680 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:827)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:777)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1564)
        at org.apache.hadoop.ipc.Client.call(Client.java:1506)
        at org.apache.hadoop.ipc.Client.call(Client.java:1403)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
        at com.sun.proxy.$Proxy9.transitionToActive(Unknown Source)
        at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToActive(HAServiceProtocolClientSideTranslatorPB.java:101)
        at org.apache.hadoop.ha.HAServiceProtocolHelper.transitionToActive(HAServiceProtocolHelper.java:48)
        at org.apache.hadoop.ha.ZKFailoverController.becomeActive(ZKFailoverController.java:397)
        ... 6 more
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:20680 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:569)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1865)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1202)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1098)
2024-09-20 09:56:01,903 INFO org.apache.hadoop.ha.ActiveStandbyElector: Trying to re-establish ZK session
2024-09-20 09:56:01,903 INFO org.apache.hadoop.ha.ZKFailoverController: Local service NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020 entered state: SERVICE_NOT_RESPONDING
2024-09-20 09:56:01,906 WARN org.apache.hadoop.hdfs.tools.DFSZKFailoverController: Can't get local NN thread dump due to Server returned HTTP response code: 401 for URL: http://SY-TINGYUN-DBMS04:50070/stacks
2024-09-20 09:56:01,963 INFO org.apache.zookeeper.ZooKeeper: Session: 0x201df6b65db015b closed
2024-09-20 09:56:02,963 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=SY-TINGYUN-DBMS03:2181,SY-TINGYUN-DBMS04:2181,SY-TINGYUN-DBMS05:2181 sessionTimeout=10000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@4c02edc1
2024-09-20 09:56:02,964 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server SY-TINGYUN-DBMS03/192.168.228.228:2181. Will not attempt to authenticate using SASL (unknown error)
2024-09-20 09:56:02,964 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to SY-TINGYUN-DBMS03/192.168.228.228:2181, initiating session
2024-09-20 09:56:03,006 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server SY-TINGYUN-DBMS03/192.168.228.228:2181, sessionid = 0x101df6ba4310186, negotiated timeout = 10000
2024-09-20 09:56:03,008 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
2024-09-20 09:56:03,008 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x201df6b65db015b
2024-09-20 09:56:03,008 INFO org.apache.hadoop.ha.ZKFailoverController: Quitting master election for NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020 and marking that fencing is necessary
2024-09-20 09:56:03,008 INFO org.apache.hadoop.ha.ActiveStandbyElector: Yielding from election
2024-09-20 09:56:03,046 INFO org.apache.zookeeper.ZooKeeper: Session: 0x101df6ba4310186 closed
2024-09-20 09:56:03,046 WARN org.apache.hadoop.ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x101df6ba4310186
2024-09-20 09:56:03,046 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x101df6ba4310186
2024-09-20 09:56:49,094 WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to monitor health of NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:54020 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020] Call From SY-TINGYUN-DBMS04/192.168.228.227 to SY-TINGYUN-DBMS04:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:54020 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
2024-09-20 09:57:35,141 WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to monitor health of NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:53804 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020] Call From SY-TINGYUN-DBMS04/192.168.228.227 to SY-TINGYUN-DBMS04:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.228.227:53804 remote=SY-TINGYUN-DBMS04/192.168.228.227:8020]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
2024-09-20 09:57:59,086 INFO org.apache.hadoop.ha.HealthMonitor: Entering state SERVICE_HEALTHY
2024-09-20 09:57:59,086 INFO org.apache.hadoop.ha.ZKFailoverController: Local service NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020 entered state: SERVICE_HEALTHY
2024-09-20 09:57:59,086 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=SY-TINGYUN-DBMS03:2181,SY-TINGYUN-DBMS04:2181,SY-TINGYUN-DBMS05:2181 sessionTimeout=10000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@6cfa9680
2024-09-20 09:57:59,087 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server SY-TINGYUN-DBMS04/192.168.228.227:2181. Will not attempt to authenticate using SASL (unknown error)
2024-09-20 09:57:59,087 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to SY-TINGYUN-DBMS04/192.168.228.227:2181, initiating session
2024-09-20 09:57:59,143 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server SY-TINGYUN-DBMS04/192.168.228.227:2181, sessionid = 0x201df6b65db015c, negotiated timeout = 10000
2024-09-20 09:57:59,144 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
2024-09-20 09:57:59,210 INFO org.apache.hadoop.ha.ActiveStandbyElector: Checking for any old active which needs to be fenced...
2024-09-20 09:57:59,211 INFO org.apache.hadoop.ha.ActiveStandbyElector: Old node exists: 0a0774696e6779756e12036e6e311a1153592d54494e4759554e2d44424d53303420d43e28d33e
2024-09-20 09:57:59,211 INFO org.apache.hadoop.ha.ActiveStandbyElector: But old node has our own data, so don't need to fence it.
2024-09-20 09:57:59,211 INFO org.apache.hadoop.ha.ActiveStandbyElector: Writing znode /hadoop-ha/tingyun/ActiveBreadCrumb to indicate that the local node is the most recent active...
2024-09-20 09:57:59,261 INFO org.apache.hadoop.ha.ZKFailoverController: Trying to make NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020 active...
2024-09-20 09:57:59,264 INFO org.apache.hadoop.ha.ZKFailoverController: Successfully transitioned NameNode at SY-TINGYUN-DBMS04/192.168.228.227:8020 to active state
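The tail of the log shows the recovery: the ZKFC's HealthMonitor kept probing the local NameNode over RPC, and each probe that timed out (09:55:47 and 09:56:49, 45000 ms each) kept it in SERVICE_NOT_RESPONDING; once a probe finally succeeded (09:57:59) it re-entered SERVICE_HEALTHY, rejoined the ZooKeeper election, and only then could `transitionToActive` succeed. A toy model of that state behaviour (illustrative, not the real `org.apache.hadoop.ha.HealthMonitor` state machine, which has additional states such as SERVICE_UNHEALTHY):

```python
# Hedged sketch: the HealthMonitor transitions visible in this log,
# reduced to "did the monitorHealth RPC succeed or time out".

def health_state(rpc_ok: bool) -> str:
    return "SERVICE_HEALTHY" if rpc_ok else "SERVICE_NOT_RESPONDING"

# Timeline from the log: 09:55:47 timeout, 09:56:49 timeout, 09:57:59 OK.
states = [health_state(ok) for ok in [False, False, True]]
print(states)
# ['SERVICE_NOT_RESPONDING', 'SERVICE_NOT_RESPONDING', 'SERVICE_HEALTHY']
```

In other words, the ~4-minute gap between the old active's FATAL exit (09:53:57) and the successful activation (09:57:59) was spent waiting for the local NameNode's RPC layer to become responsive again, not on the election itself.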


