1 NameNode and Secondary NameNode (2NN) Workflow
1) Phase 1: NameNode startup
(1) On the very first startup, after the NameNode is formatted, it creates the Fsimage and Edits files. On every subsequent startup, it loads the edit log and the image file directly into memory.
(2) A client sends a request that creates, deletes, or modifies metadata.
(3) The NameNode records the operation in the edit log, updating the rolling log.
(4) The NameNode applies the create/delete/modify operation to the metadata in memory.
2) Phase 2: Secondary NameNode operation
(1) The Secondary NameNode asks the NameNode whether a CheckPoint is needed, and brings back the NameNode's answer directly.
(2) The Secondary NameNode requests that a CheckPoint be performed.
(3) The NameNode rolls the Edits log it is currently writing.
(4) The pre-roll edit log and the image file are copied to the Secondary NameNode.
(5) The Secondary NameNode loads the edit log and the image file into memory and merges them.
(6) A new image file, fsimage.chkpoint, is generated.
(7) fsimage.chkpoint is copied back to the NameNode.
(8) The NameNode renames fsimage.chkpoint to fsimage.
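The merge in steps (5)-(6) is simply "replay every logged operation on top of the loaded image". A toy Python sketch of that idea (everything here is a hypothetical simplification; the real NameNode works with binary fsimage and edit-log segments, not Python dicts):

```python
# Toy model of the merge a CheckPoint performs: replay every edit-log
# record on top of the loaded image. Hypothetical simplification only.

def apply_edit(namespace, op):
    """Replay a single edit-log record against the in-memory namespace."""
    if op["opcode"] == "OP_MKDIR":
        namespace[op["path"]] = {"type": "DIRECTORY"}
    elif op["opcode"] == "OP_ADD":
        namespace[op["path"]] = {"type": "FILE"}
    elif op["opcode"] == "OP_RENAME_OLD":
        namespace[op["dst"]] = namespace.pop(op["src"])

def checkpoint(fsimage, edits):
    """Merge rolled edits into the image, as the 2NN does in steps (5)-(6)."""
    merged = dict(fsimage)   # load the image file into memory
    for op in edits:         # replay each logged operation in txid order
        apply_edit(merged, op)
    return merged            # this becomes fsimage.chkpoint

edits = [
    {"opcode": "OP_MKDIR", "path": "/tmp"},
    {"opcode": "OP_ADD", "path": "/tmp/a.txt"},
    {"opcode": "OP_RENAME_OLD", "src": "/tmp/a.txt", "dst": "/tmp/b.txt"},
]
print(sorted(checkpoint({}, edits)))   # ['/tmp', '/tmp/b.txt']
```

The point of the toy is the ordering guarantee: because edits are replayed in transaction order, the merged image ends up in the same final state the NameNode's in-memory metadata reached.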
2 Fsimage and Edits Explained
After the NameNode is formatted, the following files are produced in the /opt/module/hadoop-3.1.3/data/dfs/name/current directory:
(1) Fsimage file: a permanent checkpoint of the HDFS file system metadata, containing the serialized information of every directory and file inode in HDFS.
fsimage_0000000000000000000
fsimage_0000000000000000000.md5
seen_txid
VERSION
(2) Edits file: stores every update operation applied to the HDFS file system; every write operation performed by a file system client is first recorded in the Edits file.
(3) The seen_txid file holds a single number: the transaction ID in the name of the most recent edits_ file.
(4) Every time the NameNode starts, it reads the Fsimage file into memory and replays the update operations in Edits, ensuring that the metadata in memory is up to date and in sync. In effect, the NameNode merges the Fsimage and Edits files at startup.
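The file names themselves encode transaction ranges, as the directory listings later in this section show: finalized segments are named edits_&lt;start txid&gt;-&lt;end txid&gt;, and the segment currently being written is edits_inprogress_&lt;start txid&gt;. A small sketch of decoding those names (a hypothetical helper, standard library only):

```python
import re

# Decode the transaction range encoded in an HDFS edits file name.
# Finalized segments: edits_<start>-<end>; the open segment being
# written: edits_inprogress_<start> (no end txid yet).
def edits_txid_range(name):
    m = re.fullmatch(r"edits_inprogress_(\d+)", name)
    if m:
        return int(m.group(1)), None   # segment still open
    m = re.fullmatch(r"edits_(\d+)-(\d+)", name)
    if m:
        return int(m.group(1)), int(m.group(2))
    raise ValueError(f"not an edits file name: {name}")

print(edits_txid_range("edits_0000000000000000001-0000000000000000021"))
print(edits_txid_range("edits_inprogress_0000000000000000022"))
```

Running it on the two names from the listing below prints (1, 21) and (22, None), matching the seen_txid value of 22.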
We can use oiv and oev to inspect the Fsimage and Edits files respectively, which helps deepen our understanding. In 000004 - HDFS we already started the Hadoop cluster and performed some operations on HDFS, so we can dive straight into the oev and oiv exercises.
1) Inspecting an Edits file with oev
(1) Basic syntax
hdfs oev -p <file type> -i <edits file> -o <output path for the converted file>
(2) Run the command
[jintao.zhang@Jintaos-MacBook-Pro]$ ssh -i "hadoop-instances-stack-key-pair.pem" ec2-user@13.211.147.164
[ec2-user@ip-192-168-0-101 ~]$ cd software_installation/hadoop-3.1.3/data/dfs/name/current/
[ec2-user@ip-192-168-0-101 current]$ ll
total 1040
-rw-rw-r--. 1 ec2-user ec2-user 217 Jul 16 14:27 VERSION
-rw-rw-r--. 1 ec2-user ec2-user 1048576 Jul 16 14:39 edits_inprogress_0000000000000000001
-rw-rw-r--. 1 ec2-user ec2-user 395 Jul 16 14:27 fsimage_0000000000000000000
-rw-rw-r--. 1 ec2-user ec2-user 62 Jul 16 14:27 fsimage_0000000000000000000.md5
-rw-rw-r--. 1 ec2-user ec2-user 2 Jul 16 14:27 seen_txid
[ec2-user@ip-192-168-0-101 current]$ hdfs oev -p XML -i edits_inprogress_0000000000000000001 -o /home/ec2-user/workspace/edits.xml
[ec2-user@ip-192-168-0-101 current]$
(3) Below is the content of the edits.xml file; in it we can find records of the operations we performed in 000004 - HDFS.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<EDITS>
<EDITS_VERSION>-64</EDITS_VERSION>
<RECORD>
<OPCODE>OP_START_LOG_SEGMENT</OPCODE>
<DATA>
<TXID>1</TXID>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_MKDIR</OPCODE>
<DATA>
<TXID>2</TXID>
<LENGTH>0</LENGTH>
<INODEID>16386</INODEID>
<PATH>/tmp</PATH>
<TIMESTAMP>1721140090939</TIMESTAMP>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>504</MODE>
</PERMISSION_STATUS>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_MKDIR</OPCODE>
<DATA>
<TXID>3</TXID>
<LENGTH>0</LENGTH>
<INODEID>16387</INODEID>
<PATH>/tmp/hadoop-yarn</PATH>
<TIMESTAMP>1721140090941</TIMESTAMP>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>504</MODE>
</PERMISSION_STATUS>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_MKDIR</OPCODE>
<DATA>
<TXID>4</TXID>
<LENGTH>0</LENGTH>
<INODEID>16388</INODEID>
<PATH>/tmp/hadoop-yarn/staging</PATH>
<TIMESTAMP>1721140090941</TIMESTAMP>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>504</MODE>
</PERMISSION_STATUS>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_MKDIR</OPCODE>
<DATA>
<TXID>5</TXID>
<LENGTH>0</LENGTH>
<INODEID>16389</INODEID>
<PATH>/tmp/hadoop-yarn/staging/history</PATH>
<TIMESTAMP>1721140090941</TIMESTAMP>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>504</MODE>
</PERMISSION_STATUS>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_MKDIR</OPCODE>
<DATA>
<TXID>6</TXID>
<LENGTH>0</LENGTH>
<INODEID>16390</INODEID>
<PATH>/tmp/hadoop-yarn/staging/history/done</PATH>
<TIMESTAMP>1721140090942</TIMESTAMP>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>504</MODE>
</PERMISSION_STATUS>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_MKDIR</OPCODE>
<DATA>
<TXID>7</TXID>
<LENGTH>0</LENGTH>
<INODEID>16391</INODEID>
<PATH>/tmp/hadoop-yarn/staging/history/done_intermediate</PATH>
<TIMESTAMP>1721140090985</TIMESTAMP>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>493</MODE>
</PERMISSION_STATUS>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_SET_PERMISSIONS</OPCODE>
<DATA>
<TXID>8</TXID>
<SRC>/tmp/hadoop-yarn/staging/history/done_intermediate</SRC>
<MODE>1023</MODE>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_MKDIR</OPCODE>
<DATA>
<TXID>9</TXID>
<LENGTH>0</LENGTH>
<INODEID>16392</INODEID>
<PATH>/folder-for-test</PATH>
<TIMESTAMP>1721140762712</TIMESTAMP>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>493</MODE>
</PERMISSION_STATUS>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_ADD</OPCODE>
<DATA>
<TXID>10</TXID>
<LENGTH>0</LENGTH>
<INODEID>16393</INODEID>
<PATH>/folder-for-test/upload-file-to-hdfs-test.txt</PATH>
<REPLICATION>2</REPLICATION>
<MTIME>1721140763390</MTIME>
<ATIME>1721140763390</ATIME>
<BLOCKSIZE>134217728</BLOCKSIZE>
<CLIENT_NAME>DFSClient_NONMAPREDUCE_-496643430_26</CLIENT_NAME>
<CLIENT_MACHINE>203.13.23.10</CLIENT_MACHINE>
<OVERWRITE>true</OVERWRITE>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>420</MODE>
</PERMISSION_STATUS>
<ERASURE_CODING_POLICY_ID>0</ERASURE_CODING_POLICY_ID>
<RPC_CLIENTID>e5cbf485-118b-40ac-9c5a-bb9f22ebd18d</RPC_CLIENTID>
<RPC_CALLID>2</RPC_CALLID>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_ALLOCATE_BLOCK_ID</OPCODE>
<DATA>
<TXID>11</TXID>
<BLOCK_ID>1073741825</BLOCK_ID>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
<DATA>
<TXID>12</TXID>
<GENSTAMPV2>1001</GENSTAMPV2>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_ADD_BLOCK</OPCODE>
<DATA>
<TXID>13</TXID>
<PATH>/folder-for-test/upload-file-to-hdfs-test.txt</PATH>
<BLOCK>
<BLOCK_ID>1073741825</BLOCK_ID>
<NUM_BYTES>0</NUM_BYTES>
<GENSTAMP>1001</GENSTAMP>
</BLOCK>
<RPC_CLIENTID/>
<RPC_CALLID>-2</RPC_CALLID>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_CLOSE</OPCODE>
<DATA>
<TXID>14</TXID>
<LENGTH>0</LENGTH>
<INODEID>0</INODEID>
<PATH>/folder-for-test/upload-file-to-hdfs-test.txt</PATH>
<REPLICATION>2</REPLICATION>
<MTIME>1721140765890</MTIME>
<ATIME>1721140763390</ATIME>
<BLOCKSIZE>134217728</BLOCKSIZE>
<CLIENT_NAME/>
<CLIENT_MACHINE/>
<OVERWRITE>false</OVERWRITE>
<BLOCK>
<BLOCK_ID>1073741825</BLOCK_ID>
<NUM_BYTES>98</NUM_BYTES>
<GENSTAMP>1001</GENSTAMP>
</BLOCK>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>420</MODE>
</PERMISSION_STATUS>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_ADD</OPCODE>
<DATA>
<TXID>15</TXID>
<LENGTH>0</LENGTH>
<INODEID>16394</INODEID>
<PATH>/folder-for-test/input.txt</PATH>
<REPLICATION>2</REPLICATION>
<MTIME>1721140766132</MTIME>
<ATIME>1721140766132</ATIME>
<BLOCKSIZE>134217728</BLOCKSIZE>
<CLIENT_NAME>DFSClient_NONMAPREDUCE_-496643430_26</CLIENT_NAME>
<CLIENT_MACHINE>203.13.23.10</CLIENT_MACHINE>
<OVERWRITE>true</OVERWRITE>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>420</MODE>
</PERMISSION_STATUS>
<ERASURE_CODING_POLICY_ID>0</ERASURE_CODING_POLICY_ID>
<RPC_CLIENTID>e5cbf485-118b-40ac-9c5a-bb9f22ebd18d</RPC_CLIENTID>
<RPC_CALLID>6</RPC_CALLID>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_ALLOCATE_BLOCK_ID</OPCODE>
<DATA>
<TXID>16</TXID>
<BLOCK_ID>1073741826</BLOCK_ID>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
<DATA>
<TXID>17</TXID>
<GENSTAMPV2>1002</GENSTAMPV2>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_ADD_BLOCK</OPCODE>
<DATA>
<TXID>18</TXID>
<PATH>/folder-for-test/input.txt</PATH>
<BLOCK>
<BLOCK_ID>1073741826</BLOCK_ID>
<NUM_BYTES>0</NUM_BYTES>
<GENSTAMP>1002</GENSTAMP>
</BLOCK>
<RPC_CLIENTID/>
<RPC_CALLID>-2</RPC_CALLID>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_CLOSE</OPCODE>
<DATA>
<TXID>19</TXID>
<LENGTH>0</LENGTH>
<INODEID>0</INODEID>
<PATH>/folder-for-test/input.txt</PATH>
<REPLICATION>2</REPLICATION>
<MTIME>1721140767591</MTIME>
<ATIME>1721140766132</ATIME>
<BLOCKSIZE>134217728</BLOCKSIZE>
<CLIENT_NAME/>
<CLIENT_MACHINE/>
<OVERWRITE>false</OVERWRITE>
<BLOCK>
<BLOCK_ID>1073741826</BLOCK_ID>
<NUM_BYTES>11</NUM_BYTES>
<GENSTAMP>1002</GENSTAMP>
</BLOCK>
<PERMISSION_STATUS>
<USERNAME>ec2-user</USERNAME>
<GROUPNAME>supergroup</GROUPNAME>
<MODE>420</MODE>
</PERMISSION_STATUS>
</DATA>
</RECORD>
<RECORD>
<OPCODE>OP_RENAME_OLD</OPCODE>
<DATA>
<TXID>20</TXID>
<LENGTH>0</LENGTH>
<SRC>/folder-for-test</SRC>
<DST>/renamed-folder-for-test</DST>
<TIMESTAMP>1721140769786</TIMESTAMP>
<RPC_CLIENTID>40cd6c17-44fc-436c-b929-d3b8d59c778e</RPC_CLIENTID>
<RPC_CALLID>12</RPC_CALLID>
</DATA>
</RECORD>
</EDITS>
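Two details of this dump are worth decoding. First, the MODE values are printed in decimal: 504 is octal 0770, 493 is 0755, 420 is 0644, and the 1023 set by OP_SET_PERMISSIONS is the sticky-bit mode 1777. Second, the RECORD elements are plain XML that is easy to post-process. A small sketch of both (the inline sample is a trimmed-down copy of the dump above; point ET.parse at the real edits.xml to analyze the full log):

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Count the opcodes in an oev-generated edits XML dump. The inline
# sample mimics the structure of the dump above; for the real thing,
# use ET.parse("/home/ec2-user/workspace/edits.xml").getroot().
sample = """<EDITS>
  <EDITS_VERSION>-64</EDITS_VERSION>
  <RECORD><OPCODE>OP_MKDIR</OPCODE><DATA><TXID>2</TXID></DATA></RECORD>
  <RECORD><OPCODE>OP_MKDIR</OPCODE><DATA><TXID>3</TXID></DATA></RECORD>
  <RECORD><OPCODE>OP_ADD</OPCODE><DATA><TXID>10</TXID></DATA></RECORD>
</EDITS>"""
root = ET.fromstring(sample)
counts = Counter(r.findtext("OPCODE") for r in root.iter("RECORD"))
print(counts)

# The MODE fields in the dump are decimal integers; rendering them in
# octal recovers the familiar Unix permission bits.
for mode in (504, 493, 420, 1023):
    print(f"{mode} -> {mode:04o}")
```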
2) Inspecting a Fsimage file with oiv
(1) Basic syntax
hdfs oiv -p <file type> -i <fsimage file> -o <output path for the converted file>
(2) The Fsimage of the freshly started Hadoop cluster has no operations recorded in it yet: the few operations we have performed are still sitting in the Edits file, because no CheckPoint has run. Inspecting the Fsimage with oiv directly would therefore show no operation records, so in this step we first trigger a CheckPoint manually.
[jintao.zhang@Jintaos-MacBook-Pro]$ ssh -i "hadoop-instances-stack-key-pair.pem" ec2-user@13.211.147.164
[ec2-user@ip-192-168-0-101 ~]$ cd software_installation/hadoop-3.1.3/data/dfs/name/current/
[ec2-user@ip-192-168-0-101 current]$ ll
total 1040
-rw-rw-r--. 1 ec2-user ec2-user 217 Jul 16 14:27 VERSION
-rw-rw-r--. 1 ec2-user ec2-user 1048576 Jul 16 14:39 edits_inprogress_0000000000000000001
-rw-rw-r--. 1 ec2-user ec2-user 395 Jul 16 14:27 fsimage_0000000000000000000
-rw-rw-r--. 1 ec2-user ec2-user 62 Jul 16 14:27 fsimage_0000000000000000000.md5
-rw-rw-r--. 1 ec2-user ec2-user 2 Jul 16 14:27 seen_txid
[ec2-user@ip-192-168-0-101 current]$ hdfs dfsadmin -safemode enter
Safe mode is ON
[ec2-user@ip-192-168-0-101 current]$ hdfs dfsadmin -saveNamespace
Save namespace successful
[ec2-user@ip-192-168-0-101 current]$ hdfs dfsadmin -safemode leave
Safe mode is OFF
[ec2-user@ip-192-168-0-101 current]$ ll
total 1052
-rw-rw-r--. 1 ec2-user ec2-user 217 Jul 16 14:51 VERSION
-rw-rw-r--. 1 ec2-user ec2-user 1790 Jul 16 14:51 edits_0000000000000000001-0000000000000000021
-rw-rw-r--. 1 ec2-user ec2-user 1048576 Jul 16 14:51 edits_inprogress_0000000000000000022
-rw-rw-r--. 1 ec2-user ec2-user 395 Jul 16 14:27 fsimage_0000000000000000000
-rw-rw-r--. 1 ec2-user ec2-user 62 Jul 16 14:27 fsimage_0000000000000000000.md5
-rw-rw-r--. 1 ec2-user ec2-user 1046 Jul 16 14:51 fsimage_0000000000000000021
-rw-rw-r--. 1 ec2-user ec2-user 62 Jul 16 14:51 fsimage_0000000000000000021.md5
-rw-rw-r--. 1 ec2-user ec2-user 3 Jul 16 14:51 seen_txid
[ec2-user@ip-192-168-0-101 current]$ cat seen_txid
22
As we can see, triggering the CheckPoint produced a new edits_inprogress file and a new fsimage file, and seen_txid shows that the currently active edits file starts at transaction 22. The latest fsimage file always carries the largest number.
(3) Next, run the following command to inspect the latest Fsimage file:
[ec2-user@ip-192-168-0-101 current]$ ll
total 1052
-rw-rw-r--. 1 ec2-user ec2-user 217 Jul 16 14:51 VERSION
-rw-rw-r--. 1 ec2-user ec2-user 1790 Jul 16 14:51 edits_0000000000000000001-0000000000000000021
-rw-rw-r--. 1 ec2-user ec2-user 1048576 Jul 16 14:51 edits_inprogress_0000000000000000022
-rw-rw-r--. 1 ec2-user ec2-user 395 Jul 16 14:27 fsimage_0000000000000000000
-rw-rw-r--. 1 ec2-user ec2-user 62 Jul 16 14:27 fsimage_0000000000000000000.md5
-rw-rw-r--. 1 ec2-user ec2-user 1046 Jul 16 14:51 fsimage_0000000000000000021
-rw-rw-r--. 1 ec2-user ec2-user 62 Jul 16 14:51 fsimage_0000000000000000021.md5
-rw-rw-r--. 1 ec2-user ec2-user 3 Jul 16 14:51 seen_txid
[ec2-user@ip-192-168-0-101 current]$ hdfs oiv -p XML -i fsimage_0000000000000000021 -o /home/ec2-user/workspace/fsimage_0000000000000000021.xml
2024-07-16 14:56:01,356 INFO offlineImageViewer.FSImageHandler: Loading 3 strings
(4) The content of fsimage_0000000000000000021.xml is shown below. It confirms that all the operations from the Edits file in the previous step have been merged into the latest Fsimage, and also that the Fsimage records only the final merged state, not the operation history. The Fsimage contains the file hierarchy and the block information of each file, but not the storage locations of the blocks; block locations are reported to the NameNode by the DataNodes.
<?xml version="1.0"?>
<fsimage>
<version>
<layoutVersion>-64</layoutVersion>
<onDiskVersion>1</onDiskVersion>
<oivRevision>ba631c436b806728f8ec2f54ab1e289526c90579</oivRevision>
</version>
<NameSection>
<namespaceId>1022385255</namespaceId>
<genstampV1>1000</genstampV1>
<genstampV2>1002</genstampV2>
<genstampV1Limit>0</genstampV1Limit>
<lastAllocatedBlockId>1073741826</lastAllocatedBlockId>
<txid>21</txid>
</NameSection>
<ErasureCodingSection>
<erasureCodingPolicy>
<policyId>1</policyId>
<policyName>RS-6-3-1024k</policyName>
<cellSize>1048576</cellSize>
<policyState>DISABLED</policyState>
<ecSchema>
<codecName>rs</codecName>
<dataUnits>6</dataUnits>
<parityUnits>3</parityUnits>
</ecSchema>
</erasureCodingPolicy>
<erasureCodingPolicy>
<policyId>2</policyId>
<policyName>RS-3-2-1024k</policyName>
<cellSize>1048576</cellSize>
<policyState>DISABLED</policyState>
<ecSchema>
<codecName>rs</codecName>
<dataUnits>3</dataUnits>
<parityUnits>2</parityUnits>
</ecSchema>
</erasureCodingPolicy>
<erasureCodingPolicy>
<policyId>3</policyId>
<policyName>RS-LEGACY-6-3-1024k</policyName>
<cellSize>1048576</cellSize>
<policyState>DISABLED</policyState>
<ecSchema>
<codecName>rs-legacy</codecName>
<dataUnits>6</dataUnits>
<parityUnits>3</parityUnits>
</ecSchema>
</erasureCodingPolicy>
<erasureCodingPolicy>
<policyId>4</policyId>
<policyName>XOR-2-1-1024k</policyName>
<cellSize>1048576</cellSize>
<policyState>DISABLED</policyState>
<ecSchema>
<codecName>xor</codecName>
<dataUnits>2</dataUnits>
<parityUnits>1</parityUnits>
</ecSchema>
</erasureCodingPolicy>
<erasureCodingPolicy>
<policyId>5</policyId>
<policyName>RS-10-4-1024k</policyName>
<cellSize>1048576</cellSize>
<policyState>DISABLED</policyState>
<ecSchema>
<codecName>rs</codecName>
<dataUnits>10</dataUnits>
<parityUnits>4</parityUnits>
</ecSchema>
</erasureCodingPolicy>
</ErasureCodingSection>
<INodeSection>
<lastInodeId>16394</lastInodeId>
<numInodes>10</numInodes>
<inode>
<id>16385</id>
<type>DIRECTORY</type>
<name></name>
<mtime>1721140769786</mtime>
<permission>ec2-user:supergroup:0755</permission>
<nsquota>9223372036854775807</nsquota>
<dsquota>-1</dsquota>
</inode>
<inode>
<id>16386</id>
<type>DIRECTORY</type>
<name>tmp</name>
<mtime>1721140090941</mtime>
<permission>ec2-user:supergroup:0770</permission>
<nsquota>-1</nsquota>
<dsquota>-1</dsquota>
</inode>
<inode>
<id>16387</id>
<type>DIRECTORY</type>
<name>hadoop-yarn</name>
<mtime>1721140090941</mtime>
<permission>ec2-user:supergroup:0770</permission>
<nsquota>-1</nsquota>
<dsquota>-1</dsquota>
</inode>
<inode>
<id>16388</id>
<type>DIRECTORY</type>
<name>staging</name>
<mtime>1721140090941</mtime>
<permission>ec2-user:supergroup:0770</permission>
<nsquota>-1</nsquota>
<dsquota>-1</dsquota>
</inode>
<inode>
<id>16389</id>
<type>DIRECTORY</type>
<name>history</name>
<mtime>1721140090985</mtime>
<permission>ec2-user:supergroup:0770</permission>
<nsquota>-1</nsquota>
<dsquota>-1</dsquota>
</inode>
<inode>
<id>16390</id>
<type>DIRECTORY</type>
<name>done</name>
<mtime>1721140090942</mtime>
<permission>ec2-user:supergroup:0770</permission>
<nsquota>-1</nsquota>
<dsquota>-1</dsquota>
</inode>
<inode>
<id>16391</id>
<type>DIRECTORY</type>
<name>done_intermediate</name>
<mtime>1721140090985</mtime>
<permission>ec2-user:supergroup:1777</permission>
<nsquota>-1</nsquota>
<dsquota>-1</dsquota>
</inode>
<inode>
<id>16392</id>
<type>DIRECTORY</type>
<name>renamed-folder-for-test</name>
<mtime>1721140766132</mtime>
<permission>ec2-user:supergroup:0755</permission>
<nsquota>-1</nsquota>
<dsquota>-1</dsquota>
</inode>
<inode>
<id>16393</id>
<type>FILE</type>
<name>upload-file-to-hdfs-test.txt</name>
<replication>2</replication>
<mtime>1721140765890</mtime>
<atime>1721140763390</atime>
<preferredBlockSize>134217728</preferredBlockSize>
<permission>ec2-user:supergroup:0644</permission>
<blocks>
<block>
<id>1073741825</id>
<genstamp>1001</genstamp>
<numBytes>98</numBytes>
</block>
</blocks>
<storagePolicyId>0</storagePolicyId>
</inode>
<inode>
<id>16394</id>
<type>FILE</type>
<name>input.txt</name>
<replication>2</replication>
<mtime>1721140767591</mtime>
<atime>1721140766132</atime>
<preferredBlockSize>134217728</preferredBlockSize>
<permission>ec2-user:supergroup:0644</permission>
<blocks>
<block>
<id>1073741826</id>
<genstamp>1002</genstamp>
<numBytes>11</numBytes>
</block>
</blocks>
<storagePolicyId>0</storagePolicyId>
</inode>
</INodeSection>
<INodeReferenceSection></INodeReferenceSection>
<SnapshotSection>
<snapshotCounter>0</snapshotCounter>
<numSnapshots>0</numSnapshots>
</SnapshotSection>
<INodeDirectorySection>
<directory>
<parent>16385</parent>
<child>16392</child>
<child>16386</child>
</directory>
<directory>
<parent>16386</parent>
<child>16387</child>
</directory>
<directory>
<parent>16387</parent>
<child>16388</child>
</directory>
<directory>
<parent>16388</parent>
<child>16389</child>
</directory>
<directory>
<parent>16389</parent>
<child>16390</child>
<child>16391</child>
</directory>
<directory>
<parent>16392</parent>
<child>16394</child>
<child>16393</child>
</directory>
</INodeDirectorySection>
<FileUnderConstructionSection></FileUnderConstructionSection>
<SecretManagerSection>
<currentId>0</currentId>
<tokenSequenceNumber>0</tokenSequenceNumber>
<numDelegationKeys>0</numDelegationKeys>
<numTokens>0</numTokens>
</SecretManagerSection>
<CacheManagerSection>
<nextDirectiveId>1</nextDirectiveId>
<numDirectives>0</numDirectives>
<numPools>0</numPools>
</CacheManagerSection>
</fsimage>
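The INodeSection above lists every inode flat (id, type, name), while the INodeDirectorySection links parent inodes to their children; joining the two reconstructs the full paths. A minimal sketch against a trimmed-down inline sample (parse the real fsimage_0000000000000000021.xml the same way with ET.parse):

```python
import xml.etree.ElementTree as ET

# Rebuild full paths from an oiv XML dump: INodeSection maps id -> name,
# INodeDirectorySection maps parent -> children. The inline sample is a
# cut-down version of the dump above (root inode 16385 has empty name).
sample = """<fsimage>
  <INodeSection>
    <inode><id>16385</id><type>DIRECTORY</type><name></name></inode>
    <inode><id>16392</id><type>DIRECTORY</type><name>renamed-folder-for-test</name></inode>
    <inode><id>16394</id><type>FILE</type><name>input.txt</name></inode>
  </INodeSection>
  <INodeDirectorySection>
    <directory><parent>16385</parent><child>16392</child></directory>
    <directory><parent>16392</parent><child>16394</child></directory>
  </INodeDirectorySection>
</fsimage>"""
root = ET.fromstring(sample)
names = {int(i.findtext("id")): i.findtext("name") or ""
         for i in root.iter("inode")}
parent = {}
for d in root.iter("directory"):
    p = int(d.findtext("parent"))
    for c in d.findall("child"):
        parent[int(c.text)] = p

def full_path(inode_id):
    """Walk parent links up to the root inode, collecting name segments."""
    parts = []
    while inode_id in parent:
        parts.append(names[inode_id])
        inode_id = parent[inode_id]
    return "/" + "/".join(reversed(parts))

print(full_path(16394))   # /renamed-folder-for-test/input.txt
```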
3 Hadoop HDFS CheckPoint Interval Settings
Recall that when inspecting the Fsimage file with oiv above, we had to trigger the CheckPoint manually. So what are the default execution period and trigger conditions for CheckPoint?
1) Normally, the SecondaryNameNode performs a CheckPoint once an hour. This is configured in [hdfs-default.xml]:
<property>
<name>dfs.namenode.checkpoint.period</name>
<value>3600s</value>
</property>
2) The operation count is checked once a minute; when it reaches one million operations, the SecondaryNameNode performs a CheckPoint.
<property>
<name>dfs.namenode.checkpoint.txns</name>
<value>1000000</value>
<description>Number of operations</description>
</property>
<property>
<name>dfs.namenode.checkpoint.check.period</name>
<value>60s</value>
<description>Check the operation count once every minute</description>
</property>
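The two settings combine into a simple either/or trigger: a CheckPoint fires when the period has elapsed or the uncheckpointed transaction count crosses the threshold, whichever comes first. A toy sketch of that scheduling logic (a hypothetical simplification; the real decision is made inside the SecondaryNameNode's Java code):

```python
# Toy model of the two CheckPoint triggers configured above.
CHECKPOINT_PERIOD_S = 3600      # dfs.namenode.checkpoint.period
CHECKPOINT_TXNS = 1_000_000     # dfs.namenode.checkpoint.txns

def should_checkpoint(seconds_since_last, uncheckpointed_txns):
    """True if either the hourly period or the txn threshold is reached."""
    return (seconds_since_last >= CHECKPOINT_PERIOD_S
            or uncheckpointed_txns >= CHECKPOINT_TXNS)

print(should_checkpoint(120, 5))          # False: neither condition met
print(should_checkpoint(3600, 5))         # True: period elapsed
print(should_checkpoint(120, 1_000_000))  # True: txn threshold hit
```

This check itself runs on a third timer, dfs.namenode.checkpoint.check.period (60 s by default), which is why a busy cluster can checkpoint long before the hour is up.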