1. Cluster Architecture
2. Cluster Planning
            namenode   datanode   journalnode   zkfc   zookeeper
bigdata01   yes                   yes           yes    yes
bigdata02   yes        yes        yes           yes    yes
bigdata03              yes        yes                  yes
For an HDFS HA cluster, only the HDFS-related processes need to be started; the YARN processes can be left off, since the two sets of processes are independent of each other.
In an HDFS HA cluster, the SecondaryNameNode process is not needed.
① namenode: the HDFS master node
② datanode: the HDFS worker node
③ journalnode: the JournalNode process, which synchronizes edits information between the NameNodes
④ zkfc (DFSZKFailoverController): monitors NameNode health and handles switching the NameNodes' states
⑤ zookeeper (QuorumPeerMain): stores the HA cluster's node state information
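As a quick reference, the plan above implies a specific set of Java processes that `jps` should report on each host once everything is running (derived from the planning table; adjust if your own plan differs):

```shell
# Expected `jps` output per host, derived from the planning table above.
# DataNode placement follows the `workers` file configured later (bigdata02/03).
expected_procs() {
  cat <<'EOF'
bigdata01: NameNode JournalNode DFSZKFailoverController QuorumPeerMain
bigdata02: NameNode DataNode JournalNode DFSZKFailoverController QuorumPeerMain
bigdata03: DataNode JournalNode QuorumPeerMain
EOF
}
expected_procs
```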
Environment preparation, three nodes:
bigdata01 192.168.182.100
bigdata02 192.168.182.101
bigdata03 192.168.182.102
The basic environment must be prepared in advance: IP addresses, hostnames, firewalld, JDK, passwordless SSH login, and so on.
Note: during a NameNode failover, the two NameNode hosts connect to each other over ssh, so passwordless login between them is required.
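A minimal sketch of that passwordless-ssh prerequisite, assuming the root user and a key already present at `~/.ssh/id_rsa` (create one with `ssh-keygen -t rsa` if missing). The sketch only prints the `ssh-copy-id` commands; remove the `echo` to actually run them on a real host:

```shell
# Print the commands that push this host's public key to both NameNode hosts
# (remove `echo` to execute them; run this on each NameNode host in turn).
copy_cmds() {
  for host in bigdata01 bigdata02; do
    echo ssh-copy-id "root@$host"
  done
}
copy_cmds
```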
2.1 Node Planning
Build a ZooKeeper cluster on the three nodes:
bigdata01
bigdata02
bigdata03
2.2 Configure ZooKeeper
1. Extract the installation package
[root@bigdata01 soft]# tar -zxvf apache-zookeeper-3.5.8-bin.tar.gz
2. Modify the configuration
[root@bigdata01 soft]# cd apache-zookeeper-3.5.8-bin/conf/
[root@bigdata01 conf]# mv zoo_sample.cfg zoo.cfg
Set the following in zoo.cfg:
dataDir=/data/soft/apache-zookeeper-3.5.8-bin/data
server.0=bigdata01:2888:3888
server.1=bigdata02:2888:3888
server.2=bigdata03:2888:3888
Create the directory that holds the myid file, and write the node's ID into it.
The value in myid corresponds one-to-one with the number after `server.` in zoo.cfg.
Number 0 maps to the machine bigdata01, so 0 is written here.
[root@bigdata01 conf]# cd /data/soft/apache-zookeeper-3.5.8-bin
[root@bigdata01 apache-zookeeper-3.5.8-bin]# mkdir data
[root@bigdata01 apache-zookeeper-3.5.8-bin]# cd data
# write 0 into myid
[root@bigdata01 data]# echo 0 > myid
3. Copy the configured ZooKeeper directory to the other two nodes
[root@bigdata01 soft]# scp -rq apache-zookeeper-3.5.8-bin bigdata02:/data/soft/
[root@bigdata01 soft]# scp -rq apache-zookeeper-3.5.8-bin bigdata03:/data/soft/
4. Modify the content of the myid file on bigdata02 and bigdata03
# bigdata02
[root@bigdata02 ~]# cd /data/soft/apache-zookeeper-3.5.8-bin/data/
[root@bigdata02 data]# echo 1 > myid
# bigdata03
[root@bigdata03 ~]# cd /data/soft/apache-zookeeper-3.5.8-bin/data/
[root@bigdata03 data]# echo 2 > myid
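The host-to-myid correspondence can be checked mechanically. This one-liner is pure text processing and safe to run anywhere; a here-document stands in for the real config file (on a node, read `conf/zoo.cfg` instead):

```shell
# Derive the expected "host -> myid" mapping from the server.N lines in zoo.cfg.
myid_map() {
  awk -F'[.=:]' '/^server\./ {print $3, "-> myid", $2}' <<'EOF'
server.0=bigdata01:2888:3888
server.1=bigdata02:2888:3888
server.2=bigdata03:2888:3888
EOF
}
myid_map
# prints: bigdata01 -> myid 0, bigdata02 -> myid 1, bigdata03 -> myid 2
```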
5. Start the ZooKeeper cluster
Start the ZooKeeper process on bigdata01, bigdata02, and bigdata03:
# bigdata01
[root@bigdata01 apache-zookeeper-3.5.8-bin]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /data/soft/apache-zookeeper-3.5.8-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# bigdata02
[root@bigdata02 apache-zookeeper-3.5.8-bin]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /data/soft/apache-zookeeper-3.5.8-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# bigdata03
[root@bigdata03 apache-zookeeper-3.5.8-bin]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /data/soft/apache-zookeeper-3.5.8-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
6. Verify
Run the jps command on bigdata01, bigdata02, and bigdata03 to confirm that a QuorumPeerMain process exists.
If it does not, check the zookeeper*-*.out log file in the logs directory on the affected node.
Running `bin/zkServer.sh status` shows one node as leader and the other nodes as followers:
[root@bigdata01 apache-zookeeper-3.5.8-bin]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/soft/apache-zookeeper-3.5.8-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@bigdata02 apache-zookeeper-3.5.8-bin]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/soft/apache-zookeeper-3.5.8-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
[root@bigdata03 apache-zookeeper-3.5.8-bin]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/soft/apache-zookeeper-3.5.8-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
7. How to stop the ZooKeeper cluster
To stop the ZooKeeper cluster, run `bin/zkServer.sh stop` on each node.
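The per-node start/status/stop commands can be driven from one host over ssh (relying on the passwordless login set up earlier). This sketch only prints the commands; remove the `echo` to execute them for real:

```shell
# Run one zkServer.sh action (start/status/stop) on all three nodes.
ZK_HOME=/data/soft/apache-zookeeper-3.5.8-bin
zk_all() {
  for host in bigdata01 bigdata02 bigdata03; do
    echo ssh "$host" "$ZK_HOME/bin/zkServer.sh $1"
  done
}
zk_all stop
```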
2.3 Configure the Hadoop Cluster
1. Extract the Hadoop installation package
[root@bigdata01 soft]# tar -zxvf hadoop-3.2.0.tar.gz
2. Modify the Hadoop configuration files
[root@bigdata01 soft]# cd hadoop-3.2.0/etc/hadoop/
[root@bigdata01 hadoop]#
① hadoop-env.sh
Append the following environment variables at the end of the file:
[root@bigdata01 hadoop]# vi hadoop-env.sh
export JAVA_HOME=/data/soft/jdk1.8
export HADOOP_LOG_DIR=/data/hadoop_repo/logs/hadoop
② core-site.xml
[root@bigdata01 hadoop]# vi core-site.xml
<configuration>
<!-- mycluster is the logical name of the cluster; it must match the dfs.nameservices value in hdfs-site.xml -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop_repo</value>
</property>
<!-- static user for the web UI; omitting this causes errors on the web pages -->
<property>
<name>hadoop.http.staticuser.user</name>
<value>root</value>
</property>
<!-- ZooKeeper cluster address list -->
<property>
<name>ha.zookeeper.quorum</name>
<value>bigdata01:2181,bigdata02:2181,bigdata03:2181</value>
</property>
</configuration>
③ hdfs-site.xml
[root@bigdata01 hadoop]# vi hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<!-- the custom (logical) cluster name -->
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<!-- the list of all NameNodes, as logical IDs, not the hostnames they run on -->
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<!-- RPC addresses used to communicate with the NameNodes; the value is the host each NameNode runs on. -->
<!-- Default port 8020; note that mycluster and nn1 must match the earlier settings. -->
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>bigdata01:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>bigdata02:8020</value>
</property>
<!-- NameNode web UI addresses, default port 9870 -->
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>bigdata01:9870</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>bigdata02:9870</value>
</property>
<!-- JournalNode host addresses, at least three nodes, default port 8485 -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://bigdata01:8485;bigdata02:8485;bigdata03:8485/mycluster</value>
</property>
<!-- class that implements automatic client failover -->
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- fencing method used during failover (when the NameNodes switch active/standby): ssh -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<!-- change this to your own user's ssh private key path -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- directory where the JournalNodes store the edits, i.e. where NameNode changes are read from -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/data/hadoop_repo/journalnode</value>
</property>
<!-- enable automatic failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
mapred-site.xml and yarn-site.xml can be modified as needed; they are left unchanged here because only the HDFS-related services will be started.
④ workers
[root@bigdata01 hadoop]# vi workers
bigdata02
bigdata03
⑤ start-dfs.sh
[root@bigdata01 hadoop]# cd /data/soft/hadoop-3.2.0/sbin
[root@bigdata01 sbin]# vi start-dfs.sh
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_ZKFC_USER=root
HDFS_JOURNALNODE_USER=root
⑥ stop-dfs.sh
[root@bigdata01 sbin]# vi stop-dfs.sh
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_ZKFC_USER=root
HDFS_JOURNALNODE_USER=root
start-yarn.sh and stop-yarn.sh can be configured as needed; the YARN processes are not started here.
3. Copy the configured installation directory to the other nodes
[root@bigdata01 sbin]# cd /data/soft/
[root@bigdata01 soft]# scp -rq hadoop-3.2.0 bigdata02:/data/soft/
[root@bigdata01 soft]# scp -rq hadoop-3.2.0 bigdata03:/data/soft/
4. Format HDFS
This step only needs to be executed once, the first time HA is configured.
Note: all of the JournalNodes must be started before formatting HDFS:
[root@bigdata01 hadoop-3.2.0]# bin/hdfs --daemon start journalnode
[root@bigdata02 hadoop-3.2.0]# bin/hdfs --daemon start journalnode
[root@bigdata03 hadoop-3.2.0]# bin/hdfs --daemon start journalnode
Run the format on either of the NameNode hosts:
[root@bigdata01 hadoop-3.2.0]# bin/hdfs namenode -format
....
....
2026-02-07 00:35:06,212 INFO common.Storage: Storage directory /data/hadoop_repo/dfs/name has been successfully formatted.
2026-02-07 00:35:06,311 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop_repo/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2026-02-07 00:35:06,399 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop_repo/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 399 bytes saved in 0 seconds .
2026-02-07 00:35:06,405 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2026-02-07 00:35:06,432 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at bigdata01/192.168.182.100
************************************************************/
Seeing "has been successfully formatted" means the HDFS format succeeded.
5. Start the NameNode
[root@bigdata01 hadoop-3.2.0]# bin/hdfs --daemon start namenode
Then sync the metadata on the other NameNode host (bigdata02); output like the following indicates the sync succeeded:
[root@bigdata02 hadoop-3.2.0]# bin/hdfs namenode -bootstrapStandby
....
....
=====================================================
About to bootstrap Standby ID nn2 from:
Nameservice ID: mycluster
Other Namenode ID: nn1
Other NN's HTTP address: http://bigdata01:9870
Other NN's IPC address: bigdata01/192.168.182.100:8020
Namespace ID: 1820763709
Block pool ID: BP-1332041116-192.168.182.100-1770395706205
Cluster ID: CID-c12130ca-3a7d-4722-93b0-a79b0df3ed84
Layout version: -65
isUpgradeFinalized: true
=====================================================
2026-02-07 00:39:38,594 INFO common.Storage: Storage directory /data/hadoop_repo/dfs/name has been successfully formatted.
2026-02-07 00:39:38,654 INFO namenode.FSEditLog: Edit logging is async:true
2026-02-07 00:39:38,767 INFO namenode.TransferFsImage: Opening connection to http://bigdata01:9870/imagetransfer?getimage=1&txid=0&storageInfo=-65:1820763709:1770395706205:CID-c12130ca-3a7d-4722-93b0-a79b0df3ed84&bootstrapstandby=true
2026-02-07 00:39:38,854 INFO common.Util: Combined time for file download and fsync to all disks took 0.00s. The file download took 0.00s at 0.00 KB/s. Synchronous (fsync) write to disk of /data/hadoop_repo/dfs/name/current/fsimage.ckpt_0000000000000000000 took 0.00s.
2026-02-07 00:39:38,855 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 399 bytes.
2026-02-07 00:39:38,894 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at bigdata02/192.168.182.101
************************************************************/
6. Format the ZooKeeper node
This step also only needs to be executed once.
It can be run on any node:
[root@bigdata01 hadoop-3.2.0]# bin/hdfs zkfc -formatZK
....
....
2026-02-07 00:42:17,212 INFO zookeeper.ClientCnxn: Socket connection established to bigdata02/192.168.182.101:2181, initiating session
2026-02-07 00:42:17,220 INFO zookeeper.ClientCnxn: Session establishment complete on server bigdata02/192.168.182.101:2181, sessionid = 0x100001104b00098, negotiated timeout = 10000
2026-02-07 00:42:17,244 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
2026-02-07 00:42:17,249 INFO zookeeper.ZooKeeper: Session: 0x100001104b00098 closed
2026-02-07 00:42:17,251 WARN ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x100001104b00098
2026-02-07 00:42:17,251 INFO zookeeper.ClientCnxn: EventThread shut down for session: 0x100001104b00098
2026-02-07 00:42:17,254 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at bigdata01/192.168.182.100
************************************************************/
Seeing "Successfully created /hadoop-ha/mycluster in ZK." means the command succeeded.
7. Start the HDFS HA cluster
[root@bigdata01 hadoop-3.2.0]# sbin/start-dfs.sh
Starting namenodes on [bigdata01 bigdata02]
Last login: Sat Feb 7 00:02:27 CST 2026 on pts/0
bigdata01: namenode is running as process 6424. Stop it first.
Starting datanodes
Last login: Sat Feb 7 00:47:13 CST 2026 on pts/0
Starting journal nodes [bigdata01 bigdata03 bigdata02]
Last login: Sat Feb 7 00:47:13 CST 2026 on pts/0
bigdata02: journalnode is running as process 4864. Stop it first.
bigdata01: journalnode is running as process 6276. Stop it first.
bigdata03: journalnode is running as process 2479. Stop it first.
Starting ZK Failover Controllers on NN hosts [bigdata01 bigdata02]
Last login: Sat Feb 7 00:47:18 CST 2026 on pts/0
From now on, start the HA cluster with sbin/start-dfs.sh alone; steps 5 and 6 (the format steps) do not need to be repeated.
8. Verify the HA cluster
Now open port 9870 on both NameNode hosts; one should show as active and the other as standby:
http://bigdata01:9870/dfshealth.html
http://bigdata02:9870/dfshealth.html
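Besides the web UI, the HA state can be queried on the command line with `hdfs haadmin -getServiceState`, which takes the NameNode ID (nn1/nn2 from hdfs-site.xml). Since this needs a live cluster, the sketch below only prints the commands; remove the `echo` on a real node:

```shell
# Print the haadmin commands that report each NameNode's state (active/standby).
ha_state_cmds() {
  for nn in nn1 nn2; do
    echo bin/hdfs haadmin -getServiceState "$nn"
  done
}
ha_state_cmds
```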
9. Simulate a failover
Kill the active NameNode manually and verify that the standby automatically switches to active:
[root@bigdata01 hadoop-3.2.0]# jps
8758 DFSZKFailoverController
8267 NameNode
1581 QuorumPeerMain
8541 JournalNode
8814 Jps
[root@bigdata01 hadoop-3.2.0]# kill 8267
[root@bigdata01 hadoop-3.2.0]# jps
8758 DFSZKFailoverController
1581 QuorumPeerMain
8541 JournalNode
8845 Jps
Checking bigdata02 again now shows that its state has changed to active.
Next, start the NameNode on bigdata01 again; its state comes up as standby:
[root@bigdata01 hadoop-3.2.0]# bin/hdfs --daemon start namenode
[root@bigdata01 hadoop-3.2.0]# jps
8898 NameNode
8758 DFSZKFailoverController
8967 Jps
1581 QuorumPeerMain
8541 JournalNode
This verifies that HDFS high availability is working.
From now on, HDFS should be accessed like this,
where mycluster is the value of the dfs.nameservices property configured in hdfs-site.xml:
[root@bigdata02 hadoop-3.2.0]# bin/hdfs dfs -ls hdfs://mycluster/
[root@bigdata02 hadoop-3.2.0]# bin/hdfs dfs -put README.txt hdfs://mycluster/
[root@bigdata02 hadoop-3.2.0]# bin/hdfs dfs -ls hdfs://mycluster/
Found 1 items
-rw-r--r--   2 root supergroup       1361 2026-02-07 00:58 hdfs://mycluster/README.txt
10. Stop the HDFS cluster
[root@bigdata01 hadoop-3.2.0]# sbin/stop-dfs.sh
Stopping namenodes on [bigdata01 bigdata02]
Last login: Sat Feb 7 00:52:01 CST 2026 on pts/0
Stopping datanodes
Last login: Sat Feb 7 01:03:23 CST 2026 on pts/0
Stopping journal nodes [bigdata01 bigdata03 bigdata02]
Last login: Sat Feb 7 01:03:25 CST 2026 on pts/0
Stopping ZK Failover Controllers on NN hosts [bigdata01 bigdata02]
Last login: Sat Feb 7 01:03:29 CST 2026 on pts/0
3. HDFS Federation (Scalability)
HDFS Federation solves the problems of a single namespace by using multiple NameNodes, each responsible for one namespace.
This design provides the following properties:
1: HDFS cluster scalability. Each NameNode manages part of the directory tree, so the cluster can scale out to more nodes, and file storage is no longer constrained by a single NameNode's memory.
2: Higher performance. Multiple NameNodes manage different data and serve clients concurrently, giving users higher aggregate read/write throughput.
3: Good isolation. Data for different businesses can be assigned to different NameNodes as needed, so different workloads barely affect one another.
Federation is usually combined with HA.
The example here uses 4 NameNodes and 6 DataNodes:
NN-1, NN-2, NN-3, NN-4
DN-1, DN-2, DN-3, DN-4, DN-5, DN-6
NN-1 and NN-3 form an HA pair providing one namespace, /share;
NN-2 and NN-4 form an HA pair providing another namespace, /user.
When storing data later, the business type of the data decides whether it goes under the share directory or the user directory, and the total storage capacity of HDFS is the sum of the /share and /user namespaces.
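For illustration only: with Federation, a client addresses each namespace through its own nameservice URI, analogous to the hdfs://mycluster/ usage shown earlier. The IDs `ns-share` and `ns-user` below are hypothetical names for the two HA pairs described above (the commands are printed, not executed):

```shell
# Each federated namespace is reached via its own hdfs://<nameservice>/ URI.
fed_ls_cmds() {
  for ns in ns-share ns-user; do
    echo hdfs dfs -ls "hdfs://$ns/"
  done
}
fed_ls_cmds
```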