Hadoop 3.x High Availability (HA) Configuration

Before configuring anything, let's think about why we need high availability in the first place.

Reason:
Once the NameNode goes down, the whole cluster stops working. The SecondaryNameNode only offloads part of the NameNode's work (merging edit logs into the fsimage); it cannot take over as a replacement. What we need is a node that can stand in for the NameNode when it fails, and that is exactly what HA configuration provides.

Official documentation: see the Apache Hadoop guide "HDFS High Availability Using the Quorum Journal Manager".

Prerequisites:

  1. Your Hadoop cluster already starts successfully; we modify it from that baseline.
  2. The environment variables for this Hadoop installation must be configured, otherwise the initialization steps below will fail.
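
For reference, a minimal environment-variable setup might look like this (a sketch; the file name my_env.sh is arbitrary, and the path matches the install directory used later in this post):

# /etc/profile.d/my_env.sh (hypothetical file name; any profile script works)
export HADOOP_HOME=/opt/module/HA/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# reload and verify
source /etc/profile.d/my_env.sh
hadoop version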

The role of the JournalNode process

Before we start, let's look at what a JournalNode is and what it does; this makes HA much easier to understand.

It is a daemon whose main job is to share data between the NameNodes. In more detail:

To stay synchronized, the two NameNodes communicate through a group of independent daemons called JournalNodes. Whenever the active NameNode modifies its namespace, it records the change on a majority of the JournalNodes. The standby NameNode can read those changes from the JNs: it constantly watches the edit log for modifications and applies them to its own namespace. This ensures that if the cluster fails over, the namespace state is already fully synchronized.
(Figure: JournalNode data flow.) In the middle sit the JournalNodes: the active NameNode writes edit-log data into them, and the standby reads that data back to stay in sync, so both machines end up storing the same state.
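
Once the cluster is running you can see this shared data on disk: each JournalNode keeps a directory per nameservice under dfs.journalnode.edits.dir (a sketch, using the path configured below; the exact layout may vary by version):

ls /opt/module/HA/hadoop-3.1.3/data/jn/mycluster/current
# the edits_... segment files here are what the active NameNode writes
# and the standby NameNode reads back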

hdfs-site.xml

<configuration>
<!-- Nameservice ID: the logical name of the HA namespace -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>

<!-- IDs of the NameNodes in this nameservice -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<!-- RPC addresses -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>hadoop202:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>hadoop203:8020</value>
</property>
<!-- HTTP addresses -->
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>hadoop202:9870</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>hadoop203:9870</value>
</property>
<!-- Shared edits directory on the JournalNode quorum -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
 <value>qjournal://hadoop202:8485;hadoop203:8485;hadoop204:8485/mycluster</value>
</property>
<!-- Proxy provider clients use to find the active NameNode -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing: sshfence logs in without a password, so point it at the SSH private key -->
<property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
</property>
<property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/root/.ssh/id_rsa</value>
</property>
<!-- Directory where each JournalNode stores its edits (this property belongs in hdfs-site.xml) -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/module/HA/hadoop-3.1.3/data/jn</value>
</property>
</configuration>
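
Since sshfence logs into the other NameNode host over SSH, make sure passwordless SSH works between the two NameNode machines before relying on it (a sketch, using the hostnames from the config above):

# on hadoop202 (run the mirror-image commands on hadoop203)
ssh-keygen -t rsa            # skip if /root/.ssh/id_rsa already exists
ssh-copy-id root@hadoop203   # push the public key to the other NameNode
ssh root@hadoop203 hostname  # should print hadoop203 without asking for a password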

core-site.xml

<configuration>
<!-- Default filesystem: point clients at the nameservice, not a single NameNode -->
 <property>
 <name>fs.defaultFS</name>
 <value>hdfs://mycluster</value>
 </property>
 <!-- Hadoop data storage directory -->
 <property>
 <name>hadoop.tmp.dir</name>
 <value>/opt/module/HA/hadoop-3.1.3/data</value>
 </property>
<!-- User the HDFS web UI operates as for static pages -->
<property>
 <name>hadoop.http.staticuser.user</name>
 <value>root</value>
 </property>
</configuration>
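
After editing the two files, remember to distribute them to every node in the cluster, otherwise the nodes will disagree about the HA settings (a sketch, assuming the same install path on hadoop203 and hadoop204):

for host in hadoop203 hadoop204; do
  scp /opt/module/HA/hadoop-3.1.3/etc/hadoop/core-site.xml \
      /opt/module/HA/hadoop-3.1.3/etc/hadoop/hdfs-site.xml \
      root@$host:/opt/module/HA/hadoop-3.1.3/etc/hadoop/
done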

The settings above can be copied straight from the official document; here I configured two NameNodes.
(The JournalNode-related settings should probably only be needed on the NameNode machines; I configured them on all three nodes here. If you have time, you can test this yourself.)

Manual failover setup:

1. HA relies on the JournalNodes for data sharing, so start the journalnode daemon first:

sbin/hadoop-daemon.sh start journalnode
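
With three JournalNodes, the daemon needs to be running on each of hadoop202, hadoop203 and hadoop204; a loop over SSH saves some typing (a sketch, assuming passwordless SSH and the same install path everywhere):

for host in hadoop202 hadoop203 hadoop204; do
  ssh root@$host /opt/module/HA/hadoop-3.1.3/sbin/hadoop-daemon.sh start journalnode
done
# verify: jps on each host should now show a JournalNode process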

2. On [nn1], format the NameNode and start it (only needs to be done once):

bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode

3. On [nn2], synchronize nn1's metadata (only needs to be done once):

bin/hdfs namenode -bootstrapStandby

4. Start [nn2]:

sbin/hadoop-daemon.sh start namenode

5. Start all the DataNodes:

sbin/hadoop-daemons.sh start datanode

6. Transition [nn1] to Active:

bin/hdfs haadmin -transitionToActive nn1

7. Check whether it is Active:

bin/hdfs haadmin -getServiceState nn1
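
The command prints the state of the given NameNode, so check both sides (expected output shown as comments):

bin/hdfs haadmin -getServiceState nn1 # active
bin/hdfs haadmin -getServiceState nn2 # standby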

Note: with a manual setup, if the active NameNode machine goes down, the state cannot be switched over (Active/Standby), because haadmin can no longer communicate with it!
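
If you do end up in that situation, haadmin can skip the check on the unreachable NameNode with the --forceactive option; use it with care, since it bypasses the split-brain safety check (a sketch; verify the flag against haadmin's usage output on your version):

bin/hdfs haadmin -transitionToActive --forceactive nn2 # promote nn2 while nn1 is down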


Automatic failover (relies on ZooKeeper)

hdfs-site.xml

<!-- Enable automatic failover -->
<property>
	<name>dfs.ha.automatic-failover.enabled</name>
	<value>true</value>
</property>

core-site.xml

<!-- ZooKeeper quorum. Double-check this entry: the hosts and port must match your actual
     ZooKeeper cluster (I got this wrong and wasted a lot of time) -->
<property>
	<name>ha.zookeeper.quorum</name>
	<value>hadoop202:2181,hadoop203:2181,hadoop204:2181</value>
</property>
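
Before moving on, confirm that every quorum member really answers on that host:port (a sketch, run from the ZooKeeper installation directory):

bin/zkServer.sh status               # each node should report Mode: leader or Mode: follower
bin/zkCli.sh -server hadoop202:2181  # a successful connection proves host and port are right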

How automatic failover works: the switch is coordinated through ZooKeeper, with a ZKFC process monitoring each NameNode (the mechanism resembles a distributed lock).

Component introduction (quoted from the official documentation):

Automatic failover adds two new components to an HDFS deployment: a ZooKeeper quorum, and the ZKFailoverController process (abbreviated as ZKFC).

Apache ZooKeeper is a highly available service for maintaining small amounts of coordination data, notifying clients of changes in that data, and monitoring clients for failures. The implementation of automatic HDFS failover relies on ZooKeeper for the following things:

Failure detection - each of the NameNode machines in the cluster maintains a persistent session in ZooKeeper. If the machine crashes, the ZooKeeper session will expire, notifying the other NameNode(s) that a failover should be triggered.

Active NameNode election - ZooKeeper provides a simple mechanism to exclusively elect a node as active. If the current active NameNode crashes, another node may take a special exclusive lock in ZooKeeper indicating that it should become the next active.

The ZKFailoverController (ZKFC) is a new component which is a ZooKeeper client which also monitors and manages the state of the NameNode. Each of the machines which runs a NameNode also runs a ZKFC, and that ZKFC is responsible for:

Health monitoring - the ZKFC pings its local NameNode on a periodic basis with a health-check command. So long as the NameNode responds in a timely fashion with a healthy status, the ZKFC considers the node healthy. If the node has crashed, frozen, or otherwise entered an unhealthy state, the health monitor will mark it as unhealthy.

ZooKeeper session management - when the local NameNode is healthy, the ZKFC holds a session open in ZooKeeper. If the local NameNode is active, it also holds a special “lock” znode. This lock uses ZooKeeper’s support for “ephemeral” nodes; if the session expires, the lock node will be automatically deleted.

ZooKeeper-based election - if the local NameNode is healthy, and the ZKFC sees that no other node currently holds the lock znode, it will itself try to acquire the lock. If it succeeds, then it has “won the election”, and is responsible for running a failover to make its local NameNode active. The failover process is similar to the manual failover described above: first, the previous active is fenced if necessary, and then the local NameNode transitions to active state.

Startup
(1) Stop all HDFS services:

sbin/stop-dfs.sh

(2) Start the ZooKeeper cluster:

bin/zkServer.sh start

(3) Initialize the HA state in ZooKeeper:

bin/hdfs zkfc -formatZK # after initialization, a hadoop-ha znode appears in ZooKeeper
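
You can check the new znode from the ZooKeeper CLI (a sketch; mycluster is the nameservice ID configured earlier):

bin/zkCli.sh
ls /            # the listing should now include hadoop-ha
ls /hadoop-ha   # should contain the nameservice, e.g. [mycluster]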

(4) Start the HDFS services:

sbin/start-dfs.sh # with HA configured, this also brings up the journalnode and ZKFC daemons

(5) Start the DFSZKFailoverController on each NameNode node. Whichever machine starts it first wins the election, and its NameNode becomes the Active NameNode.

Manual start:
sbin/hadoop-daemon.sh start journalnode
sbin/hadoop-daemon.sh start zkfc
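
A quick way to prove automatic failover works end to end: kill the active NameNode and watch the standby take over (a sketch; <NameNode-PID> is a placeholder you read from jps):

jps                                    # find the NameNode PID on the active machine
kill -9 <NameNode-PID>                 # simulate a crash
bin/hdfs haadmin -getServiceState nn2  # after a few seconds this should report: active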

--------------------------------- Everything above configures HA for HDFS --------------------------------------------

Next, configure HA for YARN (the ResourceManager). The following properties go in yarn-site.xml:

    <!-- Shuffle service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
 
    <!-- Logical ID of the YARN HA cluster -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster-yarn1</value>
    </property>

    <!-- Declare the two ResourceManagers and their hosts -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop202</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop203</value>
    </property>
 
    <!-- ZooKeeper quorum address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop202:2181,hadoop203:2181,hadoop204:2181</value>
    </property>

    <!-- Enable automatic recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
 
    <!-- Store ResourceManager state in the ZooKeeper cluster (the default stores it on the filesystem) -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>

Start it:

sbin/start-yarn.sh

Check the ResourceManager state:

bin/yarn rmadmin -getServiceState rm1
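
As with the NameNodes, check both ResourceManagers; one should report active and the other standby (expected output shown as comments):

bin/yarn rmadmin -getServiceState rm1 # active (or standby)
bin/yarn rmadmin -getServiceState rm2 # the opposite state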

With the HDFS part already done, this step is very simple; see the official documentation for more detail.

Command summary:

sbin/hadoop-daemon.sh start journalnode # start the journalnode daemon
bin/hdfs namenode -format # format the namenode
sbin/hadoop-daemon.sh start namenode # start the newly formatted namenode first
bin/hdfs namenode -bootstrapStandby # on the other namenode, synchronize the metadata
sbin/hadoop-daemons.sh start datanode # start all the datanodes
bin/hdfs zkfc -formatZK # initialize ZKFC; creates the hadoop-ha znode in zk
sbin/hadoop-daemon.sh start zkfc # start zkfc on its own
sbin/start-dfs.sh # with automatic failover configured, this starts the journalnode, zkfc, namenode and datanode daemons for you
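
When everything is up, jps on each node should show the expected daemons (a sketch of what to look for; the exact mix depends on which roles you placed on which host):

jps
# NameNode hosts: NameNode, DFSZKFailoverController, JournalNode, QuorumPeerMain
# worker hosts:   DataNode, JournalNode, QuorumPeerMain
# plus ResourceManager / NodeManager on the YARN hosts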
