hadoop_hdfs07 - HDFS HA Cluster Configuration & ZK Cluster Configuration & YARN HA Configuration
Note: personal study notes.
(1) Cluster Planning
| hadoop102 | hadoop103 | hadoop104 |
|---|---|---|
| ZK | ZK | ZK |
| JournalNode | JournalNode | JournalNode |
| NameNode | NameNode | |
| DataNode | DataNode | DataNode |
| ResourceManager | ResourceManager | |
| NodeManager | NodeManager | NodeManager |
(2) Configure the ZooKeeper Cluster
- Download site: https://archive.apache.org/dist/
- Extract the ZK installation package
cmd+shift+p to enter sftp mode and drag the package in
[user02@hadoop102 software]$ tar -zxvf zookeeper-3.4.9.tar.gz -C /opt/module/
[user02@hadoop102 software]$ cd /opt/module/zookeeper-3.4.9/
[user02@hadoop102 zookeeper-3.4.9]$ mkdir -p zkData
- Configure the zoo.cfg file
[user02@hadoop102 zookeeper-3.4.9]$ mv ./conf/zoo_sample.cfg ./conf/zoo.cfg
[user02@hadoop102 conf]$ vim zoo.cfg
# Add the following configuration
dataDir=/opt/module/zookeeper-3.4.9/zkData
######cluster####
server.2=hadoop102:2888:3888
server.3=hadoop103:2888:3888
server.4=hadoop104:2888:3888
Explanation of server.2=hadoop102:2888:3888:
2: a number identifying this as server number 2.
hadoop102: the server's hostname (or IP address).
2888: the port on which the servers in the cluster exchange information with the Leader.
3888: if the cluster's Leader goes down, a new election is needed; this is the port the servers use to communicate with each other during that election.
In cluster mode a file named myid is placed in the dataDir directory. It contains a single value, e.g. "2" (server number 2). When ZooKeeper starts, it reads this file and compares its value with the server.X entries in zoo.cfg to work out which server it is.
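For reference, a sketch of the complete zoo.cfg after the change, assuming the remaining settings are left at the values shipped in zoo_sample.cfg:
```
# defaults kept from zoo_sample.cfg
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
# data directory (changed from /tmp/zookeeper)
dataDir=/opt/module/zookeeper-3.4.9/zkData
######cluster####
server.2=hadoop102:2888:3888
server.3=hadoop103:2888:3888
server.4=hadoop104:2888:3888
```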
- ZK cluster operations
- Create a myid file in the /opt/module/zookeeper-3.4.9/zkData directory
[user02@hadoop102 zkData]$ touch myid
[user02@hadoop102 zkData]$ vim myid
# add the number corresponding to this server
2
- Distribute to the other nodes and change the myid contents to 3 and 4 respectively (see the sketch below)
[user02@hadoop102 module]$ xsync zookeeper-3.4.9/
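Instead of editing each file by hand, the myid on the other two hosts can also be set remotely (a sketch, assuming the same passwordless ssh already used by xsync):
```
[user02@hadoop102 module]$ ssh hadoop103 'echo 3 > /opt/module/zookeeper-3.4.9/zkData/myid'
[user02@hadoop102 module]$ ssh hadoop104 'echo 4 > /opt/module/zookeeper-3.4.9/zkData/myid'
```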
- Start ZK on each node individually
[user02@hadoop102 zookeeper-3.4.9]$ bin/zkServer.sh start
[user02@hadoop103 zookeeper-3.4.9]$ bin/zkServer.sh start
[user02@hadoop104 zookeeper-3.4.9]$ bin/zkServer.sh start
- Check the status
[user02@hadoop102 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
[user02@hadoop103 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: leader
[user02@hadoop104 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
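To check all three nodes from one terminal, a small loop also works (a sketch, assuming passwordless ssh between the hosts and that the remote non-interactive shell can find java):
```
for host in hadoop102 hadoop103 hadoop104; do
    echo "== $host =="
    ssh $host '/opt/module/zookeeper-3.4.9/bin/zkServer.sh status'
done
```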
(3) Configure the HDFS-HA Cluster
- Reference: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
- Copy hadoop
[user02@hadoop104 module]$ mkdir ha
[user02@hadoop104 module]$ cp -r hadoop-2.7.2/ /opt/module/ha
- Configure hadoop-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_144
- Configure core-site.xml
<!-- Specify the NameNode address for HDFS -->
<!-- Assemble the two NameNode addresses into one logical cluster (nameservice) -->
<property>
    <name>fs.defaultFS</name>
    <!-- hdfs://hadoop102:9000 -->
    <value>hdfs://mycluster</value>
</property>
<!-- Directory where files generated at Hadoop runtime are stored -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/ha/hadoop-2.7.2/data/tmp</value>
</property>
- Configure hdfs-site.xml
<!-- Logical name of the fully distributed cluster (nameservice) -->
<property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
</property>
<!-- The NameNodes in the cluster -->
<property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>hadoop102:8020</value>
</property>
<!-- HTTP address of nn1 -->
<property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>hadoop102:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>hadoop103:8020</value>
</property>
<!-- HTTP address of nn2 -->
<property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>hadoop103:50070</value>
</property>
<!-- Location where the NameNode metadata (edits) is stored on the JournalNodes -->
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop102:8485;hadoop103:8485;hadoop104:8485/mycluster</value>
</property>
<!-- Fencing method, so that only one NameNode responds to clients at a time -->
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>
<!-- sshfence requires passwordless SSH -->
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/user02/.ssh/id_rsa</value>
</property>
<!-- JournalNode storage directory -->
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/module/ha/hadoop-2.7.2/data/jn</value>
</property>
<!-- Disable permission checking -->
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>
<!-- Failover proxy provider: how clients find the active NameNode of mycluster -->
<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
- Distribute the configuration to the other nodes
[user02@hadoop104 module]$ xsync ./ha
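After distributing, the merged configuration can be spot-checked with hdfs getconf (a quick sanity-check sketch):
```
# should print the nameservice name configured above (mycluster)
[user02@hadoop102 hadoop-2.7.2]$ bin/hdfs getconf -confKey dfs.nameservices
# should list the two NameNode hosts, hadoop102 and hadoop103
[user02@hadoop102 hadoop-2.7.2]$ bin/hdfs getconf -namenodes
```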
(4) Start the HDFS-HA Cluster (one daemon at a time)
- Start the journalnode service on each JournalNode node
Note: start via the sbin path; there are two hdfs installations (the original one and the HA copy), so work from the HA copy
[user02@hadoop102 ~]$ cd /opt/module/ha/hadoop-2.7.2/
[user02@hadoop103 ~]$ cd /opt/module/ha/hadoop-2.7.2/
[user02@hadoop104 ~]$ cd /opt/module/ha/hadoop-2.7.2/
[user02@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start journalnode
[user02@hadoop103 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start journalnode
[user02@hadoop104 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start journalnode
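A quick jps on each host confirms the daemons are up:
```
[user02@hadoop102 hadoop-2.7.2]$ jps
# a JournalNode process should now appear on hadoop102, hadoop103 and hadoop104
```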
- Format [nn1] and start it (one NameNode is formatted, the other synchronizes from it)
First delete the data and logs folders
[user02@hadoop102 hadoop-2.7.2]$ rm -rf data/ logs/
[user02@hadoop103 hadoop-2.7.2]$ rm -rf data/ logs/
[user02@hadoop104 hadoop-2.7.2]$ rm -rf data/ logs/
Format nn1
[user02@hadoop102 hadoop-2.7.2]$ bin/hdfs namenode -format
Start nn1
[user02@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
- On [nn2], synchronize nn1's metadata and start it
[user02@hadoop103 hadoop-2.7.2]$ bin/hdfs namenode -bootstrapStandby
[user02@hadoop103 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
- Check the web pages: hadoop102:50070 and hadoop103:50070 are both in standby state
- Start all the DataNodes
[user02@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode
[user02@hadoop103 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode
[user02@hadoop104 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode
- Switch [nn1] to active
[user02@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToActive nn1
- Check whether it is active
[user02@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn1
active
(5) Configure HDFS-HA Automatic Failover
- Background
With manual failover only: after a kill -9 of the NameNode on hadoop102, trying to switch nn2 to active is rejected with a connection-refused error. One NameNode is down and cannot be contacted, so the switch is refused; you first have to bring the hadoop102 NameNode back up so it is standby, and only then switch. (This is manual switching.)
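A sketch of that sequence, with the NameNode pid left as a placeholder:
```
# kill the active NameNode on hadoop102
[user02@hadoop102 hadoop-2.7.2]$ kill -9 <NameNode pid>
# refused: haadmin cannot confirm that nn1 is no longer active because the process is gone
[user02@hadoop103 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToActive nn2
# restart nn1 (it comes back as standby), then the manual switch succeeds
[user02@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
[user02@hadoop103 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToActive nn2
```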
- Configuration
1) hdfs-site.xml
```
<!-- Enable HDFS-HA automatic failover -->
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
```
[user02@hadoop102 hadoop]$ xsync ./hdfs-site.xml
2) core-site.xml
```
<!-- ZooKeeper quorum used for HDFS-HA automatic failover -->
<property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
</property>
```
[user02@hadoop102 hadoop]$ xsync ./core-site.xml
- Startup
1) Stop all HDFS services
```
sbin/stop-dfs.sh
```
2) Start the ZK cluster: bin/zkServer.sh start
```
[user02@hadoop102 ~]$ cd /opt/module/zookeeper-3.4.9/
[user02@hadoop102 zookeeper-3.4.9]$ bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[user02@hadoop102 zookeeper-3.4.9]$ jps
3351 Jps
3326 QuorumPeerMain
(start it on all three machines)
```
3) Initialize the HA state in ZK: bin/hdfs zkfc -formatZK
```
[user02@hadoop102 zookeeper-3.4.9]$ cd /opt/module/ha/hadoop-2.7.2/
[user02@hadoop102 hadoop-2.7.2]$ bin/hdfs zkfc -formatZK
```
/opt/module/zookeeper-3.4.9/bin/zkServer.sh status shows Mode: follower
4) Start the HDFS services: sbin/start-dfs.sh
```
[user02@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
Starting namenodes on [hadoop102 hadoop103]
hadoop102: starting namenode, logging to /opt/module/ha/hadoop-2.7.2/logs/hadoop-user02-namenode-hadoop102.out
hadoop103: starting namenode, logging to /opt/module/ha/hadoop-2.7.2/logs/hadoop-user02-namenode-hadoop103.out
hadoop103: starting datanode, logging to /opt/module/ha/hadoop-2.7.2/logs/hadoop-user02-datanode-hadoop103.out
hadoop102: starting datanode, logging to /opt/module/ha/hadoop-2.7.2/logs/hadoop-user02-datanode-hadoop102.out
hadoop104: starting datanode, logging to /opt/module/ha/hadoop-2.7.2/logs/hadoop-user02-datanode-hadoop104.out
Starting journal nodes [hadoop102 hadoop103 hadoop104]
hadoop104: starting journalnode, logging to /opt/module/ha/hadoop-2.7.2/logs/hadoop-user02-journalnode-hadoop104.out
hadoop102: starting journalnode, logging to /opt/module/ha/hadoop-2.7.2/logs/hadoop-user02-journalnode-hadoop102.out
hadoop103: starting journalnode, logging to /opt/module/ha/hadoop-2.7.2/logs/hadoop-user02-journalnode-hadoop103.out
Starting ZK Failover Controllers on NN hosts [hadoop102 hadoop103]
hadoop103: starting zkfc, logging to /opt/module/ha/hadoop-2.7.2/logs/hadoop-user02-zkfc-hadoop103.out
hadoop102: starting zkfc, logging to /opt/module/ha/hadoop-2.7.2/logs/hadoop-user02-zkfc-hadoop102.out
```
5) To start the DFSZKFailoverController manually, run it on each NameNode node; whichever machine starts it first, that machine's NameNode becomes the Active NameNode
```
sbin/hadoop-daemon.sh start zkfc
```
Open /opt/module/zookeeper-3.4.9/bin/zkCli.sh and run ls /: an extra hadoop-ha znode now appears
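What that looks like from the ZooKeeper client once the ZKFCs are running (a sketch; exact output may vary):
```
[user02@hadoop102 zookeeper-3.4.9]$ bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, hadoop-ha]
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha/mycluster
[ActiveBreadCrumb, ActiveStandbyElectorLock]
```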
- Verification
Visit http://hadoop102:50070/dfshealth.html#tab-overview --- active
Visit http://hadoop103:50070/dfshealth.html#tab-overview --- standby
After killing the active NameNode, the standby switches to active immediately.
1) Kill the Active NameNode process
```
kill -9 <NameNode pid>
```
2) Disconnect the Active NameNode machine from the network
```
service network stop
```
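The state change can also be confirmed on the command line with the same haadmin command used earlier:
```
[user02@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn1
[user02@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn2
# one reports active and the other standby (the killed one will not respond until it is restarted)
```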
- Processes on each node
```
[user02@hadoop102 sbin]$ jps
3424 QuorumPeerMain    -------- ZK cluster process
3666 NameNode
3779 DataNode
3978 JournalNode
4170 DFSZKFailoverController
4284 Jps
[user02@hadoop103 bin]$ jps
3586 JournalNode
3491 DataNode
3412 NameNode
3326 QuorumPeerMain
3806 Jps
5188 DFSZKFailoverController
[user02@hadoop104 bin]$ jps
3553 Jps
3475 JournalNode
3380 DataNode
3305 QuorumPeerMain
```
(6) YARN-HA Configuration
- Configuration details
- yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
    <!-- How the Reducer obtains data -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Address of the single YARN ResourceManager (replaced by the HA settings below) -->
    <!--
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop103</value>
    </property>
    -->
    <!-- yarn-ha -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster-yarn1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop102</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop103</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>hadoop102:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>hadoop103:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
    </property>
    <!-- Enable automatic recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <!-- Store the ResourceManager state information in ZooKeeper -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Retain logs for 7 days -->
    <property>
        <name>yarn.nodemanager.log.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>
- Distribute with xsync
- Start HDFS
- Start the journalnode service on the three JournalNode nodes
sbin/hadoop-daemon.sh start journalnode
- On nn1, format the NameNode and start it
First run rm -rf ./data ./logs
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
- On nn2, synchronize nn1's metadata
bin/hdfs namenode -bootstrapStandby
- Start nn2
sbin/hadoop-daemon.sh start namenode
- Start all the DataNodes
sbin/hadoop-daemon.sh start datanode
- Switch nn1 to active
# Without automatic failover configured, run:
bin/hdfs haadmin -transitionToActive nn1
# With automatic failover configured, start the zkfc service on both NameNode nodes; whichever starts first becomes active:
sbin/hadoop-daemon.sh start zkfc
- Start YARN
- On hadoop102, run
sbin/start-yarn.sh
- On hadoop103, run
sbin/yarn-daemon.sh start resourcemanager
- Check the service state
bin/yarn rmadmin -getServiceState rm1
Check the web UI at hadoop102:8088. The YARN web UI does not show which node is active and which is standby; after killing one ResourceManager process, the other automatically switches to active.
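Since the web UI does not label the active ResourceManager, rmadmin is the reliable way to check both:
```
[user02@hadoop102 hadoop-2.7.2]$ bin/yarn rmadmin -getServiceState rm1
[user02@hadoop102 hadoop-2.7.2]$ bin/yarn rmadmin -getServiceState rm2
# one reports active and the other standby; which is which depends on start order and failovers
```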
(7) Port Numbers
| Service | Port | Property |
|---|---|---|
| SecondaryNameNode HTTP address | 50090 | dfs.namenode.secondary.http-address |
| NameNode HTTP address | 50070 | dfs.namenode.http-address.mycluster.nn1 |
| NameNode RPC address | 9000 or 8020 | dfs.namenode.rpc-address.mycluster.nn1 |
| YARN web UI address | 8088 | yarn.resourcemanager.webapp.address.rm1 |
| ZooKeeper client port | 2181 | ha.zookeeper.quorum |