Deployment environment
| IP (hostname) | Services |
| --- | --- |
| 172.25.32.5 (server5) | namenode, hdfs, nfs |
| 172.25.32.6 (server6) | zookeeper (journalnode), hdfs, nfs |
| 172.25.32.7 (server7) | zookeeper (journalnode), hdfs, nfs |
| 172.25.32.8 (server8) | zookeeper (journalnode), hdfs, nfs |
| 172.25.32.9 (server9) | namenode, hdfs, nfs |
Before the experiment, clear the data under the /tmp/ directory and stop all Java processes.
## Taking server5 as an example; the other machines must be brought to the same state
[hadoop@server5 ~]$ ll /tmp/
total 0
[hadoop@server5 hadoop]$ jps
3922 Jps
Deployment
1. Unpack ZooKeeper on server5 and edit the configuration file to add the cluster node entries
[hadoop@server5 ~]$ tar zxf zookeeper-3.4.9.tar.gz
[hadoop@server5 ~]$ cd zookeeper-3.4.9
[hadoop@server5 zookeeper-3.4.9]$ cd conf
[hadoop@server5 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@server5 conf]$ vim zoo.cfg
dataDir=/tmp/zookeeper ## directory where myid is stored
clientPort=2181 ## default port
server.1=172.25.32.6:2888:3888
server.2=172.25.32.7:2888:3888
server.3=172.25.32.8:2888:3888
2. The three added nodes use the same configuration file. On each node, create a myid file in the /tmp/zookeeper directory containing a unique number in the range 1-255; the number must match the definition in the configuration file (server.1=172.25.32.6:2888:3888 means myid is 1 on 172.25.32.6), and so on for the other nodes. A sketch for distributing the configuration follows the commands below.
[hadoop@server6 conf]$ mkdir /tmp/zookeeper
[hadoop@server6 conf]$ echo 1 > /tmp/zookeeper/myid
[hadoop@server7 hadoop]$ mkdir /tmp/zookeeper
[hadoop@server7 hadoop]$ echo 2 > /tmp/zookeeper/myid
[hadoop@server8 ~]$ mkdir /tmp/zookeeper
[hadoop@server8 ~]$ echo 3 > /tmp/zookeeper/myid
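The same zoo.cfg has to be present on all three ZooKeeper nodes. In this lab the home directory appears to be shared over NFS (nfs is listed for every host in the table above), so no extra copy is needed; if it were not shared, the file could be pushed with scp, for example:
[hadoop@server5 conf]$ for h in 172.25.32.6 172.25.32.7 172.25.32.8; do
> scp zoo.cfg $h:/home/hadoop/zookeeper-3.4.9/conf/ ## copy the edited config to each ZooKeeper node
> done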
3. Start the service on all three nodes and check their status
[hadoop@server7 ~]$ cd zookeeper-3.4.9
[hadoop@server7 zookeeper-3.4.9]$ bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@server7 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: leader
[hadoop@server6 ~]$ cd zookeeper-3.4.9
[hadoop@server6 zookeeper-3.4.9]$ bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@server6 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
[hadoop@server8 ~]$ cd zookeeper-3.4.9
[hadoop@server8 zookeeper-3.4.9]$ bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@server8 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
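Besides running zkServer.sh status on each machine, the role of all three nodes can be checked from a single host with ZooKeeper's four-letter-word commands on the client port; a minimal sketch, assuming nc is available:
[hadoop@server6 ~]$ for h in 172.25.32.6 172.25.32.7 172.25.32.8; do
> echo -n "$h: "; echo stat | nc $h 2181 | grep Mode ## expect one leader and two followers, as above
> done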
4. Log in to ZooKeeper from any node and inspect it
[hadoop@server6 bin]$ pwd
/home/hadoop/zookeeper-3.4.9/bin
[hadoop@server6 bin]$ ./zkCli.sh ## connect to ZooKeeper
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /zookeeper
[quota]
[zk: localhost:2181(CONNECTED) 2] ls /zookeeper/quota ## no ZooKeeper information has been added yet, so there is nothing here
[]
[zk: localhost:2181(CONNECTED) 3]
Configuration on server5 and server9
[hadoop@server5 ~]$ cd /home/hadoop/hadoop/etc/hadoop/
[hadoop@server5 hadoop]$ vim core-site.xml
<configuration>
<!-- Specify masters as the HDFS namenode (nameservice) -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://masters</value>
</property>
<!-- Specify the ZooKeeper cluster host addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>172.25.32.6:2181,172.25.32.7:2181,172.25.32.8:2181</value>
</property>
</configuration>
[hadoop@server5 hadoop]$ vim hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<!-- Set the HDFS nameservice to masters, consistent with the setting in core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>masters</value>
</property>
<!-- masters has two namenode entries, h1 and h2 (the names are arbitrary) -->
<property>
<name>dfs.ha.namenodes.masters</name>
<value>h1,h2</value>
</property>
<!-- RPC address of node h1 -->
<property>
<name>dfs.namenode.rpc-address.masters.h1</name>
<value>172.25.32.5:9000</value>
</property>
<!-- HTTP address of node h1 -->
<property>
<name>dfs.namenode.http-address.masters.h1</name>
<value>172.25.32.5:9870</value>
</property>
<!-- RPC address of node h2 -->
<property>
<name>dfs.namenode.rpc-address.masters.h2</name>
<value>172.25.32.9:9000</value>
</property>
<!-- HTTP address of node h2 -->
<property>
<name>dfs.namenode.http-address.masters.h2</name>
<value>172.25.32.9:9870</value>
</property>
<!-- Location where the NameNode edit log (metadata) is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://172.25.32.6:8485;172.25.32.7:8485;172.25.32.8:8485/masters</value>
</property>
<!-- Local directory where the JournalNodes store their data -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/tmp/journaldata</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Configure the failover proxy provider used by clients -->
<property>
<name>dfs.client.failover.proxy.provider.masters</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Configure the fencing methods; one method per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- The sshfence method requires passwordless SSH -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<!-- Timeout for the sshfence method -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
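Before formatting anything, the HA-related keys can be sanity-checked with hdfs getconf; the values below simply echo what was written into the two files:
[hadoop@server5 hadoop]$ cd /home/hadoop/hadoop
[hadoop@server5 hadoop]$ bin/hdfs getconf -confKey dfs.nameservices
masters
[hadoop@server5 hadoop]$ bin/hdfs getconf -confKey dfs.ha.namenodes.masters
h1,h2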
- Start the HDFS cluster (start the components in order)
[hadoop@server6 bin]$ cd /home/hadoop/hadoop
[hadoop@server6 hadoop]$ bin/hdfs --daemon start journalnode
WARNING: /home/hadoop/hadoop-3.0.3/logs does not exist. Creating.
[hadoop@server6 hadoop]$ jps
4512 JournalNode
4551 Jps
4222 QuorumPeerMain
[hadoop@server7 zookeeper-3.4.9]$ cd /home/hadoop/hadoop
[hadoop@server7 hadoop]$ bin/hdfs --daemon start journalnode
[hadoop@server7 hadoop]$ jps
3504 QuorumPeerMain
3735 JournalNode
3774 Jps
[hadoop@server8 zookeeper-3.4.9]$ cd /home/hadoop/hadoop
[hadoop@server8 hadoop]$ bin/hdfs --daemon start journalnode
[hadoop@server8 hadoop]$ jps
3592 JournalNode
3369 QuorumPeerMain
3631 Jps
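With jps showing a JournalNode on each of the three hosts, it can additionally be confirmed that the edit-log RPC port 8485 (the one referenced in dfs.namenode.shared.edits.dir) is actually listening; a quick check on any of them:
[hadoop@server6 hadoop]$ ss -tnl | grep 8485 ## a LISTEN entry on port 8485 means the JournalNode is reachable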
- Format the HDFS cluster and transfer the metadata to build the highly available NameNode pair. The NameNode data is stored under /tmp by default, so it must be copied to h2
[hadoop@server5 hadoop]$ cd /home/hadoop/hadoop
[hadoop@server5 hadoop]$ bin/hdfs namenode -format
[hadoop@server9 ~]$ cd hadoop
[hadoop@server9 hadoop]$ bin/hdfs --daemon start datanode
[hadoop@server9 hadoop]$ jps
11507 DataNode
11529 Jps
//server5
[hadoop@server5 hadoop]$ scp -r /tmp/hadoop-hadoop 172.25.32.9:/tmp
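Copying /tmp/hadoop-hadoop by hand works here because the metadata lives under /tmp. Hadoop also ships a bootstrap command for this step; as a sketch, the standby NameNode (server9, i.e. h2) could instead be synchronized with:
[hadoop@server9 hadoop]$ bin/hdfs namenode -bootstrapStandby ## pulls the current namespace from the active NameNode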
- Format ZooKeeper (only needs to be run on h1)
[hadoop@server5 hadoop]$ bin/hdfs zkfc -formatZK
[hadoop@server5 hadoop]$ sbin/start-dfs.sh ## passwordless SSH keys must already be distributed
//After starting, check on server5 and server9 respectively:
Note: on the first check you may see only the DFSZKFailoverController, or only the NameNode process; just stop the services and start them again.
DFSZKFailoverController process status
[hadoop@server5 hadoop]$ jps
5607 NameNode
5959 DFSZKFailoverController
6010 Jps
[hadoop@server9 hadoop]$ jps
12208 Jps
11507 DataNode
12180 DFSZKFailoverController
JournalNode process status
[hadoop@server6 hadoop]$ jps
4512 JournalNode
4832 Jps
4717 DataNode
4222 QuorumPeerMain
[hadoop@server7 hadoop]$ jps
3504 QuorumPeerMain
3735 JournalNode
3915 DataNode
4030 Jps
[hadoop@server8 hadoop]$ jps
3592 JournalNode
3369 QuorumPeerMain
3772 DataNode
3886 Jps
NameNode nodes
DataNode nodes
The cluster setup is now complete.
- Testing
- Check the NameNode state on server5: it is active
http://172.25.32.5:9870
- Check the NameNode state on server9: it is standby
http://172.25.32.9:9870
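The same states can be read from the command line with hdfs haadmin, using the h1/h2 ids defined in hdfs-site.xml:
[hadoop@server5 hadoop]$ bin/hdfs haadmin -getServiceState h1 ## should report active
[hadoop@server5 hadoop]$ bin/hdfs haadmin -getServiceState h2 ## should report standby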
- Log in to ZooKeeper with the client and check the node information
[hadoop@server6 hadoop]$ cd ..
[hadoop@server6 ~]$ cd zookeeper-3.4.9
[hadoop@server6 zookeeper-3.4.9]$ bin/zkCli.sh ## the Hadoop HA information can be seen here
- Write data on the active node and check it
[hadoop@server5 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server5 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server5 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server5 hadoop]$ bin/hdfs dfs -ls
[hadoop@server5 hadoop]$ bin/hdfs dfs -put etc/hadoop/ input
View it in the browser:
The data was uploaded successfully.
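The upload can also be verified from the command line:
[hadoop@server5 hadoop]$ bin/hdfs dfs -ls input ## lists the files copied from etc/hadoop/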
Test automatic failover
1. After killing the namenode process on host h1, the filesystem is still accessible; h2 switches to the active state and takes over the namenode role.
View in the browser:
server9 changes to the active state and takes over the namenode role from server5.
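A minimal sketch of this failover test, using the NameNode PID reported by jps above (5607 on server5; it will differ on every run):
[hadoop@server5 hadoop]$ jps ## find the NameNode PID
[hadoop@server5 hadoop]$ kill -9 5607 ## kill the active NameNode on h1
[hadoop@server5 hadoop]$ bin/hdfs haadmin -getServiceState h2 ## should now report active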
YARN high-availability configuration
- Edit the mapred-site.xml file
[hadoop@server5 hadoop]$ cd /home/hadoop/hadoop/etc/hadoop/
[hadoop@server5 hadoop]$ vim mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
- Edit the yarn-site.xml file
[hadoop@server5 hadoop]$ vim yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>RM_CLUSTER</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>172.25.32.5</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>172.25.32.9</value>
</property>
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>172.25.32.6:2181,172.25.32.7:2181,172.25.32.8:2181</value>
</property>
</configuration>
- Start the YARN services
[hadoop@server9 hadoop]$ sbin/stop-dfs.sh
[hadoop@server9 hadoop]$ vim etc/hadoop/workers
[hadoop@server9 hadoop]$ cat etc/hadoop/workers ## server9 is no longer listed as a worker, since it acts as a NameNode
172.25.32.6
172.25.32.7
172.25.32.8
[hadoop@server9 hadoop]$ sbin/start-yarn.sh
Starting resourcemanagers on [ 172.25.32.5 172.25.32.9]
Starting nodemanagers
[hadoop@server9 hadoop]$ jps
18225 NameNode
18551 DFSZKFailoverController
19032 Jps
18890 ResourceManager
[hadoop@server5 hadoop]$ jps
14594 DFSZKFailoverController
14501 NameNode
14729 ResourceManager
14991 Jps
Check on the ZooKeeper (JournalNode) nodes:
View in the browser:
http://172.25.32.5:8088/cluster/cluster
server5 is in the active state
http://172.25.32.9:8088/cluster/cluster
server9 is in the standby state
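The ResourceManager states can also be queried with yarn rmadmin, using the rm1/rm2 ids from yarn-site.xml:
[hadoop@server5 hadoop]$ bin/yarn rmadmin -getServiceState rm1 ## should report active
[hadoop@server5 hadoop]$ bin/yarn rmadmin -getServiceState rm2 ## should report standby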
- Test YARN failover
Manually stop the ResourceManager on server5.
Checking in the browser, server9 becomes active.
Manually start the ResourceManager on server5 again; in the browser it now shows as standby.
[hadoop@server5 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server5 hadoop]$ sbin/yarn-daemon.sh start resourcemanager
server5 is now in the standby state
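Note that in Hadoop 3.x the yarn-daemon.sh script is deprecated; the equivalent command would be:
[hadoop@server5 hadoop]$ bin/yarn --daemon start resourcemanager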
Problem summary
While configuring Hadoop, a mistake in the configuration file (a mistyped IP address) caused the namenode to fail.
While configuring YARN high availability, both RM nodes ended up in the active state: after starting YARN on server5 with the script, I also started YARN on server9 by hand, so both were active. After killing the manually started process on server9, the RM nodes returned to normal.