Cluster Planning
Host | NameNode | DataNode | JournalNode | ZK | ResourceManager | NodeManager |
---|---|---|---|---|---|---|
k8s-node3 | Y | | Y | Y | Y | |
k8s-node6 | | Y | | Y | | Y |
k8s-node5 | | Y | Y | | | Y |
k8s-node8 | Y | | Y | Y | Y | |
Environment Preparation
- Set the hostname on each node and map hostnames to IP addresses (example commands for these preparation steps are sketched after this list)
Edit the /etc/hosts file and add the following entries:
192.168.0.52 k8s-node3
192.168.0.44 k8s-node6
192.168.0.109 k8s-node5
192.168.0.115 k8s-node8
- Disable the firewall
- Set up passwordless SSH login between the nodes
- Install the JDK and configure its environment variables
- Set up the ZooKeeper cluster
For the ZooKeeper cluster setup, see my blog post: ZooKeeper Study Notes: Introduction, Installation, and Configuration of ZooKeeper
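A minimal sketch of the preparation commands, assuming CentOS-style hosts running firewalld and a root account on every node (hostnames and IPs are the ones listed above):

# Set the hostname (run the matching command on each node)
hostnamectl set-hostname k8s-node3

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Passwordless SSH: generate a key once, then copy it to every node
ssh-keygen -t rsa
ssh-copy-id root@k8s-node3
ssh-copy-id root@k8s-node5
ssh-copy-id root@k8s-node6
ssh-copy-id root@k8s-node8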
Hadoop Cluster Configuration
Configure the Hadoop environment variables
/etc/profile.d/apache-hadoop.sh
export HADOOP_HOME=/home/software/hadoop-3.3.1/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
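Reload the profile in the current shell and verify that the hadoop command resolves:

source /etc/profile.d/apache-hadoop.sh
hadoop version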
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop3.3/</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>k8s-node3:2181,k8s-node6:2181,k8s-node8:2181</value>
  </property>
</configuration>
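hadoop.tmp.dir points at /home/hadoop3.3/, which is also reused below as the JournalNode edits directory; a small sketch (assuming root SSH access and the hostnames above) to pre-create it on every node:

for host in k8s-node3 k8s-node5 k8s-node6 k8s-node8; do
  ssh root@$host "mkdir -p /home/hadoop3.3"
done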
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>k8s-node3:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>k8s-node8:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>k8s-node3:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>k8s-node8:9870</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://k8s-node3:8485;k8s-node5:8485;k8s-node8:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop3.3/</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.nn.not-become-active-in-safemode</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
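The sshfence method requires that each NameNode host can SSH into the other with the private key configured above; a quick sanity check from k8s-node3 (assuming root and the key path from the config):

ssh -i /root/.ssh/id_rsa root@k8s-node8 hostname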
workers
The workers file lists the hosts that run a DataNode and a NodeManager:
k8s-node6
k8s-node5
hadoop-env.sh
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
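hadoop-env.sh is also the usual place to set JAVA_HOME explicitly, since the Hadoop scripts do not always inherit it from the login shell; a one-line sketch, where the JDK path is an assumption and must match your installation:

export JAVA_HOME=/usr/local/jdk1.8.0_291   # assumed path; point this at your actual JDK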
YARN Cluster Configuration
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Declare the two ResourceManager addresses -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster-yarn1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>k8s-node3</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>k8s-node8</value>
  </property>
  <!-- Address of the ZooKeeper ensemble -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>k8s-node3:2181,k8s-node6:2181,k8s-node8:2181</value>
  </property>
  <!-- Enable automatic recovery -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <!-- Store ResourceManager state in the ZooKeeper ensemble -->
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
</configuration>
After the configuration is complete, distribute the whole Hadoop directory to the other nodes:
[root@k8s-node3 hadoop]# scp -r hadoop-3.3.1 k8s-node5:$PWD/
[root@k8s-node3 hadoop]# scp -r hadoop-3.3.1 k8s-node6:$PWD/
[root@k8s-node3 hadoop]# scp -r hadoop-3.3.1 k8s-node8:$PWD/
Configure the Hadoop environment variables on the other nodes as well (/etc/profile.d/apache-hadoop.sh):
export HADOOP_HOME=/home/software/hadoop-3.3.1/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Initialize Hadoop
Run the following on the k8s-node3 server (namenode1):
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
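For an HA cluster the format step only succeeds if the JournalNodes are already running, and the standby NameNode and the ZKFC state in ZooKeeper also need to be initialized. A fuller sequence is sketched below using the standard Hadoop 3 daemon commands; the host in each comment follows the planning table above:

# On k8s-node3, k8s-node5 and k8s-node8: start the JournalNodes first
hdfs --daemon start journalnode

# On k8s-node3 (nn1): format HDFS and start the first NameNode
hdfs namenode -format
hdfs --daemon start namenode

# On k8s-node8 (nn2): copy the metadata from nn1 instead of formatting again
hdfs namenode -bootstrapStandby

# On k8s-node3: initialize the HA state in ZooKeeper for automatic failover
hdfs zkfc -formatZK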
Start the Hadoop Cluster
start-all.sh
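start-all.sh simply runs start-dfs.sh and start-yarn.sh in turn; HDFS and YARN can also be started separately:

start-dfs.sh    # NameNodes, DataNodes, JournalNodes and ZKFCs
start-yarn.sh   # ResourceManagers and NodeManagers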
After startup completes, run jps on each node to check which daemons are running.
- k8s-node3
[root@k8s-node3 hadoop-3.3.1]# jps
29669 JournalNode
29222 NameNode
13174 ResourceManager
29997 DFSZKFailoverController
1087 QuorumPeerMain
14863 Jps
- k8s-node5
[root@k8s-node5 hadoop]# jps
16343 JournalNode
4187 Jps
16253 DataNode
1359 NodeManager
- k8s-node6
[root@k8s-node6 hadoop]# jps
12852 QuorumPeerMain
26101 Jps
22934 NodeManager
26335 DataNode
- k8s-node8
[root@k8s-node8 ~]# jps
880 DFSZKFailoverController
1699 ResourceManager
724 JournalNode
29016 QuorumPeerMain
21354 Elasticsearch
2026 Jps
575 NameNode
Open the following URLs in a browser:
- Check the ResourceManager status
http://192.168.0.115:8088/cluster/cluster
http://192.168.0.52:8088/cluster/cluster
Or check the ResourceManager status from the command line:
yarn rmadmin -getServiceState rm1
- Check the NameNode status
The NameNode HTTP port is set in hdfs-site.xml:
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>k8s-node3:9870</value>
</property>
http://192.168.0.52:9870/dfshealth.html#tab-overview
http://192.168.0.115:9870/dfshealth.html#tab-overview
One NameNode shows as active and the other as standby; the same state can also be checked from the command line, as sketched below.
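A short sketch, using the nn1/nn2 and rm1/rm2 IDs defined in hdfs-site.xml and yarn-site.xml above:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2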