1. Deployment environment planning
1.1 VM and Hadoop HA role plan

| # | Hostname | OS | IP | CPU | RAM | Disk | namenode | datanode | resourcemanager | nodemanager | zkfc | journalnode | zk |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | master | CentOS 7 x64 | 192.168.141.100 | 1*2 | 4G | 50G | √ | | | | √ | √ | |
| 2 | node-01 | CentOS 7 x64 | 192.168.141.101 | 1*2 | 4G | 50G | √ | √ | | √ | √ | √ | √ |
| 3 | node-02 | CentOS 7 x64 | 192.168.141.102 | 1*2 | 4G | 50G | | √ | √ | √ | | √ | √ |
| 4 | node-03 | CentOS 7 x64 | 192.168.141.103 | 1*2 | 4G | 50G | | √ | √ | √ | | √ | √ |
1.2 Software versions
| Software | Version |
| --- | --- |
| Java | jdk-8u311-linux-x64.tar.gz |
| Hadoop | 3.3.0 |
| ZooKeeper | 3.7.0 |
1.3 Data directory plan
| Purpose | Directory |
| --- | --- |
| DataNode data | /data/hadoop/dfs/data |
| NameNode metadata | /data/hadoop/dfs/name |
| Hadoop temp directory | /data/hadoop/tmp |
| ZooKeeper data | /data/zookeeper/data/ |
| ZooKeeper logs | /data/zookeeper/log/ |
2. Environment preparation and dependency installation
2.1 Base environment and OS tuning
The commands below are wrapped in a shell script so they can be run in one go (each command can also be copied and executed on its own). On master, create init_env.sh:
vi init_env.sh
Paste the following content. If your /etc/hosts already has other entries, delete the rm -f line (it is only there so the script can be re-run safely), and replace "your hadoop user passwd" with the password you want for the hadoop user.
#!/bin/bash
# Reset the hosts file
rm -f /etc/hosts && touch /etc/hosts
cat>>/etc/hosts <<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.141.100 master
192.168.141.101 node-01
192.168.141.102 node-02
192.168.141.103 node-03
EOF
# Create the hadoop user and data directories
groupadd hadoop
useradd -g hadoop hadoop
echo "your hadoop user passwd" | passwd hadoop --stdin > /dev/null 2>&1
mkdir /data/hadoop/dfs/{data,name} -pv
mkdir /data/hadoop/tmp -pv
chown -R hadoop:hadoop /data/hadoop/
# Disable firewalld and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Kernel tuning
cat>>/etc/sysctl.conf <<EOF
# Avoid swapping where possible
vm.swappiness = 1
# Memory overcommit policy
vm.overcommit_memory=2
vm.overcommit_ratio=90
# Raise the listen backlog limit
net.core.somaxconn=32768
EOF
# Raise open file descriptor and process limits for the hadoop user
cat>>/etc/security/limits.conf <<EOF
hadoop soft nofile 32768
hadoop hard nofile 65536
hadoop soft nproc 32768
hadoop hard nproc 65536
EOF
# Disable transparent huge pages (THP) at boot
cat>>/etc/rc.local <<EOF
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
EOF
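The sysctl.conf and rc.local changes above are only read at boot. A hedged follow-up (not part of the original script): apply them to the running system as root, and note that CentOS 7 ships /etc/rc.d/rc.local non-executable, so the THP block will not run at boot until the file is made executable.

```shell
# Assumed follow-up, run as root after init_env.sh:
#   sysctl -p                      # load /etc/sysctl.conf into the running kernel
#   chmod +x /etc/rc.d/rc.local    # CentOS 7 ships rc.local non-executable
# Spot-check the currently active values:
cat /proc/sys/vm/swappiness
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || true
```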
2.2 Passwordless SSH between servers
SSH trust is set up first because it makes the later file distribution steps much easier: everything is installed on master and then pushed to the other servers with scp. If you connect with SecureCRT, the "send to all sessions" option (right-click in the command bar at the bottom) runs a command in every open session at once, which is simpler than typing it on each server.
If you want passwordless SSH for both root and the hadoop user, run the following once as root and once as the hadoop user:
# Run on every node
ssh-keygen -t rsa # accept all the defaults
# Append the public key to the authorized keys file
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Restrict permissions on authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Run the following on one node only (here: master); it collects every node's public key
ssh node-01 cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
ssh node-02 cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
ssh node-03 cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
# Distribute the merged file to the other nodes
scp ~/.ssh/authorized_keys node-01:~/.ssh/
scp ~/.ssh/authorized_keys node-02:~/.ssh/
scp ~/.ssh/authorized_keys node-03:~/.ssh/
# Run on each node to verify (should print the date with no password prompt)
ssh node-01 date
ssh node-02 date
ssh node-03 date
Copy init_env.sh from master to each server with scp and run it there, e.g.:
scp ~/init_env.sh node-01:~/
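The per-node copies can be wrapped in a small loop. A sketch using the hostnames from the plan above; it only prints the scp/ssh commands so they can be reviewed before piping the output to sh:

```shell
# Print the distribution commands for every node (review, then pipe to sh)
for h in node-01 node-02 node-03; do
  printf 'scp ~/init_env.sh %s:~/\n' "$h"
  printf 'ssh %s sh ~/init_env.sh\n' "$h"
done
```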
2.3 JDK installation
These VMs were installed with the minimal CentOS profile. If you installed the desktop (or another) profile, you may need to remove the bundled OpenJDK first; see any guide on uninstalling the default OpenJDK on CentOS.
Upload the downloaded JDK tarball to master with sftp or rz (install the rz command with yum install -y lrzsz).
# Run on all nodes
mkdir -p /usr/java/
cd /usr/java/
# Upload the JDK tarball, e.g. with rz
rz
Extract the JDK and set up the environment:
tar -zxvf jdk-8u311-linux-x64.tar.gz
cat>>/etc/profile <<EOF
export JAVA_HOME=/usr/java/jdk1.8.0_311
export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar:\$JAVA_HOME/jre/lib/rt.jar
export PATH=\$JAVA_HOME/bin:\$PATH
EOF
source /etc/profile
java -version
After the JDK is set up on master, distribute it to the node hosts:
scp -r jdk1.8.0_311 node-01:/usr/java
scp -r jdk1.8.0_311 node-02:/usr/java
scp -r jdk1.8.0_311 node-03:/usr/java
Configure the JDK environment on each node host:
cat>>/etc/profile <<EOF
export JAVA_HOME=/usr/java/jdk1.8.0_311
export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar:\$JAVA_HOME/jre/lib/rt.jar
export PATH=\$JAVA_HOME/bin:\$PATH
EOF
source /etc/profile
java -version
3. ZooKeeper installation
3.1 Download and install
下载地址:https://downloads.apache.org/zookeeper/zookeeper-3.7.0/apache-zookeeper-3.7.0-bin.tar.gz
Upload the tarball to master and distribute it from there (the /usr/local/zookeeper directory must also exist on the node hosts before the scp commands below):
mkdir -p /usr/local/zookeeper
chown -R hadoop:hadoop /usr/local/zookeeper/
cd /usr/local/zookeeper
# Upload the zookeeper tarball
rz
tar -zxvf apache-zookeeper-3.7.0-bin.tar.gz && rm -f apache-zookeeper-3.7.0-bin.tar.gz
scp -r apache-zookeeper-3.7.0-bin node-01:/usr/local/zookeeper/
scp -r apache-zookeeper-3.7.0-bin node-02:/usr/local/zookeeper/
scp -r apache-zookeeper-3.7.0-bin node-03:/usr/local/zookeeper/
3.2 Environment configuration
Configure every zookeeper node (node-01, node-02, node-03):
cat>>/etc/profile <<EOF
export ZOOKEEPER_HOME=/usr/local/zookeeper/apache-zookeeper-3.7.0-bin
export PATH=\$ZOOKEEPER_HOME/bin:\$PATH
EOF
source /etc/profile
# Create the data and log directories
mkdir -pv /data/zookeeper/{data,log}
chown -R hadoop:hadoop /data/zookeeper/
chown -R hadoop:hadoop /usr/local/zookeeper/
Edit the zookeeper configuration on node-01:
su hadoop
cd /usr/local/zookeeper/apache-zookeeper-3.7.0-bin/conf/
cp zoo_sample.cfg zoo.cfg
Edit zoo.cfg and set:
dataDir=/data/zookeeper/data/
dataLogDir=/data/zookeeper/log/
server.1=node-01:2888:3888
server.2=node-02:2888:3888
server.3=node-03:2888:3888
Distribute it to node-02 and node-03:
scp zoo.cfg node-02:/usr/local/zookeeper/apache-zookeeper-3.7.0-bin/conf/
scp zoo.cfg node-03:/usr/local/zookeeper/apache-zookeeper-3.7.0-bin/conf/
3.3 Create myid
Each server's myid must match the number in its server.N line: 1 on node-01, 2 on node-02, 3 on node-03.
# Run on each node with its own number (node-01, i.e. server.1, shown here)
echo 1 > /data/zookeeper/data/myid
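If you would rather not type a different number on each host, the id can be derived from the hostname. A sketch that assumes the node-0N naming used in this plan:

```shell
# Derive the myid value from a node-0N style hostname: node-02 -> 2
host=node-02          # in practice: host=$(hostname -s)
id=${host##*-0}       # strip everything up to and including "-0"
echo "$id"            # this value would go into /data/zookeeper/data/myid:
# echo "$id" > /data/zookeeper/data/myid
```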
3.4 Start zookeeper
Start it on each node and check its status:
zkServer.sh start
zkServer.sh status
4. Hadoop installation
4.1 Download the package
下载地址:https://dlcdn.apache.org/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz
Upload and extract it (as root):
tar -zxvf hadoop-3.3.0.tar.gz -C /usr/local/
Environment configuration (run on all nodes, as root):
chown -R hadoop:hadoop /usr/local/hadoop-3.3.0
cat>>/etc/profile <<EOF
export HADOOP_HOME=/usr/local/hadoop-3.3.0
export PATH=\$HADOOP_HOME/bin:\$HADOOP_HOME/sbin:\$PATH
EOF
source /etc/profile
4.2 Edit the configuration files
hadoop-env.sh
Add export JAVA_HOME=/usr/java/jdk1.8.0_311:
cd $HADOOP_HOME/etc/hadoop
vi hadoop-env.sh
cat hadoop-env.sh | grep -v '^#' | grep -v "^$"
core-site.xml
Replace its contents with:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-ha/</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp</value>
<description>Base directory for Hadoop's local temporary files</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
<description>Size of read/write buffer used in SequenceFiles</description>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>node-01:2181,node-02:2181,node-03:2181</value>
<description>ZooKeeper quorum used for HA coordination</description>
</property>
<property>
<name>ha.zookeeper.session-timeout.ms</name>
<value>1000</value>
<description>Session timeout for Hadoop's ZooKeeper connections, in ms</description>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Number of replicas kept for each block; a higher factor means better redundancy but more storage used</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///data/hadoop/dfs/name</value>
<description>Where the NameNode stores the HDFS namespace metadata</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///data/hadoop/dfs/data</value>
<description>Where the DataNode stores block data on disk</description>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<!-- Logical name (nameservice ID) of this HA HDFS cluster; it must match the
authority used in fs.defaultFS in core-site.xml.
dfs.ha.namenodes.[nameservice ID] assigns a unique identifier to each NameNode
in the nameservice, as a comma-separated list of NameNode IDs that DataNodes
use to recognize all the NameNodes. Here the nameservice ID is "hadoop-ha"
and the NameNode IDs are "nn1" and "nn2".
-->
<property>
<name>dfs.nameservices</name>
<value>hadoop-ha</value>
</property>
<!-- The two NameNodes under hadoop-ha: nn1 and nn2 -->
<property>
<name>dfs.ha.namenodes.hadoop-ha</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.hadoop-ha.nn1</name>
<value>master:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.hadoop-ha.nn1</name>
<value>master:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.hadoop-ha.nn2</name>
<value>node-01:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.hadoop-ha.nn2</name>
<value>node-01:50070</value>
</property>
<!-- Shared storage for the NameNode edit log, i.e. the JournalNode list.
URL format: qjournal://host1:port1;host2:port2;host3:port3/journalId
The journalId is conventionally the nameservice ID; the default port is 8485. -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://master:8485;node-01:8485;node-02:8485;node-03:8485/hadoop-ha</value>
</property>
<!-- Where each JournalNode stores its data on local disk -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/data/hadoop/data/journaldata</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Proxy provider that clients use to find the active NameNode -->
<property>
<name>dfs.client.failover.proxy.provider.hadoop-ha</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; list multiple mechanisms one per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless SSH for the hadoop user -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<!-- Timeout for the sshfence SSH connection -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<property>
<name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
<value>60000</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
<final>true</final>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>master:50030</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
<!-- Legacy MRv1 JobTracker setting; ignored by YARN in Hadoop 3, kept only for reference -->
<property>
<name>mapred.job.tracker</name>
<value>http://master:9001</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Cluster ID for the RM pair -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<!-- Logical IDs of the two ResourceManagers -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Hostname of each RM -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>node-02</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>node-03</value>
</property>
<!-- ZooKeeper quorum used by the RMs -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>node-01:2181,node-02:2181,node-03:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>86400</value>
</property>
<!-- Enable automatic RM state recovery -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<!-- Store ResourceManager state in the ZooKeeper cluster -->
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
<name>yarn.application.classpath</name>
<value>/usr/local/hadoop-3.3.0/etc/hadoop:/usr/local/hadoop-3.3.0/share/hadoop/common/lib/*:/usr/local/hadoop-3.3.0/share/hadoop/common/*:/usr/local/hadoop-3.3.0/share/hadoop/hdfs:/usr/local/hadoop-3.3.0/share/hadoop/hdfs/lib/*:/usr/local/hadoop-3.3.0/share/hadoop/hdfs/*:/usr/local/hadoop-3.3.0/share/hadoop/mapreduce/*:/usr/local/hadoop-3.3.0/share/hadoop/yarn:/usr/local/hadoop-3.3.0/share/hadoop/yarn/lib/*:/usr/local/hadoop-3.3.0/share/hadoop/yarn/*</value>
</property>
</configuration>
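Before distributing the edited files, it can be worth checking that each one is still well-formed XML. A hedged convenience, assuming xmllint from libxml2 is available; HADOOP_CONF_DIR is just the directory edited above:

```shell
# Check each edited config file for XML well-formedness
conf="${HADOOP_CONF_DIR:-/usr/local/hadoop-3.3.0/etc/hadoop}"
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
  if [ -f "$conf/$f" ]; then
    xmllint --noout "$conf/$f" && echo "$f: well-formed"
  fi
done
```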
workers
List the DataNode/NodeManager hosts in the workers file:
vi workers
node-01
node-02
node-03
4.3 Distribute to the other servers
scp -r /usr/local/hadoop-3.3.0/ node-01:/usr/local/
scp -r /usr/local/hadoop-3.3.0/ node-02:/usr/local/
scp -r /usr/local/hadoop-3.3.0/ node-03:/usr/local/
Run on all nodes:
chown -R hadoop:hadoop /usr/local/hadoop-3.3.0
4.4 Start the hadoop cluster
Start the JournalNodes (on every JournalNode host):
su hadoop
hadoop-daemon.sh start journalnode
Format the NameNode (on master):
hdfs namenode -format
Copy the NameNode metadata to the standby NameNode (node-01):
scp -r /data/hadoop/dfs/name/current/ node-01:/data/hadoop/dfs/name/
Format the ZKFC state in ZooKeeper (on master):
hdfs zkfc -formatZK
Start HDFS (on master):
start-dfs.sh
Start YARN:
start-yarn.sh
Start the MapReduce job history server:
mr-jobhistory-daemon.sh start historyserver
Check the active/standby state of the HDFS and YARN masters:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
4.5 Web UIs
hdfs:http://192.168.141.100:50070/dfshealth.html#tab-overview
yarn:http://192.168.141.100:8088/cluster
5. Problems encountered
5.1 ZooKeeper startup error
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/apache-zookeeper-3.7.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Error contacting service. It is probably not running.
Check the zookeeper startup logs. The cause here was that every node had the same myid; setting each node's myid to the number in its server.N entry fixed it.
5.2 Hadoop errors
Formatting the NameNode
One cause: the HA nameservice name in hdfs-site.xml did not match the rest of the configuration.
Another hdfs format error:
2021-10-27 01:05:13,690 WARN namenode.NameNode: Encountered exception during format
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 successful responses:
192.168.141.100:8485: false
3 exceptions thrown:
192.168.141.101:8485: Call From master/192.168.141.100 to node-01:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.141.102:8485: Call From master/192.168.141.100 to node-02:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
192.168.141.103:8485: Call From master/192.168.141.100 to node-03:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:305)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:282)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:1165)
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:211)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1267)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1713)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1821)
2021-10-27 01:05:13,700 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 successful responses:
192.168.141.100:8485: false
3 exceptions thrown:
192.168.141.101:8485: Call From master/192.168.141.100 to node-01:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Fix: not all JournalNodes were running; journalnode must be started on every JournalNode host. Here the error occurred because only the one on master had been started.