Prepare the Environment
| Node | IP | Roles | Software |
|---|---|---|---|
| master | 172.28.95.101 | namenode (maintains the fsimage and disk mappings; disk-I/O heavy), secondnode (allocates compute resources; CPU-heavy), resourcemanager (manages and schedules resources for the whole cluster; CPU- and memory-heavy) | HDFS, YARN |
| node1 | 172.28.95.102 | datanode (disk-I/O heavy), nodemanager (compute node; CPU- and memory-heavy) | HDFS, YARN |
| node2 | 172.28.95.103 | datanode (disk-I/O heavy), nodemanager (compute node; CPU- and memory-heavy) | HDFS, YARN |
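The roles above assume the three hostnames resolve to the IPs in the table. If there is no DNS entry for them, a minimal sketch is to map them in /etc/hosts on every node (hostnames and IPs taken from the table; run as root):

```shell
# Append the cluster hostname/IP mappings; repeat on all three nodes
cat >> /etc/hosts <<'EOF'
172.28.95.101 master
172.28.95.102 node1
172.28.95.103 node2
EOF
```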
You need to configure passwordless SSH login from the master node to each node machine and to master itself.
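A sketch of that passwordless-login setup, run as root on master (it assumes the hostnames master/node1/node2 resolve; adjust the user if you do not run Hadoop as root):

```shell
# Generate a key pair on master if one does not already exist
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Push the public key to every node, master included, so the
# start/stop scripts can SSH in without a password prompt
for host in master node1 node2; do
    ssh-copy-id "root@${host}"
done
# Verify: each of these should print a hostname without asking for a password
for host in master node1 node2; do
    ssh "root@${host}" hostname
done
```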
Install Hadoop
# Download the hadoop-3.2.1 tarball from the Apache archive
wget https://archive.apache.org/dist/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz
tar -xzvf hadoop-3.2.1.tar.gz -C /usr/local/
# Change ownership of the Hadoop directory to root (uid 0, gid 0)
cd /usr/local && chown -R 0.0 hadoop-3.2.1
Install the JDK
# Install on all three nodes
yum install -y java-1.8.0-openjdk-headless-1.8.0.345.b01-1.el7_9.x86_64
yum install -y java-1.8.0-openjdk-devel-1.8.0.345.b01-1.el7_9.x86_64
# Verify the JDK installed successfully (jps ships with the -devel package)
jps
Set Hadoop environment variables
cd hadoop-3.2.1
vim etc/hadoop/hadoop-env.sh
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.345.b01-1.el7_9.x86_64/jre/"
export HADOOP_HOME="/usr/local/hadoop-3.2.1"
export HADOOP_CONF_DIR="${HADOOP_HOME}/etc/hadoop"
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
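To confirm the values just written to hadoop-env.sh are usable, you can source the file and check that JAVA_HOME points at a working java binary (a quick sanity-check sketch; the path assumes the OpenJDK RPM installed above):

```shell
# Load the variables from hadoop-env.sh into the current shell
source /usr/local/hadoop-3.2.1/etc/hadoop/hadoop-env.sh
# JAVA_HOME must contain bin/java, otherwise the daemons will not start
test -x "${JAVA_HOME}/bin/java" && echo "JAVA_HOME looks good"
"${JAVA_HOME}/bin/java" -version
```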
Add environment variables
# Add on all three nodes (single quotes keep $HADOOP_HOME literal so it
# expands at login time; export it so child processes can see it)
echo 'export HADOOP_COMMON_HOME=$HADOOP_HOME' >> ~/.bash_profile
source ~/.bash_profile
# Check that hadoop is usable
./bin/hadoop version
# Promote JAVA_HOME and the Hadoop variables to system-wide environment
# variables; quote the EOF delimiter so the lines are written literally
# instead of being expanded by the current shell
cat >> /etc/profile <<'EOF'
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.345.b01-1.el7_9.x86_64/jre/"
export HADOOP_HOME="/usr/local/hadoop-3.2.1"
export HADOOP_CONF_DIR="${HADOOP_HOME}/etc/hadoop"
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CLASSPATH="$(