1. Environment Planning
Node IP | Hostname | Services |
---|---|---|
..228.11 | hadoop1 | DataNode, ResourceManager, NodeManager |
..228.12 | hadoop2 | JobHistoryServer, DataNode, NodeManager, NameNode |
..228.13 | hadoop3 | DataNode, NodeManager, SecondaryNameNode |
..228.14 | hadoop4 | NodeManager, DataNode |
..228.15 | hadoop5 | NodeManager, DataNode |
2. Installation Preparation
2.1. Software
hadoop-3.3.4.tar.gz, jdk-8u351-linux-x64.tar.gz
2.2. Configure Hostnames
Add the following entries to /etc/hosts on every node:
**.**.228.11 hadoop1
**.**.228.12 hadoop2
**.**.228.13 hadoop3
**.**.228.14 hadoop4
**.**.228.15 hadoop5
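Both steps can be applied quickly as shown below; this is a minimal sketch assuming root access and a systemd-based distribution (for hostnamectl), with the masked IP prefix filled in by the reader:
# On each node, set that node's own hostname (hadoop1 shown; use hadoop2..hadoop5 on the others)
hostnamectl set-hostname hadoop1
# On every node, append the five mappings above to /etc/hosts
cat >> /etc/hosts <<'EOF'
**.**.228.11 hadoop1
**.**.228.12 hadoop2
**.**.228.13 hadoop3
**.**.228.14 hadoop4
**.**.228.15 hadoop5
EOF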
2.3. Configure Passwordless SSH
Generate a public/private key pair on every host:
ssh-keygen
Append hadoop1's public key to the ~/.ssh/authorized_keys file on the local machine and on each remote machine:
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop2
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop3
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop4
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop5
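To confirm that passwordless login works from hadoop1 to every node, a loop such as the following can be used (a minimal sketch; BatchMode makes ssh fail instead of prompting for a password):
for h in hadoop1 hadoop2 hadoop3 hadoop4 hadoop5; do
  ssh -o BatchMode=yes "$h" hostname
done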
2.4. Firewall
Disable the firewall, or open the ports this deployment uses: 8020 (NameNode RPC), 9870 (NameNode web UI), 9868 (SecondaryNameNode web UI), 8030-8033 and 8088 (ResourceManager), 10020 and 19888 (JobHistoryServer), plus the DataNode ports (9864, 9866, 9867) and NodeManager ports (8040, 8042).
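On a CentOS/RHEL-style system with firewalld (an assumption; adjust for your distribution), the two options look like this:
# Option 1: disable the firewall entirely
systemctl stop firewalld
systemctl disable firewalld
# Option 2: open individual ports, e.g. the NameNode web UI
firewall-cmd --permanent --add-port=9870/tcp
firewall-cmd --reload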
3. Installation and Deployment
3.1. Install the JDK
3.1.1. Download and Extract the JDK
# Install the JDK on every node (hadoop1 through hadoop5), since all of them run Hadoop daemons
mkdir /usr/app
cp jdk-8u351-linux-x64.tar.gz /usr/app
cd /usr/app
tar -zxvf jdk-8u351-linux-x64.tar.gz
3.1.2. Set Environment Variables
Edit the profile file:
vi /etc/profile
export JAVA_HOME=/usr/app/jdk1.8.0_351
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
Run source to make the configuration take effect:
source /etc/profile
3.1.3. Verify the Configuration
java -version
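Since the JDK must be present on every node, the check can be run across the whole cluster from hadoop1 (a sketch that relies on the passwordless SSH from section 2.3):
for h in hadoop1 hadoop2 hadoop3 hadoop4 hadoop5; do
  echo "== $h =="
  ssh "$h" 'source /etc/profile && java -version'
done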
3.2. Deploy the Hadoop Cluster
3.2.1. Download and Extract Hadoop
cd /usr/app
tar -zxvf hadoop-3.3.4.tar.gz
3.2.2. Configure Environment Variables
Edit the profile file:
vim /etc/profile
export HADOOP_HOME=/usr/app/hadoop-3.3.4
export PATH=${HADOOP_HOME}/bin:$PATH
Run source to make the configuration take effect immediately:
source /etc/profile
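A quick sanity check that the variable is in effect (prints the build info if ${HADOOP_HOME}/bin is on the PATH):
hadoop version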
3.2.3. Modify the Hadoop Configuration
cd ${HADOOP_HOME}/etc/hadoop
- hadoop-env.sh
# Point to the JDK installation
export JAVA_HOME=/usr/app/jdk1.8.0_351/
- core-site.xml
<configuration>
    <property>
        <!-- RPC address of the NameNode's HDFS file system -->
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop2:8020</value>
    </property>
    <property>
        <!-- Directory where the Hadoop cluster stores temporary files -->
        <name>hadoop.tmp.dir</name>
        <value>/dev/shm/hadoop/tmp</value>
    </property>
</configuration>
- hdfs-site.xml
<configuration>
    <property>
        <!-- Include file listing the hosts allowed to join as DataNodes -->
        <name>dfs.hosts</name>
        <value>/usr/app/hadoop-3.3.4/etc/hadoop/dfs.hosts</value>
    </property>
    <property>
        <!-- Where the NameNode stores its metadata; multiple comma-separated directories can be given for redundancy -->
        <name>dfs.namenode.name.dir</name>
        <value>/dev/shm/hadoop/namenode/data</value>
    </property>
    <property>
        <!-- Where DataNodes store their data blocks -->
        <name>dfs.datanode.data.dir</name>
        <value>/dev/shm/hadoop/datanode/data</value>
    </property>
    <!-- HDFS web UI addresses -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop2:9870</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop3:9868</value>
    </property>
</configuration>
- yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop1</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value>/usr/app/hadoop-3.3.4/etc/hadoop:/usr/app/hadoop-3.3.4/share/hadoop/common/lib/*:/usr/app/hadoop-3.3.4/share/hadoop/common/*:/usr/app/hadoop-3.3.4/share/hadoop/hdfs:/usr/app/hadoop-3.3.4/share/hadoop/hdfs/lib/*:/usr/app/hadoop-3.3.4/share/hadoop/hdfs/*:/usr/app/hadoop-3.3.4/share/hadoop/mapreduce/*:/usr/app/hadoop-3.3.4/share/hadoop/yarn:/usr/app/hadoop-3.3.4/share/hadoop/yarn/lib/*:/usr/app/hadoop-3.3.4/share/hadoop/yarn/*</value>
    </property>
    <!-- Total memory (MB) available to containers on each NodeManager -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>307200</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>102400</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>1024</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Log server address (the JobHistoryServer runs on hadoop2, per the plan and mapred-site.xml) -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://hadoop2:19888/jobhistory/logs</value>
    </property>
    <!-- Keep aggregated logs for 7 days -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
    <!-- Disable virtual-memory checking so containers are not killed spuriously -->
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
- mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>Execution framework set to Hadoop YARN.</description>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop2:10020</value>
        <description>MapReduce JobHistory Server host:port, default port is 10020.</description>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop2:19888</value>
        <description>MapReduce JobHistory Server Web UI host:port, default port is 19888.</description>
    </property>
    <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx2048m</value>
    </property>
    <property>
        <name>mapred.compress.map.output</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
    </property>
</configuration>
- workers
hadoop1
hadoop2
hadoop3
hadoop4
hadoop5
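Because hdfs-site.xml above points dfs.hosts at an include file, that file must also exist before HDFS starts; a minimal version listing the five current DataNodes:
cat > /usr/app/hadoop-3.3.4/etc/hadoop/dfs.hosts <<'EOF'
hadoop1
hadoop2
hadoop3
hadoop4
hadoop5
EOF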
3.2.4. Distribute the Installation
Distribute the Hadoop installation to the other servers; after distributing, it is recommended to configure the Hadoop environment variables on those servers as well.
# Distribute the installation
scp -r /usr/app/hadoop-3.3.4/ hadoop2:/usr/app/
scp -r /usr/app/hadoop-3.3.4/ hadoop3:/usr/app/
scp -r /usr/app/hadoop-3.3.4/ hadoop4:/usr/app/
scp -r /usr/app/hadoop-3.3.4/ hadoop5:/usr/app/
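The environment-variable step can be scripted as well; for example, appending the same two exports on each remote node (a sketch assuming root SSH access and the paths used above):
for h in hadoop2 hadoop3 hadoop4 hadoop5; do
  ssh "$h" 'cat >> /etc/profile' <<'EOF'
export HADOOP_HOME=/usr/app/hadoop-3.3.4
export PATH=${HADOOP_HOME}/bin:$PATH
EOF
done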
3.2.5. Initialization
Run the NameNode format command on hadoop2:
hdfs namenode -format
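If formatting succeeded, the metadata directory configured in hdfs-site.xml should now contain a freshly created VERSION file:
ls /dev/shm/hadoop/namenode/data/current/VERSION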
3.2.6. Start the Cluster
Go into the ${HADOOP_HOME}/sbin directory on hadoop2 and start Hadoop; the corresponding services on the other nodes will be started as well:
# Start command
./start-all.sh
# Stop command
./stop-all.sh
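Note that start-all.sh does not start the JobHistoryServer listed in the plan; start it separately on hadoop2:
mapred --daemon start historyserver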
3.3. Check Cluster Status
- Via the jps command
# Run jps on each server; the output should look similar to the following
28625 DataNode
32262 Jps
20316 ResourceManager
28846 NodeManager
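To check all five nodes in one pass from hadoop1:
for h in hadoop1 hadoop2 hadoop3 hadoop4 hadoop5; do
  echo "== $h =="
  ssh "$h" 'source /etc/profile && jps'
done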
- HDFS Web UI (served by the NameNode on hadoop2)
http://**.**.228.12:9870
- YARN Web UI (served by the ResourceManager on hadoop1)
http://**.**.228.11:8088
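The same status information is also available from the command line:
# HDFS: capacity and live DataNodes
hdfs dfsadmin -report
# YARN: registered NodeManagers
yarn node -list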
4. Adding a Node
- Configure hosts
# On hadoop2 and the other existing nodes, add the new node's /etc/hosts entry
vim /etc/hosts
- Configure passwordless SSH (hadoop-n below stands for the new node's hostname)
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop-n
- Configure workers
# On hadoop2, add the new node to the workers file and to the dfs.hosts include file referenced in hdfs-site.xml
vim /usr/app/hadoop-3.3.4/etc/hadoop/workers
vim /usr/app/hadoop-3.3.4/etc/hadoop/dfs.hosts
- Distribute the installation
scp -r /usr/app/hadoop-3.3.4/ hadoop-n:/usr/app/
- Dynamically refresh
# Run the following on hadoop2 to refresh the DFS and YARN node lists dynamically
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes
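Finally, bring up the daemons on the new node itself; on hadoop-n:
hdfs --daemon start datanode
yarn --daemon start nodemanager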