Part 1: Configuring the Template Machine
1. Required hardware and software
4 GB of memory, a 50 GB disk, and all machines on the 192.168.1.X subnet
Make sure the Linux virtual machine can reach the network:
1. In the VMware Virtual Network Editor, set the VMnet8 subnet IP to 192.168.1.X
2. In the Windows Network Connections settings, set the IPv4 address of the VMnet8 adapter to 192.168.1.X
3. vim /etc/sysconfig/network-scripts/ifcfg-ens33
and set a static IP of 192.168.1.X
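A quick way to confirm the VM is actually online (a minimal check; the ping target is just an example):
=========================
[root@hadoop100 ~]# ip addr show ens33       # the static IP should appear here
[root@hadoop100 ~]# ping -c 3 www.baidu.com  # confirms DNS resolution and outbound connectivity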
Install the required packages
[root@hadoop100 ~]# yum install -y epel-release
[root@hadoop100 ~]# yum install -y psmisc nc net-tools rsync vim lrzsz ntp libzstd openssl-static tree iotop git
Remove the JDK that ships with the VM
[root@hadoop100 ~]# rpm -qa | grep -i java | xargs -n1 rpm -e --nodeps
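To confirm the removal, the same query should now print nothing:
=========================
[root@hadoop100 ~]# rpm -qa | grep -i java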
Part 2: Cloning the Virtual Machines
1. Clone the VMs
Clone three virtual machines: hadoop102, hadoop103, and hadoop104
2. Change each clone's IP address
Using hadoop102 as an example
[root@hadoop100 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
Change it to:
==========================
DEVICE=ens33
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
NAME="ens33"
IPADDR=192.168.1.102
PREFIX=24
GATEWAY=192.168.1.2
DNS1=192.168.1.2
3. Change each clone's hostname
Using hadoop102 as an example
[root@hadoop100 ~]# vim /etc/hostname
=======================
hadoop102
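On CentOS 7, hostnamectl can do the same in one step (it rewrites /etc/hostname and updates the running hostname):
=========================
[root@hadoop100 ~]# hostnamectl set-hostname hadoop102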
4. Update each clone's host mappings
Using hadoop102 as an example
[root@hadoop100 ~]# vim /etc/hosts
=======================
192.168.1.100 hadoop100
192.168.1.101 hadoop101
192.168.1.102 hadoop102
192.168.1.103 hadoop103
192.168.1.104 hadoop104
192.168.1.105 hadoop105
192.168.1.106 hadoop106
192.168.1.107 hadoop107
192.168.1.108 hadoop108
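Once the other clones are configured and rebooted, hostnames should resolve between nodes; a quick sanity check:
=========================
[root@hadoop102 ~]# ping -c 1 hadoop103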
5. Update the Windows hosts mappings
(a) Go to C:\Windows\System32\drivers\etc
(b) Copy the hosts file to the desktop
(c) Open the desktop copy, add the lines below, then copy it back over the original
========================
192.168.1.100 hadoop100
192.168.1.101 hadoop101
192.168.1.102 hadoop102
192.168.1.103 hadoop103
192.168.1.104 hadoop104
192.168.1.105 hadoop105
192.168.1.106 hadoop106
192.168.1.107 hadoop107
192.168.1.108 hadoop108
6. Install the JDK
Using hadoop102 as an example
Use a file-transfer tool to upload the JDK tarball to /opt/software
tar <options> <archive> -C <directory>
-C <directory>: change to the given directory first, then perform the archive/extract operation; this lets you archive only the contents of a specific directory, or extract into a specific directory
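For example (the paths here are only illustrative), the same flag works in both directions:
=========================
tar -zcvf /tmp/jdk.tar.gz -C /opt/module jdk1.8.0_212   # archive with paths stored relative to /opt/module
tar -zxvf /tmp/jdk.tar.gz -C /tmp                       # extract into /tmp instead of the current directory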
[lu@hadoop102 software]$ tar -zxvf jdk-8u212-linux-x64.tar.gz -C /opt/module/
Configure the JDK environment variables:
Create a new file /etc/profile.d/my_env.sh
[lu@hadoop102 ~]$ sudo vim /etc/profile.d/my_env.sh
======================
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_212
export PATH=$PATH:$JAVA_HOME/bin
source the /etc/profile file so the new PATH takes effect
----> source executes a file and loads its variables and functions into the current shell environment
[lu@hadoop102 ~]$ source /etc/profile
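Putting the file under /etc/profile.d works because on CentOS /etc/profile ends with a loop that sources every *.sh file in that directory, roughly like this (the exact contents vary by distribution):
=========================
for i in /etc/profile.d/*.sh ; do
    if [ -r "$i" ]; then
        . "$i"
    fi
done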
Verify that the JDK installed successfully
[lu@hadoop102 ~]$ java -version
=====================
java version "1.8.0_212"
7. Install Hadoop
Using hadoop102 as an example
Use a file-transfer tool to upload the Hadoop tarball to /opt/software
[lu@hadoop102 software]$ tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module/
Configure the environment variables
sudo vim /etc/profile.d/my_env.sh
==========================
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
Make the file take effect
[lu@hadoop102 hadoop-3.1.3]$ source /etc/profile
Verify the installation
[lu@hadoop102 hadoop-3.1.3]$ hadoop version
=======================
Hadoop 3.1.3
Part 3: The xsync Distribution Script
rsync, a remote synchronization tool
rsync is used mainly for backups and mirroring. It is fast, avoids copying identical content, and supports symbolic links.
Difference between rsync and scp: copying files with rsync is faster than with scp, because rsync only updates the files that differ, while scp copies everything.
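Side by side (the JDK directory is just an example payload):
=========================
scp   -r  /opt/module/jdk1.8.0_212  lu@hadoop103:/opt/module   # copies every file, every time
rsync -av /opt/module/jdk1.8.0_212  lu@hadoop103:/opt/module   # -a archive mode, -v verbose; skips files that have not changed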
[lu@hadoop102 opt]$ rsync -av /opt/software/* lu@hadoop103:/opt/software
1. Implementing the xsync script
[lu@hadoop102 opt]$ cd /home/lu
[lu@hadoop102 ~]$ mkdir bin
[lu@hadoop102 ~]$ cd bin
[lu@hadoop102 bin]$ vim xsync
==================================
#!/bin/bash
# 1. Check the argument count
if [ $# -lt 1 ]
then
    echo Not Enough Arguments!
    exit
fi
# 2. Loop over every machine in the cluster
for host in hadoop102 hadoop103 hadoop104
do
    echo ==================== $host ====================
    # 3. Loop over every file/directory argument and send each one
    for file in $@
    do
        # 4. Only send things that actually exist
        if [ -e $file ]
        then
            # 5. Resolve the parent directory (-P follows symlinks to the physical path)
            pdir=$(cd -P $(dirname $file); pwd)
            # 6. Get the bare file name
            fname=$(basename $file)
            ssh $host "mkdir -p $pdir"
            rsync -av $pdir/$fname $host:$pdir
        else
            echo $file does not exist!
        fi
    done
done
Make the script executable and copy it to /bin so it can be used from anywhere
[lu@hadoop102 bin]$ chmod +x xsync
[lu@hadoop102 bin]$ sudo cp xsync /bin/
Test it
[lu@hadoop102 ~]$ xsync /home/lu/bin
[lu@hadoop102 bin]$ sudo xsync /bin/xsync
2. Passwordless SSH login
First, connect to each host once so they get to know each other (this creates the .ssh directory):
[lu@hadoop102 ~]$ ssh hadoop102 ls
[lu@hadoop102 ~]$ ssh hadoop103 ls
[lu@hadoop102 ~]$ ssh hadoop104 ls
/home/lu/.ssh will then exist
Then generate a public/private key pair
[lu@hadoop102 .ssh]$ ssh-keygen -t rsa
Copy the public key to every machine you want to log in to without a password
[lu@hadoop102 .ssh]$ ssh-copy-id hadoop102
[lu@hadoop102 .ssh]$ ssh-copy-id hadoop103
[lu@hadoop102 .ssh]$ ssh-copy-id hadoop104
Repeat the steps above as root
so that root can also log in without a password
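To confirm it worked, an ssh to any of the three hosts should no longer prompt for a password; the copied keys land in authorized_keys:
=========================
[lu@hadoop102 .ssh]$ ssh hadoop103 hostname             # should print hadoop103 without asking for a password
[lu@hadoop102 .ssh]$ cat /home/lu/.ssh/authorized_keys  # one line per key copied by ssh-copy-id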
Part 4: Cluster Configuration
1. Deployment plan

|      | hadoop102          | hadoop103                    | hadoop104                   |
|------|--------------------|------------------------------|-----------------------------|
| HDFS | NameNode, DataNode | DataNode                     | SecondaryNameNode, DataNode |
| YARN | NodeManager        | ResourceManager, NodeManager | NodeManager                 |
2. Modifying the custom configuration files
Hadoop configuration files come in two kinds: default configuration files and custom (site-specific) configuration files. You only need to edit a custom configuration file when you want to override a default value.

| Daemon                  | App                          | Hadoop 2    | Hadoop 3 |
|-------------------------|------------------------------|-------------|----------|
| NameNode port           | Hadoop HDFS NameNode         | 8020 / 9000 | 9820     |
|                         | Hadoop HDFS NameNode HTTP UI | 50070       | 9870     |
| Secondary NameNode port | Secondary NameNode           | 50091       | 9869     |
|                         | Secondary NameNode HTTP UI   | 50090       | 9868     |
| DataNode port           | Hadoop HDFS DataNode IPC     | 50020       | 9867     |
|                         | Hadoop HDFS DataNode         | 50010       | 9866     |
|                         | Hadoop HDFS DataNode HTTP UI | 50075       | 9864     |
3. Configuring the cluster
(1) Configure core-site.xml
[lu@hadoop102 ~]$ cd $HADOOP_HOME/etc/hadoop
[lu@hadoop102 hadoop]$ vim core-site.xml
========================================
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- NameNode address -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop102:9820</value>
    </property>
    <!-- Hadoop data storage directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>
    <!-- Static user for HDFS web UI logins -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>lu</value>
    </property>
    <!-- Hosts from which lu (the superUser) may proxy -->
    <property>
        <name>hadoop.proxyuser.lu.hosts</name>
        <value>*</value>
    </property>
    <!-- Groups whose members lu (the superUser) may proxy -->
    <property>
        <name>hadoop.proxyuser.lu.groups</name>
        <value>*</value>
    </property>
    <!-- Users that lu (the superUser) may proxy -->
    <property>
        <name>hadoop.proxyuser.lu.users</name>
        <value>*</value>
    </property>
</configuration>
(2) HDFS configuration file
Configure hdfs-site.xml
[lu@hadoop102 hadoop]$ vim hdfs-site.xml
================================================
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop102:9870</value>
    </property>
    <!-- SecondaryNameNode web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop104:9868</value>
    </property>
</configuration>
(3) YARN configuration file
Configure yarn-site.xml
[lu@hadoop102 hadoop]$ vim yarn-site.xml
===========================================
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Use mapreduce_shuffle as the MR auxiliary service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- ResourceManager address -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop103</value>
    </property>
    <!-- Environment variables inherited by containers -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <!-- Minimum and maximum memory a YARN container may be allocated -->
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
    <!-- Total physical memory the NodeManager may manage -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <!-- Disable YARN's physical- and virtual-memory limit checks -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
Configure log aggregation
Add to yarn-site.xml
[lu@hadoop102 hadoop]$ vim yarn-site.xml
===================================
<!-- Enable log aggregation -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<!-- Log aggregation server URL -->
<property>
    <name>yarn.log.server.url</name>
    <value>http://hadoop102:19888/jobhistory/logs</value>
</property>
<!-- Keep aggregated logs for 7 days -->
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>
(4) MapReduce configuration file
Configure mapred-site.xml
[lu@hadoop102 hadoop]$ vim mapred-site.xml
======================================================
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Configure the history server
[lu@hadoop102 hadoop]$ vim mapred-site.xml
==============================================
<!-- History server address -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop102:10020</value>
</property>
<!-- History server web UI address -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop102:19888</value>
</property>
Configure workers (no trailing spaces at line ends and no blank lines are allowed in this file)
[lu@hadoop102 hadoop]$ vim /opt/module/hadoop-3.1.3/etc/hadoop/workers
===========================================
hadoop102
hadoop103
hadoop104
4. Distribute the configured Hadoop configuration files across the cluster
[lu@hadoop102 hadoop]$ xsync /opt/module/hadoop-3.1.3/etc/hadoop/
Check the result on hadoop103 and hadoop104:
[lu@hadoop103 ~]$ cat /opt/module/hadoop-3.1.3/etc/hadoop/core-site.xml
[lu@hadoop104 ~]$ cat /opt/module/hadoop-3.1.3/etc/hadoop/core-site.xml
Sync the whole configuration directory to all nodes
[lu@hadoop102 hadoop]$ xsync /opt/module/hadoop-3.1.3/etc
Stop the NodeManager, ResourceManager, and HistoryServer (restarting them is needed for the log-aggregation and history-server changes to take effect)
[lu@hadoop103 ~]$ stop-yarn.sh
[lu@hadoop102 ~]$ mapred --daemon stop historyserver
Start the NodeManager, ResourceManager, and HistoryServer
[lu@hadoop103 ~]$ start-yarn.sh
[lu@hadoop102 ~]$ mapred --daemon start historyserver
! If this is the first time the cluster is started, format the NameNode first (formatting creates a new cluster ID; if you ever need to reformat, stop every daemon and delete the data and logs directories on all nodes first):
[lu@hadoop102 ~]$ hdfs namenode -format
Start HDFS
[lu@hadoop102 hadoop-3.1.3]$ sbin/start-dfs.sh
**On the node where the ResourceManager is configured (hadoop103)**, start YARN
[lu@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh
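If everything came up, jps on each node should match the deployment plan above (the JobHistoryServer appears only if you started it):
=========================
[lu@hadoop102 ~]$ jps   # NameNode, DataNode, NodeManager, JobHistoryServer
[lu@hadoop103 ~]$ jps   # ResourceManager, NodeManager, DataNode
[lu@hadoop104 ~]$ jps   # SecondaryNameNode, DataNode, NodeManager
You can also open the web UIs: http://hadoop102:9870 for HDFS (configured above) and http://hadoop103:8088 for YARN (the default ResourceManager web port).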
Part 5: Common Hadoop Cluster Scripts
1. jpsall: a script that shows the Java processes on all three servers
[lu@hadoop102 ~]$ cd /home/lu/bin
[lu@hadoop102 bin]$ vim jpsall
===================================
#!/bin/bash
# Run jps on every node, filtering out the Jps process itself
for host in hadoop102 hadoop103 hadoop104
do
    echo =============== $host ===============
    ssh $host jps $@ | grep -v Jps
done
Save and exit, then make the script executable
[lu@hadoop102 bin]$ chmod +x jpsall
2. Hadoop cluster start/stop script
(covers HDFS, YARN, and the history server): myhadoop.sh
[lu@hadoop102 ~]$ cd /home/lu/bin
[lu@hadoop102 bin]$ vim myhadoop.sh
========================================
#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit
fi
case $1 in
"start")
    echo " =================== Starting the Hadoop cluster ==================="
    echo " --------------- starting hdfs ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh hadoop103 "/opt/module/hadoop-3.1.3/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/bin/mapred --daemon start historyserver"
;;
"stop")
    echo " =================== Stopping the Hadoop cluster ==================="
    echo " --------------- stopping historyserver ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh hadoop103 "/opt/module/hadoop-3.1.3/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh hadoop102 "/opt/module/hadoop-3.1.3/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac
Save and exit, then make the script executable
[lu@hadoop102 bin]$ chmod +x myhadoop.sh
[lu@hadoop102 ~]$ xsync /home/lu/bin/
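Typical usage once the scripts have been distributed (a sketch; the output depends on what is running):
=========================
[lu@hadoop102 ~]$ myhadoop.sh start   # brings up HDFS, YARN, and the history server
[lu@hadoop102 ~]$ jpsall              # lists the Java processes on all three nodes
[lu@hadoop102 ~]$ myhadoop.sh stop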