1. Template Virtual Machine Environment Preparation
Set the virtual machine's IP address to static and configure it:
[root@ljc100 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
Edit the file so it reads as follows:
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="e59bb0ad-2918-4de6-93fa-46bd94c0a474"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.10.100
GATEWAY=192.168.10.2
DNS1=192.168.10.2
Configure the hostname mappings in the template VM's hosts file:
[root@ljc100 ~]# vim /etc/hosts
Append the following at the end:
192.168.10.100 ljc100
192.168.10.101 ljc101
192.168.10.102 ljc102
192.168.10.103 ljc103
192.168.10.104 ljc104
Reboot the virtual machine:
[root@ljc100 ~]# reboot
On Windows, open C:\Windows\System32\drivers\etc and add the following to the hosts file there as well:
192.168.10.100 ljc100
192.168.10.101 ljc101
192.168.10.102 ljc102
192.168.10.103 ljc103
192.168.10.104 ljc104
Install epel-release:
[root@ljc100 ~]# yum install -y epel-release
Stop the firewall and disable it from starting at boot:
[root@ljc100 ~]# systemctl stop firewalld
[root@ljc100 ~]# systemctl disable firewalld.service
Grant the ljc1 user root privileges:
[root@ljc100 ~]# vim /etc/sudoers
Add the ljc1 line directly below the %wheel entry:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
ljc1 ALL=(ALL) NOPASSWD:ALL
Note: the ljc1 line must come after the %wheel line; sudoers applies the last matching rule, so placing it earlier would let the %wheel rule override it. You can check the file's syntax afterwards with visudo -c.
Create two directories under /opt/ and change their owner and group:
[root@ljc100 opt]# mkdir module/ software/
[root@ljc100 opt]# chown ljc1:ljc1 module/ software/
Uninstall the JDK that ships with the virtual machine:
[root@ljc100 ~]# rpm -qa | grep -i java | xargs -n1 rpm -e --nodeps
Reboot the virtual machine:
[root@ljc100 ~]# reboot
The template virtual machine CentOS7.6_LJC_100 is now fully configured.
2. Cloning Virtual Machines and Configuring the Network
Using CentOS7.6_LJC_100 as the template, clone three virtual machines:
CentOS7.6_LJC_102
CentOS7.6_LJC_103
CentOS7.6_LJC_104
Give each virtual machine 4 GB of memory, 4 processor cores, and a 20 GB disk.
Configure the IP address of CentOS7.6_LJC_102:
[root@ljc102 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
Change the IP address line to:
IPADDR=192.168.10.102
Change the hostname of CentOS7.6_LJC_102:
[root@ljc102 ~]# vim /etc/hostname
Replace ljc100 with:
ljc102
Reboot the virtual machine:
[root@ljc102 ~]# reboot
Repeat the same steps for CentOS7.6_LJC_103 and CentOS7.6_LJC_104.
Once done, connect to ljc102, ljc103, and ljc104 remotely with Xshell.
3. Installing the JDK and Hadoop
Upload the Hadoop and JDK archives to ljc102 with Xftp.
Extract the JDK into the target directory:
[ljc1@ljc102 ~]$ cd /opt/software/
[ljc1@ljc102 software]$ tar -zxvf jdk-8u321-linux-x64.tar.gz -C /opt/module/
Configure the JDK environment variables:
[ljc1@ljc102 ~]$ sudo vim /etc/profile.d/my_env.sh
Add:
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_321
export PATH=$PATH:$JAVA_HOME/bin
Reload the profile so the change takes effect:
[ljc1@ljc102 ~]$ source /etc/profile
Verify that the JDK was installed successfully:
[ljc1@ljc102 ~]$ java -version
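The export lines written to my_env.sh above do nothing more than append the JDK's bin directory to the existing PATH. A minimal local sketch of that mechanism, using the JAVA_HOME value from this section:

```shell
# Append the JDK bin directory to PATH, as my_env.sh does.
export JAVA_HOME=/opt/module/jdk1.8.0_321
export PATH=$PATH:$JAVA_HOME/bin
# The appended directory is now the last PATH entry.
echo "${PATH##*:}"
```

Because /etc/profile sources every /etc/profile.d/*.sh for login shells, putting the exports in my_env.sh makes them available to new sessions without editing /etc/profile itself.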
Extract Hadoop into the target directory:
[ljc1@ljc102 ~]$ cd /opt/software/
[ljc1@ljc102 software]$ tar -zxvf hadoop-3.2.2.tar.gz -C /opt/module/
Configure the Hadoop environment variables:
[ljc1@ljc102 ~]$ sudo vim /etc/profile.d/my_env.sh
Append at the end:
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.2.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
Reload the profile so the change takes effect:
[ljc1@ljc102 ~]$ source /etc/profile
Verify that Hadoop was installed successfully:
[ljc1@ljc102 ~]$ hadoop version
Output like the following indicates a successful installation:
Hadoop 3.2.2
Source code repository
Compiled by ljc1
Compiled with protoc 2.5.0
From source with checksum ec785077c385118ac91aadde5ec9799
This command was run using /opt/module/hadoop-3.2.2/share/hadoop/common/hadoop-common-3.2.2.jar
4. Fully Distributed Hadoop Configuration
On ljc102, push ljc102's /opt/module/jdk1.8.0_321 directory to ljc103:
[ljc1@ljc102 opt]$ scp -r /opt/module/jdk1.8.0_321 ljc1@ljc103:/opt/module
On ljc103, pull ljc102's /opt/module/hadoop-3.2.2 directory to ljc103:
[ljc1@ljc103 ~]$ scp -r ljc1@ljc102:/opt/module/hadoop-3.2.2 /opt/module/
On ljc103, copy everything under ljc102's /opt/module directory to ljc104 (scp can copy between two remote hosts):
[ljc1@ljc103 ~]$ scp -r ljc1@ljc102:/opt/module/* ljc1@ljc104:/opt/module
Write the cluster distribution script xsync.
Create a bin directory under /home/ljc1 on ljc102:
[ljc1@ljc102 ~]$ mkdir /home/ljc1/bin
Create the xsync script:
[ljc1@ljc102 ~]$ vim /home/ljc1/bin/xsync
with the following content:
#!/bin/bash
# 1. Check the argument count
if [ $# -lt 1 ]
then
    echo Not Enough Arguments!
    exit
fi
# 2. Loop over every machine in the cluster
for host in ljc102 ljc103 ljc104
do
    echo -e "\n==================== $host ===================="
    # 3. Loop over all given files and directories, sending each in turn
    for file in $@
    do
        # 4. Check that the file exists
        if [ -e $file ]
        then
            # 5. Get the absolute parent directory (resolving symlinks)
            pdir=$(cd -P $(dirname $file); pwd)
            # 6. Get the file's base name
            fname=$(basename $file)
            ssh $host "mkdir -p $pdir"
            rsync -av $pdir/$fname $host:$pdir
        else
            echo $file does not exist!
        fi
    done
done
echo -e "\n"
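The parent-directory and file-name resolution inside xsync (steps 5 and 6) can be tried in isolation; the demo path below is made up purely for illustration:

```shell
# Reproduce xsync's pdir/fname resolution on a throwaway path.
file=/tmp/xsync_demo/sub/hello.txt
mkdir -p "$(dirname "$file")" && touch "$file"
# Absolute, symlink-free parent directory:
pdir=$(cd -P "$(dirname "$file")"; pwd)
# Base name of the file:
fname=$(basename "$file")
echo "$pdir/$fname"
# xsync then recreates $pdir on the remote host and rsyncs the file there.
```

Resolving the parent with cd -P ... ; pwd means even a relative argument like `xsync bin/` yields an absolute destination path for the remote mkdir and rsync.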
Make the xsync script executable:
[ljc1@ljc102 ~]$ cd bin/
[ljc1@ljc102 bin]$ chmod +x xsync
Distribute /home/ljc1/bin to ljc103 and ljc104:
[ljc1@ljc102 bin]$ cd
[ljc1@ljc102 ~]$ xsync bin/
You can check the result on ljc103 and ljc104:
[ljc1@ljc103 ~]$ ll bin/
[ljc1@ljc104 ~]$ ll bin/
Distribute the environment variable file (the script is invoked by its relative path here because sudo does not search ljc1's PATH):
[ljc1@ljc102 ~]$ sudo ./bin/xsync /etc/profile.d/my_env.sh
Then make the change take effect on ljc103 and ljc104:
[ljc1@ljc103 ~]$ source /etc/profile
[ljc1@ljc104 ~]$ source /etc/profile
5. Passwordless SSH Login Configuration
Generate a private/public key pair on ljc102:
[ljc1@ljc102 ~]$ ssh-keygen -t rsa
On ljc102, copy the public key to every machine that should accept passwordless logins:
[ljc1@ljc102 ~]$ ssh-copy-id ljc102
[ljc1@ljc102 ~]$ ssh-copy-id ljc103
[ljc1@ljc102 ~]$ ssh-copy-id ljc104
Repeat the steps above on ljc103 and ljc104.
Switch to the root user on ljc102:
[ljc1@ljc102 ~]$ su
As root, generate a key pair on ljc102:
[root@ljc102 ~]# ssh-keygen -t rsa
As root, copy the public key from ljc102 to the target machines:
[root@ljc102 ~]# ssh-copy-id ljc102
[root@ljc102 ~]# ssh-copy-id ljc103
[root@ljc102 ~]# ssh-copy-id ljc104
Repeat the steps above on ljc103 and ljc104.
6. Cluster Configuration
Go to the Hadoop configuration directory on ljc102:
[ljc1@ljc102 ~]$ cd /opt/module/hadoop-3.2.2/etc/hadoop/
Configure core-site.xml:
[ljc1@ljc102 hadoop]$ vim core-site.xml
Add the following between <configuration> and </configuration>:
<!-- Address of the NameNode -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://ljc102:8020</value>
</property>
<!-- Hadoop data storage directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/module/hadoop-3.2.2/data</value>
</property>
<!-- Static user for HDFS web UI logins -->
<property>
<name>hadoop.http.staticuser.user</name>
<value>ljc1</value>
</property>
Configure hdfs-site.xml:
[ljc1@ljc102 hadoop]$ vim hdfs-site.xml
<!-- NameNode web UI address -->
<property>
<name>dfs.namenode.http-address</name>
<value>ljc102:9870</value>
</property>
<!-- SecondaryNameNode web UI address -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>ljc104:9868</value>
</property>
Configure yarn-site.xml:
[ljc1@ljc102 hadoop]$ vim yarn-site.xml
<!-- Have MapReduce use the shuffle auxiliary service -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Address of the ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ljc103</value>
</property>
<!-- Environment variables to inherit -->
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Log aggregation server URL -->
<property>
<name>yarn.log.server.url</name>
<value>http://ljc102:19888/jobhistory/logs</value>
</property>
<!-- Keep aggregated logs for 14 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>1209600</value>
</property>
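The 1209600 retention value above is simply 14 days expressed in seconds, which can be checked with shell arithmetic:

```shell
# 14 days x 24 hours x 3600 seconds per hour
echo $((14 * 24 * 3600))   # prints 1209600
```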
Configure mapred-site.xml:
[ljc1@ljc102 hadoop]$ vim mapred-site.xml
<!-- Run MapReduce programs on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- JobHistory server address -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>ljc102:10020</value>
</property>
<!-- JobHistory server web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>ljc102:19888</value>
</property>
Distribute the configuration files:
[ljc1@ljc102 hadoop]$ cd ..
[ljc1@ljc102 etc]$ xsync hadoop/
Sync the configuration files to all nodes:
[ljc1@ljc102 ~]$ xsync /opt/module/hadoop-3.2.2/etc
7. Writing Hadoop Cluster Scripts
Write a cluster start/stop script covering HDFS, YARN, and the JobHistory server: myhadoop.sh
[ljc1@ljc102 bin]$ vim myhadoop.sh
#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input!"
    exit
fi
case $1 in
"start")
    echo -e "\n================= Starting the Hadoop cluster ================="
    echo " ------------------- starting hdfs --------------------"
    ssh ljc102 "/opt/module/hadoop-3.2.2/sbin/start-dfs.sh"
    echo " ------------------- starting yarn --------------------"
    ssh ljc103 "/opt/module/hadoop-3.2.2/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh ljc102 "/opt/module/hadoop-3.2.2/bin/mapred --daemon start historyserver"
    echo -e "\n"
;;
"stop")
    echo -e "\n================= Stopping the Hadoop cluster ================="
    echo " --------------- stopping historyserver ---------------"
    ssh ljc102 "/opt/module/hadoop-3.2.2/bin/mapred --daemon stop historyserver"
    echo " ------------------- stopping yarn --------------------"
    ssh ljc103 "/opt/module/hadoop-3.2.2/sbin/stop-yarn.sh"
    echo " ------------------- stopping hdfs --------------------"
    ssh ljc102 "/opt/module/hadoop-3.2.2/sbin/stop-dfs.sh"
    echo -e "\n"
;;
*)
    echo "Input Args Error!"
;;
esac
Write a script that lists the Java processes on all three servers: jpsall
[ljc1@ljc102 bin]$ vim jpsall
#!/bin/bash
for host in ljc102 ljc103 ljc104
do
    echo -e "\n=============== $host ==============="
    ssh $host jps
done
echo -e "\n"
Make both scripts executable:
[ljc1@ljc102 bin]$ chmod +x myhadoop.sh jpsall
Distribute the scripts:
[ljc1@ljc102 bin]$ cd ..
[ljc1@ljc102 ~]$ xsync bin/
Configure workers:
[ljc1@ljc102 ~]$ vim /opt/module/hadoop-3.2.2/etc/hadoop/workers
Remove the existing content and write:
ljc102
ljc103
ljc104
Sync the configuration files to all nodes:
[ljc1@ljc102 ~]$ xsync /opt/module/hadoop-3.2.2/etc
8. Starting the Cluster
Before the cluster is started for the first time, format the NameNode on the ljc102 node. (If you ever need to reformat later, stop all daemons and delete the data/ and logs/ directories on every node first; otherwise the new cluster ID will not match the one the DataNodes recorded.)
[ljc1@ljc102 ~]$ hdfs namenode -format
Start the cluster (this can be run from any of the three machines):
[ljc1@ljc102 ~]$ myhadoop.sh start
Check the Java processes on all three servers:
[ljc1@ljc102 ~]$ jpsall
Stop the cluster:
[ljc1@ljc102 ~]$ myhadoop.sh stop
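With the configuration in section 6, jpsall run after myhadoop.sh start should show the NameNode and JobHistoryServer on ljc102, the ResourceManager on ljc103, the SecondaryNameNode on ljc104, and a DataNode plus NodeManager on every host. A small sketch (a hypothetical helper, not one of the tutorial's scripts) that prints this expected layout for comparison against the jpsall output:

```shell
# Expected daemons per node, derived from the core-site.xml, hdfs-site.xml,
# yarn-site.xml and mapred-site.xml settings above.
for host in ljc102 ljc103 ljc104; do
  case $host in
    ljc102) extra="NameNode JobHistoryServer" ;;
    ljc103) extra="ResourceManager" ;;
    ljc104) extra="SecondaryNameNode" ;;
  esac
  echo "$host: DataNode NodeManager $extra"
done
```

The matching web UIs are http://ljc102:9870 (HDFS), http://ljc103:8088 (YARN's default ResourceManager web port), and http://ljc102:19888 (JobHistory), per the addresses configured above.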