I. Planning and Preparation
Software and installation packages to prepare: Xshell, VMware, CentOS, JDK, Hadoop
1. Prepare three virtual machines. You can create one VM first, then clone two more from it.
IP addresses of the three VMs:
hadoop1: 192.168.26.143
hadoop2: 192.168.26.145
hadoop3: 192.168.26.144
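If the VMs do not yet have fixed addresses, pin them in the NIC configuration first. A minimal sketch, assuming a typical VMware NAT setup where the interface is ens33 and the gateway/DNS sit at 192.168.26.2 (adjust to your network):
vim /etc/sysconfig/network-scripts/ifcfg-ens33
----------------------------------------------
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.26.143   # use .145 on hadoop2 and .144 on hadoop3
NETMASK=255.255.255.0
GATEWAY=192.168.26.2    # assumption: VMware NAT gateway
DNS1=192.168.26.2       # assumption
Apply the change with: systemctl restart network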
2. Set the hostnames of the three VMs to hadoop1, hadoop2, and hadoop3 respectively
vim /etc/hostname
-------------------
hadoop1
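On CentOS 7 the same can be done in one command, without editing the file:
hostnamectl set-hostname hadoop1   # run with the matching name on each host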
3. Configure host mappings on all three VMs
vim /etc/hosts
---------------------
192.168.26.143 hadoop1
192.168.26.145 hadoop2
192.168.26.144 hadoop3
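A quick check that the mappings resolve (run from any of the hosts):
ping -c 1 hadoop2
ping -c 1 hadoop3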
4. Turn off the firewall on all three VMs
systemctl stop firewalld
systemctl disable firewalld
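Confirm it is down:
systemctl status firewalld   # should report: inactive (dead)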
5. Create a user on all three VMs (if that feels like a hassle, just use root throughout)
sudo useradd hadoop
sudo passwd hadoop
6. Reboot all three VMs
reboot
7. Give the hadoop user root privileges on all three VMs
visudo
----------------------
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
hadoop ALL=(ALL) ALL
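To verify the grant took effect:
su - hadoop
sudo whoami   # should print: root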
8. On all three VMs, create directories under /opt and hand them to the hadoop user
sudo mkdir /opt/module /opt/software
sudo chown hadoop:hadoop /opt/module /opt/software
9. Remove any pre-installed JDK from all three VMs
rpm -qa | grep -i java | xargs -n1 sudo rpm -e --nodeps
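Afterwards the old JDK should be gone:
java -version   # expected: bash: java: command not found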
10. Cluster time synchronization
On all hosts, stop the ntp service and disable it at boot:
systemctl stop ntpd
systemctl disable ntpd
Then, on hadoop1 (which will serve time to the rest of the cluster), edit /etc/ntp.conf
vim /etc/ntp.conf
------------------------
# Uncomment (and match to your subnet) so LAN hosts may query and sync from this server:
restrict 192.168.26.0 mask 255.255.255.0 nomodify notrap
# Comment out the internet time servers (the cluster uses local time only):
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# Add the following (fall back to the local clock as the time source):
server 127.127.1.0
fudge 127.127.1.0 stratum 10
Still on hadoop1, edit /etc/sysconfig/ntpd
vim /etc/sysconfig/ntpd
-----------------------------
# Add the following (keep the hardware clock in sync with system time)
SYNC_HWCLOCK=yes
Restart the service on hadoop1 and enable it at boot
systemctl start ntpd
systemctl enable ntpd
On the other hosts (all except hadoop1), add a cron job that syncs from hadoop1 every 10 minutes
crontab -e
------------------------------
*/10 * * * * /usr/sbin/ntpdate hadoop1
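To test the sync path right away instead of waiting ten minutes (run on hadoop2 or hadoop3):
sudo /usr/sbin/ntpdate hadoop1
date   # should now match hadoop1's clock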
II. Installing JDK & Hadoop and Configuring the Environment (set up one host fully; the other two are provisioned with the shortcut in section III)
1. Use FileZilla (or any SFTP tool) to upload the prepared JDK and Hadoop packages to /opt/software on hadoop1
2. Install JDK & Hadoop
- Extract the JDK to /opt/module (note that the -C flag is uppercase)
tar -zxvf jdk-8u311-linux-i586.tar.gz -C /opt/module
- Extract Hadoop to /opt/module
tar -zxvf hadoop-2.7.2.tar.gz -C /opt/module/
- Configure environment variables: create an env.sh file under /etc/profile.d with the following
vim /etc/profile.d/env.sh
---------------------------------------
export JAVA_HOME=/opt/module/jdk1.8.0_311
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/opt/module/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
#export PATH=$PATH:/root/bin
- Load the new environment variables
source /etc/profile
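Check that both tools are now on the PATH:
java -version    # should report 1.8.0_311
hadoop version   # should report Hadoop 2.7.2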
III. Passwordless SSH and the Cluster Distribution Script (set up on hadoop1)
1. Generate a key pair (the ssh hadoop2 beforehand just creates the ~/.ssh directory on hadoop1; type exit to return before generating)
ssh hadoop2
ssh-keygen -t rsa
2. Copy the public key to every host that should allow passwordless login (including hadoop1 itself)
ssh-copy-id hadoop1
ssh-copy-id hadoop2
ssh-copy-id hadoop3
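Verify that logins no longer prompt for a password:
ssh hadoop2 hostname   # prints hadoop2 with no password prompt
ssh hadoop3 hostname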
3. Create a bin directory in the user's home directory and add it to PATH (uncomment the export PATH=$PATH:/root/bin line in /etc/profile.d/env.sh from section II, then source /etc/profile again)
mkdir /root/bin
4. Create the xsync script in that directory
vim /root/bin/xsync
-------------------------------------------
#!/bin/bash

# 1. Check the argument count
if [ $# -lt 1 ]
then
    echo "Not Enough Arguments!"
    exit
fi

# 2. Loop over every machine in the cluster
for host in hadoop1 hadoop2 hadoop3
do
    echo ==================== $host ====================
    # 3. Send each requested file or directory in turn
    for file in "$@"
    do
        # 4. Only send files that exist
        if [ -e "$file" ]
        then
            # 5. Resolve the parent directory (following symlinks)
            pdir=$(cd -P "$(dirname "$file")"; pwd)
            # 6. Get the file's base name
            fname=$(basename "$file")
            ssh $host "mkdir -p $pdir"
            rsync -av "$pdir/$fname" "$host:$pdir"
        else
            echo "$file does not exist!"
        fi
    done
done
5. Make xsync executable
chmod +x /root/bin/xsync
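A quick first use is to push the script itself out, so xsync is available on every host:
xsync /root/bin/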
6. Distribute the /opt directory and the environment file to the other two hosts (do not sync all of /etc; that would overwrite per-host files such as /etc/hostname)
xsync /opt/
xsync /etc/profile.d/env.sh
IV. Cluster Configuration
     | hadoop1            | hadoop2                      | hadoop3
HDFS | NameNode, DataNode | DataNode                     | SecondaryNameNode, DataNode
YARN | NodeManager        | ResourceManager, NodeManager | NodeManager
Core configuration files
Path: cd /opt/module/hadoop-2.7.2/etc/hadoop/
1.core-site.xml
vim core-site.xml
---------------------------------
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/module/hadoop-2.7.2/data</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.http.staticuser.user</name>
<value>root</value>
</property>
</configuration>
2. hdfs-site.xml: set the NameNode and SecondaryNameNode web addresses
vim hdfs-site.xml
-----------------------------------
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop3:9868</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>hadoop1:9870</value>
</property>
</configuration>
3.yarn-site.xml
vim yarn-site.xml
------------------------------------------
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop2</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>4096</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4096</value>
</property>
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log.server.url</name>
<value>http://${yarn.timeline-service.webapp.address}/applicationhistory/logs</value>
</property>
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
<property>
<name>yarn.timeline-service.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.timeline-service.hostname</name>
<value>${yarn.resourcemanager.hostname}</value>
</property>
<property>
<name>yarn.timeline-service.http-cross-origin.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
<value>true</value>
</property>
</configuration>
4.mapred-site.xml
vim mapred-site.xml
-------------------------------
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- JobHistory server address -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop1:10020</value>
</property>
<!-- JobHistory server web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop1:19888</value>
</property>
</configuration>
5. Configure the cluster node list (in Hadoop 2.x, including 2.7.2, the file is named slaves; workers is the Hadoop 3.x name)
vim /opt/module/hadoop-2.7.2/etc/hadoop/slaves
--------------------------------------------------
hadoop1
hadoop2
hadoop3
6. Distribute the configuration directory to the cluster
xsync /opt/module/hadoop-2.7.2/etc/
V. Starting the Cluster and Testing
Path: cd /opt/module/hadoop-2.7.2
1. Format the NameNode (run on hadoop1). This step may fail with a missing-dependency error; install the package the error message names.
bin/hdfs namenode -format
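If you ever need to format again, stop all daemons first and delete the data and logs directories on every node; otherwise the new NameNode clusterID will no longer match the DataNodes'. The paths below assume the hadoop.tmp.dir configured above:
rm -rf /opt/module/hadoop-2.7.2/data /opt/module/hadoop-2.7.2/logs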
2. Start HDFS (run on hadoop1)
sbin/start-dfs.sh
3. Start YARN (run on hadoop2, the ResourceManager host)
sbin/start-yarn.sh
4. Start the JobHistory server (run on hadoop1)
sbin/mr-jobhistory-daemon.sh start historyserver
5. Start the timeline (log) service (run on hadoop2)
sbin/yarn-daemon.sh start timelineserver
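jps on each host should now show the daemons planned in section IV (the timeline server appears as ApplicationHistoryServer):
jps   # hadoop1: NameNode, DataNode, NodeManager, JobHistoryServer
      # hadoop2: ResourceManager, NodeManager, DataNode, ApplicationHistoryServer
      # hadoop3: SecondaryNameNode, DataNode, NodeManager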
6. Test
Open in a browser: http://192.168.26.143:9870 (HDFS NameNode web UI)
Open in a browser: http://192.168.26.145:8088 (YARN ResourceManager web UI)
Open in a browser: http://192.168.26.144:9868 (SecondaryNameNode web UI)
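The upload in the next step references a local sample file; if it does not exist yet, create it first (the contents are only illustrative):
mkdir -p /opt/module/hadoop-2.7.2/wcinput
vim /opt/module/hadoop-2.7.2/wcinput/wc.input
---------------------------------------------
hadoop yarn
hadoop mapreduce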
7. Upload data
hadoop fs -mkdir -p /user/hadoop/input
hadoop fs -put /opt/module/hadoop-2.7.2/wcinput/wc.input /user/hadoop/input
8. Run the bundled example job against the file you prepared
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /user/hadoop/input /user/hadoop/output
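When the job finishes, inspect the word counts:
hadoop fs -cat /user/hadoop/output/part-r-00000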