1. Pre-installation preparation
Prepare a cluster of three Linux machines.
Install epel-release:
yum -y install epel-release
Install net-tools:
yum -y install net-tools
Stop the firewall, and disable it so it does not come back after a reboot:
systemctl stop firewalld
systemctl disable firewalld
Create a new user (pick a username that suits your setup):
useradd hadoop
Set its password:
passwd hadoop
Grant the new account (hadoop) sudo privileges, second only to root. Edit the sudoers file (or use visudo, which checks the syntax for you):
vim /etc/sudoers
Add the following line, where hadoop is the username you just created:
hadoop ALL=(ALL) NOPASSWD:ALL
Create the module and software folders under /opt:
mkdir /opt/module
mkdir /opt/software
Uninstall the bundled JDK.
Install JDK 1.8.
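The guide lists no commands for these last two steps. A minimal sketch for CentOS, assuming the bundled OpenJDK was installed via rpm and that a JDK tarball has been downloaded to /opt/software; the archive filename and target directory are assumptions chosen to match the JAVA_HOME exported later in this guide:

```shell
# Find and remove any bundled OpenJDK packages (package names vary by distro)
rpm -qa | grep -i openjdk | xargs -r rpm -e --nodeps
# Extract the JDK 1.8 tarball; the archive name here is hypothetical
mkdir -p /usr/local/software/jdk
tar -zxvf /opt/software/jdk-8u131-linux-x64.tar.gz -C /usr/local/software/jdk
```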
2. Hadoop download address
http://archive.apache.org/dist/hadoop/core/
3. This guide uses hadoop-2.10.2, which can be downloaded from the Tsinghua mirror:
https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.10.2/
4. Edit the configuration files
After extracting the archive, enter the configuration directory, hadoop-2.10.2/etc/hadoop:
vim hadoop-env.sh
vim core-site.xml
Configure the NameNode address and the temporary storage directory in core-site.xml:
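The guide does not show what to change in hadoop-env.sh; typically only JAVA_HOME needs to be hard-coded there, matching the path exported in /etc/profile later in this guide:

```
export JAVA_HOME=/usr/local/software/jdk/jdk1.8.0_131
```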
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop100:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/src/hadoop/tmp</value>
</property>
</configuration>
Configure hdfs-site.xml (SecondaryNameNode address, NameNode and DataNode storage directories, and replication factor):
vim hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop100:50090</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/src/hadoop/data/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/src/hadoop/data/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
Configure mapred-site.xml; it must first be copied from the bundled template:
cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Configure yarn-site.xml (ResourceManager host and the shuffle auxiliary service):
vim yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop100</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
List the worker hostnames in the slaves file:
vim slaves
hadoop101
hadoop102
Map the hostnames to IP addresses on every node:
vim /etc/hosts
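The guide does not show the hosts entries themselves. A sketch, written to /tmp/hosts.example here purely for illustration (on a real node these lines are appended to /etc/hosts); pairing 192.168.102.48 with hadoop100 is an assumption inferred from the scp targets and the web UI address in step 7:

```shell
# Hostname-to-IP mapping for the three nodes (the pairing is an assumption);
# written to a temp file here -- append to /etc/hosts on every real node
cat > /tmp/hosts.example <<'EOF'
192.168.102.48 hadoop100
192.168.102.49 hadoop101
192.168.102.50 hadoop102
EOF
cat /tmp/hosts.example
```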
Set up passwordless SSH login.
For a walkthrough, see:
https://blog.csdn.net/weixin_43205308/article/details/129822826
Note: copy the key not only to the other machines, but also to this server itself.
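The linked article covers the details; the usual sequence, run as the hadoop user on each machine, looks roughly like this (hostnames are the ones used in the configuration files):

```shell
# Generate a key pair (accept the defaults at the prompts)
ssh-keygen -t rsa
# Copy the public key to every node -- including this machine itself
ssh-copy-id hadoop100
ssh-copy-id hadoop101
ssh-copy-id hadoop102
```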
Distribute the Hadoop directory to the other two servers:
scp -r /usr/local/src/hadoop/hadoop-2.10.2 192.168.102.50:/usr/local/src/hadoop
scp -r /usr/local/src/hadoop/hadoop-2.10.2 192.168.102.49:/usr/local/src/hadoop
Configure the environment variables:
vim /etc/profile
export JAVA_HOME=/usr/local/software/jdk/jdk1.8.0_131
export HADOOP_HOME=/usr/local/src/hadoop/hadoop-2.10.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Distribute the /etc/profile file as well. Note that Java and Hadoop must sit at the same paths on every machine for these variables to work:
scp -r /etc/profile 192.168.102.49:/etc/profile
scp -r /etc/profile 192.168.102.50:/etc/profile
Apply the changes on every machine:
source /etc/profile
5. Start the cluster
Format the NameNode. Run this on the master node only, and only once:
hadoop namenode -format
6. Start
Run on the master node:
start-all.sh
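Before opening the web UIs, it can help to confirm the daemons actually came up; running jps on each node should show roughly the following processes, assuming the roles configured above:

```shell
# On the master (hadoop100): NameNode, SecondaryNameNode, ResourceManager
# On each worker (hadoop101/102): DataNode, NodeManager
jps
```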
7. Verify
Open the HDFS web UI at http://192.168.102.48:50070/
Open the YARN web UI at http://192.168.102.48:8088/cluster