I: Environment Preparation
Hostname | IP | Role |
---|---|---|
dy-master01 | 192.168.22.134 | master node, NameNode |
dy-slaver01 | 192.168.22.135 | DataNode, SecondaryNameNode |
dy-slaver02 | 192.168.22.133 | DataNode |
II: Prerequisites
1. JDK (environment variables)
export JAVA_HOME=/usr/local/java
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}
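With the exports above in place, it is easy to misquote the `${...}` expansions; a minimal sketch that just echoes how the variables compose (same values as above, nothing installed or modified):

```shell
# Sanity-check how the exports above expand (values from this guide)
JAVA_HOME=/usr/local/java
JRE_HOME=${JAVA_HOME}/jre
JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
echo "${JAVA_PATH}"
# prints /usr/local/java/bin:/usr/local/java/jre/bin
```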
2. Passwordless SSH login
ssh-keygen -t rsa
ssh-copy-id dy-master01
ssh-copy-id dy-slaver01
ssh-copy-id dy-slaver02
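The three `ssh-copy-id` calls above can be looped over the node list; a sketch that only prints each command (hostnames from this guide; drop the `echo` to actually distribute the key):

```shell
# Print the key-distribution command for every node in this cluster;
# remove the echo to execute for real
for host in dy-master01 dy-slaver01 dy-slaver02; do
  echo "ssh-copy-id $host"
done
```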
3. Hostnames
vi /etc/hosts
192.168.22.134 dy-master01
192.168.22.135 dy-slaver01
192.168.22.133 dy-slaver02
4. NTP time server
yum -y install ntp
vi /etc/ntp.conf
On the master node:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp.api.gz iburst
On the other nodes:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.22.134 iburst
On all nodes:
systemctl start ntpd
systemctl enable ntpd
5. Disable SELinux (all nodes; required by the official documentation)
vim /etc/sysconfig/selinux
SELINUX=disabled
Or non-interactively (/etc/sysconfig/selinux is a symlink to /etc/selinux/config):
sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
Reboot (reboot), then verify:
# getenforce
Disabled
# sestatus -v
SELinux status: disabled
6. Disable firewalld
systemctl stop firewalld
systemctl disable firewalld
III: Download and Configure Hadoop
Operate on the master node (dy-master01).
1. Download the hadoop-2.9.1.tar.gz package from:
https://hadoop.apache.org/releases.html
tar -xzvf hadoop-2.9.1.tar.gz
mv hadoop-2.9.1/ /home/
2. Edit the configuration files: cd /home/hadoop-2.9.1/etc/hadoop
- vi hadoop-env.sh
export JAVA_HOME=/usr/local/java
- vi hdfs-site.xml
Set the secondary namenode address:
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>dy-slaver01:50090</value>
</property>
<property>
<name>dfs.namenode.secondary.https-address</name>
<value>dy-slaver01:50091</value>
</property>
</configuration>
- vi core-site.xml
<configuration>
<property>
<!-- Host and port of the default filesystem (the NameNode) -->
<name>fs.defaultFS</name>
<value>hdfs://dy-master01:9000</value>
</property>
<property>
<!-- Base directory for Hadoop's working data; the default under /tmp
is cleared on reboot, so setting it explicitly matters -->
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop-2.9.1</value>
</property>
</configuration>
- vi slaves (list the DataNode hosts)
dy-slaver01
dy-slaver02
- vi masters (secondary namenode host; note that in Hadoop 2.x, start-dfs.sh actually takes the address from hdfs-site.xml)
dy-slaver01
- vi /etc/profile (environment variables)
export HADOOP_HOME=/home/hadoop-2.9.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
3. Copy the installation and configuration to the other nodes
scp -r /home/hadoop-2.9.1/ root@dy-slaver01:/home/
scp -r /home/hadoop-2.9.1/ root@dy-slaver02:/home/
scp /etc/profile root@dy-slaver01:/etc/profile
scp /etc/profile root@dy-slaver02:/etc/profile
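The four copies above can be collapsed into one loop over the workers; a sketch that only prints the commands (hosts and paths from this guide; drop the `echo` to actually copy):

```shell
# Push the Hadoop tree and the profile to each worker node;
# remove the echo to execute for real
for host in dy-slaver01 dy-slaver02; do
  echo "scp -r /home/hadoop-2.9.1/ root@$host:/home/"
  echo "scp /etc/profile root@$host:/etc/profile"
done
```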
4. Format the NameNode
On the namenode, dy-master01:
hdfs namenode -format
This initializes the fsimage.
5. Start HDFS
start-dfs.sh
dy-master01: starting namenode, logging to /home/hadoop-2.9.1/logs/hadoop-root-namenode-dy-master01.out
dy-slaver02: starting datanode, logging to /home/hadoop-2.9.1/logs/hadoop-root-datanode-dy-slaver02.out
dy-slaver01: starting datanode, logging to /home/hadoop-2.9.1/logs/hadoop-root-datanode-dy-slaver01.out
Starting secondary namenodes [dy-slaver01]
dy-slaver01: starting secondarynamenode, logging to /home/hadoop-2.9.1/logs/hadoop-root-secondarynamenode-dy-slaver01.out
Web UI: http://dy-master01:50070
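Beyond the web UI, daemon placement can be checked from the shell with the standard `jps` and `hdfs dfsadmin -report` tools; a sketch that only prints the checks to run, since they need the live cluster built above:

```shell
# Print the per-node verification commands for this cluster
# (run them on a live cluster; this sketch only prints them)
for host in dy-master01 dy-slaver01 dy-slaver02; do
  echo "ssh $host jps"
done
echo "hdfs dfsadmin -report"
```

Expected placement, per the table in section I: NameNode on dy-master01, DataNodes on both slavers, SecondaryNameNode on dy-slaver01.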