1. Minimal install
2. Configure the yum repository
3. Switch to a static IP
4. Install the Java environment
5. Set up passwordless SSH login
6. Install Hadoop
7. Distribute Hadoop to each node
8. Start and verify
1. Minimal install
CentOS download: https://www.centos.org/download/
Install three machines (one master, two slaves).
2. Configure the yum repository
Reference: https://blog.csdn.net/weixin_38280090/article/details/83038559
3. Switch to a static IP
Reference: https://blog.csdn.net/weixin_38280090/article/details/84848527
Edit /etc/hosts and add:
192.168.87.128 master
192.168.87.130 slave1
192.168.87.131 slave2
Use hostnamectl set-hostname to set each machine's hostname accordingly.
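The host-mapping and hostname steps above can be collected into a short sequence. This is a sketch using the IPs from this guide; run it as root, and run only the hostnamectl line that matches the node you are on:

```shell
# On all three nodes: append the cluster name mappings to /etc/hosts
cat >> /etc/hosts <<'EOF'
192.168.87.128 master
192.168.87.130 slave1
192.168.87.131 slave2
EOF

# On each node, set its own hostname (one line per machine):
hostnamectl set-hostname master    # only on 192.168.87.128
hostnamectl set-hostname slave1    # only on 192.168.87.130
hostnamectl set-hostname slave2    # only on 192.168.87.131
```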
4. Install the Java environment
(do this on all three virtual machines)
JDK download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
tar -vzxf jdk-8u211-linux-x64.tar.gz   # unpack
mkdir /usr/java
mv /root/jdk1.8.0_211/ /usr/java
Edit /etc/profile (vi also works; vim may need to be installed first) and append the following, adjusting the path to your own Java version:
JAVA_HOME=/usr/java/jdk1.8.0_211
JRE_HOME=$JAVA_HOME/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
After editing, run source /etc/profile.
Test Java.
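A quick way to test that the environment took effect (a sketch; the version string assumes the 8u211 tarball installed above):

```shell
source /etc/profile
echo $JAVA_HOME    # should print /usr/java/jdk1.8.0_211
java -version      # should report java version "1.8.0_211"
which java         # should resolve to a path under $JAVA_HOME or $JRE_HOME
```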
5. Set up passwordless SSH login
On all three virtual machines, create the directory: mkdir ~/.ssh
Inside .ssh, run: ssh-keygen -t rsa
This generates two files: a private key and a public key.
cp id_rsa.pub authorized_keys
scp /root/.ssh/authorized_keys slave1:/root/.ssh
scp /root/.ssh/authorized_keys slave2:/root/.ssh
chmod 644 authorized_keys   # change the permissions on every machine
After restarting sshd, master can log in to master, slave1, and slave2 without a password.
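The keygen/authorized_keys flow can be rehearsed locally before touching the cluster. The sketch below uses a throwaway temp directory instead of /root/.ssh, so it is safe to run anywhere; on the real master you would work in /root/.ssh and then scp the file out as shown above:

```shell
# Rehearse the key setup in a temp directory (safe, no cluster needed)
dir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$dir/id_rsa" -q   # empty passphrase, no prompts
cp "$dir/id_rsa.pub" "$dir/authorized_keys"   # authorize our own public key
chmod 644 "$dir/authorized_keys"
ls "$dir"   # contains id_rsa, id_rsa.pub, authorized_keys
```

Note that sshd refuses keys whose authorized_keys file is group- or world-writable, which is why the chmod step matters on every machine.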
6. Install Hadoop
Download: http://hadoop.apache.org/releases.html (choose the binary)
tar -zxvf hadoop-2.7.7.tar.gz
mv hadoop-2.7.7 /opt/hadoop/
cd /opt/hadoop/etc/hadoop
In hadoop-env.sh and yarn-env.sh, add:
export JAVA_HOME=/usr/java/jdk1.8.0_211
Edit the following files:
core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop/tmp</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/hadoop/dfs/data</value>
</property>
</configuration>
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
yarn-site.xml:
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
Edit the slaves file and add the names of the slave nodes:
slave1
slave2
7. Distribute Hadoop to each node
scp -r /opt/hadoop slave1:/opt/hadoop
scp -r /opt/hadoop slave2:/opt/hadoop
8. Start and verify
Start Hadoop on the master server; the slave nodes will start automatically. Go to the /opt/hadoop directory.
(1) Initialize: bin/hdfs namenode -format
(2) Start everything with sbin/start-all.sh, or start the parts separately with sbin/start-dfs.sh and sbin/start-yarn.sh
(3) Stop everything: sbin/stop-all.sh
(4) Run jps to check the running daemons; the YARN web UI listens on port 8088.
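Step 8 put together as one sequence (a sketch run on master; the daemon names in the comments are what a default Hadoop 2.7.x layout like this one typically shows):

```shell
cd /opt/hadoop
bin/hdfs namenode -format     # one-time initialization, on master only
sbin/start-dfs.sh             # NameNode on master, DataNodes on the slaves
sbin/start-yarn.sh            # ResourceManager on master, NodeManagers on the slaves

jps   # on master, typically: NameNode, SecondaryNameNode, ResourceManager, Jps
      # on slave1/slave2: DataNode, NodeManager, Jps

bin/hdfs dfsadmin -report     # should list two live datanodes
# Web UIs: http://master:8088 (YARN), http://master:50070 (HDFS in Hadoop 2.x)
```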