① I installed CentOS 6.5; the JDK is jdk1.8.0_211.
Create a java directory under /usr/local/ and move the extracted jdk1.8.0_211 into it.
② Configure the Java environment variables
vi /etc/profile
Append the following two lines at the end of the file:
export JAVA_HOME=/usr/local/java/jdk1.8.0_211
export PATH=.:$JAVA_HOME/bin:$PATH
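The two export lines can be sanity-checked in a shell before editing /etc/profile. This sketch assumes the JDK path from step ①; on the real machine you would follow up with `source /etc/profile` and `java -version`:

```shell
# Set the variables exactly as they will appear in /etc/profile.
# The path matches step ① of this guide; adjust if yours differs.
export JAVA_HOME=/usr/local/java/jdk1.8.0_211
export PATH=.:$JAVA_HOME/bin:$PATH
# Confirm the JDK's bin directory is now on the PATH.
echo "JAVA_HOME=$JAVA_HOME"
# On the real host, verify with:
#   source /etc/profile
#   java -version
```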
③ Change the hostname (to make the Hadoop nodes easier to manage)
(1) vi /etc/sysconfig/network
(2) Change HOSTNAME=hostname (fill in your own hostname)
Note: do this on every host.
④ Reboot
(1) Reboot each virtual machine (reboot)
(2) Restart the network service (service network restart)
⑤ Disable the firewall
(1) service iptables stop (stop the firewall)
(2) chkconfig iptables off (keep it from starting at boot)
⑥ Map hostnames to IP addresses
(1) vi /etc/hosts
(2) Append lines in the following format (do this on every host, listing all of the mappings):
192.168.58.130 master
192.168.58.131 node1
192.168.58.132 node2
(Each host needs its own address; substitute the actual IPs of your machines.)
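Before touching the real /etc/hosts, the mapping format can be sanity-checked in a scratch file. The addresses below are illustrative; the important property is that each hostname resolves to its own IP:

```shell
# Write the example mappings to a scratch file; on the real hosts the
# same lines go at the end of /etc/hosts.
cat > /tmp/hosts.example <<'EOF'
192.168.58.130 master
192.168.58.131 node1
192.168.58.132 node2
EOF
# List the hostnames (field 2) to confirm one entry per node.
awk '{print $2}' /tmp/hosts.example
```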
⑦ Copy the hosts file to the other nodes
scp /etc/hosts node1:/etc/hosts
scp /etc/hosts node2:/etc/hosts
⑧ Set up passwordless SSH login (run only on the master host)
(1)ssh-keygen -t rsa
(2)ssh-copy-id -i node1
ssh-copy-id -i node2
(3)cd ./.ssh/
(4)cat ./id_rsa.pub >> ./authorized_keys
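Steps (1)–(4) amount to generating a key pair and appending the public key to an authorized_keys file (ssh-copy-id performs the same append on node1/node2). A self-contained dry run with a throwaway key, using /tmp paths for illustration:

```shell
# Generate a throwaway RSA key pair non-interactively (no passphrase),
# mirroring what `ssh-keygen -t rsa` does in step (1).
ssh-keygen -t rsa -N '' -f /tmp/demo_id_rsa -q
# Step (4) in miniature: append the public key to an authorized_keys file.
# ssh-copy-id does this same append on each remote node.
cat /tmp/demo_id_rsa.pub >> /tmp/demo_authorized_keys
ls /tmp/demo_id_rsa.pub
```

On the real master, verify afterwards with `ssh node1` (it should log in without a password prompt).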
⑨ Configure Hadoop (Hadoop is extracted to /usr/local/soft)
(1) Edit the slaves file
vi /usr/local/soft/hadoop-2.8.0/etc/hadoop/slaves
Then add the DataNode hostnames, one per line:
node1
node2
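The finished slaves file should contain nothing but the worker hostnames, one per line. A quick way to write and check that shape (using a /tmp path here rather than the real file):

```shell
# Write the worker list; on the cluster the real file is
# /usr/local/soft/hadoop-2.8.0/etc/hadoop/slaves
printf 'node1\nnode2\n' > /tmp/slaves.example
# Two workers -> two lines, no blanks or comments.
wc -l < /tmp/slaves.example
```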
(2) Edit the hadoop-env.sh file
vi /usr/local/soft/hadoop-2.8.0/etc/hadoop/hadoop-env.sh
Add the following after the line "The java implementation to use" (it must match the JDK path from step ①): export JAVA_HOME=/usr/local/java/jdk1.8.0_211
(3) Edit the core-site.xml file
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/soft/hadoop-2.8.0/tmp</value>
</property>
<property>
<name>fs.trash.interval</name>
<value>1440</value>
</property>
</configuration>
(4) Edit the hdfs-site.xml file
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
(5) Edit the yarn-site.xml file
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
</configuration>
(6) Edit mapred-site.xml (first copy the template: cp mapred-site.xml.template mapred-site.xml)
Add the following:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
(7) Copy the Hadoop installation directory to each worker node
scp -r /usr/local/soft/hadoop-2.8.0 node1:/usr/local/soft/
scp -r /usr/local/soft/hadoop-2.8.0 node2:/usr/local/soft/
(8) Start Hadoop (run from /usr/local/soft/hadoop-2.8.0 on master)
Format the NameNode (this also creates the tmp directory): ./bin/hdfs namenode -format
Start the cluster: ./sbin/start-all.sh
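Once start-all.sh returns, running `jps` on each host shows which daemons came up. The layout below is an assumption based on this guide's configuration (master acting as NameNode and ResourceManager); it can be kept as a checklist to compare against `jps` output:

```shell
# Expected Java processes per host for this guide's layout; on the real
# cluster compare against `jps` on each machine (e.g. `ssh node1 jps`).
expected_master="NameNode SecondaryNameNode ResourceManager"
expected_worker="DataNode NodeManager"
for d in $expected_master; do echo "master: $d"; done
for d in $expected_worker; do echo "worker: $d"; done
```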