1. Clone two nodes
2. Configure a static IP on each node
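For example, on a CentOS/RHEL node (an assumption; adapt the file and values to your distribution, and treat every address below as a placeholder):
$>sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
    BOOTPROTO=static      # fixed address instead of DHCP
    ONBOOT=yes            # bring the interface up at boot
    IPADDR=192.168.1.100  # master; use .101/.102 on slave1/slave2
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
$>sudo service network restart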
3. Set the hostnames: 【master (primary), slave1 (worker), slave2 (worker)】
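For example (hostnamectl assumes a systemd distribution; on older systems edit /etc/hostname or /etc/sysconfig/network instead):
$>sudo hostnamectl set-hostname master    # run the matching command on slave1 and slave2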
4. Map each IP to its hostname in 【/etc/hosts】
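For example, append to /etc/hosts on master (the IPs are placeholders; use the static IPs from step 2):
192.168.1.100 master
192.168.1.101 slave1
192.168.1.102 slave2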
5. Set up passwordless SSH login:
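A minimal sketch, run as hyxy on master; ssh-copy-id appends the public key to each node's ~/.ssh/authorized_keys:
$>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # generate a key pair without a passphrase
$>ssh-copy-id hyxy@master                     # master must log into itself as well
$>ssh-copy-id hyxy@slave1
$>ssh-copy-id hyxy@slave2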
6. Install the JDK (omitted)
7. Install Hadoop (omitted)
8. Configure the environment variables (omitted)
9. Edit the Hadoop configuration files:
a. core-site.xml
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hyxy/tmp/hadoop</value>
</property>
b. hdfs-site.xml
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
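Note that with only two DataNodes (slave1 and slave2) a replication factor of 3 can never be satisfied and every block will be reported as under-replicated; a value of 2 matches this cluster.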
c. mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>
        The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.
    </description>
</property>
d. yarn-site.xml
<property>
    <description>A comma separated list of services where service name should only
    contain a-zA-Z0-9_ and can not start with numbers</description>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <description>The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
</property>
e. slaves
Add the DataNode hostnames, one per line:
slave1
slave2
f. hadoop-env.sh
# set JAVA_HOME
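For example (the JDK path is a placeholder; point it at the directory where the JDK was installed in step 6):
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0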
【Note:
1). Distribute the hosts file to every worker node:
$>scp /etc/hosts hyxy@slave1:/etc/
$>scp /etc/hosts hyxy@slave2:/etc/
2). Distribute the hadoop directory to every worker node:
$>scp -r ~/soft/hadoop/ hyxy@slave1:/home/hyxy/soft
$>scp -r ~/soft/hadoop/ hyxy@slave2:/home/hyxy/soft
】
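Also note that /etc is writable only by root, so the hosts copy above may be refused for a normal user. One workaround (assuming hyxy has sudo rights on the workers, as configured in the sudo section below) is to stage the file in the home directory and move it with sudo:
$>scp /etc/hosts hyxy@slave1:/home/hyxy/
$>ssh -t hyxy@slave1 'sudo mv ~/hosts /etc/hosts'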
10. Format the NameNode
a. Delete all files under the path set by hadoop.tmp.dir (on every node)
b. Delete the log files under the path set by HADOOP_LOG_DIR (on every node)
c. Format:
$>hdfs namenode -format    (master node only)
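Steps a and b matter mainly when reformatting an existing cluster: formatting writes a new clusterID into the NameNode metadata, and DataNodes still holding data stamped with the old clusterID will refuse to register.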
11. Start the fully distributed cluster:
$>start-all.sh
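start-all.sh is deprecated in Hadoop 2.x; the equivalent is to start HDFS and YARN separately, then check the daemons with jps (master should typically show NameNode, SecondaryNameNode and ResourceManager; each slave should show DataNode and NodeManager):
$>start-dfs.sh
$>start-yarn.sh
$>jps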
sudo
--------------------
1. Edit the /etc/sudoers file:
$>visudo
Below the line "root ALL=(ALL) ALL", insert:
hyxy ALL=(ALL) ALL
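A quick check that the entry took effect (run as hyxy):
$>sudo whoami    # should print: root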