1. Three hosts are needed (one virtual machine, cloned twice; each machine must have a different MAC address)
192.168.137.1 father
192.168.137.2 son01
192.168.137.3 son02
1.1 Host configuration
a> First check the father host's IP address, then set a static IP (a sample configuration is sketched after this step list)
# ip addr
# vi /etc/sysconfig/network-scripts/ifcfg-ens33
b> Restart the network service:
# systemctl restart network.service
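For reference, a minimal static-IP version of ifcfg-ens33 for the father host might look like the sketch below. BOOTPROTO, IPADDR, and NETMASK follow the plan above; GATEWAY and DNS1 are assumptions that depend on your virtual network, so adjust them to your environment.
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.137.1
NETMASK=255.255.255.0
GATEWAY=192.168.137.254   # assumption: use your VM network's gateway
DNS1=192.168.137.254      # assumption: use your VM network's DNS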
1.2 Rename the host
# vi /etc/hostname (change the hostname to: father)
1.3 hosts mapping
# vi /etc/hosts (edit the host mappings)
192.168.137.1 father.com father
192.168.137.2 son01.com son01
192.168.137.3 son02.com son02
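Once the clones from part 2 are up, a quick sketch to confirm the mappings resolve (run from father):
# ping -c 1 son01
# ping -c 1 son02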
1.4 Create a user
# useradd father (username)
# passwd father (username)
Enter the password twice and the user is created.
1.5 Give the new user passwordless root privileges
a> Make /etc/sudoers writable (# chmod 640 /etc/sudoers)
b> Edit the file (# vi /etc/sudoers)
c> Inside the file, add at the top: father ALL=(root) NOPASSWD:ALL
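To verify the entry took effect, switch to the new user and run a root command; with NOPASSWD it should succeed without a password prompt. A minimal check:
# su - father
$ sudo whoami
root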
1.6 Disable the firewall
a> Check the firewall status
# systemctl status firewalld.service
b> Stop the firewall
# systemctl stop firewalld.service
c> Permanently disable the firewall
# systemctl disable firewalld.service
d> Start the SSHD service
# systemctl start sshd
1.7 Install the JDK
a> Copy the JDK package into /home/father/softwares (note: the copied package must be owned by the user)
b> Extract the package into the modules directory
# tar -zxvf jdk-7u67-linux-x64.tar.gz -C ../modules
c> Configure the JDK environment variables
# sudo vi /etc/profile
Append at the end of the file:
export JAVA_HOME=/home/father/modules/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
After adding them, reload the file: # source /etc/profile
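A quick check that the variables are active (the expected first line of output for this JDK is shown):
# java -version
java version "1.7.0_67"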
2. Clone the other two hosts
2.1 After cloning, change each clone's IP address and hostname (same method as 1.1 and 1.2 above)
2.2 Back on the father host, set up passwordless SSH login
# ssh-keygen -t rsa (generates the public/private key pair); press Enter at each prompt to accept the defaults
Then send the generated key to this host and to the other two:
# ssh-copy-id father
# ssh-copy-id son01
# ssh-copy-id son02
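To verify passwordless login, a small loop like this sketch should print all three hostnames without asking for a password:
# for h in father son01 son02; do ssh $h hostname; done
father
son01
son02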
3. Install Hadoop
3.1 Fix the permissions on the Hadoop package
# chmod 764 hadoop
3.2 Copy the Hadoop package over and extract it
# tar -zxvf hadoop-2.7.1.tar.gz -C ../modules
3.3 Configure environment variables
(same method as 1.7 c> above: # sudo vi /etc/profile, then append at the bottom)
export JAVA_HOME=/home/father/modules/jdk1.7.0_67
export HADOOP_HOME=/home/father/modules/hadoop-2.7.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
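After # source /etc/profile, hadoop should be on the PATH; a quick check (the expected first line of output for this package is shown):
# hadoop version
Hadoop 2.7.1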
3.4 Create a directory named hadoopdata under /home/father (needed by 3.5 c>)
# mkdir hadoopdata
3.5 Edit the Hadoop configuration files (under $HADOOP_HOME/etc/hadoop)
a> Edit hadoop-env.sh
Change the JDK path in the file:
export JAVA_HOME=/home/father/modules/jdk1.7.0_67
b> Edit core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://father:8020</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>
c> Edit hdfs-site.xml (dfs.block.size 134217728 bytes = 128 MB; dfs.replication 3 matches the three nodes)
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.block.size</name>
<value>134217728</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/father/hadoopdata/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/father/hadoopdata/dfs/data</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>/home/father/hadoopdata/checkpoint/dfs/cname</value>
</property>
<property>
<name>fs.checkpoint.edits.dir</name>
<value>/home/father/hadoopdata/checkpoint/dfs/cname</value>
</property>
<property>
<name>dfs.http.address</name>
<value>father:50070</value>
</property>
<property>
<name>dfs.secondary.http.address</name>
<value>son01:50090</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
d> Edit mapred-site.xml (if it does not exist, copy it from mapred-site.xml.template)
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<final>true</final>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>father:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>father:19888</value>
</property>
e> Edit yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>father</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>father:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>father:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>father:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>father:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>father:8088</value>
</property>
f> Edit the slaves file (one hostname per line; listing father means the master also runs a DataNode and NodeManager)
father
son01
son02
3.6 Copy the entire local Hadoop directory to the other nodes (important)
# scp -r ./hadoop-2.7.1 father@son01:/home/father/modules
# scp -r ./hadoop-2.7.1 father@son02:/home/father/modules
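Note that /etc/profile was edited on father after the clones were made, so son01 and son02 still lack the HADOOP_HOME entries. One way to sync it, a sketch assuming the sudo setup from 1.5 (which the clones inherited) is in place:
# scp /etc/profile father@son01:/tmp/profile
# ssh -t son01 "sudo cp /tmp/profile /etc/profile"
# scp /etc/profile father@son02:/tmp/profile
# ssh -t son02 "sudo cp /tmp/profile /etc/profile"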
3.7 Format the NameNode
# hadoop namenode -format
3.8 Start Hadoop
# start-all.sh
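If startup succeeded, jps should show roughly the daemons below (process IDs omitted); with the configuration above, the NameNode and ResourceManager run on father, the SecondaryNameNode on son01, and a DataNode and NodeManager on every host listed in slaves:
# jps                (on father)
NameNode
ResourceManager
DataNode
NodeManager
Jps
# ssh son01 jps
SecondaryNameNode
DataNode
NodeManager
Jps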
3.9 Create a file a.txt under modules and upload it to HDFS
# hdfs dfs -put a.txt /   (uploads the file to the HDFS root)
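A full round trip might look like this sketch (the file name and contents are just an illustration):
# cd /home/father/modules
# echo "hello hadoop" > a.txt
# hdfs dfs -put a.txt /
# hdfs dfs -cat /a.txt
hello hadoop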
4. Web access: http://192.168.137.1:50070 (NameNode web UI; the YARN UI configured above is at http://192.168.137.1:8088)