一、Configure the network
On master, edit /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
HWADDR=00:0C:29:F9:F8:D0
TYPE=Ethernet
UUID=5043f14b-defc-489f-b0ec-66116224035c
IPADDR=192.168.2.199
NETMASK=255.255.255.0
GATEWAY=192.168.2.1
DNS1=8.8.8.8
DNS2=114.114.114.114
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
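Typos in this file are easy to make and hard to spot later, so a quick sanity check can be scripted. A minimal sketch, assuming the file layout above (check_ifcfg is a hypothetical helper, not standard tooling):

```shell
# Hypothetical helper: verify a static ifcfg file defines the required keys.
check_ifcfg() {
    f="$1"
    for key in DEVICE IPADDR NETMASK GATEWAY BOOTPROTO; do
        grep -q "^${key}=" "$f" || { echo "missing ${key} in ${f}"; return 1; }
    done
    echo "ok: ${f}"
}
# check_ifcfg /etc/sysconfig/network-scripts/ifcfg-eth0
```

Once the file passes, `service network restart` applies it on CentOS 6.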
二、Clone the VM (full clone)
三、Fix the network on the cloned machine
slave1:
vi /etc/udev/rules.d/70-persistent-net.rules
Delete the eth0 entry.
Note the last 6 hex digits of eth1's MAC address in ATTR{address}, e.g. 60:f2:6a.
cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
rm /etc/sysconfig/network-scripts/ifcfg-eth0    # remove the stale eth0 config
vi /etc/sysconfig/network-scripts/ifcfg-eth1
Change it to:
DEVICE=eth1
HWADDR=00:0C:29:60:f2:6a
TYPE=Ethernet
# UUID line removed — the value cloned from master no longer matches this NIC
IPADDR=192.168.2.200
NETMASK=255.255.255.0
GATEWAY=192.168.2.1
DNS1=8.8.8.8
DNS2=114.114.114.114
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
slave2: repeat the steps above (with its own MAC address and IPADDR=192.168.2.201).
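The same three edits are needed on every clone, so they can be scripted. A sketch (fix_clone_ifcfg is a hypothetical helper; the file, device, MAC, and IP in the usage comment are the ones from this walkthrough):

```shell
# Hypothetical helper: rewrite DEVICE, HWADDR, and IPADDR in a cloned ifcfg file.
fix_clone_ifcfg() {
    f="$1"; dev="$2"; mac="$3"; ip="$4"
    sed -i -e "s/^DEVICE=.*/DEVICE=${dev}/" \
           -e "s/^HWADDR=.*/HWADDR=${mac}/" \
           -e "s/^IPADDR=.*/IPADDR=${ip}/" "$f"
}
# On slave1:
# fix_clone_ifcfg /etc/sysconfig/network-scripts/ifcfg-eth1 eth1 00:0C:29:60:f2:6a 192.168.2.200
```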
四、Change the hostnames
slave1:
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=slave1
slave2:
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=slave2
reboot    # required on CentOS 6 for the new hostname to take effect
五、Configure /etc/hosts
master
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.199 master
192.168.2.200 slave1
192.168.2.201 slave2
Do the same on slave1 and slave2.
Ping every node from every other node:
ping slave1
ping master
ping slave2
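The manual pings can be preceded by a quick resolution check, which catches /etc/hosts typos directly. A sketch (check_hosts is a hypothetical helper):

```shell
# Hypothetical helper: report whether each cluster hostname resolves locally.
check_hosts() {
    for h in "$@"; do
        getent hosts "$h" > /dev/null && echo "$h resolves" || echo "$h MISSING"
    done
}
# check_hosts master slave1 slave2
```

A name that prints MISSING means the /etc/hosts entry on that node is wrong or absent.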
六、Set up passwordless SSH
On all three nodes, switch to the home directory and generate a key pair:
cd ~
ssh-keygen -t rsa    # press Enter at all four prompts
master: nothing to copy — its own id_rsa.pub is used directly in the merge step below.
slave1:
cp ~/.ssh/id_rsa.pub ~/.ssh/slave1_id_rsa.pub
slave2:
cp ~/.ssh/id_rsa.pub ~/.ssh/slave2_id_rsa.pub
From slave1 and slave2 respectively, send the renamed public keys to master:
scp ~/.ssh/slave1_id_rsa.pub master:~/.ssh/
scp ~/.ssh/slave2_id_rsa.pub master:~/.ssh/
master:
cd ~/.ssh
ls    # check that both slave keys arrived
Merge all three public keys into authorized_keys:
cat ~/.ssh/id_rsa.pub ~/.ssh/slave1_id_rsa.pub ~/.ssh/slave2_id_rsa.pub >> ~/.ssh/authorized_keys
Send authorized_keys to the other two nodes:
scp authorized_keys slave1:~/.ssh/
scp authorized_keys slave2:~/.ssh/
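One common snag not covered above: OpenSSH refuses key-based login when ~/.ssh or authorized_keys is group- or world-writable. On each node, tighten the permissions after copying:

```shell
# sshd requires these modes for key authentication to work
mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys   # no-ops if already present
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```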
master:
ssh slave1    # should log in without a password prompt; then exit
ssh slave2    # likewise; then exit
Passwordless SSH setup is complete.
七、Extract the Hadoop and JDK tarballs
tar -zxvf <jdk tarball> -C <target directory>
tar -zxvf <hadoop tarball> -C <target directory>
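The flags matter: -z handles .tar.gz archives, -x extracts, and -C sets the destination directory, which must already exist. A self-contained illustration of the same command shape, using a throwaway archive in place of the real JDK/Hadoop tarballs:

```shell
cd "$(mktemp -d)"                                  # scratch area for the demo
mkdir -p src/hadoop-2.6.0
echo demo > src/hadoop-2.6.0/README
tar -zcf hadoop-2.6.0.tar.gz -C src hadoop-2.6.0   # stand-in tarball
mkdir -p target                                    # stand-in for e.g. /usr
tar -zxvf hadoop-2.6.0.tar.gz -C target
ls target/hadoop-2.6.0                             # README is back
```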
八、Configure environment variables
master (make the same edit on slave1 and slave2 as well — all three nodes need these variables):
vi /etc/profile
export JAVA_HOME=/usr/jdk1.8.0_162
export HADOOP_HOME=/usr/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile    # make the variables take effect
九、Test java and hadoop
javac
java -version      # prints the Java version
hadoop version     # prints the Hadoop version
(Run the commands above on all three nodes.)
十、Build the cluster
a. Enter the hadoop-2.6.0 directory and create the HDFS directories:
mkdir hdf
mkdir hdf/name
mkdir hdf/data
b. Edit the configuration files
1. hadoop-env.sh
Set JAVA_HOME to the JDK install path.
2. slaves
List the two slave hostnames, one per line (slave1, slave2).
3. core-site.xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/usr/hadoop-2.6.0/tmp</value>
</property>
4. hdfs-site.xml
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/hadoop-2.6.0/hdf/data</value>
  <final>true</final>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/hadoop-2.6.0/hdf/name</value>
  <final>true</final>
</property>
5. mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>master:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>master:19888</value>
</property>
6. yarn-site.xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
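The yarn-site.xml above only registers the shuffle service. On a multi-node cluster the NodeManagers on slave1/slave2 also need to know where the ResourceManager runs; if YARN jobs hang after startup, adding this property (an addition of mine, not in the original notes) usually fixes it:

```xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>
```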
c. Copy the hadoop-2.6.0 directory to both slaves:
scp -r /usr/hadoop-2.6.0/ slave1:/usr/hadoop-2.6.0/
scp -r /usr/hadoop-2.6.0/ slave2:/usr/hadoop-2.6.0/
d. Start the cluster (on master):
hdfs namenode -format                         # format HDFS (first start only)
start-all.sh
mr-jobhistory-daemon.sh start historyserver
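A quick way to confirm the daemons came up is to run `jps` on each node. The check can be scripted; a sketch (check_daemons is a hypothetical helper):

```shell
# Hypothetical helper: read `jps` output on stdin and report missing daemons.
check_daemons() {
    expected="$1"
    out=$(cat)
    for d in $expected; do
        echo "$out" | grep -qw "$d" || echo "MISSING: $d"
    done
}
# On master:  jps | check_daemons "NameNode SecondaryNameNode ResourceManager JobHistoryServer"
# On slaves:  jps | check_daemons "DataNode NodeManager"
```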