Setting Up a Fully Distributed Hadoop Cluster

1. First, create three virtual machines.

2. Set the hostnames (run the corresponding command on each of the three machines):

hostnamectl set-hostname master

hostnamectl set-hostname slave1

hostnamectl set-hostname slave2

3. Configure the IP address of each of the three hosts (for reference only; a sketch of the file contents follows):

vi /etc/sysconfig/network-scripts/ifcfg-ens33

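The original screenshots showed the ifcfg-ens33 file for each host. A minimal sketch of what such a file typically contains (the addresses below are illustrative placeholders, not taken from the original; substitute your own network's values):

TYPE=Ethernet
DEVICE=ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.10        # e.g. .11 for slave1, .12 for slave2
NETMASK=255.255.255.0
GATEWAY=192.168.1.2
DNS1=192.168.1.2

After editing, restart the network service (systemctl restart network) so the new address takes effect.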

4. Edit /etc/hosts with vi (add the entries on all three hosts):

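The screenshot showed the hostname-to-IP mappings. With the illustrative addresses from step 3, the entries would look like this (use your actual IPs):

192.168.1.10 master
192.168.1.11 slave1
192.168.1.12 slave2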

5. Set up passwordless SSH: ssh-keygen -t rsa (run on all three hosts).

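The screenshots showed the key-generation run on each host. For example (press Enter at every prompt to accept the default key file and an empty passphrase):

[root@master ~]# ssh-keygen -t rsa

[root@slave1 ~]# ssh-keygen -t rsa

[root@slave2 ~]# ssh-keygen -t rsa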

6. Create the authorized_keys file from the public key (on all three hosts):

[root@master ~]# cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

[root@slave1 ~]# cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

[root@slave2 ~]# cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys


7. Copy each slave's key to the master node:

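The screenshots likely showed each slave's public key being appended to the master's authorized_keys. One common way to do this is ssh-copy-id (run on each slave; enter the master's root password when prompted):

[root@slave1 ~]# ssh-copy-id master

[root@slave2 ~]# ssh-copy-id master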

8. Then distribute the master node's merged key file to the slave nodes:

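For example, from the master (its authorized_keys now holds all three public keys):

[root@master ~]# scp ~/.ssh/authorized_keys slave1:~/.ssh/

[root@master ~]# scp ~/.ssh/authorized_keys slave2:~/.ssh/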

9. Test passwordless login:

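For example, from the master; if no password prompt appears, passwordless login works:

[root@master ~]# ssh slave1
[root@slave1 ~]# exit
[root@master ~]# ssh slave2
[root@slave2 ~]# exit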

10. Extract the Hadoop and JDK archives:

[root@master src]# tar zxvf /h3cu/jdk-8u151-linux-x64.tar.gz -C /usr/local/src/

[root@master src]# tar zxvf /h3cu/hadoop-2.6.5.tar.gz -C /usr/local/src/

11. Rename the extracted directories to short names:

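The screenshot showed the renames. Given the archives extracted in step 10, the commands would be:

[root@master src]# mv /usr/local/src/jdk1.8.0_151 /usr/local/src/jdk

[root@master src]# mv /usr/local/src/hadoop-2.6.5 /usr/local/src/hadoop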

12. Go into hadoop/etc/hadoop and configure the following files:

vi core-site.xml

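The screenshot showed the core-site.xml properties. A typical minimal configuration for this layout (the port 9000 and the tmp directory are common choices, not taken from the original):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/src/hadoop/tmp</value>
  </property>
</configuration>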

vi hdfs-site.xml

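A typical minimal hdfs-site.xml for a two-DataNode cluster (the replication factor and directories below are illustrative):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/src/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/src/hadoop/dfs/data</value>
  </property>
</configuration>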

vi mapred-site.xml

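In Hadoop 2.6.5 this file ships as mapred-site.xml.template, so copy it first. The essential property tells MapReduce to run on YARN:

[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>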

vi yarn-site.xml

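A typical minimal yarn-site.xml pointing the ResourceManager at the master:

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>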

vi hadoop-env.sh

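The screenshot showed JAVA_HOME being set. With the short name from step 11:

export JAVA_HOME=/usr/local/src/jdk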

vi yarn-env.sh

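Likewise, set JAVA_HOME here:

export JAVA_HOME=/usr/local/src/jdk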

vi slaves

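The slaves file lists the worker hosts, one per line; for this cluster it would contain:

slave1
slave2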

13. Distribute the jdk and hadoop directories to the slave nodes:

 scp -r /usr/local/src/jdk/ slave1:/usr/local/src/

 scp -r /usr/local/src/jdk/ slave2:/usr/local/src/

scp -r /usr/local/src/hadoop/ slave1:/usr/local/src/

scp -r /usr/local/src/hadoop/ slave2:/usr/local/src/

14. Configure the environment variables in /etc/profile:

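The screenshot showed the environment variables. A sketch matching the paths used above (append these lines on all three hosts, then apply them with source /etc/profile):

export JAVA_HOME=/usr/local/src/jdk
export HADOOP_HOME=/usr/local/src/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin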

15. Format the NameNode:

[root@master hadoop]# hdfs namenode -format


16. Start the cluster:

start-all.sh

(master node)

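The screenshot showed the running processes. Assuming the slaves file above (so the master runs no DataNode), jps on the master should report roughly the following (PIDs omitted):

[root@master ~]# jps
NameNode
SecondaryNameNode
ResourceManager
Jps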

(slave nodes)

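On each slave, jps should report roughly:

[root@slave1 ~]# jps
DataNode
NodeManager
Jps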
