Hadoop Cluster Setup: Fully Distributed Mode

1. Clone the virtual machines

Clone 3 guests (CentOS 7).

 Steps: right-click centos-7 -> Manage -> Clone -> ... -> Full Clone

2. Change the hostnames

Planned hostnames: s201 s202 s203 s204

192.168.231.201 s201

192.168.231.202 s202

192.168.231.203 s203

192.168.231.204 s204

$>hostname

$>sudo nano /etc/hostname

 Change the contents to s201 (and likewise s202/s203/s204 on the other clones).

3. Edit the hosts file

  /etc/hosts

127.0.0.1 localhost

192.168.231.201 s201

192.168.231.202 s202

192.168.231.203 s203

192.168.231.204 s204

 Then shut down: sudo poweroff


4. Boot each clone and change its IP address

Edit /etc/sysconfig/network-scripts/ifcfg-eno16777736

Edit /etc/hostname

Restart the network service:

  $>sudo service network restart
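The post does not show the ifcfg contents; here is a minimal static-IP sketch for s201. The device name eno16777736 comes from the step above and the address from the hosts file; the GATEWAY and DNS1 values are assumptions for a typical VMware NAT subnet, not taken from the original.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eno16777736 (sketch for s201)
TYPE=Ethernet
BOOTPROTO=static          # fixed address instead of DHCP
NAME=eno16777736
DEVICE=eno16777736
ONBOOT=yes                # bring the interface up at boot
IPADDR=192.168.231.201    # matches the /etc/hosts entry for s201
NETMASK=255.255.255.0
GATEWAY=192.168.231.2     # assumption: VMware NAT default gateway
DNS1=192.168.231.2        # assumption: same box serves DNS
```

On s202 through s204 only IPADDR changes, matching each node's hosts-file entry.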

5. Prepare SSH across the fully distributed cluster

   1. Delete /home/centos/.ssh/* on every host.

   2. Generate a key pair on s201:

   $>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

   3. Copy s201's public key file id_rsa.pub to each host (including s201 itself),

  installing it as /home/centos/.ssh/authorized_keys:

  $>scp id_rsa.pub centos@s201:/home/centos/.ssh/authorized_keys

  $>scp id_rsa.pub centos@s202:/home/centos/.ssh/authorized_keys

  $>scp id_rsa.pub centos@s203:/home/centos/.ssh/authorized_keys

  $>scp id_rsa.pub centos@s204:/home/centos/.ssh/authorized_keys

      4. Verify that ssh can run commands on the other machines:

          $>ssh s202 ps -Af

         Check a listening port (50010 is the DataNode data-transfer port):

        $>netstat -naop | grep 50010
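The four scp lines above can also be collapsed into one loop; a common alternative is ssh-copy-id, which appends the key to authorized_keys and fixes its permissions rather than overwriting the file. A dry-run sketch (commands are printed, not executed; drop the echo to run them):

```shell
#!/usr/bin/env sh
# Dry run: print one ssh-copy-id command per node in the cluster.
# Remove the "echo" to actually push the key to each host.
for h in s201 s202 s203 s204; do
  echo "ssh-copy-id -i ~/.ssh/id_rsa.pub centos@$h"
done
```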


  6. Configuration files

        1. Configure fully distributed mode (${hadoop_home}/etc/hadoop/)
     [core-site.xml]
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://s201/</value>
</property>
</configuration>


[hdfs-site.xml]
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>

[mapred-site.xml]
Note: create this file first with cp mapred-site.xml.template mapred-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>



[yarn-site.xml]
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>s201</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>


[slaves]
s202
s203
s204



[${hadoop_home}/etc/hadoop/hadoop-env.sh]


...
export JAVA_HOME=/soft/jdk
...


2. Distribute the configuration
$>cd /soft/hadoop/etc/
$>scp -r full centos@s202:/soft/hadoop/etc/
$>scp -r full centos@s203:/soft/hadoop/etc/
$>scp -r full centos@s204:/soft/hadoop/etc/


3. Remove the symbolic links
$>cd /soft/hadoop/etc
$>rm hadoop
              // do not append a trailing / to /soft/hadoop/etc/hadoop; with the slash, rm deletes the files inside instead of the link
$>ssh s202 rm /soft/hadoop/etc/hadoop 
$>ssh s203 rm /soft/hadoop/etc/hadoop
$>ssh s204 rm /soft/hadoop/etc/hadoop


4. Create the symbolic links
$>cd /soft/hadoop/etc/
$>ln -s full hadoop
$>ssh s202 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
$>ssh s203 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
$>ssh s204 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop


5. Delete the temporary directories
$>cd /tmp
$>rm -rf hadoop-centos
$>ssh s202 rm -rf /tmp/hadoop-centos
$>ssh s203 rm -rf /tmp/hadoop-centos
$>ssh s204 rm -rf /tmp/hadoop-centos


6. Delete the Hadoop logs
$>cd /soft/hadoop/logs
$>rm -rf *
$>ssh s202 rm -rf /soft/hadoop/logs/*
$>ssh s203 rm -rf /soft/hadoop/logs/*
$>ssh s204 rm -rf /soft/hadoop/logs/*
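Steps 2 through 6 repeat the same command once per worker. A dry-run sketch that collects the per-node work into one loop, assuming the same /soft/hadoop layout on every node (commands are printed, not executed; drop the echo to run them):

```shell
#!/usr/bin/env sh
# Dry run: for each worker, distribute the config, relink
# etc/hadoop to the "full" directory, and clean tmp and logs.
for h in s202 s203 s204; do
  echo "scp -r /soft/hadoop/etc/full centos@$h:/soft/hadoop/etc/"
  echo "ssh $h rm /soft/hadoop/etc/hadoop"
  echo "ssh $h ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop"
  echo "ssh $h rm -rf /tmp/hadoop-centos"
  echo "ssh $h 'rm -rf /soft/hadoop/logs/*'"
done
```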


7. Format the filesystem
$>hadoop namenode -format
(Newer Hadoop releases prefer the equivalent hdfs namenode -format.)

8. Start the Hadoop daemons
$>start-all.sh 
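After start-all.sh, jps on each node shows which JVM daemons came up. The expected split below follows standard Hadoop roles for this layout (master on s201, workers in the slaves file), not something the post states explicitly. Dry-run sketch:

```shell
#!/usr/bin/env sh
# Dry run: print a jps check for every node; drop the echo to run.
# Expected daemons (assumed from the standard role split):
#   s201:        NameNode, SecondaryNameNode, ResourceManager
#   s202 - s204: DataNode, NodeManager
for h in s201 s202 s203 s204; do
  echo "ssh $h jps"
done
```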


If errors occur, inspect the log files under /soft/hadoop/logs on the affected node.

